THI4012 Student Guide v1-0 Secured

This document provides an overview of the Hitachi Ops Center deployment and installation process. It discusses the key components of Ops Center, including Administrator, Analyzer, and Data Instance Director. It also covers licensing packages and the options for deploying Ops Center as either a virtual appliance or an installer. The document is intended to help students understand how to deploy and use Ops Center to manage Hitachi storage systems and perform tasks such as problem analysis, predictive analytics, and operational recovery of data.


Student Guide for

Hitachi VSP 5000 Series and Hitachi


Ops Center Training for Global Delivery

THI4012

Courseware Version 1.0


Hitachi Vantara
Corporate Headquarters
2535 Augustine Drive
Santa Clara, CA 95054 USA
www.HitachiVantara.com | community.HitachiVantara.com

Regional Contact Information
Americas: +1 866 374 5822 or [email protected]
Europe, Middle East and Africa: +44 (0) 1753 618000 or [email protected]
Asia Pacific: +852 3189 7900 or [email protected]

© Hitachi Vantara LLC 2020. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Hitachi Content Platform Anywhere, Live Insight,
VSP, ShadowImage, TrueCopy and Hi-Track are trademarks or registered trademarks of Hitachi Vantara Corporation. IBM and FlashCopy are trademarks or
registered trademarks of International Business Machines Corporation. Microsoft and SQL Server are trademarks or registered trademarks of Microsoft Corporation.
All other trademarks, service marks and company names are properties of their respective owners.

Table of Contents
Introduction ..............................................................................................................xvii
Welcome ................................................................................................................................................. xvii
Please Give Us Feedback .......................................................................................................................... xviii
Course Description ................................................................................................................................... xviii
Prerequisites ............................................................................................................................................. xix
Course Objectives ..................................................................................................................................... xix
Course Topics ............................................................................................................................................ xx
Learning Paths Overview ........................................................................................................................... xxi
Stay Connected During and After Your Training ..........................................................................................xxii

1a. Hitachi Ops Center Deployment and Installation - 1 ....................................... 1a-1


Module Objectives ................................................................................................................................... 1a-1
What Is Hitachi Ops Center? .................................................................................................................... 1a-2
Hitachi’s New, Modern Infrastructure Portfolio ..................................................................................... 1a-2
Hitachi Ops Center Introduction .......................................................................................................... 1a-2
New Product Names ........................................................................................................................... 1a-3
Licensing Packages ................................................................................................................................. 1a-3
VSP 5000 Series License Packages for Open and Mainframe ................................................................. 1a-3
Optional Software Contents for Ops Center.......................................................................................... 1a-4
Hitachi Ops Center Common Services ....................................................................................................... 1a-5
Ops Center Common Services ............................................................................................................. 1a-5
Common Login Screen ....................................................................................................................... 1a-6
Single Sign-On ................................................................................................................................... 1a-6
Hitachi Ops Center Administrator ............................................................................................................. 1a-7
Hitachi Ops Center Administrator Overview .......................................................................................... 1a-7
Hitachi Ops Center Administrator Functions ......................................................................................... 1a-8
Hitachi Ops Center Administrator ........................................................................................................ 1a-8
Instructor Demonstration ................................................................................................................... 1a-9
Hitachi Ops Center Analyzer................................................................................................................... 1a-10
Hitachi Ops Center Analyzer Overview ............................................................................................... 1a-10
Hitachi Ops Center Analyzer IT Analytics Delivered............................................................................. 1a-10
Problem Analysis .............................................................................................................................. 1a-11
Automation Management Integration ................................................................................................ 1a-11

Predictive Analytics .......................................................................................................................... 1a-12
Central Viewpoint ............................................................................................................................. 1a-12
On-Premises and SaaS Analytics ....................................................................................................... 1a-13
Hitachi Ops Center Analyzer – SaaS .................................................................................................. 1a-13
Analyzer Dashboard ......................................................................................................................... 1a-14
Customization .................................................................................................................................. 1a-14
Automated Root Cause Analysis and Resolution: How We Do It – Problem Analysis ............................. 1a-15
Hitachi Ops Center Analyzer: Dynamic or Static Thresholds ................................................................ 1a-15
Hitachi Ops Center Analyzer: Resource Optimization Planning ............................................................. 1a-16
Hitachi Ops Center Analyzer Summary............................................................................................... 1a-16
Active Learning Exercise: Group Discussion ....................................................................................... 1a-17
Hitachi Ops Center Data Instance Director .............................................................................................. 1a-18
Enterprise Copy Data Management ................................................................................................... 1a-18
Storage Configurations: Block Storage ............................................................................................... 1a-19
Operational Recovery ....................................................................................................................... 1a-19
Storage-Based Operational Recovery ................................................................................................. 1a-20
Host-Based Operational Recovery ..................................................................................................... 1a-21
Active Learning Exercise: Group Discussion ....................................................................................... 1a-22
Module Summary .................................................................................................................................. 1a-22

1b. Hitachi Ops Center Deployment and Installation - 2 ....................................... 1b-1


Module Objectives ................................................................................................................................... 1b-1
Deployment Options ................................................................................................................................ 1b-2
Virtual Appliance or Installer ............................................................................................................... 1b-2
Deployment Considerations ................................................................................................................ 1b-2
Ops Center Deployment Options ......................................................................................................... 1b-3
Ops Center Download Options ............................................................................................................ 1b-3
Ops Center Preconfigured Media ......................................................................................................... 1b-4
Ops Center Installation Media (Linux) .................................................................................................. 1b-4
Ops Center Installation Media (Windows) ............................................................................................ 1b-5
Single Common Service For Multiple OVAs ........................................................................................... 1b-5
Activate / Deactivate Products ............................................................................................................ 1b-6
Analyzer OVA Specification ................................................................................................................. 1b-7
Active Learning Exercise: What Do You Think? .................................................................................... 1b-7

Ops Center Deployment Overview ............................................................................................................ 1b-8
Ops Center Deployment ..................................................................................................................... 1b-8
Ops Center Deployment Overview ..................................................................................................... 1b-10
Ops Center Deployment Details.............................................................................................................. 1b-10
System Configuration (For Servers) ................................................................................................... 1b-10
Configuration #1: New Installation .................................................................................................... 1b-11
Configuration #1: New Installation (2) Deploy Ops Center by “Ops Center OVA (Lin)” (1/2) ................... 1b-11
Configuration #1: New Installation (2) Deploy Ops Center by “Ops Center OVA (Lin)” (2/2) ................... 1b-12
Configuration #1: New Installation (3) Login to Ops Center CS and Apply License ............................... 1b-12
System Configuration (For Servers) ................................................................................................... 1b-13
Configuration #2: New Installation (more than one OVA) ................................................................... 1b-13
Configuration #2: New Installation (more than one OVA) (3) Register Each Product to Common Service .. 1b-14
System Configuration (For Servers) ................................................................................................... 1b-14
Configuration #3: Upgrade to Ops Center ......................................................................................... 1b-15
Configuration #3: Upgrade to Ops Center (2) Upgrade Automator/Analyzer by Installer ....................... 1b-15
Administrator SSO ............................................................................................................................ 1b-16
Hitachi Ops Center Upgrade Scenarios.................................................................................................... 1b-19
Ops Center Upgrade Scenarios.......................................................................................................... 1b-19
Module Summary .................................................................................................................................. 1b-20
Appendix .............................................................................................................................................. 1b-21
Hitachi Ops Center Backup and Restore Overview .............................................................................. 1b-21
Hitachi Ops Center Backup and Restore ............................................................................... 1b-21
Hitachi Ops Center Common Services................................................................................... 1b-22
Hitachi Ops Center Administrator ......................................................................................... 1b-22
Administrator VAM Tool: Backup .......................................................................................... 1b-23
Administrator VAM Tool: Restore ......................................................................................... 1b-24
Analyzer Backup Restore Procedure ..................................................................................... 1b-24
Automator – CLI commands ................................................................................................ 1b-25
Automator – backupsystem ................................................................................................. 1b-25
Automator – restoresystem ................................................................................................. 1b-26

2. VSP 5000 Series Models ..................................................................................... 2-1


Module Objectives .................................................................................................................................... 2-1
Controller Box (CBX) Components ............................................................................................................. 2-2
VSP 5000 Series Offering .......................................................................................................................... 2-2

Controller Box (CBX) ................................................................................................................................ 2-3
VSP 5000 Series Offering .......................................................................................................................... 2-4
VSP 5x00 Max Hardware Configs............................................................................................................... 2-4
CPU and GUM Specs ................................................................................................................................ 2-5
VSP 5100 Block Diagram .......................................................................................................................... 2-6
VSP 5500 - Multi-node Scale-out ............................................................................................................... 2-6
VSP 5000 Portfolio Positioning .................................................................................................................. 2-7
Active Learning Exercise: What Do You Think?........................................................................................... 2-7
VSP E990 ............................................................................................................................................... 2-8
VSP E990 HW Specifications .................................................................................................................... 2-8
Drive Boxes ............................................................................................................................................ 2-8
Module Summary ..................................................................................................................................... 2-9

3a. VSP 5000 Series Architecture and Availability - 1 ............................................ 3a-1


Module Objectives ................................................................................................................................... 3a-1
Hardware ............................................................................................................................................... 3a-2
Naming Cross Reference .................................................................................................................... 3a-2
High Level Concept ............................................................................................................................ 3a-2
Module (CBX Pair) Component Location ............................................................................................... 3a-3
Connections ....................................................................................................................................... 3a-3
ISW (Interconnect Switch) / HIE ......................................................................................................... 3a-4
System Interconnect (HSNBX x 2) ....................................................................................................... 3a-4
Interconnection Architectures ............................................................................................................. 3a-5
VSP 5000 Series Logical System Connectivity ....................................................................................... 3a-5
Front End Ports .................................................................................................................................. 3a-7
VSP 5000 Series Rear – 5500 2Node ................................................................................................... 3a-8
VSP 5000 Series – 5500 6Nodes ......................................................................................................... 3a-8
Drive Boxes and RAID Configuration ........................................................................................................ 3a-9
Drive Boxes ....................................................................................................................................... 3a-9
PG and RAID Layout – Single Pair Controller Block ............................................................................... 3a-9
PG and RAID Layout – Multiple Pair Controller Block .......................................................................... 3a-10
Spare Drive Location ........................................................................................................................ 3a-10
Spare Drive Qty ............................................................................................................................... 3a-11

SAS Media Chassis Connectivity ........................................................................................................ 3a-11
SAS Media Chassis Connectivity Optimization ..................................................................................... 3a-12
VSP 5x00 Max Hardware Configs....................................................................................................... 3a-12
Active Learning Exercise: Raise Your Hands If You Know It! ............................................................... 3a-13
Architecture and Specifications .............................................................................................................. 3a-13
System Configuration (SAS Backend) ................................................................................................ 3a-13
System Configuration (NVMe Backend).............................................................................................. 3a-14
System Configuration (SAS/NVMe Mixed) .......................................................................................... 3a-14
Power .................................................................................................................................................. 3a-15
Power Resiliency .............................................................................................................................. 3a-15
Hitachi Interconnect Edge (HIE) ............................................................................................................ 3a-15
Offload By HIE ................................................................................................................................. 3a-15
VSP 5000 Series: Hardware Offload Design........................................................................................ 3a-17
Memory Read/Atomic Access on the Other Controller ......................................................................... 3a-17
Data Transfer to the Other Controller ................................................................................................ 3a-18
Service Processor .................................................................................................................................. 3a-18
SVP Unit .......................................................................................................................................... 3a-18
SVP LAN Cable Routing .................................................................................................................... 3a-19
SVP Connection Architecture ............................................................................................................. 3a-19
Proxy on SVP ................................................................................................................................... 3a-20
Active Learning Exercise: Raise Your Hands If You Know It! ............................................................... 3a-20
Module Summary .................................................................................................................................. 3a-21
Module Review ..................................................................................................................................... 3a-22

3b. VSP 5000 Series Architecture and Availability - 2 ............................................ 3b-1


Module Objectives .................................................................................................................................. 3b-1
Cache and Shared Memory ...................................................................................................................... 3b-2
Shared Memory Allocation .................................................................................................................. 3b-2
SM/CM Resiliency Improvement .......................................................................................................... 3b-2
Shared Memory (SM) Design .............................................................................................................. 3b-3
Memory (DIMM) Architecture Comparison ........................................................................................... 3b-3
Global Cache Mirroring ....................................................................................................................... 3b-4
Shared Memory Caching Method ......................................................................................................... 3b-4
Shared Memory (SM) Resiliency (Compare/Contrast VSP 5000 versus G/F1x00) ..................................... 3b-5
Shared Memory Resiliency (VSP G1x00/ VSP F1x00) ............................................................................ 3b-5

Shared Memory Resiliency .................................................................................................................. 3b-6
VSP 5100 Shared Memory .................................................................................................................. 3b-7
Shared Memory – VSP 5100................................................................................................................ 3b-7
Optimization of Cache Access Logic – Simplifying the DIR Table Architecture ......................................... 3b-8
Optimization of Cache Access Logic – Simplifying the DIR Table Architecture ........................................ 3b-8
MP Failure .............................................................................................................................................. 3b-9
MPU Ownership in Failure Cases ......................................................................................................... 3b-9
Replace CPU or Memory .......................................................................................................................... 3b-9
Impact of Replacing CPU or Memory ................................................................................................... 3b-9
Active Learning Exercise: Raise Your Hands If You Know It! ............................................................... 3b-10
Major Technical Differences Between VSP 5000 Series and VSP G1500/VSP F1500.................................... 3b-10
Next-Gen High-End Storage .............................................................................................................. 3b-10
Summary of Major Differences .......................................................................................................... 3b-11
VSP 5000 Series vs. VSP G1500/VSP F1500 Summary (Major Changes Aligned to Value) ...................... 3b-12
Direct Command Transfer (DCT) Logic (Enhancement for Optimizing the ASIC Emulator) ..................... 3b-12
Hardware Independence (ASIC-less) ................................................................................................. 3b-13
Program Product Changes ..................................................................................................................... 3b-13
Program Product List (Differences Only) ............................................................................................ 3b-13
Volume Capacity Overview .................................................................................................................... 3b-14
Volumes/Capacity ............................................................................................................................ 3b-14
Key Features and Discussion Points ........................................................................................................ 3b-15
Key Features ................................................................................................................................... 3b-15
Rebuild Time Improvement .............................................................................................................. 3b-16
Bidirectional Port – Concept (Open CHB Option) ................................................................................ 3b-16
Bidirectional Port Option - Considerations .......................................................................................... 3b-17
Discussion Points ............................................................................................................................. 3b-17
Licensing for VSP 5000 .......................................................................................................................... 3b-18
VSP 5000 Packaging, Licensing and Pricing Framework ...................................................................... 3b-18
VSP 5000 Base Packages for Open and MF ........................................................................................ 3b-18
VSP 5000 Advanced Packages for Open and MF ................................................................................. 3b-19
Optional Software Contents and Licensing ......................................................................................... 3b-19
Active Learning Exercise: Raise Your Hands If You Know It! ............................................................... 3b-20
Module Summary .................................................................................................................................. 3b-20

4. VSP 5000 Series Adaptive Data Reduction ......................................................... 4-1


Module Objectives .................................................................................................................................... 4-1
ADR and Its Functions ............................................................................................................................. 4-2
What is Adaptive Data Reduction ......................................................................................................... 4-2
What is Compression ........................................................................................................................... 4-2
What is Deduplication.......................................................................................................................... 4-3
ADR Supported Platform and Requirements Overview ................................................................................ 4-4
ADR Supported Platforms and Requirements ........................................................................................ 4-4
ADR Constraints .................................................................................................................................. 4-5
ADR Terminology Overview ...................................................................................................................... 4-5
ADR Terminology ................................................................................................................................ 4-5
ADR – DRD-VOL, FPT and DSD-Vol Distribution .................................................................................... 4-6
Industry Data Reduction Terms ........................................................................................................... 4-7
ADR Notes .......................................................................................................................................... 4-7
What is Effective Capacity ........................................................................................................................ 4-8
Raw vs Effective Capacity .................................................................................................................... 4-8
ADR Pool Requirements Overview ............................................................................................................. 4-9
ADR Pool Requirements ....................................................................................................................... 4-9
Storage Pool Usage With ADR Enabled ................................................................................................. 4-9
ADR Hitachi Dynamic Tiering Smart Tiers ................................................................................................. 4-10
ADR HDT Smart Tier .......................................................................................................................... 4-10
ADR HDT Smart Tiers ......................................................................................................................... 4-10
ADR Garbage Collection........................................................................................................................... 4-11
Garbage Collection ............................................................................................................................. 4-11
ADR Inline vs Post Process ...................................................................................................................... 4-12
Inline vs Post Process ......................................................................................................................... 4-12
ADR Monitoring....................................................................................................................................... 4-13
Monitoring ......................................................................................................................................... 4-13
Monitoring Pool Window ..................................................................................................................... 4-14
Raidcom LDEV Metrics ........................................................................................................................ 4-14
ADR Inline vs Post Process ...................................................................................................................... 4-15
Inline vs Post Process ......................................................................................................................... 4-15


ADR Sizing.............................................................................................................................................. 4-16


ADR Calculator................................................................................................................................... 4-16
ADR Input ......................................................................................................................................... 4-17
ADR Calculator - Results ..................................................................................................................... 4-17
Active Learning Exercise: Jigsaw Puzzle ............................................................................................... 4-18
Module Summary .................................................................................................................................... 4-18

5. VSP 5000 Series High Availability and Storage Navigator Differences From
G1x00 ........................................................................................................................ 5-1
Module Objectives .................................................................................................................................... 5-1
HA Differences From VSP G1000/ VSP G1500 ............................................................................................ 5-2
Difference From G1000/1500 ............................................................................................................... 5-2
Single Point Failure ............................................................................................................................. 5-2
Two Point Failure ................................................................................................................................ 5-3
VSP 5500 and VSP 5100 – Two Point Failure......................................................................................... 5-4
X-Path/HIE/ISW .................................................................................................................................. 5-5
X-Path/HIE/ISW .................................................................................................................................. 5-5
Active Learning Exercise: Jigsaw Puzzle ................................................................................................ 5-6
Storage Navigator Differences From VSP G1x00 ......................................................................................... 5-6
DKC ................................................................................................................................................... 5-6
Logical Devices – Column Settings ....................................................................................................... 5-7
Logical Devices ................................................................................................................................... 5-7
Pools – More Actions ........................................................................................................................... 5-8
Ports – Column Settings ...................................................................................................................... 5-8
Port Conditions ................................................................................................................................... 5-9
Module Summary ..................................................................................................................................... 5-9
Questions to IT PRO................................................................................................................................ 5-10

6. VSP 5000 Series Security and Encryption Enhancements .................................. 6-1


Module Objectives .................................................................................................................................... 6-1
Encryption ............................................................................................................................................... 6-2
Encryption Components............................................................................................................................ 6-2
Key Management Options ......................................................................................................................... 6-4
Support Specifications for Encryption License Key ...................................................................................... 6-5


Encryption Comparison – EDKBs to SED ................................................... 6-5


Encryption Documentation ........................................................................................................................ 6-6
Sanitization Concepts ............................................................................................................................... 6-6
Shredder Operations ................................................................................................................................ 6-7
Enhanced Sanitization .............................................................................................................................. 6-7
Sanitization Documentation ......................................................................... 6-8
Audit Logging .......................................................................................................................................... 6-8
Additional Security Changes...................................................................................................................... 6-9
Active Learning Exercise: Whiteboard Drawing........................................................................................... 6-9
Module Summary .................................................................................................................................... 6-10
Questions ............................................................................................................................................... 6-10

7. VSP 5000 Series and Mainframes ....................................................................... 7-1


Module Objectives .................................................................................................................................... 7-1
Hitachi Vantara Solutions for Mainframe: >40 Years’ Experience and 14 Generations of Solutions ................... 7-2
Mainframe and VSP 5000 Series................................................................................................................ 7-2
VSP 5000 Changes ................................................................................................................................... 7-3
Changes on VSP 5000 .............................................................................................................................. 7-3
Mainframe and VSP 5000 Series................................................................................................................ 7-4
Module Summary ..................................................................................................................................... 7-4

8. VSP 5000 Series HDP-HDT.................................................................................. 8-1


Module Objectives .................................................................................................................................... 8-1
Pools ....................................................................................................................................................... 8-2
Pool Definitions ................................................................................................................................... 8-2
Pool Configuration............................................................................................................................... 8-2
Hitachi Dynamic Tiering Overview ............................................................................................................. 8-3
HDT Tiers ........................................................................................................................................... 8-3
Smart Tier .......................................................................................................................................... 8-4
LU Ownership .......................................................................................................................................... 8-5
LU Ownership Assignment Range in Multi CBX ...................................................................................... 8-5
LU Assignments .................................................................................................................................. 8-7
Active Learning Exercise: Group Discussion .......................................................................................... 8-8
Front End or Back End Cross IO ................................................................................................................ 8-8
What is Front/Back End; Straight/Cross I/O? ................................................................... 8-8


Back End Optimization and DP Page Placement ......................................................................................... 8-9


HDP Data Intelligent Placement ........................................................................................................... 8-9
Back End Cross Optimisation (Flash Data Placement) ........................................................................... 8-10
Back End Cross Optimisation With HDT vs HDP.................................................................................... 8-11
Back End Cross Optimisation – No Pool Span ....................................................................................... 8-12
Back End Cross Optimisation – Pool Span ............................................................................................ 8-13
Page Placement ...................................................................................................................................... 8-13
Pool Rebalance Overview......................................................................................................................... 8-14
Pool Rebalance .................................................................................................................................. 8-14
Module Summary .................................................................................................................................... 8-15

9. VSP 5000 Series and Replication ........................................................................ 9-1


Module Objectives .................................................................................................................................... 9-1
Hitachi Thin Image Enhancements ............................................................................................................ 9-2
Thin Image Defrag .............................................................................................................................. 9-2
When to Perform Defrag ..................................................................................................................... 9-3
Defrag Operations ............................................................................................................................... 9-3
Remote Replication .................................................................................................................................. 9-4
HUR Replication Enhancements ........................................................................................................... 9-4
Replication Roadmap Overview ................................................................................................................. 9-4
Replication Roadmap........................................................................................................................... 9-4
Active Learning Exercise: Follow the Manual ......................................................................................... 9-5
Global-Active Device Enhancements Overview ........................................................................................... 9-5
Global-Active Device Enhancements ..................................................................................................... 9-5
GAD Enhancements ............................................................................................................................ 9-6
Module Review ........................................................................................................................................ 9-7

10. Hitachi Ops Center Replication ......................................................................... 10-1


Module Objectives ................................................................................................................................... 10-1
Ops Center Replication Overview ............................................................................................................. 10-2
Hitachi Ops Center Administrator Replication ............................................................................................ 10-3
Administrator Replication .................................................................................................................... 10-3
Launch Replication Page ..................................................................................................................... 10-3


Hitachi Ops Center Administrator Local Replication .................................................................................... 10-4


Administrator Local Replication Overview............................................................................................. 10-4
Administrator Local Replication ........................................................................................................... 10-4
Hitachi Ops Center Administrator Remote Replication ................................................................................ 10-7
Administrator Remote Replication Overview ......................................................................................... 10-7
Administrator High Availability Setup ................................................................................................... 10-8
Active Learning Exercise: Writing One-Minute-Paper ............................................................................ 10-9
Administrator High Availability Setup ................................................................................................. 10-10
Administrator Remote Replication ..................................................................................................... 10-11
Module Summary .................................................................................................................................. 10-14

11. Hitachi Ops Center Automator.......................................................................... 11-1


Module Objectives ................................................................................................................................... 11-1
Introducing Automator ............................................................................................................................ 11-2
Automator Features ................................................................................................................................ 11-3
DC Modernization With Advanced Management Software ..................................................................... 11-3
Orchestrated Resource Management ................................................................................................... 11-3
Service Catalog .................................................................................................................................. 11-4
Simplified Workflow With HTML5 ........................................................................................................ 11-4
From HDvM to Configuration Manager REST API Automator Transition .................................................. 11-5
Automator Configuration .................................................................................................................... 11-5
Active Learning Exercise: Group Discussion ......................................................................................... 11-6
Automator Architecture ........................................................................................................................... 11-7
Hitachi Ops Center Automator............................................................................................................. 11-7
Key Terms ......................................................................................................................................... 11-8
Service and Service Template ............................................................................................................. 11-9
Key Terms ......................................................................................................................................... 11-9
Grouping Infrastructure and Access Control ....................................................................................... 11-10
Automator Use Cases ............................................................................................................................ 11-11
Automator: Key Use Cases ............................................................................................................... 11-11
Smart Provisioning ........................................................................................................................... 11-11
ServiceNow Integration .................................................................................................................... 11-12
3rd Party Tools Integration ............................................................................................................... 11-12


Cloud Environment Management....................................................................................................... 11-13


Online Migration .............................................................................................................................. 11-13
Active Learning Exercise: Brainstorming ............................................................................................ 11-14
GUI Overview ....................................................................................................................................... 11-14
Login Through Ops Center ................................................................................................................ 11-14
Login Directly to Automator .............................................................................................................. 11-15
GUI Components ............................................................................................................................. 11-16
Instructor Demonstration ................................................................................................................. 11-17
Services Management – Request (Run) a Service ..................................................... 11-18
Request a Service ............................................................................................................................ 11-18
Submit a Service .............................................................................................................................. 11-18
Review a Service .............................................................................................................................. 11-19
Manage Tasks.................................................................................................................................. 11-19
Services Management – Create a Service................................................................................................ 11-20
Service Creation ............................................................................................................................... 11-20
Create a Service............................................................................................................................... 11-21
Instructor Demonstration ................................................................................................................. 11-24
Service Management – Service Builder ................................................................................................... 11-24
Service Builder ................................................................................................................................. 11-24
Automator Video – Create Service Template ...................................................................................... 11-25
Module Summary .................................................................................................................................. 11-26
Appendix .............................................................................................................................................. 11-26
Smart Provisioning Overview ............................................................................................................ 11-26
Smart Provisioning .............................................................................................................. 11-26
Smart Provisioning Overview ............................................................................................... 11-27
Smart Provisioning (Allocate Volumes) ................................................................................. 11-28

12. Migration Capabilities ....................................................................................... 12-1


Module Objectives ................................................................................................................................... 12-1
Migration Capabilities NDM – UVM / GAD .................................................................................................. 12-2
Migration Capabilities ......................................................................................................................... 12-2
UVM NDM Migrations ......................................................................................................................... 12-3
GAD NDM Migrations .......................................................................................................................... 12-4
Module Summary .................................................................................................................................... 12-6


13. VSP 5000 Series SOM Changes ......................................................................... 13-1


Module Objectives ................................................................................................................................... 13-1
SOM Changes ......................................................................................................................................... 13-2
New Default Settings on VSP 5000 ........................................................................................................... 13-2
New SOMs .............................................................................................................................................. 13-3
New SOM 1168 ....................................................................................................................................... 13-4
New SOM 1169 ....................................................................................................................................... 13-4
SOM 868 Meaning Changed ............................................................................ 13-5
SOM 1115 Meaning Changed ................................................................................................................... 13-6
SOMs Removed....................................................................................................................................... 13-7
Removed SOMs....................................................................................................................................... 13-7
Advanced System Setting ...................................................................................................................... 13-14
SOMs Converted to Advanced System Settings ....................................................... 13-15
SOM Changes ....................................................................................................................................... 13-15
Active Learning Exercise: One Minute Paper ........................................................................................... 13-16
Module Review ..................................................................................................................................... 13-16

14. Best Practices and Information Sources .......................................................... 14-1


Module Objectives ................................................................................................................................... 14-1
Best Practices ADR .................................................................................................................................. 14-2
ADR Notes ......................................................................................................................................... 14-2
Best Practices Pool Recommendations ...................................................................................................... 14-3
Pool Recommendations ...................................................................................................................... 14-3
Best Practices Parity Group and Spare Drive Recommendations ................................................................. 14-4
PG and RAID layout – Single Pair Controller Block ................................................................................ 14-4
Spare Drive Qty ................................................................................................................................. 14-4
Best Practices Replication ........................................................................................................................ 14-5
Local Replication ................................................................................................................................ 14-5
Remote Replication ............................................................................................................................ 14-5
Best Practices Encryption Recommendations............................................................................................. 14-6
Encryption Recommended .................................................................................................................. 14-6
Active Learning Exercise: Group Discussion ......................................................................................... 14-7


Information Sources ................................................................................................................................ 14-7


Hitachi Ops Center Information Sources .............................................................................................. 14-7
Hitachi Ops Center ............................................................................................................................. 14-9
Ops Center Common Challenges / Issues ............................................................................................ 14-9
Module Summary .................................................................................................................................. 14-10
Your Next Steps .................................................................................................................................... 14-11
We Value Your Feedback ....................................................................................................................... 14-12

Communicating in a Virtual Classroom………………………………………...……………..V-1

Evaluating This Course .............................................................................................. E-1

Introduction
Welcome

Welcome! Tell us about you:

• Name
• Title
• Experience
• Expectations
• Pick one: a book you would recommend, the last movie you saw, or your favorite vacation spot

© Hitachi Vantara LLC 2020. All Rights Reserved.


Please Give Us Feedback

It’s never too early to give us feedback. You can tell us any time during the class or at the end.
Check your email for the class survey:


Course Description

This course provides an overview of Hitachi Virtual Storage Platform


storage hardware and Hitachi Ops Center as well as associated functions
to enable Global Delivery staff to start engaging with customers on related
service offerings.

Numerous hands-on practices are included to build familiarity with Hitachi


Ops Center, its components and uses in managing storage in a Data
Center. Additionally, see how to navigate and customize the Ops Center
Analyzer dashboard and use the CM REST CLI for storage and host
provisioning.


Prerequisites

 Supplemental courses
• TSI2690 – Managing Hitachi Ops Center Automator


Course Objectives

 When you complete this course, you should be able to:


• Describe the function, uses and licensing model of Hitachi Ops Center
• Describe the features and capabilities of VSP 5000 Series storage and
position against other Hitachi Storage
• Describe options for adaptive data reduction, security, mainframe
capabilities, dynamic provisioning, replication, and migration for VSP 5000
storage


Course Topics
Modules:
1. Hitachi Ops Center Deployment and Installation - 1
2. Hitachi Ops Center Deployment and Installation - 2
3. VSP 5000 Series Models
4. VSP 5000 Series Architecture and Availability - 1
5. VSP 5000 Series Architecture and Availability - 2

Lab Activities:
1. Hitachi Ops Center Features
2. Hitachi Administrator Functions
3. Resource Monitoring from Ops Center Analyzer
4. Hitachi Configuration Manager REST API

Modules (continued):
6. VSP 5000 Series Adaptive Data Reduction
7. VSP 5000 Series High Availability and Storage Navigator Differences From G1x00
8. VSP 5000 Series Security and Encryption Enhancements
9. VSP 5000 Series and Mainframes
10. VSP 5000 Series HDP-HDT

Modules (continued):
11. VSP 5000 Series and Replication
12. Hitachi Ops Center Replication
13. Hitachi Ops Center Automator
14. Migration Capabilities
15. VSP 5000 Series SOM Changes
16. Best Practices and Information Sources

Lab Activities (continued):
5. Hitachi Ops Center Replication
6. Hitachi Ops Center Automator

Learning Paths Overview

 Boost your skills and advance your


career!

 Stay sharp and get ahead!

 Follow these paths to professional


certification.

 Learn more at:


• Hitachivantara.com (for customers)
• Partner Connect (for partners)
• Connect (for employees)


Customer Learning Paths: https://www.hitachivantara.com/en-us/pdf/training/global-learning-catalog-customer.pdf

Partner Learning Paths: https://partner.hitachivantara.com/

Employee Learning Paths: https://connect.hitachivantara.com/en_us/user/employee-center/my-learning-and-development/global-learning-catalogs.html

Please contact your local training administrator if you have any questions regarding Learning Paths or visit your applicable website.


Stay Connected During and After Your Training

Hitachi Vantara Support Connect
 Product information and future updates are available on the Hitachi Vantara Support Connect.

Hitachi Self-Paced Learning Library
 Learning Library is a subscription-based learning platform that gives you access to Hitachi Vantara training; to learn more about subscribing, start here.

Remote Hands-on Labs with HALO
 Practice what you learn and gain hands-on experience after your training with our self-service, web-based Hitachi Automated Labs Online.

Training on the Go
 Get training from Hitachi Vantara on your mobile device by downloading Cornerstone Mobile™.

All customer-facing documentation is publicly available for download from:
https://support.hitachivantara.com/en_us/anonymous-dashboard.html
All field-facing docs are also available here for registered users.

Hitachi Vantara Community


On the Hitachi Vantara Community site you can ask questions related to your Hitachi products, find articles on popular products, and get insights from thought leaders. You can access it at: https://community.hitachivantara.com/s/

Self-Paced Learning Libraries


For information on how to subscribe, visit:
https://www.hitachivantara.com/en-us/services/training-certification.html
For information about Learning Libraries, search for “Self-paced learning” at:
https://hitachi.csod.com/LMS/catalog/Welcome.aspx?tab_page_id=-67&tab_id=-1

Hitachi Automated Labs Online


To access the labs, go to: https://labs.hitachivantara.com/

Support Connect
The site for Hitachi Vantara product documentation is accessed through:
https://support.hitachivantara.com/en_us/anonymous-dashboard.html

1a. Hitachi Ops Center Deployment and
Installation - 1
Module Objectives

 When you complete this module, you should be able to:


• Describe the function and uses of Hitachi Ops Center
• Describe the licensing model for Hitachi Ops Center


Following are the expanded versions of the acronyms used for products:
• HDID – Hitachi Data Instance Director
• NVMe – non-volatile memory express
• HCS – Hitachi Command Suite
• HTnM – Hitachi Tuning Manager
• HRpM – Hitachi Replication Manager
• SVOS – Hitachi Storage Virtualization Operating System
• HDLM – Hitachi Dynamic Link Manager
• HGLM – Hitachi Global Link Manager

Page 1a-1

What Is Hitachi Ops Center?


In this section you will learn about Hitachi Ops Center.

Hitachi’s New, Modern Infrastructure Portfolio

Ops Center: Foundation for a Modern, AI-Enhanced Enterprise Infrastructure Management with Simple, Powerful, Federated Management; Built on Legendary Hitachi Resilience and Performance, Optimized for NVMe

Setting New Standards: Agility, Automate The Future, Resiliency



In the above slide, acronyms used have the following expanded names:
• VSP 5000 series – Hitachi Virtual Storage Platform 5000 series
• SVOS RF – Hitachi Storage Virtualization Operating Software RF

Hitachi Ops Center Introduction

 Hitachi Ops Center enables you to optimize your data center operations through
integrated configuration, analytics, automation, and copy data management

 Hitachi Ops Center consists of the following principal software products:


• Administrator – Configure and provision systems
• Analyzer – Monitor, optimize, plan and troubleshoot
• Automator – Orchestrate automated provisioning workflows
• Data Protection (HDID) – Enterprise copy data management

Together these products make up the Hitachi Ops Center Suite.


New Product Names


Current Name -> New Name
• HAD (Hitachi Automation Director) -> Automator (Hitachi Ops Center Automator)
• HIAA (Hitachi Infrastructure Analytics Advisor) -> Analyzer (Hitachi Ops Center Analyzer)
• (new) -> Analyzer viewpoint (Hitachi Ops Center Analyzer viewpoint)
• HDCA (Hitachi Data Center Analytics) -> Analyzer detail view (Hitachi Ops Center Analyzer detail view)
• HSA (Hitachi Storage Advisor) -> Administrator (Hitachi Ops Center Administrator)
• HCM (Hitachi Configuration Manager) -> Configuration Manager APIs (Hitachi Ops Center Configuration Manager)
• HDID (Hitachi Data Instance Director) -> Data Protection (HDID) (Hitachi Data Instance Director)
• (new) -> Common Services (Hitachi Ops Center Common Services)

Licensing Packages
In this section you will learn about the licensing packages.

VSP 5000 Series License Packages for Open and Mainframe

Hitachi Ops Center


The VSP 5000 license packages come in Base and Advanced editions, each available in Open, Mainframe and Open/MF variants:
• Base packages include: Administrator, Analyzer, Data Protection (HDID)
• Advanced packages additionally include: Automator, Analyzer predictive analytics


Note: Hitachi Global Link Manager (HGLM) and Hitachi Dynamic Link Manager (HDLM) are part
of the Base package.


Optional Software Contents for Ops Center


Add-on/Optional Package | Included in Advanced Package | Pricing/Licensing | Contents
• Hitachi Ops Center Analyzer predictive analytics | Yes | Frame | Hitachi Ops Center Analyzer predictive analytics
• Hitachi Ops Center Automator | Yes | Capacity | Hitachi Ops Center Automator
• Hitachi Ops Center Automator Nodes | No | Frame | Automator Nodes (5-pack proxy nodes), Automator Nodes (25-pack direct nodes)
• Hitachi Ops Center Analyzer Nodes | No | Frame | Analyzer Nodes (25-pack)
• Hitachi Ops Center Analyzer third-party storage | No | Frame | Analyzer third-party storage
• Hitachi Data Instance Director File Protection* | No | Capacity | HDID file protection
• Hitachi Data Instance Director Application awareness* | No | Capacity | HDID application awareness
• Legacy Management Package | No | Frame | Hitachi Device Manager, Hitachi Tuning Manager

Hitachi Ops Center Common Services


This section introduces Hitachi Ops Center Common Services.

Ops Center Common Services

 Single entry point for single sign on


• License management
• User management


The Hitachi Ops Center products that support the single sign-on functionality are as follows:

• Hitachi Ops Center Automator

• Hitachi Ops Center Analyzer

• Hitachi Ops Center Analyzer viewpoint

• Hitachi Data Instance Director (version 6.9 or later)


Common Login Screen

 Ops Center: https://<Ops-Center-IP-address>/portal/

 Default credentials: sysadmin/sysadmin


Single Sign-On

 Single sign-on from common services to the dashboard of each product


Common Services HDID Dashboard

Automator Dashboard

Analyzer Dashboard

Skip the log in screen


Hitachi Ops Center Administrator


This section provides an overview of Hitachi Ops Center Administrator.

Hitachi Ops Center Administrator Overview

 A GUI-based, simplified configuration management application for block (including fabric switches) and network attached storage (NAS)
• Dashboard with key metrics
• Simplified tasks based on best practice
• Advanced settings available to adjust parameters



Hitachi Ops Center Administrator Functions

 Basic administration, includes


• Adding or deleting a storage system, fabric switch or server
• Creating parity groups, storage pools or volumes

 Volume provisioning with local or remote replication

 Volume migration and external storage management

 Fabric zones are created, if SAN switch is added

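The basic administration tasks above can also be scripted. As a hedged sketch only: the snippet below composes (but does not send) a volume-creation request following the Configuration Manager REST API endpoint layout; the server host name, port, storage device ID and pool ID are placeholder assumptions, not values from this course.

```python
import json

def build_create_ldev_request(base_url, storage_device_id, pool_id, capacity):
    """Compose (not send) a volume-creation request for the CM REST API.

    All identifiers passed in are caller-supplied placeholders.
    """
    url = f"{base_url}/ConfigurationManager/v1/objects/storages/{storage_device_id}/ldevs"
    body = {"poolId": pool_id, "byteFormatCapacity": capacity}
    return url, json.dumps(body)

# Placeholder server and device ID -- substitute real values in practice.
url, payload = build_create_ldev_request(
    "https://cm-server:23451", "800000012345", 0, "10G")
print(url)
print(payload)
```

In practice the composed request would be sent with an authenticated HTTP client; consult the Configuration Manager REST API reference for session handling and response formats.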

Hitachi Ops Center Administrator

 Port security needs to be enabled manually before a port can be used


for volume provisioning

 Verify that there is an active zone configuration set with at least one
dummy zone available when a switch is added

 Zone names created by Administrator are machine-generated and not human-readable, which may not meet customer naming requirements


Instructor Demonstration

Topic: Ops Center Tabs and Administrator overview


Hitachi Ops Center Analyzer


This section provides an overview of Hitachi Ops Center Analyzer.

Hitachi Ops Center Analyzer Overview

Central Viewpoint Multiple Deployment Options


Analyze Hitachi system health, Available as software-as-a-service (SaaS)
performance, capacity and events across or on-premises software running in your
a global enterprise environment data center

Predictive Analytics Problem Analysis


Configuration change management with
Plan and forecast future resource
patented root cause analysis to quickly
requirements with their interdependencies
troubleshoot performance problems

Automated Management
Machine Learning (ML) Operations
ML analysis for trends, anomalies and IT Integrated management workflows to
management recommendations automate configuration changes to correct
problems

NEW Insights To Make Data-driven I.T. Decisions


Hitachi Ops Center Analyzer IT Analytics Delivered

Collect and store heterogeneous data center


statistics efficiently
Identify anomalies and ensure service-level
objectives for performance and capacity

View and analyze end-to-end topology from


virtual machines and server to shared storage

Machine-learning-based analysis for predictive


analytics and recommendations

Isolate and determine the root cause of


performance problems with recommended fixes
Execute AI-assisted management operations with integrated Ops Center Automator workflows

Global viewpoint provides centralized view across distributed Ops Center Analyzer servers

Delivers analytics, recommendations and integrated automation for data center operations to optimize infrastructure resources and avoid problems


Problem Analysis

 Track and log configuration changes. Many performance problems are


caused by recent system configuration changes (add, move, delete)
 Correlate configuration changes to assess their impact alongside any
recently detected performance anomalies or deviations

Change  Integrated root cause analysis to accelerate troubleshooting efforts and


Management quickly determine the root cause of problems
 Benefit: Problem analysis with configuration change management
Root Cause improves system availability and reliability
Analysis


Automation Management Integration

 Integrate prescriptive analytics with Hitachi Ops Center


Automator management workflows to accelerate automation
efforts
 Advanced analytics and management operations
• Active management of storage I/O controls
• Active configuration changes within Ops Center Automator to
Automation correct problems faster
Director
Integration  Benefit: AI-assisted recommendations to drive
autonomous data center operations


Predictive Analytics

 Predictive profiles for resource forecasts with custom forecasting rules

 Resource optimization planning based on historical trend analysis

 Evaluate proposed configuration changes to assess potential impacts


and ensure that objectives are met prior to implementation

Predictive  Benefit: Improve data center planning and budgeting while


reducing risks
Forecasts and
Planning


Central Viewpoint

 Enterprise view to analyze IT operations across a multiple-data-


center environment

 Dashboard to highlight key Hitachi system health, performance,


capacity and events

 Direct access to remote Ops Center Analyzer servers to troubleshoot


problems locally
Global
 Benefit: Central dashboard to monitor and control operations
Enterprise View
across a global enterprise


On-Premises and SaaS Analytics

 Flexible models to address various deployment needs


• On-Premises Software: Installed and used in the data center
• Software-as-a-Service (SaaS): Used online in the cloud
SaaS  Analytics can run within the data center for on-site monitoring,
analysis, customization and maintenance
 Benefit: Comprehensive analytics offered either on-premises in
Flexible the data center, or SaaS via the cloud
Deployment
Models
Hitachi

Data
Ops Center Ops Center
Analyzer Analyzer
SaaS

Hitachi and Third-Party Probe


Data Center Resources


Hitachi Ops Center Analyzer – SaaS

 Eliminates upfront management server hardware, software and


installation costs
 Removes ongoing analytics software maintenance and upgrades
within the data center
SaaS
 SaaS software only requires software probes to be installed on-site
 Scale easily as your needs grow, plus yearly subscription model
 Benefit: SaaS reduces costs, easier and faster to deploy
SaaS
Advantages
Hitachi
Ops Center
Analyzer
SaaS

Hitachi and Third-Party Probe


Data Center Resources

Analyzer Dashboard

 Analyzer dashboard offers a global view of key information for the


customer including:
• Resource status, alerts, events, resource reports with customizable views
and sources with “easy to read” color code


Customization

 Analyzer dashboard is fully customizable and can address specific


customer needs


Automated Root Cause Analysis and Resolution: How We Do It –


Problem Analysis

 Provide an end-to-end
topology view of the current
infrastructure from a common
console

 For the selected baseline


resource, view and correlate
all related resources
associated with the
bottleneck

 View key performance indicators and trends for all bottleneck-related resources to quickly analyze shared resource contention

(Diagram callouts: the baseline bottleneck resource that is causing the problem; show all related resources from the bottleneck resource to help narrow down the root cause)

Hitachi Ops Center Analyzer: Dynamic or Static Thresholds

 Establish service-level profiles (gold, silver, bronze) and service-level objectives (IOPS, response times, and so forth) for key performance and capacity indicators

Service Level Objectives (example):
• Gold volume: IOPS <300, MBPS <10, Resp. time <0.1s
• Silver volume: IOPS <200, MBPS <5, Resp. time <0.5s
• Bronze volume: IOPS <100, MBPS <1, Resp. time <1.0s

 Assign static or dynamic threshold values determined by ML-based analysis

Example policies:
• Gold Policy (Client A, Gold Service): Res Time 5ms, IOPS 1200
• Silver Policy (Client B, Silver Service): Res Time 10ms, IOPS 1000
• Bronze Policy (Client C, Bronze Service): Res Time 20ms, IOPS 800

 Monitor anomalies via centralized health dashboard for service-level agreement compliance and alerting
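The tiered-threshold idea can be sketched in a few lines. The numbers mirror the example policies on this slide, but the rule format (response-time ceiling, IOPS floor) is purely illustrative and is not Analyzer's actual configuration:

```python
# Illustrative sketch of tiered SLO evaluation; thresholds and metric
# semantics are assumptions for this example, not Analyzer's rule model.
SLO_POLICIES = {
    "gold":   {"max_response_ms": 5,  "min_iops": 1200},
    "silver": {"max_response_ms": 10, "min_iops": 1000},
    "bronze": {"max_response_ms": 20, "min_iops": 800},
}

def check_slo(tier, response_ms, iops):
    """Return the list of objectives a volume in `tier` currently violates."""
    policy = SLO_POLICIES[tier]
    violations = []
    if response_ms > policy["max_response_ms"]:
        violations.append("response_time")
    if iops < policy["min_iops"]:
        violations.append("iops")
    return violations

# A gold-tier volume at 7.2 ms exceeds the 5 ms objective.
print(check_slo("gold", 7.2, 1500))
```

In Analyzer itself, the equivalent evaluation is driven by the dynamic or static thresholds configured per profile, with alerts raised on the health dashboard.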


Hitachi Ops Center Analyzer: Resource Optimization Planning

Resource Optimization Planning


Forecast future resource requirements with their end-to-end interdependences based on ML analysis
and historical trends. Improve infrastructure optimization, planning and budgeting

1. Select the targeted resource from the end-to-end view or search

2. Execute predictive analytics forecast for the selected resource with all associated, dependent resources

3. Check and compare the selected resource's predictive analytics along with all dependent resources at a glance
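The trend-based forecasting described above can be illustrated with a toy linear model. Analyzer's actual ML-based forecasting is more sophisticated; this sketch only shows the idea of projecting a historical usage trend forward to a capacity limit:

```python
# Toy capacity-forecast sketch: fit a straight line to historical pool
# usage and estimate how many periods remain until a limit is crossed.
def forecast_breach(usage_history, capacity_limit):
    """usage_history: used capacity per period (e.g. TB per month).
    Returns periods until the linear trend crosses capacity_limit,
    or None if usage is flat or falling."""
    n = len(usage_history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_history))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    breach_period = (capacity_limit - intercept) / slope
    return max(0.0, breach_period - (n - 1))

# A pool growing ~5 TB/month from 40 TB reaches a 100 TB limit in ~8 months.
print(forecast_breach([40, 45, 50, 55, 60], 100))
```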


Hitachi Ops Center Analyzer Summary

 Hitachi Ops Center Analyzer supports monitoring of the performance


metrics defined for infrastructure resources

 Hitachi Ops Center Analyzer alerts, when predefined or user-defined


thresholds are exceeded

 Hitachi Ops Center Analyzer Detail View can be used to report on


detailed metrics

 Predictive Analytics (advanced package) available


 Hitachi Ops Center Analyzer idea is to guide the user through the
process of problem determination by showing alerts and warnings
• Without reasonable thresholds the user experience will be poor
• Default thresholds after installation should be reviewed and adjusted if
required

 For Automation Management Integration, additional configuration must


be performed.

Note: Refer to the Hitachi Ops Center Analyzer Installation and Configuration Guide.


Active Learning Exercise: Group Discussion

Topic: What are the most common causes for performance problems and
can Analyzer help not to run into this kind of issues?


Hitachi Ops Center Data Instance Director


In this section you will learn about Hitachi Ops Center Data Instance Director.

Enterprise Copy Data Management

Data services:
• Copy Services: Operational Recovery, Disaster Recovery, High Availability, Business Continuity, Ransomware Recovery
• Governance: Index and Search, File Analysis, Audits, Retention, RegTech

Orchestration: Hitachi Data Instance Director

Technologies:
• Recovery: Snapshots, Remote replication, Active-active cluster, Backup and CDP
• Agile: Local replication, Remote replication, Mount, Data masking
• Governance: Object storage, Cloud storage, Content indexing, Data analytics


Data Instance Director provides a modern, holistic approach to data protection, recovery and
retention. It has a unique workflow-based policy engine, presented in an easy-to-use
whiteboard-style user interface that helps map the copy data management processes to
business priorities. HDID includes a wide range of fully integrated storage-based and host-
based incremental-forever data capture capabilities that can be combined into complex
workflows to automate and simplify copy data management.


Storage Configurations: Block Storage


Management
Server
HDID
(Master Node)
Proxy Server
(Primary) Production Servers

HDID HDID
(Repository) (Source Node)
Oracle / SQL /
CCI
Command device Exchange

• Mapped to proxy server (Repository)


CMD
Primary volumes (PVol) PVols

• Mapped to the production server


DP Pool (SVol) SVols
• Local for ShadowImage
Thin Image pool
• Required for Snapshots Primary storage

Needs to be prepared by user Automatically created by HDID


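The storage-side objects the user prepares can also be created from the command line with CCI's raidcom tool. The sketch below only composes an example command string; the pool ID, LDEV ID, capacity and instance number are placeholders, and the exact raidcom options should be verified against the CCI reference for your environment:

```python
# Hypothetical sketch: composing (not executing) a raidcom command of the
# kind used to prepare storage objects for HDID. All values are placeholders.
def raidcom_add_ldev(pool_id, ldev_id, capacity, instance=0):
    """Compose a `raidcom add ldev` command string for a DP volume."""
    return (f"raidcom add ldev -pool {pool_id} "
            f"-ldev_id {ldev_id} -capacity {capacity} -I{instance}")

cmd = raidcom_add_ldev(pool_id=0, ldev_id=4096, capacity="10g")
print(cmd)
```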

Operational Recovery

 HDID harnesses a wide range of technologies and capabilities to effect


operational recovery

 These capabilities are divided into two areas:


• Volume-level (Storage-based)
• File-level (Host-based)



Storage-Based Operational Recovery

 Snapshots
• Hitachi Thin Image (block); file replication (file)
• Frequent, space-efficient, near-instant recovery

 Clones
• Hitachi ShadowImage (block); directory clone (file)
• Full copy, use for repurposing

 Application-consistent, nondisruptive
• Microsoft Exchange and SQL Server
• Oracle Database, SAP HANA platform
• VMware vSphere
• Others through scripting

Storage-based Protection:
• Local Replication Solutions: Hitachi ShadowImage for full volume clones of business data with consistency; Hitachi Thin Image for point-in-time virtual volumes of data with consistency
• Remote Replication Solutions: Hitachi TrueCopy (synchronous, consistent clones at a remote location up to 300km (~180 miles)); Hitachi Universal Replicator (HUR) (heterogeneous, asynchronous, journal vs. cache-based, pull vs. push, resilient at any distance)

Storage-based operational recovery leverages the snapshot and clone technologies available in
Hitachi storage systems.

• Thin Image snapshots (block) and file replication (file) require very little space and can
run far more frequently than traditional backup to improve recovery point objectives

• ShadowImage (block) and directory clone (file) create a full copy that can be used for
repurposing, such as for test/dev, secondary backup, and so on

• HDID integrates both snapshots and clones with supported applications, creating
nondisruptive, application-consistent point-in-time copies. HDID also includes a scripting
interface to allow the quiescing of other application environments


Host-Based Operational Recovery

 Continuous data protection (CDP) and software snapshots


• For critical Microsoft® environments Production
Servers
• Captures every block-level change as it is written
• Drives backup window and RPO to near zero

 Live backup
Hitachi Data
• Captures every change (CDP) then creates a point-in-time Instance
application-consistent software snapshot Director
Server
• Integrated with Microsoft Volume Shadow Copy Service (VSS)

 Batch backup
• Incremental-forever capture with full restore to any backup set
• For IBM® AIX®, Linux, Oracle Solaris and Microsoft Windows file systems

Host-based operational recovery captures and copies data from the application or file server
being protected. Host-based capabilities include:

• Continuous data protection (CDP) automatically saves a copy of every change made to
data, essentially capturing every version of the data that the user saves. It allows the
user or administrator to restore data to any point in time. CDP runs as a service that
captures changes to data to a separate storage location. It is best suited for highly
critical applications and data sets that do not include a built-in journaling or transaction
logging feature

• Live backup creates an application-consistent, point-in-time snapshot, using the


software snapshot technologies present in Microsoft Windows and VMware vSphere.
HDID uses true CDP in combination with VSS-enabled software snapshots to create
application-consistent recovery points. For less critical data, HDID can also run
traditional scheduled backup operations; these operations place a load on the
application server as HDID scans the directory for changed files (incremental backup)
but provides the ability to perform a full restore of any backup point-in-time without
incremental restore steps. HDID does not (yet) support CDP or Live Backup on Linux or
UNIX platforms.

• Batch backup is like traditional backup; however HDID provides an incremental-forever


data capture methodology that greatly reduces backup storage capacity requirements

• Bare metal restore allows the restoration of an entire server, including the operating
system and applications, from a single backup copy


Active Learning Exercise: Group Discussion

Topic: Identify customer challenges with replication management.


Module Summary

 In this module, you should have learned to:


• Describe the function and uses of Hitachi Ops Center
• Describe the licensing model for Hitachi Ops Center

1b. Hitachi Ops Center Deployment and
Installation - 2
Module Objectives

 When you complete this module, you should be able to:


• Describe the steps required for Hitachi Ops Center installation
• Describe the different deployment options




Deployment Options
This section describes the available deployment options.

Virtual Appliance or Installer

 Integrated Virtual Appliance: When building a system with open virtual


appliance (OVA), it is easiest to:
• Use the Ops Center OVA
• Install the main products and then add the Analyzer Probe OVA

 Integrated Installer Media: When using an installer, you use the


installer specific to the product
• If you want to use the single sign-on functionality, you must also install the
Common Services


Deployment Considerations
(Flowchart summary: Ops Center Deployment)
• New customer or new deployment?
• Yes, using the OVA and leveraging Ops Center Common Services: install Common Services and the products using the Ops Center OVA (physical or VM), then deploy the OVAs to an ESX server; repeat for any additional products on separate hosts
• Yes, but using RHEL or SEL instead of OEL: obtain the installer media and install Common Services on a Linux VM or an existing Linux server (the 10.0 release only supports Linux)
• No (upgrade): obtain the installer media and use the individual product installers to perform the install/upgrade
• Available media: Ops Center OVA (Linux) / ISO; Analyzer viewpoint OVA (Linux) / ISO; Analyzer Probe OVA (Linux) / ISO; Administrator OVA (Linux) / ISO

Notes:
• OVAs are for first-time NEW installs only
• Oracle Enterprise Linux is the OS used by the OVA; customers need to get support from Oracle directly
• Future upgrades must be performed by copying the installer(s) to the VM(s) or server(s)
• Existing installations retain their existing login when upgrading to Ops Center; existing users will use “legacy login” to the individual products
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 1b-2
Ops Center Deployment Options

Pre-configured media:
1. Ops Center Integrated Management Server (OEL)
• Analyzer + Detail View, Automator, HDID Master, API Configuration Manager, Common Services
2. Administrator (CentOS)
3. Analyzer viewpoint (OEL)
4. Analyzer Probe (OEL)
5. HDID ISM

Installation media:
1. Ops Center Management Server Installer ISOs (Windows and Linux)
• Analyzer and Detail View (Linux only), Automator, HDID Master + Client, API Configuration Manager, Common Services (Linux only for the October release)
2. Administrator Installer ISO
3. Analyzer Probe Installer ISOs (Windows and Linux)
4. Configuration Manager Installer ZIP (Windows and Linux)

Ops Center Download Options

 Hitachi Ops Center download options are:
1. Preconfigured Media
2. Installation Media (Linux)
3. Installation Media (Windows)

Page 1b-3
Ops Center Preconfigured Media

 Ops Center Preconfigured Media is a set of preconfigured media (known as VMA or OVA, built on OEL), recommended for new Ops Center deployments:

Preconfigured Media: Usage
• Administrator: standalone Administrator
• Analyzer: standalone Analyzer
• Analyzer Probe: Analyzer Probes for storage, OS and switch
• Analyzer Viewpoint: standalone Viewpoint
• Management Software: up to 10 arrays; includes Automator, Analyzer, HDID, API-CM, Common Services

Ops Center Installation Media (Linux)

 Ops Center Installation Media for Linux is a set of media that includes installers to be used for new installs or upgrades:

Installation Media: Usage
• Administrator: install/upgrade Administrator
• Analyzer Probe: install/upgrade Analyzer Probes for storage, OS and switch
• Analyzer Detail View Add-On Package: third-party probes and management
• API Configuration Management: install/upgrade API-CM
• Common Services: install/upgrade Common Services
• Management Software: install/upgrade Automator, Analyzer, HDID Master, HDID Client, API-CM, Common Services

Page 1b-4
Ops Center Installation Media (Windows)

 Ops Center Installation Media for Windows is a set of media that includes installers to be used for new installs or upgrades:

Installation Media: Usage
• API Configuration Management: install/upgrade API-CM
• Management Software: install/upgrade Automator, HDID Master, HDID Client, API-CM

Single Common Service for Multiple OVAs

 To use the functions provided by Common Services, register each product in one instance of Common Services.

 Products may be distributed across different OVAs and data centers (DC) for scalability or HA reasons.

The figure compares a simple deployment (a single Administrator OVA) with an enhanced deployment (Administrator OVAs distributed across sites); registration of the individual products is required in both cases.

The diagram shows an example system configuration in which the Hitachi Ops Center product
runs on one management server.

Page 1b-5
Activate / Deactivate Products

Purpose: Provide flexible OVA configuration in terms of product combinations; deactivating products also reduces the OVA configuration size.
Function: Customers can deactivate/activate each product in the consolidated OVA.
Proposal: Provide a command to deactivate/activate registered services.
Example: opsvmservicectl disable Automator Analyzer

The figure shows an Ops Center OVA before and after "disable Automator": Common Service, Analyzer, Analyzer detail view, HDID and API keep running while Automator is stopped, making it possible to reduce the number of CPU cores and the physical memory size.

1. disable/enable

/opt/OpsVM/vmtool/opsvmservicectl disable|enable product1 [product2 …]

Deactivates or activates the specified products. "disable" does the following:
1. Makes the specified product not start when the OS boots.
2. Stops the specified product's service.
"enable" does the opposite of "disable".

Example: The following command deactivates Automator and Analyzer:
opsvmservicectl disable Automator Analyzer

Table: Available product options
# | Product | Product Option
1 | Hitachi Ops Center Automator | Automator
2 | Hitachi Ops Center Analyzer | Analyzer
3 | Hitachi Ops Center Analyzer detail view | Analyzerdetailview
4 | API Configuration Manager | APIConfigurationManager
5 | Hitachi Data Instance Director | HDID
6 | Hitachi Ops Center Common Services | CommonServices

2. status

/opt/OpsVM/vmtool/opsvmservicectl status

"status" shows the status of all installed products:
1. disabled/enabled
2. whether the product's service is running or stopped

Example: The following command shows the status of the installed products:
opsvmservicectl status
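The command reference above can be condensed into a small dry-run script. Everything here comes from the slide except the echo-based dry run itself; the tool only exists on an Ops Center OVA, so the script only assembles and prints the invocations.

```shell
# Dry-run sketch of the opsvmservicectl usage described above. Remove the
# echo lines and run as root on the OVA to execute the commands for real.
TOOL=/opt/OpsVM/vmtool/opsvmservicectl

DISABLE_CMD="$TOOL disable Automator Analyzer"   # deactivate two products
STATUS_CMD="$TOOL status"                        # list all products' status

echo "$DISABLE_CMD"
echo "$STATUS_CMD"
```

After running the real commands, "status" should report Automator as disabled and its service as stopped.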

Page 1b-6
Analyzer OVA Specification

 Analyzer and Analyzer detail view ship in a separate OVA for enterprise customers who need to scale their Analyzer systems.

The figure compares the v10.0.1 and v10.1.0 (2M) OVA layouts (Ops Center OVA, Administrator OVA, Viewpoint OVA and Probe OVA); in v10.1.0 the Analyzer detail view moves out of the Ops Center OVA into its own Analyzer OVA.

System requirements of the VM (identical for the Ops Center OVA and the Analyzer OVA):
• CPU: 16 cores
• Memory: 48 GiB
• Disk size: 900 GiB
• OVA file size: 13 GiB

Active Learning Exercise: What Do You Think

In which scenarios would you recommend preconfigured media to your customers, and in which an individual installer?

What is a Workflow?

Page 1b-7
Ops Center Deployment Overview

In this section, you will learn about Ops Center deployment.

Ops Center Deployment

1. Review latest “Hitachi Ops Center Installation and Configuration


Guide” https://support.hitachivantara.com

2. Choose installation method (preconfigured or integrated media)

3. Check system requirements and prerequisites


a. Installation and configuration guide
b. Hitachi Ops Center Sizing Tool (https://ocst.hitachivantara.com)


c. Hitachi Ops Center Sizing tool example


Page 1b-8
Ops Center Deployment

9. Complete your setup in Ops Center portal


• Apply licenses
• Add storage systems/Data centers
• Configure user/AD settings

10. Install optional components


• Install/Deploy Administrator and register in Common Services*
• Install/Deploy Analyzer Viewpoint and register in Common Services
*If not included in the Ops Center OVA (Ops Center 10.0.x/10.1.x)


Page 1b-9
Ops Center Deployment Overview

Note: Virtual Appliances (pre-configured OVAs).

• *1. Under feasibility study as to when it can be provided
• *2. The HDID binary is included in the OVA, but HDID needs to be installed after deploying on the 9E version

Ops Center Deployment Details


This section explains Ops Center deployment in detail.

System Configuration (For Servers)

 The following table shows which OVA/installer to use for each scenario:

# | Situation | Configuration | Description
1 | New installation | Ops Center OVA (Common Service, Automator, Analyzer, Analyzer detail view, HDID, API) | For a first-time new installation; quick start for using Ops Center; configured by the integrated media (OVA)
2 | New installation (more than one OVA) | Two Ops Center OVAs plus a Viewpoint OVA, each with its own Common Service | Multiple Ops Center products; configured by deploying each OVA; one of the Common Services becomes the master
3 | Upgrade to Ops Center | Install by installer (Common Service, Automator, Analyzer) | For an existing customer's configuration; upgrade Automator/Analyzer by installer

Page 1b-10
Configuration #1: New Installation

 Quick start to use Ops Center for a first-time new installation (scenario #1, Ops Center OVA):

(1) Download the "Ops Center OVA(Lin)"
(2) Deploy Ops Center by using the "Ops Center OVA(Lin)"
(3) Log in to Ops Center CS and apply the license
(4) Configure users, user groups and access control

Configuration #1: New Installation (2) Deploy Ops Center by "Ops Center OVA(Lin)" (1/2)

 Deploy and configure the Ops Center Management Server OVA to ESX:

1. Mount the media and get the OVA file
2. Select "Deploy OVF template" in the vSphere Client and deploy the OVA file in the wizard screen (any tool can be used to deploy an OVA file)
3. Power on the deployed VM
4. Log in as root and change the password (operate from the VMware console, because this is before the network setup)
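The deployment step can also be done from the command line with VMware's ovftool instead of the vSphere Client wizard. This is a hedged sketch: the OVA file name, ESX host, datastore and network are placeholders, not values from the guide.

```shell
# Hedged sketch (dry run): deploying the Ops Center OVA with VMware's ovftool.
# The OVA file name, ESX host, datastore and network are placeholders.
OVA=OpsCenter.ova
TARGET=vi://root@esx-host.example.com

CMD="ovftool --name=opscenter --datastore=datastore1 --network='VM Network' --powerOn $OVA $TARGET"

# Remove the echo to actually run the deployment (ovftool prompts for the
# ESX password).
echo "$CMD"
```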

Page 1b-11
Configuration #1: New Installation (2) Deploy Ops Center by "Ops Center OVA(Lin)" (2/2)

 Deploy and configure the Ops Center Management Server OVA to ESX (continued):

1. Execute the "opsvmsetup" command
2. Set the IP address and subnet mask
3. (Optional) Set the host name and default gateway
4. (Optional) Set the DNS server, time zone and NTP server
5. Confirm and apply the settings; the OS restarts automatically
6. (Optional) Set up SSL for HDCA

Because the network is now configured, you can operate from any tool with SSH (for example, Tera Term).

Configuration #1: New Installation (3) Login to Ops Center CS and Apply License

 Log in to the Ops Center CS portal and apply the licenses for Automator and Analyzer.

Page 1b-12
System Configuration (For Servers)

Refer to the scenario table shown earlier (page 1b-10); scenario #2, a new installation with more than one OVA, is described next.

Configuration #2: New Installation (more than one OVA)

 Use multiple Ops Center products and register them in one Common Service (scenario #2):

(1) Download the "Ops Center OVA(Lin)" and the "Analyzer viewpoint OVA(Lin)"
(2) Deploy two Ops Centers by using the "Ops Center OVA(Lin)" and one Viewpoint by using the "Analyzer viewpoint OVA(Lin)"
(3) Register the products to Ops Center CS
(4) Log in to Ops Center CS and apply licenses for the two Analyzer instances and the one Viewpoint instance
(5) Configure users, user groups and access control for all instances

Page 1b-13
Configuration #2: New Installation (more than one OVA) (3) Register Each Product to Common Service

• Execute the "setupcommonservice" command to register each product to CS.

Before registration, no products are shown in the launcher; after registration, all products (Automator, Analyzer, HDID) appear in the launcher, and users then need to apply licenses.
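The registration step can be sketched as a dry run. The path below matches the Automator CLI directory named later in this guide, but the exact location and options of setupcommonservice differ per product, so treat this as an assumption and check each product's documentation.

```shell
# Hedged sketch (dry run): registering one product (Automator) with Common
# Services. The path is the Automator CLI directory named later in this
# guide; the options that setupcommonservice accepts are product-specific,
# so none are shown here.
CMD=/opt/hitachi/Automation/bin/setupcommonservice

# Remove the echo and run as root on the product host to perform the
# registration; repeat once per product.
echo "$CMD"
```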

System Configuration (For Servers)

Refer to the scenario table shown earlier (page 1b-10); scenario #3, the upgrade case, is described next.

The last scenario is the upgrade case for existing customers. The procedure is to install the Ops Center products and register each product to Common Services.

Page 1b-14
Configuration #3: Upgrade to Ops Center

 Upgrade by installer, for an existing customer's configuration (scenario #3):

(1) Download the "Integrated media(Lin)"
(2) Upgrade Automator/Analyzer by installer
(3) Install Ops Center CS by installer
(4) Register Automator/Analyzer to Ops Center CS
(5) Configure users, user groups and access control

Configuration #3: Upgrade to Ops Center (2) Upgrade Automator/Analyzer by Installer

 For an upgrade, install by installer and register with CS from each product:

1. Upgrade HAD v8.6.5 to Automator by installer
2. Register to CS by executing the "setupcommonservice" command
3. Apply the license in the CS GUI

Page 1b-15
Administrator SSO

 Procedure to connect Administrator with CS for SSO:

# | Procedure | Documentation
0 | Set up CS in advance and check the credentials for CS | Hitachi Ops Center Installation and Configuration Guide
1 | Enter the credentials (for example, account and IP address) for connecting to CS during the Administrator installation |
2 | Create user groups in CS and map them to the Administrator user roles (for example, SecurityAdministrator, StorageAdministrator) | Hitachi Ops Center Administrator Getting Started Guide
3 | Log in to Administrator from the CS launcher with the mapped user roles |

Page 1b-16
Administrator SSO

 Create user groups in CS

You need to create the user groups on CS.

 Map the created UGs on CS to the Administrator user roles

The screens show the user roles and the UG list on CS; you need to map the UGs on CS to the Administrator user roles.

Page 1b-17
Administrator SSO

 Users can launch Administrator with the specified user role from CS.

The screens show the CS launcher and the Administrator login: if Administrator is set up to cooperate with CS, users can log in to Administrator via CS.

Page 1b-18
Hitachi Ops Center Upgrade Scenarios


In this section we will discuss various Hitachi Ops Center upgrade scenarios.

Ops Center Upgrade Scenarios

Product | New Customer/Deployment | Existing Installation (no need for data migration)
HSA > Administrator | Administrator OVA (separate VM) | "Forklift to Administrator 10.0.0" to move from CentOS to OEL (in-place upgrade afterward)
HAD > Automator | Ops Center Integrated Server OVA | Upgrade with installer; uninstall existing HCS if no longer in use
HPA > Analyzer | Ops Center Integrated Server OVA, Analyzer Probe OVA | Upgrade with installer
HDID | Ops Center Integrated Server OVA | Upgrade with installer
CMREST > API Configuration Manager | Ops Center Integrated Server OVA | Upgrade with installer
Analyzer viewpoint | Analyzer Viewpoint OVA | N/A (new product)
Common Services | Ops Center Server OVA or Viewpoint OVA (pick one to be the master) | Install onto an existing Linux server or a new Linux server (Windows support in Mar/2020)
HCS Product > Ops Center | N/A | Deploy the OVA for the respective products (HDvM > Administrator, HTnM > Analyzer, HRpM > HDID)

 If I have HAD in an HCS OVA currently, how do I upgrade to Ops Center?

• Upgrade HAD to Automator 10.0.0 via installer (see the existing-user migration options)
• If Ops Center Common Services is not installed, install Common Services on an existing or new Linux server (Windows version coming 3/2020)
• Use the command line to register Automator with the Common Services server
• Add new users using SSO and map them to Automator roles
• If other HCS components are no longer in use, uninstall them from the VM

Page 1b-19

 If I have existing products on different servers/VMs, should I migrate them to the "Uber OVA"?
• No, there is no advantage to merging them into a single OVA
• Upgrade each product individually
• If the customer wants to consolidate many of the existing servers, model them with the Ops Center Sizing tool and work with the service team on a migration plan

 Ops Center Administrator (HSA) 10.0.0 supports Oracle Enterprise Linux instead of CentOS
• For existing customers on CentOS, it is recommended that they do one more "forklift upgrade" to 10.0.0 and then use in-place upgrades with OEL in the future

Module Summary

 In this module, you should have learned to:


• Describe the steps required for Hitachi Ops Center installation
• Describe the different deployment options


Page 1b-20
Appendix

It's time to explore a few topics in detail.

Hitachi Ops Center Backup and Restore Overview


In this section you will learn about Hitachi Ops Center backup and restore.

Hitachi Ops Center Backup and Restore

 There is no tool available to back up and restore all Ops Center components at once.

 Individual backup and restore procedures are required:
• Common Services: run the csbackup/csrestore commands
• Administrator: use the VAM tool
• Analyzer: follow the procedure from the Hitachi Ops Center Installation and Configuration Guide
• Automator: run the backupsystem/restoresystem commands
• Data Instance Director: no procedure yet

 Backup and restore can also be done at the host/VM level.

Page 1b-21
Hitachi Ops Center Common Services

 To back up or restore data in Common Services, run the csbackup/csrestore command (as the "root" user!)

 Location:
• installation-directory-of-Common-Services/utility/bin/csbackup.sh
• installation-directory-of-Common-Services/utility/bin/csrestore.sh
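Putting the locations above together, a minimal dry-run sketch looks like this. Only the relative utility/bin paths and the root requirement come from the guide; the install directory is a placeholder you must replace with your own.

```shell
# Hedged sketch (dry run): Common Services backup/restore scripts. The
# install directory is a placeholder; only the relative utility/bin paths
# come from the guide.
CS_DIR=/opt/hitachi/CommonService    # placeholder install directory

BACKUP_CMD="$CS_DIR/utility/bin/csbackup.sh"
RESTORE_CMD="$CS_DIR/utility/bin/csrestore.sh"

# Remove the echos and run as root to actually back up / restore.
echo "$BACKUP_CMD"
echo "$RESTORE_CMD"
```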

Hitachi Ops Center Administrator

 Backup and restore for Administrator can be done with the Virtual Appliance Manager (VAM).

 To access the Virtual Appliance Manager, open a browser and enter: https://ip-address/vam/

• Use Backup to download a tar.gz file
• Restore: either drag-and-drop the backup file or click the plus sign (+)

Page 1b-22
Administrator VAM Tool: Backup

• The following system elements are preserved as part of the backup:

o Element inventories: Storage Systems, Servers, and Fabric Switch inventories are
preserved
o SNMP Managers: Locations for forwarding SNMP traps are preserved
o Jobs: All jobs on the system are preserved
o Alerts: Monitoring alerts are preserved
o Tier Names: Tier names for HDT pools are preserved
o Security information: Local usernames and passwords, as well as integrated
Active Directories are preserved
o Replication groups: All copy groups and replication groups and their associated
snapshot schedules are preserved
o Virtual Appliance Manager settings: Connected NTP servers, log level settings,
SSL certificate and service settings are preserved. Host settings are not
preserved
o Migration tasks: All migration tasks and their associated migration pairs are
preserved

Administrator VAM Tool: Backup

 Backup to save settings from VAM setup and Administrator configuration

 A tar.gz file is generated:


Page 1b-23
Administrator VAM Tool: Restore

 Click RESTORE and drag-and-drop the backup package


Analyzer Backup Restore Procedure

 You can back up the following four components of the Ops Center Analyzer system so that they can be restored later, for example if a failure occurs that causes your system to go down:
• Analyzer server
• Analyzer detail view server
• Analyzer probe server
• RAID Agent

 The detailed procedures are described in the documentation: there is a Backup and Restore chapter in the Hitachi Ops Center Installation and Configuration Guide.

Page 1b-24
Automator – CLI commands

 Ops Center Automator provides CLI commands for backup and restore of the database and system information.

 When running on a Linux OS, navigate to:
/opt/hitachi/Automation/bin

 When running on a Windows-based OS, navigate to:
<systemdrive>\Program Files\hicommand\Automation\bin

Automator – backupsystem

 The backupsystem command backs up the system configuration and database information to the specified directory

 where:
• /dir is an absolute or relative directory path that will contain the backup data
• /auto directs the Ops Center Automator, Common Component services and database to start and stop automatically

The Admin role is not required to run this command.
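Combining the Linux path and the /dir and /auto options above gives the following dry-run sketch, which also shows the matching restoresystem command described next; the backup directory is a placeholder.

```shell
# Hedged sketch (dry run): Automator backupsystem/restoresystem invocations
# using the Linux path and the /dir and /auto options described in this
# guide. The backup directory is a placeholder.
BIN=/opt/hitachi/Automation/bin
DIR=/var/tmp/automator_backup        # placeholder backup directory

BACKUP_CMD="$BIN/backupsystem /dir $DIR /auto"
RESTORE_CMD="$BIN/restoresystem /dir $DIR /auto"

# Remove the echos to actually run the backup and, later, the restore.
echo "$BACKUP_CMD"
echo "$RESTORE_CMD"
```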

Page 1b-25
Automator – restoresystem

 The restoresystem command restores the system configuration and database information from the specified directory where the data was backed up

 where:
• /dir is an absolute or relative directory path that contains data that was backed up by the backupsystem command
• /auto directs the Ops Center Automator, Common Component services and database to start and stop automatically

The Admin role is not required to run this command.

Note: Before restoring Ops Center Automator, confirm that the following conditions are the
same for the backup source Ops Center Automator server host and the restore destination Ops
Center Automator server host:

• Types, versions, and revisions for the installed Common Component products

• Installation location for each product using Common Component, Common Component,
the Common Component product database, and Common Component database

• The IP address and host name of the machines

• System locale and character code

If the above conditions are not the same, Ops Center Automator cannot be restored.

Page 1b-26
2. VSP 5000 Series Models
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the Controller Box (CBX) Components
• Review the Hitachi Virtual Storage Platform 5000 Series models and upgrade
path
• Review the VSP 5x00 Max Hardware Configurations
• Follow the VSP 5100 Block diagram
• Explain the Multi-node scale-out diagram
• Review the VSP 5000 Series Portfolio Positioning


Page 2-1
VSP 5000 Series Models
Controller Box (CBX) Components

Parts | Explanation
Controller Board (CTL) | Consists of CPU, DIMM and GUM
Channel Board (CHB) | Front-end I/O module (FC / iSCSI / FICON)
Disk Board (DKB) | Back-end I/O module:
• DKB: non-encryption DKB for the SAS protocol (SBX, UBX)
• EDKB: encryption DKB for the SAS protocol (SBX, UBX)
• DKBN: non-encryption DKB for the NVMe protocol (NBX)
Hitachi Interconnect Edge (HIE) | Inter-CTL I/O module (PCIe)
Power Supply Unit (PSU) | Power supply unit
LAN Board (LANB) | Consists of the user management LAN port and the maintenance LAN port
Cache Flash Memory (CFM) | Flash memory to back up the DIMM data in the case of an electric power failure
Backup Module and Battery (BKMF) | Cache backup battery (1 per BKMF)
FAN | Fan unit (1 per BKMF)
DIMM | Used for LM/PM/SM/CM

The figure shows the CTL layout (175 mm x 446.3 mm): CPUs, DIMMs, BKMF with battery and fan, PSUs, and the I/O modules (CHB, HIE, DKB).

VSP 5000 Series Offering

The VSP 5000 series includes 5x00H (Hybrid) models, which support a mix of flash and HDD drives.

The figure shows the nondisruptive upgrade path through the models, with the DKCs and CTLs added at each step:
• VSP 5100: DKC-0 and DKC-1 (CTL01, CTL12)
• VSP 5500 2N: DKC-0 and DKC-1 (CTL01/CTL02, CTL11/CTL12)
• VSP 5500 4N: adds DKC-2 and DKC-3 (CTL21/CTL22, CTL31/CTL32)
• VSP 5500 6N: adds DKC-4 and DKC-5 (CTL41/CTL42, CTL51/CTL52)

• Nondisruptive upgrade through the VSP 5XXX models

Page 2-2

The figure repeats the upgrade path in terms of nodes and controllers:
• VSP 5100: Node 0 (Controller 1) and Node 1 (Controller 4)
• VSP 5500 2N: Node 0 (Controllers 1, 2) and Node 1 (Controllers 3, 4)
• VSP 5500 4N: adds Node 2 (Controllers 5, 6) and Node 3 (Controllers 7, 8)
• VSP 5500 6N: adds two more nodes (Controllers 9-12)

Refer to the upgrade slide for upgrade rules and restrictions.

• Nondisruptive upgrade through the VSP 5XXX models

Controller Box (CBX)

The figure shows a controller box (CBX, or DKC-x) containing two controllers (CTLx1 and CTLx2).

Page 2-3
VSP 5000 Series Offering

Correspondence between DKC Number and Location Name


• For VSP 5500 and VSP 5500H, four CTLs are installed in two DKCs (two CTLs in each
DKC)
• For VSP 5100 and VSP 5100H, two CTLs are installed in two DKCs (one CTL in each
DKC). The locations of CTLs are CTL01 in DKC-0 and CTL12 in DKC-1

VSP 5x00 Max Hardware Configs

Maximum configuration for the VSP 5x00 by model and number of controller blocks (CBs). The quantities assume all CBs use the same type of port, backend and media chassis; intermix rules are listed below. Columns: VSP 5100 (10U, 2 controllers) / VSP 5500 1 CB (10U, 4 controllers) / VSP 5500 2 CBs (18U, 8 controllers) / VSP 5500 3 CBs (26U, 12 controllers).

• CPU cores and memory: 40c, 1 TiB (0.5 TiB MF only) / 80c, 2 TiB / 160c, 4 TiB / 240c, 6 TiB
• Frontend optical I/O ports (types can intermix within a CB):
  o FC 32G/16G SFP, 8-port increments: 32 / 64 / 128 / 192 (FC ports are NVMe-oF ready for a software upgrade in 2HCY20)
  o FICON 16G SFP, 8-port increments: 32 / 64 / 128 / 192
  o iSCSI 10G SFP, 4-port increments: 16 / 32 / 64 / 96
• Backend I/O ports (PCIe Gen3 x4-lane NVMe ports or 12G SAS 4-wide-lane SAS ports): 8 / 16 / 32 / 48
• Global spare drives (8 per media chassis): 64 / 64 / 128 / 192
• SFF NVMe {post-GA ETA}; capacities: NAND flash 1.9, 3.8, 7.6, 15.3 TB, 30.6 TB {TBD}; SCM flash 3.75 TB {TBD}:
  o 8U 96-slot chassis: 1 / 1 / 2 / 3; drives with max chassis: 96 / 96 / 192 / 288
• SFF SAS; capacities: NAND flash 960 GB, 1.9, 3.8, 7.6, 15.3, 30.6 TB; 10K HDD 2.4 TB:
  o 8U 96-slot chassis: 8 / 8 / 16 / 24; drives with max chassis: 768 / 768 / 1536 / 2304
• LFF SAS; capacities: 7.2K NL-SAS HDD 14 TB:
  o 16U 96-slot chassis: 4 / 4 / 8 / 12; drives with max chassis: 384 / 384 / 768 / 1152
• FMD SAS; capacities: NAND flash 7, 14 TB:
  o 8U 48-slot chassis: 4 / 4 / 8 / 12; drives with max chassis: 192 / 192 / 384 / 576
• Parity: 2D+2D, 3D+1P, 7D+1P, 6D+2P, 14D+2P

Intermix rules: each CB can be diskless, all NVMe, or all SAS, and different CB types can intermix in a system in any combination. SFF, LFF and FMD chassis can intermix within a controller block. Each SAS CB can have up to 8 media chassis; the first chassis per CB must be SFF or FMD. Each SAS CB can have up to 4 FMD and/or up to 4 LFF chassis; the rest must be SFF.

Page 2-4

• NAND flash memory is a type of nonvolatile storage technology that does not require power to retain data.

• Flash memory is a kind of electrically erasable programmable read-only memory (EEPROM). In a NAND flash memory, the memory cells are connected in series. The data is recorded in a transistor called the floating gate; another transistor, the control gate, controls the charge flow from the source to the drain.

• A flash solid-state drive (SSD) is a nonvolatile storage device that stores persistent data in flash memory. There are two types of flash memory, NAND and NOR. NAND has significantly higher storage capacity than NOR; NOR flash is faster, but it is also more expensive.

CPU and GUM Specs

CPU specifications (VSP G1000/G1500 / VSP Gxxx / VSP 5000):
• CPU type: Haswell / Broadwell / Broadwell
• Cores per unit: 8 cores per MPK / 20 cores per CTL / 20 cores per CTL
• Minimum cores: 16 (8 x 2 MPK) / 40 (20 x 2 CTL) / 80 (20 x 4 CTL)
• Maximum cores: 128 (8 x 16 MPK) / 40 (20 x 2 CTL) / 240 (20 x 12 CTL)
• Frequency: 2.3 GHz / 2.2 GHz / 2.2 GHz

GUM specifications (VSP Gxxx and VSP 5000 Series; N/A on VSP G1000/G1500):
• Processor: Pilot4 (ARM9 500 MHz x 2 cores)
• Memory capacity: 1 GB (DDR4-1600)
• ROM capacity: SPI flash 128 MB, eMMC 8 GB
• OS: AMI MegaRACK RR11.6

When comparing the CPU of the VSP Gxxx and VSP 5000, take note of the Core number and
Maximum Core number.

The VSP Gxxx unit has 2 controllers, but the VSP 5000 Series unit has 4 controllers, and a fully
expanded VSP 5000 Series will have 12 controllers.

Page 2-5
VSP 5000 Series Models
VSP 5100 Block Diagram

VSP 5100 Block Diagram

VSP 5100 Config vs. VSP G1500 Basic Config (1 VSD)

2-node configuration, but only one CTL is installed in each node; one of the CTL
slots is left empty. The CM/SM data is duplicated across the nodes (clusters) over
redundant paths.

<Point of Development> The "cover" for empty CTL slots needed to be developed
(safety standards).

© Hitachi Vantara LLC 2020. All Rights Reserved.

• Single CTL per CBX with VSP 5100

VSP 5500 - Multi-node Scale-out


• Node-pairs can be SAS or NVMe back-end (the diagram shows a 4-node, all-SAS
  configuration)
• Heterogeneous node-pairs: SAS or NVMe node-pairs can be intermixed
• SAS back-end nodes can have SSD or HDD; NVMe back-end nodes can have only SSD

Basic concept - supported node combinations:

SAS Nodes   NVMe Nodes   Total Nodes
2           0            2
0           2            2
2           2            4
4           0            4
0           4            4
6           0            6
0           6            6
4           2            6
2           4            6

Scale out from 2 to 4 or 6 nodes in pairs and still manage as a single image (1 array).

© Hitachi Vantara LLC 2020. All Rights Reserved.

• DBS2 must be the first drive box in a VSP 5000 configuration; more information in
the next module

Page 2-6
VSP 5000 Series Models
VSP 5000 Portfolio Positioning

VSP 5000 Portfolio Positioning

Models              Nodes/Controllers   Media Supported   VSP G/F1500 Target Based
VSP 5000   5100     2N/2Ctr             SSD/HDD/FMD       1-3 VSD
Series     5500     2N/4Ctr             SSD/HDD/FMD       3-6 VSD
           5500     4N/8Ctr             SSD/HDD/FMD       7-8 VSD
           5500     6N/12Ctr            SSD/HDD/FMD
© Hitachi Vantara LLC 2020. All Rights Reserved.

• The table compares the VSP 5x00 models with the VSP G/F1500 configured with a
single or multiple pairs of VSDs (x 2 MPBs)

Active Learning Exercise: What Do You Think?

Identify the correct specification category (CPU or GUM).

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 2-7
VSP 5000 Series Models
VSP E990

VSP E990
In this section you will learn about VSP E990 hardware specifications.

VSP E990 HW Specifications

Controller       4U, dual controller, 56 cores, 1,024 GiB cache
Drives           0-96 SFF NVMe: 1.9/3.8/7.6/15.3 TB SSD; 30.6 TB SSD and 375 GB SCM (GA TBD)
Drive Box (DB)   2U, 24 drives; 0-4 DBs (0 = diskless)
Backend          Dual-port DKBN (non-encryption) or EDKBN (encryption-ready, license
NVMe boards      optional): 0 (diskless), 4 (1-2 drive boxes), 8 (3-4 drive boxes)

Ports            0 drive boxes   16/32G FC                    64 (80*)
                 (diskless)      10G iSCSI (optical/copper)   32 (40*)
                 1-2 drive       16/32G FC                    48 (64*)
                 boxes           10G iSCSI (optical/copper)   24 (32*)
                 3-4 drive       16/32G FC                    32 (48*)
                 boxes           10G iSCSI (optical/copper)   16 (24*)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Drive Boxes

 Drive Box (DBN): min. 0, max. 4, each with 24 SFF NVMe drives

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 2-8
VSP 5000 Series Models
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe the Controller Box (CBX) Components
• Review the Hitachi Virtual Storage Platform 5000 Series models and upgrade
path
• Review the VSP 5x00 Max Hardware Configurations
• Follow the VSP 5100 Block diagram
• Explain the Multi-node scale-out diagram
• Review the VSP 5000 Series Portfolio Positioning

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 2-9
VSP 5000 Series Models
Module Summary

This page is left blank intentionally.

Page 2-10
3a. VSP 5000 Series Architecture and
Availability - 1
Module Objectives

 Upon completion of this module, you should be able to:


• Explain Hitachi Virtual Storage Platform 5000 series architecture
• Review major technical differences between VSP 5000 series and VSP
G1500 / VSP F1500
• Describe differences between 5100 and 5500 controllers
• Review drive boxes, RAID configuration and power resiliency for the VSP
5000 series
• Explain the multi-node scale-out diagram and service processor hardware
unit
• Review the VSP 5x00 series models and upgrade path, max hardware
configurations and block diagram
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-1
VSP 5000 Series Architecture and Availability - 1
Hardware

Hardware
This section discusses the VSP 5000 series hardware.

Naming Cross Reference

Documentation and GUI names                         Marketing Names
DKC (controller pair-CBX); aka HA node              Node
HSNBX                                               Node interconnect switch
DKU (4 HDD or SSD trays)                            Media chassis
Ordering unit (DKC+HSNBX+DKU)                       Quad-controller block
Combination of ISW and HIE                          Hitachi Accelerated Flash fabric
HIE                                                 Fabric-acceleration module
Hitachi built flash drive                           Custom flash drive
© Hitachi Vantara LLC 2020. All Rights Reserved.

High Level Concept


 Scale-out system:
  • Min 5100: 1/2 Quad-controller block
  • Min 5500: 1 Quad-controller block (high availability (HA) node pair)
  • Max: 3 Quad-controller blocks (HA node pairs)
 Cache: 2 TB/module; max 6 TB/system
 HA nodes interconnect via PCIe switches in the Node Interconnect Switch layers:
  • Managed as 1 system
  • 4 node interconnect switches (2 x 1U layers)
 Frontend: FC / FICON / iSCSI
 Backend: 12G SAS or NVMe
 Minimum system: 10U (without media), 18U (with media)
 Max media:
  • SAS: SFF (2304) - LFF (1152)
  • NVMe: SFF (288)

(Rack diagrams: Quad-Ctl blocks 0-2, each made of two 4U HA node pairs plus the
two 1U node interconnect switches, with 8U SAS SFF (96), 16U LFF SAS, and 8U NVMe
SFF (96) media chassis housed in 42U racks.)

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-2
VSP 5000 Series Architecture and Availability - 1
Module (CBX Pair) Component Location

Module (CBX Pair) Component Location

(Diagram: a 10U module comprises two 4U CBXs plus the 2 x 1U interconnect
switches.)

© Hitachi Vantara LLC 2020. All Rights Reserved.

Connections
 16G FC (4 ports per card) (64/128/192)
  • 4/8/16 Gbps
 32G FC (4 ports per card) (64/128/192)
  • 8/16/32 Gbps
 16G FICON (4 ports per card) (64/128/192)
  • 4/8/16 Gbps
 10G iSCSI (2 optical ports per card) (32/64/96)
  • 10 Gbps

(Rear-view diagram: CHB slots 01A-12F across the two 4U CBXs, alongside the DKBs
and HIEs, with the QSFP cable paths in the two 1U interconnect switches.)

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-3
VSP 5000 Series Architecture and Availability - 1
ISW (Interconnect Switch) / HIE

ISW (Interconnect Switch) / HIE

 2-port HIE card
 4 pairs (16 ports) per DKC pair

(Rear-view diagram: HIE slots 01C/01G, 02C/02G, 11C/11G, and 12C/12G in the two 4U
CBXs, cabled via QSFP to the two 1U interconnect switches.)

© Hitachi Vantara LLC 2020. All Rights Reserved.

System Interconnect (HSNBX x 2)


(Diagram: a 42U rack with the two 1U HSNBXs (HSNBX-0 and HSNBX-1) above the 8U DKC
pair 0, plus 8U SAS SFF and 16U LFF SAS media chassis.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-4
VSP 5000 Series Architecture and Availability - 1
Interconnection Architectures

Interconnection Architectures
VSP G/VSP F1500 VSP 5500
(x-paths via cache boards with alternate routes) (Controller-level “by 4” (x4) independent switching)

VSP 5500 ‘x4’ improves resiliency, especially during maintenance or upgrade events
© Hitachi Vantara LLC 2020. All Rights Reserved.

There are 4 redundant switches (HSN); each HSN is broken down into independent switches.


Page 3a-5
VSP 5000 Series Architecture and Availability - 1
VSP 5000 Series Logical System Connectivity

The following abbreviations are used on this slide.

• DKU = Disk Unit, a grouping of drive trays


• SBX = SAS small form factor Box
• UBX = SAS Uber (large) form factor Box
• FBX = SAS FMD form factor Box
• NBX = NVMe small form factor Box
• HSNBX = Hitachi Switch Network Box
• ISW = Inter-controller Switch aka Hitachi Accelerated Fabric
• SSVP = Standby Service Processor – runs Linux and provides a watchdog service for SVP
failover and so on
• SVP = Service Processor – runs Windows and provides management interface, and so
on
• CBX = Controller Box – In this case the enclosure that holds up to 2 controller boards,
CFMs, Power and Fans
• CBX Pair – Controller Box Pair – Two CBX that make up a Controller Block in VSP 5000
• DKC = Disk Controller – Sometimes used instead of CBX Pair
• HIE = Hitachi Interconnect Edge aka Fabric Acceleration Module
• DKB = Disk Board (for SAS) aka BED (Back End Director)
• eDKB = encrypting version
• DKBN = Disk Board (for NVMe) aka BED (Back End Director)
• eDKBN = encrypting version
• P/F = Power Supply/Fan, in some cases it also includes a battery for the CFM and cache.
Note that a pair of Power supplies and fans are shared by the two controllers in the
same DKC so in the diagram it spans across them.
• CTRL xx MP = Controller Multi Processor which is a logical grouping of cores from one or
more CPU’s
• CHB = Channel Board aka I/O Module which could be FC or Ethernet Ports
• DRAM = Dynamic Random Access Memory

Page 3a-6
VSP 5000 Series Architecture and Availability - 1
Front End Ports

• GUM = Gateway for Unified Management

• CFM = Cache Flash Module – Nonvolatile store for configuration information and in the
case of a power outage, cache contents
• MNT = Maintenance Ethernet port on controller. It is not used for Jupiter and [need to
confirm the following] is physically blocked. Therefore I changed it to GUM to indicate
each controller has a Gateway for Unified Management processor
• The ethernet connection shown between the management ports on controller pairs is
internal, not externally cabled. It is shown to make it clear that if one HSNBX LAN is
down, the controller connected to it can still be communicated with by routing through
the paired controller
• Note that there is no correlation between the ISW color and the controller color. There
are only so many colors to choose from that can be easily differentiated
• Customer Management LAN includes things like HiTrack Monitor Server, CM REST server,
Hitachi Storage Advisor, Hitachi Infrastructure Analytics Advisor, Hitachi Automation
Director, Hitachi Data Instance Director

Front End Ports


(Diagram: front-end port naming map, rear view. CBX pair #0 (CBX #0/#1) holds CHBs
01A-12F with FE ports 1A-8H; CBX pair #1 (CBX #2/#3) holds CHBs 21A-32F with FE
ports 1J-8R; CBX pair #2 (CBX #4/#5) holds CHBs 41A-52G with FE ports 9A-GH. Each
CHB provides 4 FE ports, alongside the DKBs and HIEs in each CBX.)

© Hitachi Vantara LLC 2020. All Rights Reserved.

• LUN count per port increased to 4096

Page 3a-7
VSP 5000 Series Architecture and Availability - 1
VSP 5000 Series Rear – 5500 2Node

VSP 5000 Series Rear – 5500 2Node

DBS2

HSNBX

CBX (2 Nodes)

© Hitachi Vantara LLC 2020. All Rights Reserved.

VSP 5000 Series – 5500 6Nodes

HSNBX

CBX 1 (2 Nodes)
CBX 2 (2 Nodes)
CBX 3 (2 Nodes)

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-8
VSP 5000 Series Architecture and Availability - 1
Drive Boxes and RAID Configuration

Drive Boxes and RAID Configuration


In this section you will learn about drive boxes and RAID configuration.

Drive Boxes
DB type   Height   BE port#   Protocol   Installable Drive#   G/F1500   VSP 5000 Series   Support DKU
DBL       2U       2          SAS        LFF x 12             Support   Support           UBX (=DBL x 8)
DBS       2U       2          SAS        SFF x 24             Support   N/A               SBX (=DBS x 8)
DBS2      2U       4          SAS        SFF x 24             N/A       Support           SBX (=DBS2 x 4)
DBN       2U       4          NVMe       SFF x 24             N/A       Support           NBX (=DBN x 4)
DBF       2U       2          SAS        LFF(FMD) x 12        Support   N/A               FBX (=DBF x 4)
DB60      4U       2          SAS        LFF x 60             N/A       N/A               --
DBF3      2U       4          SAS        LFF(FMD*1) x 12      N/A       Support           FBX (=DBF3 x 4)

*1: FMD: This is the conventional FMD that does not support the Accelerated
Compression or Encryption features in VSP 5000.

© Hitachi Vantara LLC 2020. All Rights Reserved.

A DBS2 drive tray appears as two separate 12-drive trays.

PG and RAID Layout – Single Pair Controller Block

Fixed PG assignment method (same policy as R800's spec):

 PG with 16 drives:
  - Taking 2 drives from 8 sequential DB#s (Ex: DB#0-#7)
  - Starting at an even slot# (Ex: Slot#0 and 1, #2 and 3, ...)
 PG with 8 drives:
  - Taking 1 drive from 8 sequential DB#s (Ex: DB#0-#7, DB#8-#15)
 PG with 4 drives:
  - Taking 1 drive from 4 sequential even DB#s or odd DB#s (Ex: DB#0/2/4/6,
    DB#1/3/5/7)

 DBS2 and FMD3 are two logical trays in a single physical tray to provide
  redundancy. Although they do have a common backplane, it is passive and highly
  reliable.
 All RAID levels are protected from single logical tray failure.

(Diagram: example layouts within a CBX pair - PG#0: 14D+2P, PG#1: (7D+1P) x 2,
PG#2: 6D+2P, PG#3: 7D+1P, PG#4: 2D+2D, PG#5: 3D+1P.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

• RAID configurations: 2D+2D, 3D+1P, 7D+1P, 6D+2P, 14D+2P

• A DBS2 drive tray appears as two separate 12-drive trays.

Page 3a-9
VSP 5000 Series Architecture and Availability - 1
PG and RAID Layout – Multiple Pair Controller Block

PG and RAID Layout – Multiple Pair Controller Block

Fixed PG assignment method: the PG assignment range is within the same CBX pair.

(Diagram: three CBX pairs; a PG such as PG#N: 6D+2P never spans CBX pairs.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Spare Drive Location

Fixed spare drive assignment method (same policy as R800's spec):

 A spare drive can be assigned only to Slot#11 or Slot#23 in each DB.
 Each media chassis has 8 DBs, therefore a max of 8 spares.
 When the first spare drive is assigned, the other three drives in the same area
  are reserved for additional spare drives and can no longer be assigned to a PG.
 The remaining 4 drives in Slot#11 and/or Slot#23 can be assigned to a PG with 4
  drives, such as 2D+2D or 3D+1P.
 Copy-back setting only (the "no copy-back" setting is not available).
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-10
VSP 5000 Series Architecture and Availability - 1
Spare Drive Qty

Spare Drive Qty

Max number of spares is also limited by 8 spares per Media Chassis


CBX configuration   Max spare drive number [/ CBX pair = 2 CBXs] (*1)           Max spare drive number
                    CBX0-CBX1        CBX2-CBX3        CBX4-CBX5                  [/ system]
                    (CBX Pair 0)     (CBX Pair 1)     (CBX Pair 2)
2 CBXs              64               --               --                         64
4 CBXs              64               64               --                         128
6 CBXs              64               64               64                         192

*1: A spare drive can be assigned for data drives (of the same type) in a different
CBX pair; in other words, spares are global.

Recommended quantity:

Drive type      Recommendation for spare drive quantity
SAS (10k)       1 spare drive for every 32 drives
NL-SAS (7.2k)   1 spare drive for every 16 drives
SSD             1 spare drive for every 32 drives
FMD             1 spare drive for every 24 drives

© Hitachi Vantara LLC 2020. All Rights Reserved.
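The recommended ratios and the per-system ceiling in the tables above lend themselves to a small helper. The sketch below is illustrative only (the drive-type keys are made-up labels, not product identifiers); it applies the per-drive-type ratio and caps the result at 64 spares per CBX pair:

```python
import math

# Recommended ratio: one spare per N data drives of the same type (from the table above).
# The dictionary keys are invented labels for this example, not Hitachi part names.
DRIVES_PER_SPARE = {"SAS10K": 32, "NLSAS7K": 16, "SSD": 32, "FMD": 24}

MAX_SPARES_PER_CBX_PAIR = 64  # 8 spares per media chassis x 8 chassis per CBX pair

def recommended_spares(drive_type: str, drive_count: int, cbx_pairs: int = 1) -> int:
    """Recommended spare drive count, capped at the system maximum."""
    wanted = math.ceil(drive_count / DRIVES_PER_SPARE[drive_type])
    return min(wanted, MAX_SPARES_PER_CBX_PAIR * cbx_pairs)

print(recommended_spares("NLSAS7K", 100))  # 100 NL-SAS drives -> 7 spares
```

For a fully populated 3-CB SAS SFF system (2304 SSDs), the same helper returns 72 spares, well under the 192-spare system limit.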

SAS Media Chassis Connectivity

(Diagram: a controller block (Node0/Node1 with 8 SAS DKBs) daisy-chained to up to 8
media chassis (chassis 0-7, DB-000 through DB-063). Chassis 0 is 4 x DBS2 (SFF) or
FBX3, with EXP x 4 and SAS port x 4 per drive box pair (DB-000&001 ... DB-006&007);
LFF chassis use 8 x DBL with EXP x 2 and SAS port x 2 per drive box pair.)

The first chassis must be DBS2 (SFF) or FMD3.

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-11
VSP 5000 Series Architecture and Availability - 1
SAS Media Chassis Connectivity Optimization

SAS Media Chassis Connectivity Optimization

 First chassis must be SFF or FMD to provide enough connectivity so I/O to any PG can be
performed by any controller without having to use inter-controller access
(Diagram: CBX0/CBX1 with 8 SAS DKBs connecting to Chassis 0 (DBS2 x 4; DB-000&001
... DB-006&007; PG#0: 7D+1P) and Chassis 1 (DBL x 8; DB-008 ... DB-015; PG#1:
7D+1P), so host I/O to PG#0 and PG#1 can be serviced by any controller.)

NVMe media chassis connectivity is similar to DBS2, so the same optimization
applies, but of course there is no DBL connectivity or daisy-chain issue to
consider.
© Hitachi Vantara LLC 2020. All Rights Reserved.
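The SAS media-chassis placement rules above (first chassis SFF or FMD, at most 8 chassis per controller block, at most 4 FMD and at most 4 LFF) can be expressed as a small configuration check. This is an illustrative sketch, not Hitachi configuration software:

```python
def check_sas_chassis_order(chassis: list) -> list:
    """Validate a SAS controller block's media chassis list ('SFF', 'LFF', 'FMD').

    Rules from the slide: max 8 media chassis per SAS CB; the first chassis must
    be SFF or FMD; at most 4 FMD and at most 4 LFF chassis, the rest SFF.
    Returns a list of rule violations (empty when the layout is valid).
    """
    errors = []
    if len(chassis) > 8:
        errors.append("a SAS CB can have at most 8 media chassis")
    if chassis and chassis[0] not in ("SFF", "FMD"):
        errors.append("first chassis per CB must be SFF or FMD")
    if chassis.count("FMD") > 4:
        errors.append("at most 4 FMD chassis per CB")
    if chassis.count("LFF") > 4:
        errors.append("at most 4 LFF chassis per CB")
    return errors

print(check_sas_chassis_order(["SFF", "LFF", "LFF", "SFF"]))  # []
print(check_sas_chassis_order(["LFF", "SFF"]))  # first-chassis rule violated
```

A layout starting with an LFF chassis fails the first-chassis rule, which matches the connectivity reasoning above: the first chassis must offer enough expander ports for any controller to reach any PG without inter-controller access.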

VSP 5x00 Max Hardware Configs


Components: max config for VSP 5x00 by model and number of Controller Blocks (CBs).
Table quantities assume all CBs have the same type of port, backend, and media
chassis; intermix rules are provided.

                                        VSP 5100        VSP 5500 1 CB   VSP 5500 2 CBs   VSP 5500 3 CBs
                                        (10U, 2 ctls)   (10U, 4 ctls)   (18U, 8 ctls)    (26U, 12 ctls)
CPU cores, memory                       40c, 1TiB       80c, 2TiB       160c, 4TiB       240c, 6TiB
                                        (.5TiB MF only)
Frontend optical I/O ports (types can intermix within a CB):
  FC(1) 32G/16G SFP (8p increments)     32              64              128              192
  FICON 16G SFP (8p increments)         32              64              128              192
  iSCSI 10G SFP (4p increments)         16              32              64               96
Backend I/O ports (PCIe Gen3 x4-lane
  NVMe or 12G SAS 4-wide-lane SAS)      8               16              32               48
Global spare drives (8/media chassis)   64              64              128              192

(1) FC ports are NVMe-oF ready for a software upgrade in 2HCY20.

Max # of media chassis / max # of drives (each CB can be either diskless, all NVMe,
or all SAS; different CB types can intermix in a system in any combination):
  SFF NVMe (NAND flash: 1.9, 3.8, 7.6, 15.3 TB, 30.6 TB {TBD}; SCM flash: 3.75 TB {TBD, post-GA ETA})
    # of 8U 96-slot chassis             1               1               2                3
    # of drives w/max chassis           96              96              192              288
  SFF SAS (NAND flash: 960 GB, 1.9, 3.8, 7.6, 15.3, 30.6 TB; 10K HDD: 2.4 TB)
    # of 8U 96-slot chassis*            8               8               16               24
    # of drives w/max chassis           768             768             1536             2304
  LFF SAS (7.2K NL-SAS HDD: 14 TB)
    # of 16U 96-slot chassis*           4               4               8                12
    # of drives w/max chassis           384             384             768              1152
  FMD SAS (NAND flash: 7, 14 TB)
    # of 8U 48-slot chassis*            4               4               8                12
    # of drives w/max chassis           192             192             384              576

* SFF / LFF / FMD chassis intermix within a Controller Block. Each SAS CB can have
up to 8 media chassis; the first chassis per CB must be SFF or FMD. Each SAS CB can
have up to 4 FMD and/or up to 4 LFF chassis; the rest must be SFF.
Parity: 2D+2D, 3D+1P, 7D+1P, 6D+2P, 14D+2P

© Hitachi Vantara LLC 2020. All Rights Reserved.

• Removed the CPU reference to Broadwell (E5-2618Lv4 10c/2.2GHz/75W) since it is
not relevant in most cases, and some customers will have a knee-jerk reaction
about CPU "vintage" rather than focusing on the high overall system performance
and high IOPS/core efficiency vs. competitors

Page 3a-12
VSP 5000 Series Architecture and Availability - 1
Active Learning Exercise: Raise Your Hands If You Know It!

Active Learning Exercise: Raise Your Hands If You Know It!

What are the key benefits of VSP 5000 series?

© Hitachi Vantara Corporation 2019. All Rights Reserved.

Architecture and Specifications


This section explains the architecture and specifications.

System Configuration (SAS Backend)

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-13
VSP 5000 Series Architecture and Availability - 1
System Configuration (NVMe Backend)

System Configuration (NVMe Backend)

© Hitachi Vantara LLC 2020. All Rights Reserved.

System Configuration (SAS/NVMe Mixed)

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-14
VSP 5000 Series Architecture and Availability - 1
Power

Power
This section explains power resiliency.

Power Resiliency
 Each CBX, HSNBX, and DB should be powered from redundant PDPs to avoid system
  failure from a single PDP failure.
 Every CBX pair must be supplied power from the same pair of PDPs; if not, there
  is a high possibility of data loss when a power failure occurs. Both CBXs in a
  CBX pair are recommended to be next to each other to avoid misconnection.
 Every drive box in a media chassis must be placed next to the others and powered
  from the same pair of PDPs. If not, many drives will be blocked when a pair of
  PDPs stops supplying power, and there is a high possibility of data loss.

(Diagram: the recommended layout powers each CBX pair and media chassis from one
redundant PDP pair. "No good": PDPs not redundant for each CBX, or paired CBXs
supplied from different pairs of PDPs. "OK (not recommended)": 2 CBXs supplied from
the same pair of PDPs but not adjacent; note there is not enough cable length if an
NVMe media chassis is used. PDP: Power Distribution Panel.)

© Hitachi Vantara LLC 2020. All Rights Reserved.
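The PDP rules above reduce to one check: every CBX in a pair, and every drive box in a media chassis, should draw power from the same redundant pair of PDPs. A hypothetical sketch follows (component and PDP names are invented for illustration):

```python
def same_pdp_pair(components: dict) -> bool:
    """True if every listed component draws from the same redundant PDP pair.

    `components` maps a component name to the pair of PDPs powering it, e.g.
    {"CBX-0": ("PDP1", "PDP2"), "CBX-1": ("PDP1", "PDP2")}.
    Each component must have two distinct PDPs (redundancy), and all
    components must share the same pair.
    """
    pairs = {frozenset(pdps) for pdps in components.values()}
    return all(len(p) == 2 for p in pairs) and len(pairs) == 1

# CBX pair fed from the same redundant PDP pair: OK
print(same_pdp_pair({"CBX-0": ("PDP1", "PDP2"), "CBX-1": ("PDP1", "PDP2")}))  # True
# Paired CBXs on different PDP pairs: risk of data loss on power failure
print(same_pdp_pair({"CBX-0": ("PDP1", "PDP2"), "CBX-1": ("PDP3", "PDP4")}))  # False
```

The same check applies to the drive boxes of one media chassis: pass them as the component map and the function flags any box cabled to a different PDP pair.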

Hitachi Interconnect Edge (HIE)


Also called HAF (fabric-acceleration module). In this section, you will learn about the HIE.

Offload By HIE
The inter-controller communication method changes from the Intel CPU in VSP
G200/G400/G600/G800 and VSP F200/F400/F600/F800 to the HIE in VSP 5000; some CPU
processing for inter-controller communication moves to the HIE.

#  Type of communication        VSP G/F 200/400/600/800 (no HIE)          VSP 5000 Series (HIE offload)
1  Read memory on the other     Request to a CPU on the target            HIE sends the target data to its own
   controller                   controller, which reads the memory        destination memory
                                and writes it back
2  Atomic access                Request to a CPU on the target            HIEs adjust the atomic operation
                                controller, which processes the
                                atomic operation
3  Inter-controller transfer    Intel DMA transfers the data to the       HIE transfers the target data to the
   of user data                 other controller; when finished, the      other controller and simultaneously
                                CPU on the source controller asks the     verifies it by T10DIF calculation
                                CPU on the destination controller to
                                verify it by T10DIF calculation

Performance benefit: Random Write 8 KB (cache hit), +46% IOPS with HIE offload
versus no HIE.
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-15
VSP 5000 Series Architecture and Availability - 1
Offload By HIE

Atomic operations in concurrent programming are program operations that run to
completion as a single indivisible step, without interference from any other
process. Atomic operations are used in many modern operating systems and parallel
processing systems.

T10 Data Integrity Feature (DIF) = type of error correction mechanism

• Originally proposed by IBM; Logical Block Guarding is one component of DIF
(SBC-3 / SPC-4)

• 520 byte sectors with a twist

• 8 bytes of protection data per sector

• Guard tag : CRC

• Reference tag : Typically LBA

• Application tag: User defined content

• T10 DIF – Device Capabilities:

o Device can support one or more protection types

o Target can only be formatted with one protection type at a time

o RDPROTECT/WRPROTECT/VRPROTECT must match target format somewhat

o READ(32)/WRITE(32) feature special DIF knobs

o APP tag ownership/verification

Page 3a-16
VSP 5000 Series Architecture and Availability - 1
VSP 5000 Series: Hardware Offload Design

VSP 5000 Series: Hardware Offload Design


(Offload to HAF=Hitachi Accelerated Fabric)
Example: Data read from CTL-1 by CTL-0 (no CPU load on cross-CPU)
w/o HAF Data Read Processing of CPU
CPU-0 CPU-1
CTL-0 CTL-1 Issues
CPU-0 CPU-1 read request
1) 2) Receives
read request
Memory-0 data request data Memory-1
Transfers
3) target data
With HAF there a reduction of overload request during
Request
Offload the process
w/ HAF Data Read
the Data
Processing read
of CPUbecause the Offload design process
Data
Processing of HIE
CPU-0 HAF-0 HAF-1 CPU-1
CTL-0 CTL-1
1) Issues
read request
CPU-0 HIE-0 HIE-1 CPU-1
2) Receives
read request
Memory-0 data data Memory-1
Transfers
target data
© Hitachi Vantara LLC 2020. All Rights Reserved.

Memory Read/Atomic Access on the Other Controller

G/F 200/400/600/800:
  1. CPU1 writes a request for reading Memory2
  2. CPU2 receives the request
  3. CPU2 writes the target data to Memory1

VSP 5000 Series (no load on CPU2):
  1. CPU1 sends a request for reading Memory2 via HIE1
  2. HIE2 writes the target data to Memory1

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-17
VSP 5000 Series Architecture and Availability - 1
Data Transfer to the Other Controller

Data Transfer to the Other Controller

G/F 200/400/600/800:
  1. CPU1 sends a transfer request to Memory2
  2. Intel DMA transfers the data (DATA + DIF) to Memory2
  3. CPU1 requests CPU2 to verify
  4. CPU2 verifies the data by T10DIF

VSP 5000 Series (no load on CPU2):
  1. CPU1 sends a transfer request to Memory2
  2. HIE1/HIE2 transfer the data (DATA + DIF) to Memory2
  3. HIE2 verifies the data by T10DIF

© Hitachi Vantara LLC 2020. All Rights Reserved.

Service Processor
In this section, we will discuss the service processor.

SVP Unit

(Photos: VSP G1500/VSP F1500 SVP unit; VSP 5000 Series SSVP/HUB unit (rear side)
and SVP unit (front side).)

VSP G1500/VSP F1500 SVP: the switching HUB is embedded in the SVP unit together
with the motherboard.

VSP 5000 SVP: the SVP unit is motherboard function only. To ensure higher
availability, the switching HUB is installed in the SSVP unit, which has a lower
failure rate. Thus, even during SVP replacement, the internal LAN stays alive; for
example, the user can keep accessing all nodes and all CTLs even during SVP
replacement.

© Hitachi Vantara LLC 2020. All Rights Reserved.

SSVP – Sub Service Processor interfaces the SVP to the DKC.

Page 3a-18
VSP 5000 Series Architecture and Availability - 1
SVP LAN Cable Routing

SVP LAN Cable Routing


 Basic SVP is installed in HSNBX-0;
Internal  Option SVP is installed in HSNBX-1
LAN  Option SSVP/HUB is mandatory if option SVP is
Rear
installed
Interconnect LAN  # of aggregated cables:
(aggregated) - x1: # of CBXs = 1,2
- x2: # of CBXs ≥ 3
Public LAN

Rear Rear

PL SSVP/HUB * PL
Option SVP

Basic
SSVP/HUB
SVP *
(if option SVP installed)
Front Front
aggregated cable
(if option SVP installed and # of CBXs is 3-6)
aggregated cable
aggregated cable
(if # of CBXs is 3-6)
Mainte. LAN
aggregated cable* (if option SVP installed)
* mandatory
© Hitachi Vantara LLC 2020. All Rights Reserved.

SVP Connection Architecture


G/F1500 VSP 5000 Series

- MP consolidation
- MP consolidation
- CTL consolidation
- CTL consolidation
Public LAN Public LAN

Public LAN Public LAN

Major differences on management architecture:


1. Network topology between SVP unit and CTL (MPB) changes from daisy chain to star topology to ensure high
availability in multi-node architecture. In VSP 5000 architecture, against single point failure on internal LAN,
user can keep accessing all nodes/all CTLs through either of clusters of internal LAN.
2. To ensure further availability, the internal LAN connection between HSNBXs will be dual path using a Link
Aggregation Control Protocol (LACP).
3. While VSP Gxxx is a single-SVP/SVP-less architecture, in the VSP 5000
architecture the SVP unit is installed in the HSNBX that consolidates the
interconnects between nodes. Dual SVP will be supported (optional) as in VSP
G1500/VSP F1500.

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3a-19
VSP 5000 Series Architecture and Availability - 1
Proxy on SVP

Proxy on SVP

 The SVP provides the proxy functionality for the maintenance utility, the
  PF REST API, and the JSON API.

(Diagram: maintenance personnel and users reach the VSP 5000 SVP over the external
LAN via RDP login to the SVP GUI (MU launch), PF REST (proxy), and JSON API
(proxy). The SVP provides all interfaces for management and maintenance; each GUM
(MU, PF REST, JSON API) is accessed by the SVP internally over the internal LAN.)


© Hitachi Vantara LLC 2020. All Rights Reserved.

Active Learning Exercise: Raise Your Hands If You Know It!

What is Hitachi Accelerated Flash fabric?

© Hitachi Vantara Corporation 2019. All Rights Reserved.

Page 3a-20
VSP 5000 Series Architecture and Availability - 1
Module Summary

Module Summary

 In this module, you should have learned to:


• Explain Hitachi Virtual Storage Platform 5000 series architecture
• Review major technical differences between VSP 5000 series and VSP
G1500 / VSP F1500
• Describe differences between 5100 and 5500 controllers
• Review drive boxes, RAID configuration and power resiliency for the VSP
5000 series
• Explain the multi-node scale-out diagram and service processor hardware
unit
• Review the VSP 5x00 series models and upgrade path, max hardware
configurations and block diagram
© Hitachi Vantara Corporation 2019. All Rights Reserved.

Page 3a-21
VSP 5000 Series Architecture and Availability - 1
Module Review

Module Review

1. True or False: The VSP 5000 series is designed to support both open system
and mainframe needs.

2. True or False: The controller blocks (node pairs) can only be NVMe
backend.

3. Which VSP model supports dual redundancy between HSNBXs:


a) VSP G1500
b) VSP 5500
c) All VSP models
d) None of the VSP models

© Hitachi Vantara Corporation 2019. All Rights Reserved.

Page 3a-22
VSP 5000 Series Architecture and
Availability - 2
Module Objectives

 Upon completion of this module, you should be able to:


• Review the cache and shared memory for the Hitachi Virtual Storage
Platform 5000 Series
• Explain the MP failure design and impact of replacing CPU or memory
• Review technical differences between VSP 5000 series and VSP G1500/
VSP F1500
• Explain volumes, capacity and hardware specifications for the VSP 5000
Series
• Review licensing for VSP 5000

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-1
VSP 5000 Series Architecture and Availability - 2
Cache and Shared Memory

Cache and Shared Memory


In this section you will learn about cache and shared memory configuration.

Shared Memory Allocation

 The base extension includes SM Block #1-4, which is the minimum and default configuration

 Most systems will never need more; if pool capacity exceeds 4.4PiB, the expansion is
nondisruptive, and all controllers ship with maximum memory, so no memory upgrade (which
would require taking a controller offline) is needed

Extension #    SM Block #    Total SM capacity / system    HDP max pool capacity [PiB]
                             VSP 5500    VSP 5100          OPEN (HDP/HDT/HTI)    MF (*1) (HDP/HDT)
Base (*2)      SM Block#1    16G         8G
               SM Block#2    32G         16G               4.4                   3.9
               SM Block#3    48G         24G
               SM Block#4    64G         32G
Extension 1    SM Block#5    80G         40G               8.05                  7.3
Extension 2    SM Block#6    96G         48G               12.5                  11.3
Extension 3    SM Block#7    112G        56G               16.6                  15.0

(*1) Max pool capacity of MF HDP/HDT is calculated with about a 10% reduction from OPEN.
(*2) The max specification for all replication PPs (SI/TI/TC/UR/GAD) is covered by the Base block of Shared Memory.

© Hitachi Vantara LLC 2020. All Rights Reserved.
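The capacity limits in the table above map directly to the required SM extension. The small sketch below (illustrative, using the OPEN HDP limits from the table) picks the minimum extension for a target pool capacity:

```python
# Sketch: pick the minimum shared-memory extension for a target OPEN pool
# capacity, using the OPEN HDP max pool capacities from the table above.
SM_EXTENSIONS = [          # (extension name, OPEN max pool capacity in PiB)
    ("Base", 4.4),
    ("Extension 1", 8.05),
    ("Extension 2", 12.5),
    ("Extension 3", 16.6),
]

def required_sm_extension(pool_capacity_pib: float) -> str:
    """Return the smallest SM extension whose OPEN limit covers the pool."""
    for name, limit in SM_EXTENSIONS:
        if pool_capacity_pib <= limit:
            return name
    raise ValueError("pool capacity exceeds the maximum supported 16.6 PiB")

print(required_sm_extension(3.0))   # Base
print(required_sm_extension(10.0))  # Extension 2
```

This also illustrates the slide's point: a pool under 4.4 PiB (most systems) never needs more than the base configuration.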

SM/CM Resiliency Improvement

 VSP Gxx0/VSP Fxx0: A single controller (CTL) failure loses the system redundancy; a double CTL failure causes the system to go down. Total availability: six 9s.

 VSP G1500/VSP F1500: One cache failure plus one MP failure can be sustained with the system still operating, but a pair of cache failures causes the system to go down. Total availability: seven 9s (2VSD/CPX); the minimum configuration is six 9s.

 VSP 5000: A CTL failure is sustained by redundancy in the operating CTLs; SM/CM are duplicated on the surviving CTLs. Total availability: eight 9s (VSP 5500, 2 nodes and larger); six 9s (VSP 5100).

(Diagrams: per-platform block layouts showing the CHBs, DKBs, internal switches/HIEs, CPUs, and the SM/CM placement across controllers and DKC/CBX pairs.)

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-2
VSP 5000 Series Architecture and Availability - 2
Shared Memory (SM) Design

Shared Memory (SM) Design

 In the VSP 5000 Series, the primary SM area and the secondary SM area are placed on different
nodes (node 0, node 1) so the system can endure a node failure

 Nodes other than node 0/node 1 do not have an SM area; therefore, SM capacity does not
increase even if the number of nodes increases

(Diagram: each node has CTL0/CTL1, each with MPs and CM. Node 0 holds the primary SM plus a reserved (rsv) area, node 1 holds the secondary SM plus a reserved area, and nodes 2 and beyond hold only CM.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Memory (DIMM) Architecture Comparison


 VSP G1500/VSP F1500 (hierarchical memory): minimum cache size is 32GB

 VSP 5000 Series (all memory areas in cache): minimum cache size is 256GB per CBX (512GB per CBX pair)

Data Location in Memory

Data                                  VSP G/F1500    VSP 5000 Series
CM (user cache data)                  Cache          Cache
SM (shared memory data)               Cache          Cache
LM (local memory - MP used data)      MPB (LM)       Cache
FE transfer buffer                    CPA (DXBF)     Cache
BE transfer buffer                    DKB (DXBF)     Cache

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-3
VSP 5000 Series Architecture and Availability - 2
Global Cache Mirroring

Global Cache Mirroring

 There are more complexities to the cache mirroring algorithm, but two primary considerations are:
• Mirror to the opposite “side” of a CBX pair (including other CBX pairs) in case of CBX failure
• Mirror to the owning controller of the LUN if the data was not received by the owning controller
 As with prior-generation products, only non-destaged (dirty) data is mirrored, for efficiency, until destaged

(Diagrams: the VSP G1500 Side A/Side B cache-module layout; a VSP 5000 2CBX/2CTL system and a 6CBX system, showing the primary data in CBX-0 and the candidate mirror areas, one of which, on the opposite side of the same CBX pair or in another CBX pair, is selected as the mirror.)
© Hitachi Vantara LLC 2020. All Rights Reserved.
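The two mirror-placement rules above can be sketched as a toy selector. This is illustrative only, not Hitachi's actual algorithm; the controller numbering follows the diagram, where CBX-0 (side A) holds CTL0/CTL1 and CBX-1 (side B) holds CTL2/CTL3 within a CBX pair.

```python
# Toy sketch (illustrative, not the firmware algorithm) of the two cache
# mirror-placement rules: mirror to the opposite "side" of a CBX pair, and
# prefer the LUN's owning controller when the I/O landed elsewhere.

def side(ctl: int) -> int:
    """0 = side A, 1 = side B within a CBX pair (per the diagram's numbering)."""
    return (ctl // 2) % 2

def pick_mirror_ctl(primary_ctl: int, owning_ctl: int) -> int:
    """Choose where to mirror data whose primary copy landed on primary_ctl."""
    if owning_ctl != primary_ctl and side(owning_ctl) != side(primary_ctl):
        # Data not received by the owning CTL, and the owning CTL is on the
        # opposite side: mirroring there satisfies both rules at once.
        return owning_ctl
    # Otherwise mirror to the opposite side of the same CBX pair.
    return primary_ctl ^ 2   # XOR 2 flips to the partner CBX in the pair

print(pick_mirror_ctl(primary_ctl=0, owning_ctl=3))  # 3 (owning CTL, opposite side)
print(pick_mirror_ctl(primary_ctl=0, owning_ctl=0))  # 2 (opposite side, same pair)
```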

Shared Memory Caching Method

 In the VSP 5000 series, the SM table is cached into each CTL's PM area in a timely manner to avoid the overhead of inter-CTL access

(Diagram: a 6-CBX system, CTL#0 through CTL#11, each controller with CHBs, DKBs, HIEs, an MPU, LM, CM, and a PM area holding the master SM table or a mirror of it; CTL#0-#3 also hold the SM master, slave, and reserved areas, and the CBXs interconnect through the ISWs in the HSNBX.)

1. The MPU in the same CTL as the SM table (master) accesses the table directly.
2. The other MPUs access the mirrored table in their own CTL's PM area.
3. Most SM accesses, including those of the replication PPs, can use this SM caching feature.
© Hitachi Vantara LLC 2020. All Rights Reserved.

MPU – Microprocessor Unit

Page 3b-4
VSP 5000 Series Architecture and Availability - 2
Shared Memory (SM) Resiliency(Compare/Contrast VSP 5000 versus G/F1x00)

Shared Memory (SM) Resiliency (Compare/Contrast VSP 5000 Versus G/F1x00)

 In VSP G1x00/VSP F1x00, both the primary SM area and the secondary SM area are ultimately
assigned in the basic cache PKs in both clusters (CMPK#0, #1)

 In the VSP 5000 series, the primary SM area and the secondary SM area are placed on different
nodes (node 0, node 1) so the system can endure a node failure

(Diagrams: SM assignment in VSP G1x00/VSP F1x00, with the SM primary in CMPK#0 and the SM secondary in CMPK#1; SM assignment in the VSP 5000 series, with the SM primary plus a reserved area on node 0 (CTL#0/#1), the SM secondary plus a reserved area on node 1 (CTL#2/#3), and no SM on nodes 2-5 (CTL#4-#11).)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Shared Memory Resiliency (VSP G1x00/ VSP F1x00)

 In VSP G1x00/VSP F1x00, when one CMPK fails, the SM is not
redundant until the failed CMPK is replaced

(Diagram, five stages: 1. Normal status: SM primary in CMPK#0, SM secondary in CMPK#1. 2. CMPK#0 failure. 3. SM takeover: the secondary in CMPK#1 becomes the primary. 4. FRU replacement of the blocked CMPK#0 and data copy: CMPK#0 becomes the new secondary. 5. Recovered: SM secondary in CMPK#0, SM primary in CMPK#1.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-5
VSP 5000 Series Architecture and Availability - 2
Shared Memory Resiliency

Shared Memory Resiliency

 In the VSP 5000 series, the reserved area is the same size as the SM area and is used only in failure cases

 Immediate resiliency (same as VSP G1x00/VSP F1x00)

 The SM data copy takes 5-10 minutes, depending on system load

 Resiliency is re-established before maintenance replacement

 Log events are used to ensure resiliency is re-established before the service event continues

Example #1: A controller fails in node 0, which contains the primary SM

(Diagram, five stages:
1. Normal status: SM primary plus a reserved area on node 0 (CTL#0/#1); SM secondary plus a reserved area on node 1 (CTL#2/#3); nodes 2 and 3 hold no SM.
2. CTL#0 failure: CTL#0 is blocked.
3. SM takeover: the secondary on node 1 is promoted to primary, and a data copy into a reserved area re-creates the secondary.
4. FRU replacement of CTL#0.
5. Recovered: node 0 again holds an SM copy (now the secondary) alongside its reserved area, and node 1 holds the primary.)
© Hitachi Vantara LLC 2020. All Rights Reserved.
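The takeover sequence above can be sketched as a tiny state transition. This is illustrative only (not firmware logic): the surviving secondary is promoted, then redundancy is rebuilt into a reserved area before any FRU replacement proceeds.

```python
# Illustrative sketch (not firmware logic) of the SM takeover sequence:
# promote the surviving secondary, then re-create a secondary in a reserved
# area (an SM data copy taking 5-10 minutes, depending on load) so that
# resiliency is re-established before maintenance continues.

def sm_takeover(copies: dict) -> dict:
    """copies maps a location name to its SM role ('primary'/'secondary')."""
    if "primary" not in copies.values():
        survivor = next(loc for loc, role in copies.items() if role == "secondary")
        copies[survivor] = "primary"                       # promote secondary
        copies[survivor.replace("SM", "reserved")] = "secondary"  # rebuild redundancy
    return copies

state = {"node0-SM": "primary", "node1-SM": "secondary"}
del state["node0-SM"]            # the controller holding the primary fails
print(sm_takeover(state))        # {'node1-SM': 'primary', 'node1-reserved': 'secondary'}
```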

Example #2: The whole node 0 (where the primary SM is) fails

(Diagram, five stages:
1. Normal status: SM primary plus a reserved area on node 0 (CTL#0/#1); SM secondary plus a reserved area on node 1 (CTL#2/#3).
2. Node 0 failure: node 0 is blocked.
3. SM takeover: node 1's secondary is promoted to primary, and a data copy into node 1's reserved area creates a new secondary.
4. FRU replacement of node 0: a data copy moves the secondary back to node 0, freeing node 1's reserved area.
5. Recovered: SM secondary on node 0, SM primary on node 1.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-6
VSP 5000 Series Architecture and Availability - 2
VSP 5100 Shared Memory

VSP 5100 Shared Memory

 In VSP 5100, master SM area and slave SM area are placed on


different CBXs (CBX 0, CBX 1) to be able to endure CBX failure

 There is no reserved area

© Hitachi Vantara LLC 2020. All Rights Reserved.

But the 2-CTL configuration doesn’t have a reserved area because it has only 2 CTLs.

Shared Memory – VSP 5100

 There is no reserved area, so redundancy is the same as on the R800 (VSP G1x00/VSP F1x00)

(Diagram, stages: 1. Normal status: SM master on CBX0 (CTL#0/#1), SM slave on CBX1 (CTL#2/#3). 2. A CTL failure blocks one SM copy. 3. SM takeover: the surviving copy is promoted to master. 4./5. After FRU replacement, a data copy re-creates the other SM copy and the system is recovered.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

• When shared memory redundancy is lost, the write mode becomes write-through.

Page 3b-7
VSP 5000 Series Architecture and Availability - 2
Optimization of Cache Access Logic – Simplifying the DIR Table Architecture

Optimization of Cache Access Logic – Simplifying the DIR Table Architecture

In the VSP 5000 series, thanks to the improved cache directory management logic, a hardware failure no longer degrades
host I/O with write-through processing.

Case                          Impact of a hardware failure on host I/O                      Estimated I/O degradation time
Case #1:                      Until the failed CMPK is physically replaced and the whole    A few hours to a few days
VSP G1x00/VSP F1x00           SM data is copied back from the other side's CMPK, host       (depending on the CE's
with 2 CMPKs                  I/O runs in write-through mode.                               maintenance work)
Case #2:                      Firmware blockades all the CMPKs on the same cluster side     1-2 minutes
VSP G1x00/VSP F1x00           as the failed CMPK, then recreates the cache directory in
with 4+ CMPKs                 the other side's cache. Until this process finishes, host
                              I/O runs in write-through mode.
Case #3:                      No host I/O degradation (no write-through mode). Because      0 minutes
VSP 5000 (2, 4, 6 nodes)      the cache directory is managed with a different method, the
                              directory does not need to be recreated in a node failure
                              case. (Improved logic in the VSP 5000!)

(Diagrams: Case #1 with SM primary/secondary across two CMPKs; Case #2 with four CMPKs; Case #3 with SM primary plus a reserved area on node 0 and SM secondary plus a reserved area on node 1, enabled by the improved cache directory management logic.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

• The only exception is the VSP 5100, as described in the VSP 5100 Shared Memory section

Optimization of Cache Access Logic – Simplifying the DIR Table Architecture

 Compared with the VSP G1x00/VSP F1x00 cache directory management logic:
• The architecture of the table is dramatically improved: each CTL basically manages just its own owning cache data
 Owning cache data located in the same CTL's cache, and owning cache data located in the other CTLs' caches
• So if one CTL fails, the table does not have to be recreated from scratch; the remaining CTLs keep using their own information

Cache DIR: the I/O data searching table on the cache, used to judge cache hit or cache miss

(Diagram, Case #3, VSP 5000 (2, 4, 6 nodes): each of CTL#0 through CTL#7 across nodes 0-3 holds its own cache DIR entries; node 0 additionally holds the SM primary plus a reserved area, and node 1 the SM secondary plus a reserved area.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-8
VSP 5000 Series Architecture and Availability - 2
MP Failure

MP Failure
This section explains MP failure.

MPU Ownership in Failure Cases

 The pair relations for changing MPU ownership are as follows (same node
pair, different CBX):

  CTL#0 – CTL#2    CTL#4 – CTL#6    CTL#8 – CTL#10
  CTL#1 – CTL#3    CTL#5 – CTL#7    CTL#9 – CTL#11

© Hitachi Vantara LLC 2020. All Rights Reserved.
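The pairing above follows a simple pattern: each CTL fails over to the CTL with the same parity in the partner CBX of its node pair. A minimal sketch (the bit-flip encoding is an observation about the numbering, not Hitachi's implementation):

```python
# Sketch: the MPU-ownership failover partner of a controller. Per the pairs
# above (CTL#0-CTL#2, CTL#1-CTL#3, CTL#4-CTL#6, ...), partners differ by 2,
# which is a flip of bit 1 in the CTL number.

def failover_partner(ctl: int) -> int:
    """Return the CTL that takes MPU ownership if `ctl` fails."""
    return ctl ^ 2   # moves to the same slot in the other CBX of the node pair

assert failover_partner(0) == 2 and failover_partner(2) == 0
print({c: failover_partner(c) for c in range(12)})
```

Note that the relation is symmetric: applying it twice returns the original CTL, matching the paired ownership model on the slide.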

Replace CPU or Memory


This section discusses the impact of replacing a CPU or memory.

Impact of Replacing CPU or Memory

 VSP G1x00/VSP F1x00: The host path is not affected by MPB and cache maintenance, but:
• Hosts may still see a performance impact

• More ports are affected when a CHA or DKC fails (the VSP 5000 is less affected)

 VSP 5x00: Maintenance on any CTL (CPU or cache) triggers host path failover, but:
• With maximum memory installed, there is no need for memory upgrades

• With the Hitachi Interconnect Edge (HIE) FRUs, there is less need for controller replacement or dummy replacement (vs. NTB on VSP Gxxx)

• With reliability improvements to the logic board, there should be even fewer controller failures

Failure part    Number of FE ports affected by failure
                R800                R900
SFP             1 port              1 port
CHA/CHB         up to 8 ports       up to 4 ports
CTL/DIMM        0                   up to 16 ports
Cluster         up to 32 ports      -
DKC/CBX         up to 128 ports     up to 32 ports     © Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-9
VSP 5000 Series Architecture and Availability - 2
Active Learning Exercise: Raise Your Hands If You Know It!

Active Learning Exercise: Raise Your Hands If You Know It!

Can you describe the new Shared Memory (SM) process and its
benefits within the VSP 5000 series?

© Hitachi Vantara Corporation 2019. All Rights Reserved.

Major Technical Differences Between VSP 5000 Series and VSP G1500/VSP F1500

In this section you will learn about the technical differences between the VSP 5000 series and the VSP
G1500/VSP F1500.

Next-Gen High-End Storage

 Flexible scale-out and/or scale-up architecture

 Maintains a single SVOS system image and global cache

 Flash-optimized, with NVMe and SAS intermix capability

 Increased Adaptive Data Reduction (ADR) scalability and performance

 High-end storage in a simpler, more resilient design

 SVOS compatible; mainframe and open systems support

 Controller blocks can be added/removed nondisruptively*
(* base controller block removal is TBD)

 Diskless nodes can be intermixed

(Diagram: the VSP G1x00/VSP F1x00 design, with front-end and back-end directors around a high-speed switch plus processors, global cache and shared memory, compared with the VSP 5000 series, where controller blocks (nodes with CPUs, memory, front end and back end) and media chassis attach through the Hitachi Accelerated Fabric high-speed switches.)

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-10
VSP 5000 Series Architecture and Availability - 2
Summary of Major Differences

Summary of Major Differences


Item G/F1500 VSP 5000 Series
System architecture • hierarchical star net • Hitachi Accelerated Fabric (HAF)1
• ASICs • Flash-optimized SVOS RF (for
example, DCT)
Scalability/specifications: • 128 cores/2TB cache • 240 cores/6TB cache (Broadwell)
• Max CPU (cores)/cache (Haswell) • Scale-up (model 5100-to-5500)2
• Upgradeability/scalability design • Highly granular scale-up • Scale-out (controller blocks)2
• Media Density • Maximum media is 15TB • 30TB Media
NVMe Drives No Yes
SCM media ready No Yes, with HDT tier-0 media recognition3
NVMe-FC front-end ready No Yes3
SAS/NVMe controller block intermix N/A Yes
NVMe encryption N/A Yes3
Max. recommended ADR effective capacity4 765TiB 6144TiB (> 6x!)
Rebuild Time Improvement N/A Reduced 80% for Flash; 20% for HDD
1. A.k.a., HIE in engineering diagrams 3. Roadmap
2. First release after GA 4. Effective capacity of the system is larger when adding/mixing non-ADR volumes
© Hitachi Vantara LLC 2020. All Rights Reserved.

Item                                              VSP G1500/VSP F1500               VSP 5000 Series
Overall system resiliency/memory architecture     Seven 9s                          Eight 9s
                                                  • Primary/secondary shared        • Primary/secondary SM with
                                                    memory (SM)                       reserved areas
                                                  • Alternate routes for x-paths    • Independent x4 switching
GAD consistency groups                            256                               1024
VMware VVol HA/DR support (for example,           No                                Yes(1)
GAD or UR)
Non-disruptive data-in-place upgrade design       No                                Yes; nondisruptive node add/delete(2)
Proprietary flash media (FMD)                     Yes. Compression and optional     Yes. Compression or optional
                                                  encryption are on the drives.     encryption is not on the drives.(3)
SAS links                                         SAS 6G 4WL x 32 path              SAS 12G 4WL x 48 path
FCoE front-end features                           Supported                         Not supported

1. Roadmap                                                        3. Encryption is supported via back-end director options
2. Non-disruptive elimination of the base controller block is TBD

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-11
VSP 5000 Series Architecture and Availability - 2
VSP 5000 Series vs. VSP G1500/VSP F1500 Summary (Major Changes Aligned to Value)

VSP 5000 Series vs. VSP G1500/VSP F1500 Summary (Major Changes Aligned to Value)

Category                   Differences: VSP 5000 Series vs. G/F1500             Value Alignment
Performance/Scalability    • More cores/cache; ADR software improvements        • Better $$$/IOP; better $$$/GB (via significantly
                           • HAF (HIE); scale-out                                 increased effective capacity with ADR)
                           • Flash-optimized (DCT, memory management)           • Lower latency (application SLAs)
Future Proof Agility       • NVMe back-end; NVMe front-end (roadmap)            • Lower latency (application SLAs)
                           • SAS/NVMe controller block intermix                 • Better $$$/IOP; better $$$/GB
                           • Customer-executed non-disruptive data migration    • Optimize use cases between SAS and NVMe
                                                                                • Quicker ROI refreshing to VSP 5000 from
                                                                                  GAD supported platforms
Resiliency                 • System architecture; memory management             • Lower risk of downtime from multiple
                           • "By 4 (x4)" interconnection architecture             component/media failures, maintenance or
                           • Rebuild time improvement                             upgrade events
Security                   • Enhanced data sanitization                         • Lower cost data confidentiality (compliance)
                           • Enhanced key management                            • Nondisruptive third-party KMS migration
                                                                                • Enhanced resiliency with third-party KMS
Data Protection            • Increased GAD consistency groups                   • More granular high-availability and disaster-
                           • HA/DR for VMware                                     recovery testing and failover
© Hitachi Vantara LLC 2020. All Rights Reserved.

Direct Command Transfer (DCT) Logic (Enhancement for Optimizing the ASIC Emulator)

 Refactoring the ASIC emulation logic reduces the microcode overhead for host I/O transactions

 The front-end I/O task can directly access the back end

(Diagram: on VSP G1x00/VSP F1x00, the FE I/O task (message receiving/sending) hands off to a BE I/O task that performs RAID configuration, single-drive access, and the SAS protocol before reaching the drive. On VSP Gx00/VSP Fx00, Panama II, and VSP 5000 (DCT), the FE I/O task integrates RAID configuration and single-drive access, and only the SAS protocol remains in the BE I/O task. Note: shown with SAS, but with NVMe this DCT logic also applies.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-12
VSP 5000 Series Architecture and Availability - 2
Hardware independence (ASIC-less)

Hardware Independence (ASIC-less)


#  ASIC feature                        How is this addressed on newer platforms?
                                       VSP G1x00/VSP F1x00   VSP Gxx0/VSP Fxx0           VSP 5000 Series
1  LR (command distribution) - Open    ASIC                  SVOS                        SVOS
2  LR (command distribution) - MF      ASIC                  (MF is N/A)                 Hyper Transfer Processor (HTB) on CHB
3  DRR (parity calculation)            ASIC                  SVOS                        SVOS
4  Inter-controller data transfer      ASIC                  Intel NTB + Intel MCH DMA   HIE DMA
   (mirroring, cross-access)
5  Inter-controller communication      ASIC                  Intel NTB + SVOS +          HIE bridge + SVOS +
   (control communication)                                   Intel MCH DMA               Intel MCH DMA

 No ASICs in the VSP 5000 series

 Hitachi Accelerated Fabric (HAF)
(the Hitachi Interconnect Edge (HIE) protocol via FPGA, plus a switched PCIe node interconnect)

 FPGAs for hardware-assist offload functions

 SVOS flash-optimized code paths (for example, Direct Command Transfer (DCT) from VSP Gxx0/VSP Fxx0)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Program Product Changes


In this section you will learn about program product changes.

Program Product List (Differences Only)

Program Product                                           VSP G1500/    VSP 5000 Series
                                                          VSP F1500     GA         Roadmap
Hitachi Cache Residency Manager Software                  Support       N/A        N/A
Hitachi Cache Residency Manager Software for Mainframe    Support       N/A        N/A
Hitachi Cache Manager Utility Software                    Support       N/A        N/A
Hitachi Compatible XRC Software                           Support       N/A        Support
Hybrid License (for VSP 5000 “h”)                         --            Support    Support
Scale-out License (for VSP 5500)                          --            Support    Support
VSP G1500 and F1500 VSD (Suez->Zeus)                      Support       N/A        N/A
All Flash Array (“G” to “F”)                              Support       N/A        N/A
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-13
VSP 5000 Series Architecture and Availability - 2
Volume Capacity Overview

Volume Capacity Overview


This section explains volume capacity.

Volumes/Capacity – Max Specification

Model                 Non-HDP:            HDP/HDT:                          Data Reduction (ADR Comp/Dedup)
                      Max LDEV number     Max DP-Vol number   Max DP pool   Max DRD-Vol          Max DRD pool
                                                              capacity      number (*2)          capacity
VSP G900/VSP F900     65,280 (64k-256)    63,232 (62k-256)    16.6 PiB      Comp only: 32,639    16.6 PiB – 0.10X (*1)
                                                                            Comp/Dedup: 32,287
VSP G1x00/VSP F1x00   65,280 (64k-256)    63,232 (62k-256)    12.5 PiB      Comp only: 32,639    12.5 PiB – 0.10X (*1)
                                                                            Comp/Dedup: 32,632
VSP 5000 Series       65,280 (64k-256)    63,232 (62k-256)    16.6 PiB      Comp only: 32,639    16.6 PiB – 0.10X (*1)
                                                                            Comp/Dedup: 32,287

*1: ‘0.10X’ = 10% of the total used capacity of DRD-Vols.
*2: The max DRD-Vol number (about 32K) is less than the max DP-Vol number because of the microcode architecture.
© Hitachi Vantara LLC 2020. All Rights Reserved.

• Point out device count with ADR

Page 3b-14
VSP 5000 Series Architecture and Availability - 2
Key Features and Discussion Points

Key Features and Discussion Points


In this section you will learn about key features and discussion points.

Key Features

 Heterogeneous module (node-pair) addition/deletion (nondisruptive)


• Growth with data-in-place
• Future-proofing
 “Refresh” modules (node-pairs) can be different CPUs, cache-sizes, and so on
 SAS and NVMe intermix by module in the system
 In-system nondisruptive migration to different media (Volume migrator,
no global-active device)

 Front-end NVMeoF (32Gbps FC) by software change (roadmap)

 Density: 30+ TB SAS media at GA; 30TB SSD


• 30TB SSD via 24 drives in 2U (x4)
© Hitachi Vantara LLC 2020. All Rights Reserved.

• NVMeoF – NVMe over Fabrics (here, Fibre Channel)

• NVMe – NVM Express (non-volatile memory host controller interface)

 Rebuild time improvement


 Bidirectional port option (Target or Bi-Directional)

 ADR with Intelligent Tiering

 HDP with Intelligent Data Placement

 LUN count per port increased to 4096

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-15
VSP 5000 Series Architecture and Availability - 2
Rebuild Time Improvement

Rebuild Time Improvement

 The drive rebuild time for an SSD or FMD is reduced to 1/5 of the current time

 The following chart is an example for SSD (current time ⇒ improved time):

              2D+2D                3D+1P                7D+1P or 6D+2P         14D+2P               Note
960GB SSD     1hr 10min (70min)    1hr (60min)          1hr 20min (80min)      2hr 10min (130min)
              ⇒ 14min              ⇒ 12min              ⇒ 16min                ⇒ 26min
1.9TB SSD     2hr 10min (130min)   1hr 40min (100min)   2hr 20min (140min)     3hr 50min (230min)   About x2 960GB SSD
              ⇒ 26min              ⇒ 20min              ⇒ 28min                ⇒ 46min
3.8TB SSD     4hr 30min (270min)   3hr 30min (210min)   4hr 50min (290min)     8hr (480min)         About x2 1.9TB SSD
              ⇒ 54min              ⇒ 42min              ⇒ 58min                ⇒ 96min
15.3TB SSD    18hr (1,080min)      14hr (840min)        19hr 20min (1,160min)  32hr (1,920min)      About x4 3.8TB SSD
              ⇒ 3.6hr              ⇒ 2.8hr              ⇒ 4hr                  ⇒ 6.4hr
30.6TB SSD    36hr                 28hr                 40hr                   64hr
              ⇒ 7.2hr              ⇒ 5.6hr              ⇒ 8hr                  ⇒ 12.8hr
© Hitachi Vantara LLC 2020. All Rights Reserved.
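The improved times in the chart are simply the current times divided by 5. A quick check against the 960GB SSD row:

```python
# Quick check of the 1/5 rebuild-time improvement: the new rebuild time is
# the old time divided by 5. Values are the 960GB SSD row of the table above.
old_minutes = {"2D+2D": 70, "3D+1P": 60, "7D+1P/6D+2P": 80, "14D+2P": 130}

new_minutes = {raid: t / 5 for raid, t in old_minutes.items()}
print(new_minutes)  # {'2D+2D': 14.0, '3D+1P': 12.0, '7D+1P/6D+2P': 16.0, '14D+2P': 26.0}
```

These match the chart's improved values of 14, 12, 16, and 26 minutes.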

Bidirectional Port – Concept (Open CHB Option)

 The “Bidirectional Port” can support the “Target”, “Initiator”, “RCU Target”
and “External” functionality within one physical port. This means that one physical port can work
as both initiator and target

 The port attribute can be set only to Target or Bidirectional

(Diagram: with dedicated-purpose ports, 4 ports are used; with bidirectional ports, 2 ports are used.)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-16
VSP 5000 Series Architecture and Availability - 2
Bidirectional Port Option - Considerations

Bidirectional Port Option - Considerations

 Effect on initiator performance of the Bi-Directional port's I/O queue depth

• Port IOPS performance of initiator I/O (actual limit values for an FC32G CHB port)

               VSP G1x00/VSP F1x00                            VSP 5000 Series
               External Port (Initiator mode only)            Bi-Directional Port (Target & Initiator mode)
I/O pattern    Initiator I/O QD=2K [/port]                    Initiator I/O QD=1K [/port]
               # of I/O requests=1K  # of I/O requests=2K     # of I/O requests=1K  # of I/O requests=2K
RD 8K IOPS     361,000               361,000                  361,000               361,000
WR 8K IOPS     220,000               225,000                  220,000               225,000
RD 512 IOPS    588,000               588,000                  588,000               467,000 (-20.6%)
WR 512 IOPS    293,000               293,000                  293,000               293,000

In the case of a Bi-Directional port, the limit performance for 512B read IOPS with 2K I/O requests decreases by about
20%. However, when the external storage is Hitachi storage, this degradation of initiator I/O is not an issue, because
the limit performance for 512B read IOPS on the target port is 400K IOPS, which is less than 467,000 IOPS.

Thus, there is no problem replacing an external port with a Bi-Directional port at a 1K initiator I/O queue depth.
© Hitachi Vantara LLC 2020. All Rights Reserved.

Discussion Points

 Hitachi Command Suite (HCS) is supported, not bundled or required!

 Flash media transitions from FMD/FMD-HDE to commodity SSDs


• SSD (SAS and NVMe) flash media; no new FMD, FMD-HDE
• Improved ADR scalability and performance
• Greater density will be available with 30TB media and 24-drives in 2U

 Encryption highlights
• SAS encryption is back-end director (EDKB)
• KMIP support (qualifications, and so on) will be same as VSP G1x00/ VSP
F1x00 at GA
• NVMe Encryption is Back-end Director (EDKBN) © Hitachi Vantara LLC 2020. All Rights Reserved.

• KMIP = Key Management Interoperability Protocol

Page 3b-17
VSP 5000 Series Architecture and Availability - 2
Licensing for VSP 5000

Licensing for VSP 5000


In this section you will learn about licensing for VSP 5000.

VSP 5000 Packaging, Licensing and Pricing Framework

 Appliance packaging with base software included; the advanced package is optional:

• Neither base nor advanced software packaging is shown as a separate line item on the quote

 All software in the advanced package is available as add-on or optional packages:

• Shown outside the appliance model in the quote

 Frame licensing for everything (no capacity licenses sent)

 Software is priced by node pair and by capacity/frame (depending on the software):

• Node pair pricing is similar to the VSP G1500/VSP F1500

 Separate Open, Mainframe, and Open/MF packages

© Hitachi Vantara LLC 2020. All Rights Reserved.

VSP 5000 Base Packages for Open and MF

Open Base Package                  MF Base Package                 Open/MF Base Package
SVOS with all Enterprise (Open)    SVOS with all Enterprise (MF)   SVOS with all Enterprise (Open, MF)
features                           features                        features
Local replication (Open)           Local replication (MF)          Local replication (Open, MF)
Dynamic Tiering (Open)             Dynamic Tiering (MF)            Dynamic Tiering (Open, MF)
Hitachi Ops Center Analyzer        Mainframe Analytics Recorder    Hitachi Ops Center Analyzer
(w/ 25 node license)                                               (w/ 25 node license)
Hitachi Data Instance Director                                     Hitachi Data Instance Director
(HDID – Storage Replication)                                       (HDID – Storage Replication)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-18
VSP 5000 Series Architecture and Availability - 2
VSP 5000 Advanced Packages for Open and MF

VSP 5000 Advanced Packages for Open and MF

Open Advanced Package            MF Advanced Package             Open/MF Advanced Package
(includes Open Base software)    (includes MF Base software)     (includes Open/MF Base software)
Remote Replication + Remote      Remote Replication + Remote     Remote Replication + RRE (Open, MF)
Replication Extended (Open)      Replication Extended (MF)
Global-Active Device (GAD)       Mainframe Essentials            Mainframe Essentials
Hitachi Ops Center Automator     Hitachi Ops Center Automator    Global-Active Device (GAD)
Hitachi Ops Center Analyzer      Nondisruptive Migration (NDM)   Hitachi Ops Center Automator
predictive analytics                                             Hitachi Ops Center Analyzer
Nondisruptive Migration (NDM)                                    predictive analytics
                                                                 Nondisruptive Migration (NDM)
© Hitachi Vantara LLC 2020. All Rights Reserved.

Optional Software Contents and Licensing

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 3b-19
VSP 5000 Series Architecture and Availability - 2
Active Learning Exercise: Raise Your Hands If You Know It!

Active Learning Exercise: Raise Your Hands If You Know It!

What makes a VSP 5000 series easily scale up to meet different


performance requirements?

© Hitachi Vantara Corporation 2019. All Rights Reserved.

Module Summary

 In this module, you should have learned to:


• Review the cache and shared memory for the Hitachi Virtual Storage
Platform 5000 Series
• Explain the MP failure design and impact of replacing CPU or memory
• Review technical differences between VSP 5000 series and VSP G1500/
VSP F1500
• Explain volumes, capacity and hardware specifications for the VSP 5000
Series
• Review licensing for VSP 5000

© Hitachi Vantara Corporation 2019. All Rights Reserved.

Page 3b-20
4. VSP 5000 Series Adaptive Data
Reduction
Module Objectives

 Upon completion of this module, you should be able to:


• Explain Adaptive Data Reduction and its functions
• Review the ADR supported platforms and requirements
• Discuss various ADR Terminology and pool requirements
• Compare ADR Inline vs Post Process
• Explain HDT Smart Tiers, garbage collection, migration, monitoring and ADR
sizing

© Hitachi Vantara LLC 2020. All Rights Reserved.

Page 4-1
VSP 5000 Series Adaptive Data Reduction
ADR and Its Functions

ADR and Its Functions

In this section you will learn about ADR and its functions.

What is Adaptive Data Reduction

 Adaptive Data Reduction (ADR) function reduces the amount of stored


data by compression and deduplication technology. It is an SVOS
program product offering

 ADR includes both controller-based capacity savings (Compression and


Deduplication) and drive based accelerated compression
(Compression)

 ADR is enabled at the LUN level. A pool can contain a mix of ADR-enabled
LUNs and LUNs with no capacity savings (non-ADR-enabled)

© Hitachi Vantara LLC 2020. All Rights Reserved.

What is Compression

 Compression takes raw data and reduces its size by applying an
algorithm. It is a software function provided by the storage controller

 The data compression function uses the LZ4 compression algorithm to
compress the data

© Hitachi Vantara LLC 2020. All Rights Reserved.

LZ4 is a “lossless” compression method, so called because the data can be uncompressed
without losing any of the original info. LZ4 offers good capacity savings and one of the highest
performing algorithms.
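A short Python sketch illustrates the lossless round-trip. The array uses LZ4; the standard library's zlib is used here purely as a stand-in, since both are lossless and the principle is identical:

```python
import zlib

def compress_block(data: bytes) -> bytes:
    """Compress a data block; lossless, so decompression restores it exactly."""
    return zlib.compress(data)

def decompress_block(blob: bytes) -> bytes:
    return zlib.decompress(blob)

# Repetitive data compresses well, as typical host data often does
original = compress_input = b"ABCD" * 2048          # an 8KB block
compressed = compress_block(original)

assert decompress_block(compressed) == original      # lossless: nothing is lost
print(f"raw={len(original)} bytes, compressed={len(compressed)} bytes, "
      f"ratio={len(original) / len(compressed):.1f}:1")
```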

What is Deduplication

 Deduplication looks for matches between the input stream and data it has already seen. It is a function provided by the storage controller. It searches the input data stream for a string of matches to data in its “dictionary” and, when it sees a match, replaces the actual data with a pointer to the match

 Deduplication watches the incoming data stream and catalogs all the data it has seen
• It keeps a fingerprint of the data (a CRC-like calculation)
• It also keeps a pointer to where the first instance was written

 As more data comes in, deduplication keeps checking the fingerprints
• If there is a match, it checks the data that was written before
• If the bytes match, dedupe updates the new data pointer to point to the match
• The new data never gets written
• This activity generates metadata that has to be managed and stored
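The fingerprint-then-verify flow above can be sketched in Python. This is a toy illustration: SHA-256 and an in-memory dictionary stand in for the array's fingerprint table and dedupe store:

```python
import hashlib

class DedupStore:
    """Toy deduplication store: fingerprint lookup, then byte comparison."""
    def __init__(self):
        self.fingerprints = {}   # fingerprint -> location of first instance
        self.blocks = []         # stand-in for physical storage

    def write(self, block: bytes) -> int:
        fp = hashlib.sha256(block).digest()       # the "fingerprint"
        if fp in self.fingerprints:
            loc = self.fingerprints[fp]
            if self.blocks[loc] == block:         # byte compare confirms the dupe
                return loc                        # duplicate data never gets written
        loc = len(self.blocks)                    # first instance: write it
        self.blocks.append(block)
        self.fingerprints[fp] = loc
        return loc

store = DedupStore()
a = store.write(b"\x01" * 8192)
b = store.write(b"\x02" * 8192)
c = store.write(b"\x01" * 8192)    # duplicate of the first block
assert a == c and len(store.blocks) == 2   # only two blocks physically stored
```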


ADR Supported Platform and Requirements Overview

This section explains the ADR supported platforms and requirements.

ADR Supported Platforms and Requirements

 Supported platforms
• Hitachi Virtual Storage Platform 5x00 arrays
• FMD, SSD and spinning drives
• HDP / HDT(*1) pools
• External storage virtualized into a pool (no matter the storage make-up)
• No MF (mainframe) support

 The capacity savings license for each array is part of SVOS

Note: (*1) Smart tiering in the lowest tier only.



 BED encryption can be used with ADR

 ADR is configured at the LUN level (compression or compression/deduplication) in a pool

 Shared memory requirements
• Compression and dedupe support is included in the base memory configuration

 ADR requires a 30% buffer in a pool to perform properly
• 10% for ADR and 20% for Hitachi Dynamic Provisioning (pool buffer to avoid running into pool-full issues)


ADR Constraints

 ADR is not supported on Hitachi Thin Image S-VOLs; for example, you can’t have ADR devices in a dedicated HTI pool

 ADR is not supported (can’t be enabled) on journal volumes

 When an SI pair is created using ADR-enabled devices, the quick restore function cannot be used

 The active flash function cannot be used for multi-tier pools

 ADR is not supported for mainframe


ADR Terminology Overview

In this section you will learn about ADR terminology.

ADR Terminology

 DRD-Vol
• A DP-VOL with the capacity savings function enabled

 DSD-Vol
• A DP-VOL that stores duplicate data. DSD volumes are automatically created when the deduplication function is enabled. A DSD-Vol is shared among the DRD-Vols in the pool

 FPT-Vol
• A DP-VOL that stores the table used to search for duplicate data. FPT volumes are automatically created when the deduplication function is enabled

[Figure: before and after capacity saving. The storage controller compresses data blocks and deduplicates matching blocks before writing them to the FMD/SSD/HDD/external storage pool.]

Note: A DP-VOL with only deduplication enabled is not supported. Dedupe is performed for data on DRD-Vols with deduplication and compression enabled in the same pool.

ADR can be applied to any DP-VOL. The available settings are compression only, or compression and deduplication. There is no deduplication-only setting.

ADR – DRD-VOL, FPT and DSD-Vol Distribution

 Dedupe Store Volume (DSD-Vol)
• A single copy of all the duplicate data blocks is stored in DSD volumes in the pool instead of in the actual volume itself

 Fingerprint Volume (FPT-Vol)
• When a SHA hash for an 8KB block is calculated, a fingerprint is generated and stored in a metadata table. Fingerprints are the first level of comparison for blocks being deduped
• If the fingerprint is not found, the block is not a dupe
• If the fingerprint is found, then a byte comparison is done to ensure it’s a dupe
• Fingerprints are placed in the FPT volumes in the pool

For each pool there will be 24 FPT volumes and 24 DSD volumes. They are automatically created in the highest CU FE. In Storage Navigator the attribute will show “Deduplication System Volume (Fingerprint)” or “Deduplication System Volume (Datastore)”.

ADR – DRD-VOL, FPT and DSD-Vol Distribution

 DRD-Vols are associated with an owning controller

 When the first DRD-Vol is created, the FPT-Vols and DSD-Vols are created

 DRD-Vols, FPT-Vols and DSD-Vols all reside in pools

[Figure: each pool contains DRD-Vols plus FPT and DSD volumes, distributed across the owning controllers of each node and module.]

DRD-Vol – Data Reduction Volume
FPT-Vol – Deduplication Fingerprint Volume
DSD-Vol – Deduplication Store Data Volume

Industry Data Reduction Terms

 Data reduction ratio
• A measure of the data reduction effect as an N:1 ratio. For instance, 2:1 means 100TB of written files fit in 50TB of physical usable capacity. If 3:1, then 100TB fits in 33.3TB physical usable
• Data Reduction Ratio = Uncompressed Size / Compressed Size
• Data Reduction Ratio = 1 / (1 - Savings Rate)

 Savings rate
• A measure of the data reduction effect, measured as “percent capacity saved”. So if the data reduction ratio was 2:1, the savings rate would be 50%. If 3:1, then the savings rate is 66.7%
• Savings Rate = 1 - Compressed Size / Uncompressed Size
• Savings Rate = 1 - 1 / Data Reduction Ratio

 Total efficiency
• The overall storage efficiency combining data reduction, thin provisioning and snapshots
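The ratio/savings conversions above can be checked with two small helper functions (a minimal sketch):

```python
def savings_rate(reduction_ratio: float) -> float:
    """Savings Rate = 1 - 1 / Data Reduction Ratio."""
    return 1 - 1 / reduction_ratio

def reduction_ratio(savings: float) -> float:
    """Data Reduction Ratio = 1 / (1 - Savings Rate)."""
    return 1 / (1 - savings)

print(savings_rate(2.0))            # 2:1  -> 0.5 (50% saved)
print(round(savings_rate(3.0), 3))  # 3:1  -> 0.667 (about 66.7% saved)
print(reduction_ratio(0.5))         # 50% saved -> 2.0 (2:1)
```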

ADR Notes

 DRD-Vol data that is compressed but not deduplicated is stored within the DRD-Vol. Fingerprints of compressed data go to the pool metadata (2% of the 10%)

 Duplicated data in all DRD-Vols is managed in a common area (the dedupe store), and each volume references the duplicated data in this common area so that duplicated data is not stored in the pool space dedicated to DRD-Vols. Only one instance remains, and it resides in the DSD-Vol

 Pages are written to the DSD-Vols and FPT-Vols in round-robin fashion and have no affinity to a DRD-Vol or CBX pair


 On VSP 5000 arrays, when you create the first DRD-Vol that has ADR attributes, the DSD volumes and FPT volumes will be created as follows
• For each pool, there will be 24 data store devices (DSD) and 24 fingerprint volumes (FPT) created, distributed across CBX pairs
• The capacity of each DSD volume is between 5.98TB and 42.7TB
• The capacity of each FPT volume is 1.7TB

What is Effective Capacity


In this section you will learn about effective capacity.

Raw vs Effective Capacity

 Raw capacity
• The physical media in the array or pool, depending on the scope of data reduction
• 8 drives of 1.92TB SSDs equals 15.36TB of raw capacity

 Usable capacity
• Capacity available after RAID data protection
• For example, RAID 6 (6+2) of 1.92TB SSDs is 11.52TB of usable capacity
• For example, RAID 5 (7+1) of 1.92TB SSDs is 13.44TB of usable capacity

 Effective capacity
• The amount of data written by, or available to, the host
• In the above RAID 6 example, if compression yields 1.5:1 and dedupe yields 2.0:1, then the effective capacity would be 34.56TB [= 1.5 × 2 × 11.52TB]
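The raw/usable/effective worked example above can be reproduced with a short calculation (a sketch; the drive size and RAID layouts are taken from the example):

```python
def usable_capacity(drives: int, drive_tb: float, data: int, parity: int) -> float:
    """Usable capacity after RAID: only the data portion of each stripe counts."""
    return drives * drive_tb * data / (data + parity)

raw = 8 * 1.92                                # 15.36 TB raw
r6_usable = usable_capacity(8, 1.92, 6, 2)    # RAID 6 (6+2) -> 11.52 TB
r5_usable = usable_capacity(8, 1.92, 7, 1)    # RAID 5 (7+1) -> 13.44 TB

# Compression 1.5:1 and dedupe 2.0:1 multiply the usable capacity
effective = r6_usable * 1.5 * 2.0
print(f"{raw:.2f} {r6_usable:.2f} {r5_usable:.2f} {effective:.2f}")
# prints "15.36 11.52 13.44 34.56"
```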

ADR Pool Requirements Overview

In this section you will learn about ADR pool requirements.

ADR Pool Requirements

 The ADR feature requires metadata that consumes 10% of pool capacity (*1)
• The capacity consumed by “metadata” for the capacity savings function is 3% of the total consumed capacity of all ADR-enabled devices (DRD-Vols). This 3% overhead comprises 2% metadata and 1% deduplication fingerprint table
• The capacity consumed by “garbage data” is 7% of the total consumed capacity of all ADR-enabled devices (DRD-Vols)
• Pool buffer space (HDP) – manage the used capacity of the pool so it is lower than the “Warning” threshold of 70%. This will prevent I/O from being degraded. If pool usage exceeds the “Depletion” threshold of 80%, or when an operation is performed while the pool is almost full, garbage collection is prioritised, which may impact performance

(*1) During periods of high write activity from the host, this capacity might increase over 10% temporarily and then return to 10% when the activity decreases

• Use cpk.hitachivantara.com for the capacity calculator

Storage Pool Usage With ADR Enabled

• The 10% covers 1% for the fingerprint table, 2% for metadata and 7% for garbage data
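As a quick sizing sketch, the planning figures on this page translate into a simple overhead breakdown. These percentages are rules of thumb from this guide, not a substitute for the cpk.hitachivantara.com calculator:

```python
def adr_pool_overhead(drd_consumed_tb: float) -> dict:
    """Pool capacity consumed by ADR housekeeping, per the 10% planning figure."""
    return {
        "metadata": drd_consumed_tb * 0.02,      # 2% metadata
        "fingerprint": drd_consumed_tb * 0.01,   # 1% dedupe fingerprint table
        "garbage": drd_consumed_tb * 0.07,       # 7% garbage awaiting collection
    }

overhead = adr_pool_overhead(100.0)              # 100 TB consumed by DRD-Vols
total = sum(overhead.values())
print(overhead, f"total={total:.0f} TB")         # total is 10 TB (10%)
```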

ADR Hitachi Dynamic Tiering Smart Tiers

This section explains ADR HDT smart tiers.

ADR HDT Smart Tier

 Tier 1 is not reduced, to ensure DRD-Vol performance

 Tier 2 (or the bottom tier) will be reduced as a post process

 Once a page becomes “hot” it is rehydrated and promoted to Tier 1

 Data reduction is always post-processed after pages are relocated to the bottom tier

 Rehydration is inline

[Figure: HDT pool with a non-reduced Tier 1 and a Tier 2 whose pages are data-reduced into the DSD-Vol.]

Note: Data reduction is applied only to the data in the lowest tier.

• Use cpk.hitachivantara.com for the capacity calculator

ADR Garbage Collection

In this section you will learn about ADR garbage collection.

Garbage Collection

 When compressed data is updated and its size changes, the original stored data in the data storage area is no longer needed. This unneeded data is called “garbage” data

 In addition, when data is invalidated (made redundant) by deduplication, garbage data is also created. Capacity is dynamically consumed based on the garbage data created by the capacity savings process and cleaned by the garbage collection process

 As valid data is copied to a new location, the mapping tables are updated accordingly. Garbage collection uses the physical and logical mapping tables to identify valid data that needs to be copied from the fragmented page to a new location

 Garbage collection monitors the percentage of garbage sectors in each compressed page. Garbage collection needs to run frequently enough to keep pace with updates and deduplication, but it is also paced to minimise interference with host I/O performance

 It is possible for the write I/O rate to exceed the rate at which garbage collection can reclaim free space. In the worst case, a pool could fill up with garbage data, even in the absence of new writes

 There are three types of limit on garbage collection throughput: garbage collection per system, garbage collection per MP unit and garbage collection per DRD-Vol. If average write throughput routinely exceeds these limits, garbage data will accumulate, and the pool will eventually fill
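The pacing concern above can be illustrated with a trivial balance check. The numbers below are hypothetical; the real limits apply per system, per MP unit and per DRD-Vol:

```python
def pool_fills(avg_write_mbps: float, gc_limit_mbps: float,
               free_tb: float, hours: float) -> bool:
    """True if garbage accrues faster than GC reclaims it and exhausts free space."""
    net_mbps = avg_write_mbps - gc_limit_mbps   # garbage growth rate (MB/s)
    if net_mbps <= 0:
        return False                            # GC keeps pace with writes
    accrued_tb = net_mbps * hours * 3600 / 1e6  # MB accumulated, converted to TB
    return accrued_tb > free_tb

print(pool_fills(avg_write_mbps=500, gc_limit_mbps=800, free_tb=5, hours=24))  # False
print(pool_fills(avg_write_mbps=900, gc_limit_mbps=800, free_tb=5, hours=24))  # True
```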


ADR Monitoring
This section explains ADR monitoring.

Monitoring

 A complete pool-level savings ratio is available

 It is strongly recommended to enable capacity savings options when the result is expected to be 20% or higher

 If the ratio is not what you expect, check the LDEV-level reporting to see if there are any DRD-Vols that are not attaining the expected ratio

 Disabling ADR will cause data to be rehydrated to its full size. The used capacity of the pool is increased by the data decompression. Before performing this operation, make sure that the pool has enough free capacity for the capacity used by the DP-Vol of the target DRD-Vol, and check array resources: this does take additional MP cycles

 The throughput per MP is the sum of the throughput of the DRD-Vols that belong to that MP

 It is always recommended to maintain MP utilization around 40% or less to allow for failover in case of MP failure. There will be spikes due to host workload characteristics and the capacity savings function, but the average should be in the 40% range

 Monitor MP utilization and cache write pending (CWP) and look for any elevated levels using Ops Center Analyzer and Performance Monitor

Monitoring Pool Window

Raidcom LDEV Metrics

 To see volume-level metrics for ADR
• raidcom get ldev -ldev_id 257 -key software_saving

C:\HORCM\etc>raidcom get ldev -ldev_id 257 -key software_saving -I102

LDEV# TLS_R TOTAL_SAVING(BLK) CMP(BLK)  DDP(BLK)
257   28.30 265359360         43260808  229937195

RECLAIM(BLK) SYSTEM(BLK) PRE_USED(BLK) POOL_USED(BLK)
796048       8634691     275079168     9891840
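A hedged sketch of turning that output into a savings percentage. The field names follow the sample above; treating savings as TOTAL_SAVING / PRE_USED is an interpretation for illustration, not documented semantics:

```python
def savings_from_blocks(total_saving_blk: int, pre_used_blk: int) -> float:
    """Percent of pre-reduction capacity saved, from raidcom block counts."""
    return 100.0 * total_saving_blk / pre_used_blk

# Figures taken from the raidcom sample output above (LDEV 257)
pct = savings_from_blocks(total_saving_blk=265359360, pre_used_blk=275079168)
print(f"{pct:.1f}% saved")   # prints "96.5% saved" for this LDEV
```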

ADR Inline vs Post Process

In this section you will compare ADR inline and post process.

Inline vs Post Process

 Inline performs the compression and deduplication synchronously with I/O for new data
• The compression processing for new data and updated data is performed synchronously with I/O
• The deduplication processing for new data is performed synchronously(*1) with I/O, whereas for updated data it is performed asynchronously with I/O

Note: (*1) If the data length is smaller than 64KB, then dedupe is performed asynchronously with I/O.

 Post-process performs the compression and deduplication asynchronously with I/O for new data; that is, it saves data in a temporary area and then performs the compression and deduplication asynchronously
• The compression processing for new data is performed asynchronously with I/O, whereas for updated data it is performed synchronously(*2) with I/O
• The deduplication processing for new data and updated data is performed asynchronously with I/O

Note: (*2) For an update to a write area for which the compress/dedupe has not yet been performed, processing is asynchronous with I/O.


ADR Sizing

In this section you will learn about ADR sizing.

ADR Calculator

 The ADR calculator is a component of CPKweb

 CPKweb can be found at https://cpk.hitachivantara.com

 First, select the model in the upper left corner

 Then click on the ADR Calculator icon in the middle of the top banner

[Screenshot callouts: set the System Type first, then click on the ADR Calculator icon.]

The cache size options in the ADR Calculator depend on the number of CBX pairs. The recommended maximum ADR capacity also depends on the number of CBX pairs.

ADR Input

[Screenshot callouts for the ADR input form:]
• Units for all capacity inputs
• Usable non-ADR capacity
• Capacity reserved for HTI snapshots
• Capacity reserved for HUR
• Click on “MAX” to enter the maximum recommended ADR effective capacity, based on the subsystem cache size
• Use 2 for the compression ratio and 2 for the deduplication ratio (guaranteed by Hitachi Vantara)
• Recommended pool depletion threshold is 80%
• Cache size options depend on the number of CBX pairs
• Choose base cache or cache extensions

• 4:1 is guaranteed by Hitachi Vantara (compression = 2; dedupe = 2)
• Use the DRE (Data Reduction Estimator) tool for real values

ADR Calculator - Results

 Stacked bar charts show a breakdown of the usable physical capacity required without ADR and after ADR

[Screenshot callouts: the total usable capacity is what you tell the DOC; the sum of the non-ADR components is the non-ADR capacity for the DOC.]

• This shows a side-by-side comparison of the non-ADR sizing vs the ADR sizing
• The Total on the bottom right is what you tell the DOC to configure

Active Learning Exercise: Jigsaw Puzzle

Module Summary

 In this module, you should have learned to:
• Explain Adaptive Data Reduction and its functions
• Review the ADR supported platforms and requirements
• Discuss ADR terminology and pool requirements
• Compare ADR inline vs post process
• Explain HDT Smart Tiers, garbage collection, migration, monitoring and ADR sizing


5. VSP 5000 Series High Availability and Storage Navigator Differences From G1x00

Module Objectives

 Upon completion of this module, you should be able to:
• Explain High Availability (HA) differences from Hitachi Virtual Storage Platform G1000/VSP G1500
• Describe HA single and two point failure scenarios
• Review differences between the VSP G1000 and VSP 5000 Series:
 DKC
 Logical devices
 Pools
 Ports


HA Differences From VSP G1000/VSP G1500

In this section you will learn about the HA differences from VSP G1000/VSP G1500.

Differences From G1000/G1500

1. VSP G1000/VSP G1500: when Cache-1A and Cache-2A fail, the system is down because SM data is stored only in them. For example, Cache-1A fails first, and if Cache-1B fails before a CE recovers Cache-1A, the system will be down.
   VSP 5500: SM data is duplicated in two controllers like Zeus, but when one of them fails, the SM data is copied to the other controller and the redundancy is recovered immediately without operation by a CE (except for the minimum controller configuration).

2. VSP G1000/VSP G1500: when two X-path cables fail, the system might be down.
   VSP 5500: even when any seven X-path cables fail, the system isn’t down.

3. VSP G1000/VSP G1500: when two DKAs fail, the system might be down.
   VSP 5500: even when three DKBs fail, the system isn’t down (except for the minimum controller configuration).

4. VSP G1000/VSP G1500: a backboard (chassis) in the DKC can’t be replaced without going offline.
   VSP 5500: a backboard (chassis) in the DKC/HSNBX can be replaced without going offline.

5. VSP G1000/VSP G1500: frontend ports aren’t blocked during cache memory maintenance.
   VSP 5500: frontend ports are blocked during cache memory maintenance.

Single Point Failure

1. Configuration: RAID5 7D+1P, RAID6 14D+2P. Failure: a backboard failure in a drive box. Phenomena: requires offline replacement. Note: very rare, because the backboard in a drive box is a “passive backboard”.

2. Failure: a connector failure of a drive box. Phenomena: requires offline replacement. Note: a connector failure hardly ever occurs.

[Figure: with 7D+1P and 14D+2P layouts, a drive box blockade causes a parity group blockade; with 6D+2P, 3D+1P and 2D+2D layouts, the PG will not be blocked when a drive box is blocked.]

Two Point Failure

3. Configuration: two alternate paths between the host and the storage. Failure: a controller, CHB, SFP or FC cable failure during an additional cache memory installation. Phenomena: the host can’t access the storage.

4. Configuration: two-controller configuration. Failure: a controller failure during an additional cache memory installation. Phenomena: the system is down and some data will be lost.

[Figure: #3 CHBs are blocked during a controller/DIMM maintenance; #4 two-controller configuration.]

Two Point Failure

These apply to all configurations:

1. Failure: two controllers fail simultaneously. Phenomena: the system is down and some data will be lost. Note: super rare, because “simultaneous” normally means “within several minutes”.
2. Failure: short-circuit failure of backboards in two DKCs. Phenomena: the system is down and some data will be lost. Note: super rare, because the backboard in a DKC is a “passive backboard”.
3. Failure: short-circuit failure of backboards in two HSNBXs. Phenomena: the system is down. Note: super rare, because the backboard in a HSNBX is a “passive backboard”.
4. Failure: thermal sensors on two DKCs incorrectly interpret that the temperature exceeds the specification. Phenomena: the system will power off. Note: super rare, because a failure of “incorrectly interpreting temperature” hardly ever occurs.
5. Failure: two PSOFF signal lines fail on a HSNPANEL. Phenomena: the system will power off. Note: super rare, because a signal line failure hardly ever occurs.

Additional details for each failure item:
#1 With two controllers failing at the same time, the system cannot rebuild shared memory
#2 Same as above
#3 If both HSNBXs go down, there is no way for the system to communicate
#4 Same as #1
#5 Power off is forced due to hardware failure

VSP 5500 and VSP 5100 – Two Point Failure

6. Configuration: RAID5 7D+1P, RAID5 3D+1P, RAID1. Failure: two drives fail in one parity group. Phenomena: the data in the parity group will be lost. Note: only when the second failure occurs before data copying to a spare drive completes.

7. Configuration: RAID5 7D+1P, RAID6 14D+2P. Failure: two enclosure boards or two PSUs fail on one drive box. Phenomena: some parity groups will be blocked.

[Figure: #7, two enclosure boards failing in one drive box blocks the parity groups spanning that box.]

8. Configuration: two-controller configuration. Failure: two controllers fail. Phenomena: the system is down and some data will be lost.

9. Configuration: two-controller configuration. Failure: two DKBs fail. Phenomena: most parity groups will be blocked.

[Figure: #9, two DKBs failing in a two-controller configuration.]

X-Path/HIE/ISW

These apply to all configurations:

1. Failure: two X-paths fail on one HIE. Phenomena: only the HIE will be blocked.
2. Failure: two HIEs fail on one controller. Phenomena: only the controller will be blocked.
3. Failure: two HIEs fail on different controllers. Phenomena: only one of the controllers will be blocked. Note: the controller which has the second failed HIE will be blocked.

[Figure: #2 two HIEs failing on one controller; #3 two HIEs failing on different controllers.]

HIE = Hitachi Interconnect Edge
ISW = Interconnect Switch

X-Path/HIE/ISW

These apply to all configurations:

4. Failure: two ISWs or two PSUs fail on one HSNBX. Phenomena: all HIEs connected to that HSNBX will be blocked. Note: a HSNBX replacement procedure can recover all failed HIEs without going offline.
5. Failure: one ISW and one X-path fail. Phenomena: the failed ISW, the failed X-path and one HIE will be blocked.

[Figure: #4 two ISWs failing on one HSNBX; #5 one ISW and one X-path failing.]

Active Learning Exercise: Jigsaw Puzzle

Jigsaw/Brainstorming

Match the following:

Configuration:
1. Two paths between the host and the storage fail.
2. VSP 5100 with a two-controller configuration fails.

Phenomena:
1. The system is down, and some data will be lost.
2. The host can’t access the storage.

Storage Navigator Differences From VSP G1x00

In this section you will learn about the Storage Navigator differences from VSP G1x00.

DKC

 VSP G1x00 and previous generations could exist with one DKC, but the VSP 5000 Series will have a minimum of two DKCs

 VSP G1x00 has the controller temperature displayed; this is not present for the VSP 5000 Series

G1x00 vs VSP 5000 Series

Logical Devices – Column Settings

 LDEV pinned status is available from Storage Navigator instead of needing to log in to the SVP

G1x00 vs VSP 5000 Series

Note: Pinned status indicates data that is still in cache and hasn’t been destaged to the devices.

Logical Devices

 In this screenshot, we can see the PIN Status column

 A new option is present to interrupt formatting for specific LUN(s)

Pools – More Actions

 In the Pools section, navigate to virtual volumes

 Select More Actions; we can now see two new options:
• Initialize Duplicate Data (Dedupe)
• Interrupt LDEV Task Format, the same as what we have seen in the Logical Device options

Ports – Column Settings

 Adapter Type is now changed to Board Type

 Biggest change: now there are only two port attributes visible, Target and Bidirectional

G1x00 vs VSP 5000 Series

Port Conditions

 The Port Condition view is like Panama: 2 boxes with 2 DKCs

Module Summary

 In this module, you should have learned to:
• Explain High Availability (HA) differences from Hitachi Virtual Storage Platform G1000/VSP G1500
• Describe HA single and two point failure scenarios
• Review differences between the VSP G1000 and VSP 5000 Series:
 DKC
 Logical devices
 Pools
 Ports

Questions to IT PRO

We are updating our 100% Data Availability Guarantee for Zeus2 and need confirmation of the following availability/resilience. Are there any conditions where:

1. VSP 5500 has an outage from a single component failure?
Yes, if “an outage” includes a failure during a controller maintenance.

2. VSP 5500 does not have an outage from a single component failure, but requires an outage (for example, a POR) to return to normal operation during/after the repair?
Yes, but only the failure of a “passive backboard” in a drive box.

3. VSP 5500 has an outage from a redundant component failure? (for example, both HIE boards on the same controller fail)
Yes. It depends on the configuration.

4. VSP 5500 does not have an outage from a redundant component failure, but requires an outage (for example, a POR) to return to normal operation during/after the repair?
No.

5. VSP 5500 has an outage from two non-redundant component failures? (for example, an HIE board on one controller and a CPU on another controller)
No.

6. We ask this one because there have been cases on Panama2 where an I-path blocked plus a controller blocked has required a POR to return to normal.
No, because there are four redundant paths between each controller, and a DKC chassis can be recovered without going offline.

6. VSP 5000 Series Security and Encryption Enhancements

Module Objectives

 Upon completion of this module, you should be able to:
• Review encryption components
• Review key management options
• Explore encryption and sanitization documentation
• Explain enhanced sanitization
• Discuss audit logging of encryption events
• Review additional security changes


Encryption

 The data-at-rest encryption feature protects your sensitive data against breaches associated with storage media

 Hardware-based Advanced Encryption Standard (AES) encryption, using 256-bit keys in the XTS mode of operation, is provided for open and mainframe systems

 Encryption can be applied to some or all supported internal drives (HDD, SSD, FMD)

 Each encrypted internal drive is protected with a unique data encryption key

 Encryption has negligible effects on I/O throughput and latency

 Encryption requires little to no disruption of existing applications and infrastructure

 Cryptographic erasure (media sanitization) of data is performed when an internal encrypted drive is removed from the storage system

Encryption Components

 Software license for Encryption License Key
• The Encryption License Key software license must be installed on the storage system

 Encryption hardware
• The data-at-rest encryption (DARE) functionality is implemented using cryptographic chips included as part of the encryption hardware. For Hitachi Virtual Storage Platform 5000 series, VSP G700/VSP F700, and VSP G900/VSP F900, encrypting back-end modules (EBEMs) perform the encryption

When encryption is enabled for a parity group, DEKs are automatically assigned to the drives in the parity group. Similarly, when encryption is disabled, DEKs are automatically replaced (old DEKs are destroyed, and keys from the free keys are assigned as new DEKs). You can combine this functionality with migrating data between parity groups to accomplish rekeying of the DEKs.

DEKs – data encryption keys

Page 6-2
VSP 5000 Series Security and Encryption Enhancements
Encryption Components

 Key management
• Managing following key types:
 Data encryption keys (DEKs): Each encrypted internal drive is protected with a
unique DEK that is used with the AES-based encryption. AES-XTS uses a pair of
keys, so each key used as a DEK which is a pair of 256-bit keys
 Certificate encryption keys (CEKs): Each encrypted back-end module or encrypted
controller requires a key for the encryption of the certificate (registration of the
EBEM/ECTL) and a key to encrypt the DEKs stored on the EBEM/ECTL
 Key encryption keys (KEKs): A single key, the KEK, is used to encrypt the CEKs
that are stored in the system
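The key hierarchy above (one KEK wrapping the CEKs, which in turn protect the per-drive DEKs) can be sketched as follows. This is a toy model for illustration only: the XOR-with-SHA-256 "wrap" and all variable names are stand-ins, not Hitachi's actual key-wrapping algorithm.

```python
import hashlib
import secrets

def toy_wrap(wrapping_key: bytes, plaintext_key: bytes) -> bytes:
    # Stand-in for real key wrapping: XOR with a SHA-256-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(plaintext_key):
        stream += hashlib.sha256(wrapping_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(plaintext_key, stream))

toy_unwrap = toy_wrap  # an XOR keystream is its own inverse

# One KEK for the whole system.
kek = secrets.token_bytes(32)

# One CEK per encrypting back-end module (EBEM), wrapped by the KEK.
cek = secrets.token_bytes(32)
wrapped_cek = toy_wrap(kek, cek)

# One DEK per encrypted drive; AES-XTS uses a pair of 256-bit keys.
dek = secrets.token_bytes(32) + secrets.token_bytes(32)  # 64 bytes total
wrapped_dek = toy_wrap(cek, dek)

assert toy_unwrap(kek, wrapped_cek) == cek
assert toy_unwrap(cek, wrapped_dek) == dek
```

The point of the hierarchy is that only wrapped keys ever leave their protection boundary; destroying a DEK (cryptographic erasure) makes the drive's data unrecoverable without touching the data itself.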



Key Management Options

 Integrated Key Management:


• Simplified within the storage system

 External Key Management:


• Support for OASIS KMIP v1.0 – v1.4 and v2.0
• Significantly expanded 3rd Party Solutions Qualified:
 All Cryptsoft qualified KMS covered
 Future qualifications completed as soon as KMS updates are made available from vendors

• Nondisruptive migration to a different Key Management Solution (KMS) made easier


• Eliminate need for KMS clustering; support geo-dispersed KMS for disaster
recovery/business continuity


The key management can be configured in a stand-alone mode (integrated key management),
or key management can be configured to use third-party key management (external key
management). When external key management is leveraged, some or all the following
functionality can be used:

• Initial and/or subsequent generation of keys used as CEKs and DEKs.

• Generation and protection of KEKs.

• Manual and automated backup of keys to a key management server (KMS).

• Restoration of keys from a key backup on a KMS.

All communications with a KMS are performed using the OASIS Key Management
Interoperability Protocol (KMIP) version 1.0 over a mutually authenticated Transport Layer
Security (TLS) version 1.2 connection. The TLS authentication is performed using X.509 digital
certificates for both the storage system and two cluster members of the KMS.
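The TLS side of the KMS connection can be sketched with Python's standard `ssl` module: TLS 1.2 as the minimum version, server verification against a CA, and a client certificate for mutual authentication. The function name and file-path parameters are hypothetical placeholders; a real KMIP client would additionally speak the KMIP protocol over this channel.

```python
import ssl

def kmip_tls_context(cafile=None, certfile=None, keyfile=None):
    """Build a client-side TLS context like the one described above.
    A real deployment supplies the KMS CA certificate (cafile) and the
    storage system's client certificate/key (certfile/keyfile)."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=cafile)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # per the text: TLS 1.2
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)  # client cert for mutual auth
    return ctx

ctx = kmip_tls_context()  # no files passed: just demonstrates the settings
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
assert ctx.verify_mode == ssl.VerifyMode.CERT_REQUIRED  # server is verified
```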


Support Specifications for Encryption License Key


Communications Encryption using TLS 1.2

Encryption Comparison – EDKBs vs SED

1. Development items – Backend encryption: EDKB (for SAS), EDKBN (for NVMe); SED: a different SED per media type or capacity
2. Encryption standard – Backend encryption: AES-256; SED: AES-256
3. Certification – Backend encryption: FIPS 140-2 (Lv.2 target); SED: FIPS 140-2 (depends on SED)
4. Supported media types – Backend encryption: HDD (SAS), SSD (SAS), SSD (NVMe), SCM; SED: HDD (SAS), SSD (SAS), SSD (NVMe) (see remarks 1 and 2)
5. I/O performance – Backend encryption: same as a non-encryption back end; SED: same as a non-encryption drive
6. Data reduction (dedup & compression) – Backend encryption: available; SED: available
7. Migrating data from a non-encryption to an encryption environment – Backend encryption: online; no new PG needs to be prepared, the existing PG can be reused by changing its attribute (Non-E PG to Enc PG); SED: online, but SEDs of at least the same capacity as the source PG must be prepared (see remark 3)

Remarks:
1. Vendors do not plan to provide SEDs for SCM.
2. With SEDs, part of the encryption specification is defined by the drive vendors, so there is a risk that a compatible drive cannot be developed or supplied due to a vendor specification change.
3. EDKBN does not require additional physical capacity to change the data attribute from non-encryption to encryption.


• DEK: Data Encryption Key. The key for the encryption of the stored data
• CEK: Certificate Encryption Key. The key for the encryption of the certificate and the
key for the encryption of DEK per drive to register DEK on EBEM or ECTL
• KEK: Key Encryption Key. The key for encrypting a key in a storage system with an
attribute other than KEK


Encryption Documentation

 Encryption License Key User Guide


Sanitization Concepts

1. The Volume Shredder software enables you to securely erase data on
volumes by overwriting the existing data to prevent restoration of the
erased data
• Volume Shredder considerations:
 Number of overwrite passes (from one to eight); at least three passes are advised
 Shredding time: the standard time required for shredding without host I/O

2. Cryptographic Erasure:
• Available as part of Data At-rest Encryption
• Data Encryption Keys (DEK) destroyed when encryption is disabled and/or
media is removed


Note: Complete data erasure can be guaranteed only for hard disk drives (HDDs). For flash
drives (SSDs and FMDs), complete data erasure (overwriting all cells including overprovisioned
cells) cannot be guaranteed. For information about data erasure for flash drives (for example,
cryptographic erasure, data eradication services), contact customer support.


Shredder Operations

1. Verify that the current shredding status for the volume is Normal.

2. Block the volume.

3. Calculate the number of overwrite passes.

4. Define the shredding conditions.

5. Shred volumes.

6. Check the shredding results in the results file.
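The overwrite idea behind steps 3 to 5 can be illustrated on an ordinary file. This is only a conceptual sketch: real shredding runs inside the storage system against blocked volumes, not through a host filesystem, and `shred_file` is a name invented here.

```python
import os
import tempfile

def shred_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents in place for the given number of passes
    (Volume Shredder supports 1-8 passes; at least 3 are advised)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite with a fresh random pattern
            f.flush()
            os.fsync(f.fileno())        # force the pass out to the media

# Demonstration on a throwaway temp file.
fd, path = tempfile.mkstemp()
os.write(fd, b"sensitive data")
os.close(fd)
with open(path, "rb") as f:
    original = f.read()
shred_file(path, passes=3)
with open(path, "rb") as f:
    shredded = f.read()
os.remove(path)
assert shredded != original and len(shredded) == len(original)
```

Note the caveat from the text: for flash media, host-visible overwrites cannot reach overprovisioned cells, which is why cryptographic erasure exists as a separate mechanism.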


Enhanced Sanitization

 Media-aligned sanitization using overwrites with verification

 Customer usable feature (for example, does not require Hitachi


personnel)

 Event log entries can be used to generate certificates of sanitization

 Multiple drives can be sanitized simultaneously

 Complies with NIST SP 800-88r1 and ISO/IEC 27040:2015 media


sanitization guidelines



Sanitization Documentation

 Shredder User Guide


Audit Logging

 Broad coverage on events that are logged

 Log entries synchronized using external time sources (NTP server)

 Both internal and external logging supported

 Support for the syslog protocol


Audit logging of encryption events. The audit log feature provides logging of events that occur
in the storage system, including events related to encryption and data encryption keys. When
the KMIP key manager is configured, the interactions between the storage system and the KMIP
key manager are also recorded in the audit log. You can use the audit log to check and
troubleshoot key generation and backup. If you enable and schedule regular encryption key
backups, the regular backup tasks are recorded in the audit log with the regular backup user
name, even if the regular backup user was not logged in when the backup was performed.


Additional Security Changes

 Customizable warning banners

 Enhancements to password management functionality

 Enhancements to digital certificate management

 Enhancements to the role-based access control (RBAC)

 FIPS 140-2 level 2 qualification ready


Active Learning Exercise: Whiteboard Drawing

Topic: Arrange the Shredder Operations in correct order.


1. Block the volume.
2. Check the shredding results in the results file.
3. Define the shredding conditions.
4. Shred volumes.
5. Calculate the number of overwrite passes.
6. Verify that the current shredding status for the volume is Normal.



Module Summary

 In this module, you should have learned to:


• Review encryption components
• Review key management options
• Explore encryption and sanitation documentation
• Explain enhanced sanitization
• Discuss audit logging of encryption events
• Review additional security changes


Questions

 More detail is needed for LUN Sanitization:


• Can it be executed by the customer?
• Is the process certified or according to any standard to meet specific
customer security concerns?
• Can Hitachi give a sanitization certificate to customers?
• Is this offered as a service?


7. VSP 5000 Series and Mainframes
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the capabilities of the Mainframe with Hitachi Virtual Storage
Platform 5000 Series components



Hitachi Vantara Solutions for Mainframe >40 Years Experience


and 14 Generations of Solutions

 Mainframe feature/function rich storage solutions


 Continued Technology Information License Agreement with IBM for
latest feature/function compatibility
 Industry leading asynchronous replication solution minimizes
bandwidth requirements and TCO
 High availability and automated failover solutions for multi data
center operational and disaster recovery
 The only tiering technology with full SMS integration
 Vendor agnostic solutions for VTL and Mainframe Cloud Tiering

Hitachi Mainframe Storage Timeline

1978 1985 1991 1993 1995 1998 2000 2002 2004 2007 2010 2014 2016 2019

7350 7380 7390 7690 7700 7700E 9900 9980V USP USP V VSP VSP G1000 VSP G/F1500 VSP 5000


• USP – Hitachi Universal Storage Platform

• USP V - Hitachi Universal Storage Platform V

Mainframe and VSP 5000 Series


VSP 5000 Changes

 Some system option modes (SOMs) moved to the options panel in Storage
Navigator
• For example, hierarchical memory usage (SOM 1050 and 1058)

 Volume emulations are the same as on previous machines

 On VSP 5100, ports are not symmetrical as on other machines, due to a
different hardware layout (CTL0 and CTL4)

 Support for z15 and the new FICON board (16SA), but currently no compression
on these boards

 SAIDs are listed in the TrueCopy for Mainframe manual MK-98RD9030



SOM - System Option Mode

Changes on VSP 5000

 Mainframe system functions


• QHA, Soft Fence and Verify Online are active by default
• SuperPAV is disabled

 SOM 484 is still not the default and needs to be set


QHA - Query Host Access


Mainframe and VSP 5000 Series

• GDPS - Geographically Dispersed Parallel Sysplex

• FIPS - Federal Information Processing Standard Publication

• MTMM - Multi Target Metro Mirror

Module Summary

 In this module, you should have learned to:


• Describe the capabilities of the Mainframe with Hitachi Virtual Storage
Platform 5000 Series components


8. VSP 5000 Series HDP-HDT
Module Objectives

 Upon completion of this module, you should be able to:


• Discuss pool configuration
• Explain HDT Tiering and smart tier
• Review LU Ownership assignment range in multi CBX
• Identify front end or backend cross IO
• Review back end optimisation and DP page placement
• Explain pool rebalance


Following acronyms are used in this module:


• HDP - Hitachi Dynamic Provisioning
• HDT - Hitachi Dynamic Tiering
• CBX - Controller chassis (box)
• ADR - Adaptive Data Reduction
• VSP – Hitachi Virtual Storage Platform
• SAS – Serial Attached SCSI
• SSD – Solid-state Drive or Solid-State Disk
• MPU – Microprocessor Unit


Pools
In this section you will learn about pools.

Pool Definitions

Pool Configuration

[Diagram: one POOL built from parity groups (PG ≒ Pool-LDEV) across Module#0, Module#1 and Module#2; each module's controllers (CTL0–CTL7) reach their own PGs through the DKBs in Node0–Node5]



 Pool crossing CBX pairs is best practice but may not be the situation for all configurations.
Need to review the configuration and customer requirements. Not crossing CBX pairs may
very well limit performance and processing power for ADR

 Not crossing CBX pairs will result in FE cross and backend cross since ownership is
distributed round robin across the MPs depending on pool configuration

 In most cases pools are recommended to span all CBX pairs to optimise flexibility and
load balancing

 Need to understand Flash page placement so RGs across the CBX pairs are equivalent
when a pool extends across CBX pairs

 One exception is mixing SSD and NVMe drives in the same pool. They are the same tier
but have performance differences


Hitachi Dynamic Tiering Overview


This section provides an overview of Hitachi Dynamic Tiering.

HDT Tiers

 HDT tiering is the same as on VSP G1x00 / VSP Gx00 – NVMe SSD is treated the
same as SAS SSD/FMD

Priority  Media
1         SCM (future support)
2         NVMe-SSD and SAS-SSD/FMD
3         SAS-HDD (15k and 10k)
4         NL-SAS
5         External Vol (High)
6         External Vol (Middle)
7         External Vol (Low)

 HDT treats NVMe and SAS SSD as the same tier, which raises the question of
whether they should be mixed in the same pool
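The priority table above can be captured as a small lookup. This is an illustrative sketch; the media-type labels used as dictionary keys are chosen here for readability, not taken from any Hitachi API.

```python
# Media priority as listed above; a lower number means a higher (faster) tier.
TIER_PRIORITY = {
    "SCM": 1,                  # future support
    "NVMe-SSD": 2,
    "SAS-SSD/FMD": 2,          # same tier as NVMe SSD
    "SAS-HDD": 3,              # 15k and 10k rpm
    "NL-SAS": 4,
    "External Vol (High)": 5,
    "External Vol (Middle)": 6,
    "External Vol (Low)": 7,
}

def order_tiers(media_types):
    """Order a pool's media into tiers, fastest first (max 3 tiers per pool)."""
    return sorted({TIER_PRIORITY[m] for m in media_types})

# NVMe and SAS SSD collapse into one tier, as the text notes.
assert TIER_PRIORITY["NVMe-SSD"] == TIER_PRIORITY["SAS-SSD/FMD"]
assert order_tiers(["NL-SAS", "SAS-HDD", "NVMe-SSD"]) == [2, 3, 4]
```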


Smart Tier

 Tier 1 is not reduced, to ensure performance

 Tier 2 (or the bottom tier) will be reduced as a post process

 Once a page becomes "hot", it is rehydrated and promoted to Tier 1

 Data reduction is always post-processed after pages are relocated to the
bottom tier

 Rehydration is inline

[Diagram: HDT pool – Tier 1 stays non-reduced; Tier 2 data passes through data reduction into a DSD Vol]


 Media with short response times are positioned in higher tiers; media with
long response times are positioned in lower tiers

 With HDT, the order of tiers is defined based on media type and rotational
speed (rpm)

 Maximum 3 tiers in one pool

 A lower tier cannot be added to a pool on which ADR is enabled

Note: In an HDT pool with ADR enabled you cannot add a lower tier,
only a higher tier.


LU Ownership
This section explains about LU ownership.

LU Ownership Assignment Range in Multi CBX

[Diagram: three DP pools, each built from one CBX pair; every pool's DP-Vols and DRD-Vols are owned within that pool's own CBX pair]

Volume type: LDEV, Pool LDEV – Assign the MPU within the same CBX pair that provides the PG, in round-robin order
Volume type: DP-Vol, DRD-Vol – Case #1, pool consists of one CBX pair: assign the MPU within the same CBX pair that provides the pool, in round-robin order


 When a pool only spans one node pair, DP/DRD volume ownership is only
assigned to MPs within that node pair. You can manually assign or move
them to MPs outside the node pair, but in general that is not a good
idea because it increases backend cross

 When a pool spans more than one node pair, DP/DRD volumes
ownership is spread across all node pairs even if there are no pool PGs
behind them. For example, if pool spans 2 node pairs but the system
has 3 node pairs then all three node pairs get volume assigned by
default. The same logic applies to DDS and FPT volumes

 Auto-assignment of devices to an MP can be managed per controller per
DKC – similar to VSP G1x00 / VSP Gxx0


[Diagram: one DP pool spanning all CBX pairs; DP-Vol and DRD-Vol ownership is spread across the whole system]

Volume type: LDEV, Pool LDEV – Assign the MPU within the same CBX pair that provides the PG, in round-robin order
Volume type: DP-Vol, DRD-Vol – Case #2, pool consists of plural CBX pairs: assign the MPU across the whole system in round-robin order


[Diagram: a DP pool built from one CBX pair while the other CBX pairs have no DKUs (PGs); those DKU-less CBX pairs still receive DP-Vol/DRD-Vol, DSD and FPT ownership, and a manual operation is required to change it]

Volume type: LDEV, Pool LDEV – Assign the MPU within the same CBX pair that provides the PG, in round-robin order
Volume type: DP-Vol, DRD-Vol – Case #1, pool consists of one CBX pair: assign the MPU within the same CBX pair that provides the pool, in round-robin order


When the pool only spans a single node pair then DP/DRD volumes ownership is only assigned
to MPs within that node pair. You can manually assign or move them to MPs outside the node
pair. In general that would not be a good idea because it increases backend cross. When a pool
spans more than one node pair, DP/DRD volumes ownership is spread across all node pairs
even if there are no pool PGs behind them. The same logic applies to DSD and FPT volumes.


[Diagram: a DP pool spanning two CBX pairs while a third CBX pair has no DKUs (PGs); that CBX pair still receives DP-Vol/DRD-Vol, DSD and FPT ownership, and a manual operation is required to change it]

Volume type: LDEV, Pool LDEV – Assign the MPU within the same CBX pair that provides the PG, in round-robin order
Volume type: DP-Vol, DRD-Vol – Case #2, pool consists of plural CBX pairs: assign the MPU across the whole system in round-robin order


LU Assignments

 LDEV, pool volumes


• Assign the Microprocessor Unit (MPU) within same CBX pair that
provides the PG as round-robin order

 DP-VOL, DRD-VOL
• Case #1 (Pool consists of one CBX pair) – Assign the MPU within
same CBX pair that provides the pool as round-robin order
• Case #2 (Pool consists of plural CBX pairs) – Assign the MPU in
whole system as round-robin order
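The two assignment cases can be sketched as a round-robin over candidate MPUs. This is a minimal illustration, assuming CBX pairs are simply numbered and that each pair exposes 4 MPUs; the function name and counts are not from any Hitachi interface.

```python
from itertools import cycle

def assign_mpus(volumes, pool_cbx_pairs, system_cbx_pairs, mpus_per_pair=4):
    """Round-robin LU ownership assignment, following the two cases above."""
    if len(pool_cbx_pairs) == 1:
        pairs = pool_cbx_pairs        # Case #1: stay within the pool's CBX pair
    else:
        pairs = system_cbx_pairs      # Case #2: whole system, round robin
    candidates = [(p, m) for p in pairs for m in range(mpus_per_pair)]
    rr = cycle(candidates)
    return {vol: next(rr) for vol in volumes}

# Case #1: pool confined to CBX pair 0 -> only pair-0 MPUs are used.
case1 = assign_mpus(["v0", "v1", "v2"], [0], [0, 1, 2])
assert all(pair == 0 for pair, _ in case1.values())

# Case #2: pool spans pairs 0 and 1 -> ownership spreads over the whole
# system, including CBX pair 2, which has no pool PGs behind it.
case2 = assign_mpus([f"v{i}" for i in range(12)], [0, 1], [0, 1, 2])
assert {pair for pair, _ in case2.values()} == {0, 1, 2}
```

Case #2 reproduces the behavior described earlier: volumes land on node pairs that hold no pool PGs, which is why manual reassignment may be needed when BE straight is the priority.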



Active Learning Exercise: Group Discussion

Topic: Gather information on LU Ownership Assignment Range in Multi CBX.


Front End or Back End Cross IO


This section explains front end or back end cross IO.

What is Front / Back End; Straight / Cross, I/O?

 FE Straight: FE I/O port and LUN-owning CTL are the same
 FE Cross: FE I/O port and LUN-owning CTL are different
 BE Straight: owning CTL issues back-end I/O to a PG in the same CBX pair
 BE Cross: owning CTL issues back-end I/O to a PG in a different CBX pair

An I/O can be a combination of:
• FE Straight; BE Straight
• FE Straight; BE Cross
• FE Cross; BE Straight
• FE Cross; BE Cross

[Diagram: front-end ports (CH) and back-end DKBs on CTL0–CTL7 across CBX Pair 0 and CBX Pair 1, illustrating straight and cross I/O paths relative to the LUN-owning controller]

 With an ASIC-less design, some consideration of cross I/O is appropriate since it has some performance impact
• Various optimizations reduce the occurrence of cross I/O, and HIE DMA offload reduces the overhead of cross I/O when
required
• Testing shows that in practice, under most conditions, the optimizations and offload are effective with little overhead
 For mainframe, FE straight is similar to FE cross because even for straight, the data has to be sent to HIE for CKD/FBA
conversion offload optimization. If the I/O is FE cross it gets the conversion while passing through HIE
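The four straight/cross combinations follow mechanically from the port's controller, the owning controller, and the PG's CBX pair. A sketch, assuming controllers are numbered so that a controller's CBX pair is `ctl // 4`; this numbering is an illustrative convention, not taken from the hardware documentation.

```python
def classify_io(port_ctl, owner_ctl, pg_cbx_pair, ctls_per_cbx_pair=4):
    """Classify an I/O as FE/BE straight or cross per the definitions above."""
    fe = "straight" if port_ctl == owner_ctl else "cross"
    owner_pair = owner_ctl // ctls_per_cbx_pair
    be = "straight" if owner_pair == pg_cbx_pair else "cross"
    return fe, be

# Port and owner on the same CTL, PG behind the owner's CBX pair.
assert classify_io(port_ctl=0, owner_ctl=0, pg_cbx_pair=0) == ("straight", "straight")
# Port on a different CTL than the owner -> FE cross.
assert classify_io(port_ctl=0, owner_ctl=1, pg_cbx_pair=0) == ("cross", "straight")
# Owner in CBX pair 1 but PG in CBX pair 0 -> BE cross.
assert classify_io(port_ctl=5, owner_ctl=5, pg_cbx_pair=0) == ("straight", "cross")
```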



Back End Optimization and DP Page Placement


This section explains back end optimization and DP page placement.

HDP Data Intelligent Placement

 HDP pool volume assignment is determined based on ownership counter per


CTL and PG layout in the pool

 Flash – New DP pages of a DP-Vol are allocated on PGs in the same CBX pair to
avoid BE cross (a BE straight configuration is the best practice when the HDP
pool is configured across modules)

 HDD – New DP pages of a DP-Vol are allocated across SAS-HDD PGs in all
modules, taking advantage of BE cross to relieve the HDD drive bottleneck and
increase performance by increasing the number of HDDs in the DP-pool PGs
across modules

 HDT is supported in combination with ADR


• Data reduction only occurs on the lowest tier



Back End Cross Optimisation (Flash Data Placement)

 If a pool span multiple CBX pairs:


 For HDD pools, pages are distributed round robin across all PGs in the pool regardless of which CBX pair the PGs
belong
• This is to optimize workload distribution across HDDs which has more benefit than minimising BE cross

 For Flash pool, pages are distributed round robin across all PGs in the pool behind the CBX pair that owns the LUN
• This is to optimize for BE straight, which has more benefit than spreading the workload across all PGs in the pool
• If there is not enough capacity on PGs behind the owning controller, some pages will have to be stored on other PGs in
the pool
[Diagram: HDD pool – pages distributed round robin across all PGs in both CBX pairs (a mix of BE straight and BE cross); Flash pool – pages placed on PGs behind the LUN-owning CBX pair (BE straight)]


• For pages in DRD VOLs (ADR enabled DP-Vols) the media dependent (HDD vs Flash)
back-end cross optimisation is used because these volumes only contain data owned by
the same LUN

• For pages in DSD VOLs the media-dependent (HDD vs Flash) back-end cross
optimization is not used; DSD Vols are spread across all PGs in a pool. This is because
within each DSD volume page there are 8K blocks referenced by multiple DRD Vols, and
those DRD Vols could be owned by any controller, so it is not practical to optimize the
page placement to be in the same module as the owning LUN


Back End Cross Optimisation With HDT vs HDP

 Typically HDP does not have a mix of HDD and flash but it is possible

 For HDT it is common to have a mix of flash and HDD (different tiers)
Pool kind | Flash (SAS, NVMe) | HDD (SAS, NL-SAS) | Method of page allocation
HDP       | ✔                 |                   | As BE straight as possible
HDP       |                   | ✔                 | Distributed across all PGs
HDP       | ✔                 | ✔                 | Distributed across all PGs
HDT       | ✔                 |                   | As BE straight as possible
HDT       |                   | ✔                 | Distributed across all PGs
HDT       | ✔                 | ✔                 | As BE straight as possible
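The allocation policy can be encoded directly from the table. The function name and return strings are illustrative; the point is the asymmetry between HDP and HDT when flash and HDD are mixed.

```python
def page_allocation_method(pool_kind, has_flash, has_hdd):
    """Return the page-allocation method for a pool, per the table above."""
    if pool_kind == "HDP":
        if has_flash and not has_hdd:
            return "as BE straight as possible"
        return "distributed across all PGs"
    if pool_kind == "HDT":
        if has_hdd and not has_flash:
            return "distributed across all PGs"
        return "as BE straight as possible"
    raise ValueError(f"unknown pool kind: {pool_kind}")

# Mixed media: HDP spreads pages, HDT favors BE straight (tiers are separate).
assert page_allocation_method("HDP", True, True) == "distributed across all PGs"
assert page_allocation_method("HDT", True, True) == "as BE straight as possible"
```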


Back End Cross Optimisation – No Pool Span

 If a pool does not span CBX pairs as shown below


• LUNs created will be assigned ownership round robin to all controllers within the CBX pair
• If the pool has any DRD volumes (ADR-enabled DP-Vols), then the pool will also have 24 DSD
and FPT volumes to support ADR
• The DSD and FPT volumes in the pool will be assigned round robin only to controllers within the
CBX pair
• Optimisation eliminates BE cross I/O, but limits the flexibility and convenience of the pool design and
load balancing

[Diagram: three pools, each confined to its own CBX pair (CBX Pair 0, 1 and 2); all back-end I/O stays BE straight]

 If a pool does not span CBX pairs as shown below


• If you manually assign or move LUN ownership to another CBX pair for load balancing, these LUNs will be 100% BE
cross

• If you manually assign DRD vol ownership to another CBX pair, then ownership of all DSD and FPT Vols for that
pool will automatically be distributed equally to all controllers in every CBX pair that owns any DRD Vols from the
pool

• Conversely, moving all DRD-Vol ownership off of a CBX pair will automatically redistribute DSD and FPT ownership
off that CBX pair

• Whether load balancing or BE straight is more beneficial depends on the situation and priority
[Diagram: LUN ownership moved to a CBX pair without pool PGs; all back-end I/O for that LUN becomes BE cross]




Back End Cross Optimisation – Pool Span

 If a pool spans multiple CBX pairs as shown below


• LUN created in a pool will be assigned MP ownership round robin to all controllers within all CBX pairs

• If the span is across all CBX pairs then load balancing is most convenient and there may be less BE cross

• If there are 3 CBX pairs and pool PGs span 2 CBX pairs, all 3 CBX pairs will have DP / DRD Vols ownership
assigned and therefore DSD and FPT Vols will also be distributed to all CBX pairs

• You can manually assign / move LUNs if you want to prioritise BE straight over ownership workload distribution.
In that case, DSD and FPT Vols will also re-distribute to only the CBX pairs with DRD Vols from the pool

[Diagram: pool spanning multiple CBX pairs; LUN ownership assigned round robin to controllers in all CBX pairs, with PGs (≒ Pool-LDEV) in every node contributing to the shared POOL]


Page Placement

 Smart allocation with flash: if the local (CBX) pool volumes fill up, page
allocations move down to the next CBX pair
• Volume ownership will not change, but new pages will be placed in PGs on the other node /
CBX pair. This leads to backend cross, which impacts flash performance. In theory, if
you knew this was happening, you could manually move the ownership and perform a
pool rebalance so that the data in the other node pair is moved and becomes backend
straight

 In a flash environment with 4 or 6 nodes, assume the device owner is node
0/1 and the admin moves the owner to node 2/3. At that time, smart allocation will
use pool volumes from those nodes as long as the pool is extended across all
nodes
• If you do a smart rebalance, the previously written data will be moved over to straight
PGs (space permitting)
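The fallback behavior described above, BE straight while local capacity lasts and spill to another CBX pair afterwards, can be sketched as a simple placement function. The data structure and names are illustrative assumptions, not the array's internal representation.

```python
def place_page(owner_pair, pgs):
    """Pick a PG for a new flash page: prefer PGs behind the owning CBX
    pair (BE straight); if they are full, spill to another pair (BE cross).
    `pgs` maps pg_id -> {"cbx_pair": int, "free_pages": int}."""
    local = [p for p, i in pgs.items()
             if i["cbx_pair"] == owner_pair and i["free_pages"] > 0]
    remote = [p for p, i in pgs.items()
              if i["cbx_pair"] != owner_pair and i["free_pages"] > 0]
    chosen = (local or remote or [None])[0]
    if chosen is not None:
        pgs[chosen]["free_pages"] -= 1
    return chosen

pgs = {"PG0": {"cbx_pair": 0, "free_pages": 1},
       "PG1": {"cbx_pair": 1, "free_pages": 5}}
assert place_page(0, pgs) == "PG0"   # BE straight while local capacity remains
assert place_page(0, pgs) == "PG1"   # local PGs full -> spill, now BE cross
```

The second allocation is exactly the "backend cross" case the text warns about, which manual ownership moves plus a rebalance can later repair.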


Pool Rebalance Overview


This section explains pool rebalance.

Pool Rebalance

 Pages in DSD and FPT will be rebalanced across all PGs for both Flash
and HDD

 For Flash, pages in DP / DRD Vols will not be rebalanced to PGs behind
the other node pair but will be rebalanced across PGs within the node
pair

 For HDD, pages in DP / DRD Vols will be rebalanced across all PGs in
the pool

 If you manually changed ownership of DP / DRD Vols to the new node


pair before the rebalance, and the pool is flash, all their pages will be
relocated to PGs in their new node pair

 DP / DRD Vol ownership is not changed during rebalance

 Pages are relocated to equally allocate the used pages of each RG in a pool
per DP / DRD Vol when adding pool volumes and performing zero data page
reclamation
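The relocation goal, equalizing used pages across the RGs in a pool, can be sketched as a donor/taker loop. This is an illustration of the balancing idea only, not the array's actual relocation algorithm.

```python
def rebalance(pg_used_pages):
    """Move pages so every RG holds roughly the same number of used pages.
    Mutates the mapping in place and returns the list of (src, dst) moves."""
    moves = []
    total = sum(pg_used_pages.values())
    target = total // len(pg_used_pages)
    donors = {p: n - target for p, n in pg_used_pages.items() if n > target}
    takers = {p: target - n for p, n in pg_used_pages.items() if n < target}
    for src, surplus in donors.items():
        for dst in list(takers):
            while surplus and takers[dst]:
                moves.append((src, dst))          # relocate one page
                surplus -= 1
                takers[dst] -= 1
                pg_used_pages[src] -= 1
                pg_used_pages[dst] += 1
            if takers[dst] == 0:
                del takers[dst]
    return moves

# Typical trigger: new pool volumes (PG1, PG2) added to a nearly full PG0.
usage = {"PG0": 90, "PG1": 10, "PG2": 20}
rebalance(usage)
assert max(usage.values()) - min(usage.values()) <= 1
```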



Module Summary

 In this module, you should have learned to:


• Discuss pool configuration
• Explain HDT Tiering and smart tier
• Review LU Ownership assignment range in multi CBX
• Identify front end or backend cross IO
• Review back end optimisation and DP page placement
• Explain pool rebalance


9. VSP 5000 Series and Replication
Module Objectives

 When you complete this module, you should be able to:


• Discuss Hitachi Thin Image and Hitachi Universal Replicator replication
enhancements
• Discuss use cases of defrag operations
• Explain replication roadmap
• Explore Global-Active Device enhancements



Hitachi Thin Image Enhancements


In this section you will learn about Hitachi Thin Image enhancements.

Thin Image Defrag

 After a Thin Image pair is deleted, resynchronized or restored, some
areas in the pool (the snapshot data area) still store unnecessary data.
These data areas are referred to as "garbage"

 Garbage can be reused only when snapshot data is stored in the same
snapshot tree and it cannot be used for other purposes

 The defrag function consolidates the areas in use, returning pages that store only
garbage data to unallocated pages. It achieves the following effects:
• The free capacity of a pool is increased
• The released pages can be used for other purposes

 Defrag function is available (MC 90-03-01-00/00 or later)



 Defrag can be performed by using CLI (CCI) only. Storage Navigator


cannot be used for defrag operations

 Defrag operation can be stopped in the middle of the operation. If the


operation is stopped in the middle, the free capacity of the pool is
increased by the pages that have been returned to unallocated pages
before it is stopped

 Use CCI (raidcom commands) to get the current amount of garbage
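The consolidation effect can be modeled with a toy page layout: live snapshot slots are packed onto as few pages as possible, and pages left holding only garbage are returned to the pool. All names here are illustrative; the real operation is driven through CCI against the snapshot tree.

```python
def defrag(pages):
    """Pack "live" slots together so pages holding only garbage can be freed.
    `pages` is a list of equal-length lists of slots: "live" or "garbage".
    Returns (packed_pages, freed_page_count)."""
    slots_per_page = len(pages[0])
    live = [s for page in pages for s in page if s == "live"]
    packed = []
    while live:
        chunk, live = live[:slots_per_page], live[slots_per_page:]
        packed.append(chunk + ["free"] * (slots_per_page - len(chunk)))
    return packed, len(pages) - len(packed)

# Three pages, each half garbage after pair deletes/resyncs.
pages = [["live", "garbage"], ["garbage", "garbage"], ["live", "garbage"]]
packed, freed = defrag(pages)
assert freed == 2                    # two pages returned as free pool capacity
assert packed == [["live", "live"]]  # live snapshot data consolidated
```

This also shows why defrag pays off only when garbage has accumulated (the guide suggests 1 GB or more): with little garbage, packing moves data without freeing many pages.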



When to Perform Defrag

 The amount of garbage is 1GB or larger

 It is effective in an environment where the number of pairs and the


differential data size change significantly

 It is not effective in an environment where the number of pairs does not
change and the amount of differential data does not change much.
This is because garbage can be reused as areas that store differential
data in the same snapshot tree; when the amount of differential data is
large, the amount of garbage is reduced even if defrag is not performed


Defrag Operations

 Defrag stops temporarily


• When the system load is high (MPB(*1) 50% and CWP 30%), defrag automatically
stops temporarily to reduce impact on I/O performance of the host. When the load is
reduced, defrag resumes automatically (The stopped job progress rate continues)
• When the storage array is powered off. When the array is powered on, defrag resumes
automatically (The stopped job progress continues)
• When a pair operation (for example resync, delete, restore) is performed, defrag is
suspended so impact on the pair operation can be reduced. After the pair operation is
completed, defrag resumes automatically (The progress rate starts from 0%)

 Defrag execution status: Stop / Processing / Suspending (Stopping)


• Progress: 0 – 100% (100% means the defrag job is complete)
Note: (*1) – MPB of the root LDEV
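The three stop conditions above differ in whether the progress rate survives the pause. A minimal sketch of that behaviour (names are illustrative, not from the product):

```python
# Models the resume rules above: a high-load pause or a power-off keeps the
# stopped job's progress rate, while a pair operation restarts it from 0%.

def progress_after_resume(progress: int, stop_reason: str) -> int:
    """Progress rate (0-100) after defrag resumes, given why it stopped."""
    if stop_reason in ("high_load", "power_off"):
        return progress      # the stopped job's progress rate continues
    if stop_reason == "pair_operation":
        return 0             # the progress rate starts again from 0%
    raise ValueError(f"unknown stop reason: {stop_reason!r}")
```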


Remote Replication
In this section you will learn about HUR replication enhancements.

HUR Replication Enhancements


 Improved Hitachi Universal Replicator (HUR) replication performance,
able to maximise the use of MP core resources on the controller and to
optimise cache and shared memory access

 DRE license renamed to Remote Replication Extended (RRE)

 All Program Products fit in basic cache option

 HUR minimum journal capacity required = 10GB

 HUR Inflow control disabled by default

 Some System Option Mode (SOM) settings move to “User SOM” so that they
can be changed from the Storage Navigator GUI (Edit Advanced System
Settings)

Replication Roadmap Overview


This section provides an overview of the replication roadmap.

Replication Roadmap

 Grow open DPVOL in HUR, TrueCopy, Global-Active Device and
ShadowImage while pairs are in suspended state. Resync required (*1) –
SVOS 9.4

 HUR support for VMware VVol for Hitachi Virtual Storage Platform 5000
and Panama II – SVOS 9.3 (GA – May)(*2)

 Support matrix
• VSP 5x00, G1x00, F1500, Fxx0, Gxx0 TC HUR GAD Replication Intermix
Matrix

Note:
(*1) – HDT and ADR are not supported
(*2) – VVol and HUR GA May. SVOS 9.3 was released recently

Active Learning Exercise: Follow the Manual

Topic: Explore support matrix.


Global-Active Device Enhancements Overview


This section explains global-active device enhancements.

Global-Active Device Enhancements

 GAD CTGs increased from 256 to 1024

 GAD support for VMware VVol


• Target GA 05/2020 – 06/2020 for VSP 5000 and VSP F900/VSP G900
• Requires update to VASA provider and Hitachi Data Instance Director S/W

 GAD recovery enhancement enables first I/O to determine which side


survives
• If both arrays simultaneously write to the quorum when remote links fail, the
typical behaviour is that the side with the smaller serial number between the
arrays is always recovered; this enhancement recovers the side that gets the
first write I/O after the communication stoppage (SVOS 9.2, Panama II and VSP 5000)


GAD Enhancements

 GAD quorum in the cloud


• Includes VSP 5000 and VSP F900/G900 (G/F1500 – 8.3.0 80-06-72-00/00
MR release)
• Support for AWS, Azure
• Host based S/W required – Fedora 29, Windows 2012 server or CentOS
Linux


 GAD migrations
• Only supports previous subsystems that support GAD (G1x00 and Panama
II)
• Quorum less (Migration only)
• Multiple quorum support options – Traditional, iSCSI and AWS cloud
• Customer deployable and fully automated by using HSA/HDID/HAD
• GAD + UR – maintain RPO/RTO if source array has HUR (MC 90-03-01-
00/00)


• HSA - Hitachi Storage Advisor

• HDID - Hitachi Data Instance Director

• HAD – Hitachi Automation Director

• RPO - Recovery Point Objective

• RTO - Recovery Time Objective


Module Review

 In this module, you should have learned to:


• Discuss Hitachi Thin Image and Hitachi Universal Replicator replication
enhancements
• Discuss use cases of defrag operations
• Explain replication roadmap
• Explore Global-Active Device enhancements

10. Hitachi Ops Center Replication
Module Objectives

 When you complete this module, you should be able to:


• Describe the replication functionality of individual Ops Center components
• Explain the requirements of Ops Center Administrator for replication
management
• Discuss basic replication management tasks, such as:
 Provisioning of volumes with local replication
 Provisioning of volumes with High Availability



Ops Center Replication Overview

Ops Center Component Replication Support


• Common Service: –
• Hitachi Ops Center Administrator: Global-Active Device (GAD) setup guidance,
Data Protection alerts, GAD volume management, Local Replication (Snap/Clone)
• Hitachi Ops Center Analyzer: E2E View: GAD PVol, SVol display; Detail View:
remote path performance
• Hitachi Ops Center Automator: Service templates for volume allocation for
High Availability, GAD setup, volume allocation with Clone/Snapshot
• Hitachi Data Instance Director: Creation of data flows to support TI, SI,
TC, UR and GAD


• TI – Thin Image

• SI – ShadowImage

• TC – TrueCopy

• UR – Universal Replicator


Hitachi Ops Center Administrator Replication


In this section, you will find information about Hitachi Ops Center Administrator replication.

Administrator Replication

 Product licenses are required in the storage systems before replication
can be used in Hitachi Ops Center Administrator
• You can launch Storage Navigator to check/install licenses
• Select Storage System, right click on gear

 Hitachi Ops Center Administrator supports volume provisioning with
• Local Replication (Clone Now, Snap (Snap on Snap, Snap Clone))
• Remote Replication (Global-Active Device)


Launch Replication Page

 From the Storage Systems  Replication Groups window you can directly
launch the replication page in Hitachi Device Manager Storage Navigator or
launch Hitachi Data Instance Director


Hitachi Ops Center Administrator Local Replication


In this section, you will find information about Hitachi Ops Center Administrator local replication.

Administrator Local Replication Overview

 Protecting volumes by cloning
• Useful for maintenance or testing of applications
• Volumes need to be attached
• Up to 3 clones of a volume are supported

 Select Volume and select “Protect Volumes with Local Replication”

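The three-clone limit above can be expressed as a simple guard. The constant reflects the slide; everything else is a hypothetical sketch:

```python
# Illustrative guard for the clone limit stated above.

MAX_CLONES_PER_VOLUME = 3  # limit from the slide

def can_add_clone(existing_clones: int) -> bool:
    """True while another 'Clone Now' copy of the volume is still allowed."""
    return existing_clones < MAX_CLONES_PER_VOLUME
```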

Administrator Local Replication

 Select “Clone Now”, enter REPLICATION GROUP NAME

 Click Submit
 The Job:
• Creates SVol
• Maps SVol to dummy HG (for
example “HID-DP-00”)
• Creates and suspends SI-Pair
• Creates a local replication
group in Administrator


 SN compared to Administrator


 Protecting volumes by creating snapshots:
• Useful for space-efficient data protection
• For already attached volumes, or in the Attach Volume dialog
• If there is more than one Snap pool, the least utilized one is used; the
pool needs to be created separately

 Select Volume and select “Protect Volumes with Local Replication”, or
select Server and “Create, Attach and Protect Volumes with Local
Replication”

 In the “Protect Volume” section of the wizard:
• Select Replication Type, Schedule, Start; Replication Group settings, Snap
Pool selection and Snap Retention Policy (number of snapshots)
• Enter Replication Group Name
• Click Submit

 Job:
• Creates and attaches volume
• Creates replication group


 Snapshots are taken if the schedule of a replication group requires it.
Then:
 A secondary volume is created and mapped to a dummy host group
 The secondary volume is assigned to the primary volume
 A snapshot group is created in the storage subsystem (Name with appendix)
 The Thin Image pair is suspended
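A Snap Retention Policy of N snapshots implies that older snapshots are pruned as new ones are taken. A minimal sketch of such pruning, with hypothetical names (the guide does not describe the deletion mechanics):

```python
# Keep only the newest `retain` snapshots; return the ones to delete.
# The snapshot list is assumed to be ordered oldest-first.

def prune_snapshots(snapshots: list, retain: int) -> list:
    if retain <= 0:
        return list(snapshots)      # retain nothing: prune all
    if len(snapshots) <= retain:
        return []                   # still within the policy
    return snapshots[:-retain]      # everything but the newest N
```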


 If you choose a Thin (DP) pool or a Snap (HTI) pool as the replication
technology, Snap on Snap (“Cascade”) will be used

 If a Snap volume was already created in an earlier version of Ops
Center Administrator, the previously used replication technology, Snap,
will continue to be used


Hitachi Ops Center Administrator Remote Replication


In this section, you will find information about Hitachi Ops Center Administrator remote
replication.

Administrator Remote Replication Overview

 You can use Administrator to provision volumes and create global-active
device pairs simultaneously

 Data Instance Director needs to be registered in Administrator


Administrator High Availability Setup

1. Hover over the gear in the Storage Systems section.

2. Click the gear in Storage Systems, then click “High Availability Setup”.


3. Select the primary and secondary storage system; the High Availability
Setup wizard shows which steps are complete/incomplete.

4. SN can be launched from the wizard to complete the missing steps.


5. Confirm that the high availability setup is completed.


Active Learning Exercise: Writing One-Minute-Paper

Topic: What are the uses and features of local and remote replication?


Administrator High Availability Setup

 Additional Data Instance Director configuration steps are required:


• A machine (known as an ISM) must be assigned that controls the Block
storage device. This node must be a Windows or RedHat Linux machine with
the HDID Client software installed.
• The storage systems for high availability must be registered in Data Instance
Director. A Hitachi Block Device Node (Storage Node) needs to be added for
that.


 When the High Availability setup and the Data Instance Director basic
configuration are completed, High Availability volumes can be provisioned
with Administrator


Administrator Remote Replication

 The wizard starts with the basic volume settings, like for local volumes


 You can add a secondary server if required (optional)


 Then select the secondary Storage System and Replication Group, or enter
the name for a new Replication Group


 Configure Volume Paths, Host group and non-preferred settings
(ALUA) for the primary storage system

Primary server (left) -> preferred paths

Secondary server (right) -> non-preferred paths


ALUA (asymmetric logical unit access) is an industry standard protocol for identifying optimized
paths between a storage system and a host. ALUA enables the initiator to query the target
about path attributes, such as primary path and secondary path.
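In the GAD pairing above, path preference follows site locality: a server's paths to its local array are preferred, and cross-site paths are not. A toy classification with hypothetical names, just to make the rule explicit:

```python
# Illustrative ALUA classification for the GAD layout above: a path is
# "preferred" when the server and the array are on the same site.

def alua_state(server_site: str, array_site: str) -> str:
    return "preferred" if server_site == array_site else "non-preferred"
```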


 Configure Volume Paths, Host group and non-preferred settings (ALUA)
for the secondary storage system


 After completing the settings, a job is created, which:


• Creates volume on the primary site
• Updates ALUA mode for the volumes
• Attaches volume to the server on primary storage
• Creates a replication pair via HDID (HDID creates and maps the volume on the
secondary site and creates the GAD copy pair)


Module Summary

 In this module, you should have learned to:


• Describe the replication functionality of individual Ops Center components
• Explain the requirements of Ops Center Administrator for replication
management
• Discuss basic replication management tasks, such as:
 Provisioning of volumes with local replication
 Provisioning of volumes with High Availability

11. Hitachi Ops Center Automator
Module Objectives

 When you complete this module, you should be able to:


• Give an overview of Hitachi Ops Center Automator
• Describe Ops Center Automator key features and architecture
• Discuss various Ops Center Automator use cases
• Recognize how to navigate the Automator dashboard
• Perform various actions through service management


Introducing Automator

Intelligent Automation
Best practices-based automation workflows

IT Service Catalog
Application-based services with abstracted
infrastructure requests

Infrastructure Services
Flexibility to create and customize services

Enabling software-defined infrastructure self-service

Storage focused intelligence + ITPA beyond storage



ITPA: Information Technology Process Automation

Automator provides solutions and benefits for data center management by customizing service
catalog and integrating other management tools.


Automator Features
This section covers Automator features.

DC Modernization With Advanced Management Software

Hitachi Ops Center Analyzer (Analyzer) and Hitachi Ops Center Automator
(Automator) support the IT admin/operator throughout the daily operation
cycle.

Eliminate risk through simple management

Orchestrated Resource Management

Service Catalog for easy consumption; Service Builder for easy customization

1. Automate: Automate delivery of any resource that has a REST API
2. Customize: Create a template with a plugin (such as Script) and configure
role-based services; integration with existing assets and 3rd party tools
via API or CLI; bunch of prebuilt plugins
3. Integrate: Integrate with IT service management tools for greater savings
4. Optimize: Reduce manual processes and free staff to focus on strategy;
Hitachi storage with Data Protection and Quality of Service (QoS) policies

Environment for Digital Business

Service Catalog

 Easily deploy services with customized settings for each user or user group
• Set and hide common items to eliminate human error and improve efficiency
• Preconfigure selectable values

 Result: Eliminate human error and improve efficiency for repeated
operations

3-step provisioning: (1) enter the number of volumes, (2) select the
capacity, (3) select the host
• (1) Create the service based on user-specific requirements, (2) deploy the
service, (3) submit the service
• Set and hide the storage system to be used for provisioning
• Different services can be deployed per user, application, usage, and so on
• Set selectable capacity values for volumes
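The three-step request above can be sketched as a payload builder in which the admin-preconfigured values constrain what the requester may enter. The field names and preset capacities below are hypothetical:

```python
# Hypothetical request builder for the 3-step provisioning flow: the user
# supplies only a volume count, a capacity from the preconfigured values,
# and a host; the storage system is fixed by the service and hidden.

PRESET_CAPACITIES_GB = (50, 100, 500)   # selectable values set by the admin

def build_request(volume_count: int, capacity_gb: int, host: str) -> dict:
    if capacity_gb not in PRESET_CAPACITIES_GB:
        raise ValueError("capacity must be one of the preconfigured values")
    return {"volumes": volume_count, "capacityGB": capacity_gb, "host": host}
```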

Simplified Workflow With HTML5

 Sophisticated HTML5 GUI


• Simple and easy usability
• Well-designed, considering
user’s action pattern
• Modern design

Note: Service Builder is not an HTML 5 based GUI.


From HDvM to Configuration Manager REST API: Automator Transition

 Automator dependencies on Hitachi Device Manager (HDvM) have been
replaced with the Hitachi Configuration Manager REST API (CM REST)

 Automator migrates service templates from HDvM to CM REST



As Hitachi Command Suite (HCS) and Device Manager (HDvM) become legacy, Automator is
migrating service templates from HDvM to the Configuration Manager REST API. These include
templates for smart provisioning, VMware, Oracle, 2DC and 3DC replication, and other
allocation workflows. These changes removed the dependency of Automator on Device Manager
(HDvM) and HCS.

Automator Configuration

 Configuration Manager (CM REST) is installed by default on the same
Ops Center server

 Configuration Manager needs to be added to the Automator

 Storage systems can be added to the Configuration Manager with the
Automator GUI

(Diagram: a management client uses the Automator GUI; Hitachi Ops Center
Automator calls the API of Hitachi Ops Center Configuration Manager on the
Ops Center management server, which in turn manages the storage systems
and application hosts.)



 Some service templates (Migration, Smart Provisioning, Host Selection)
require an additional Web Service Connection to Ops Center Administrator,
Analyzer and HDID

 SSL settings are required for some of the connections

Active Learning Exercise: Group Discussion

Topic: Discuss the features of Automator


Automator Architecture
This section provides an overview of the Automator architecture.

Hitachi Ops Center Automator

Developers, partners or customers and expert storage admins add new service
templates and customize expert knowledge; novice storage admins execute
services safely. Hitachi provides predefined templates/services that
incorporate best practices, and new custom services are provided through the
service catalog on top of the abstraction and automation engine.
Infrastructure classes range from Enterprise (Tier 1) through Midrange SAS
(Tier 2) and Midrange SATA (Tier 3) to Network Attached Storage (Tier 3).

Automator software enables storage infrastructure self-service with intelligent automated
workflows that incorporate storage management best practices. Through infrastructure
abstraction, common and repeatable storage management tasks can be simplified, improving
reliability and helping to deliver new IT services quickly to the business. The automated storage
provisioning workflow is built on the foundation of standardizing both the storage service
request and service fulfillment. Storage requests come in many forms, which can complicate the
provisioning process.
With predefined service templates, Automator enables the catalog creation of standard IT
service requests, which streamlines and simplifies new storage capacity requests. With
advanced infrastructure abstraction of virtualized storage resources, Automator can create
standard classes of tiered storage resources with customized access for multitenancy to provide
efficient storage infrastructure deployments.
Automator is application focused. It supports integrated workflows of recommended storage
configurations for leading business applications, such as Oracle and Microsoft® Exchange, as
well as hypervisors, such as VMware or Microsoft Hyper-V®. It facilitates the automation of
storage-provisioning tasks for mission critical business applications by leveraging existing best
practice service templates. These service templates can be customized to add specific
management functions or to adhere to a specific user environment. Service template input
fields can be further restricted to minimize required input and allow various administrators to
access the template, depending on expertise or privileges.

Page 11-7
Hitachi Ops Center Automator
Key Terms

Key Terms

 Service template
• A deployment blueprint for the application-based storage capacity
provisioning process to encapsulate configuration settings, instructions and
tasks

 Service
• An instance of a service template configured to work with your needs

 Task
• An instance of a service that can be scheduled to run immediately or based
on a schedule and is created when a service is submitted


• Service template

o It is a deployment blueprint for the application-based storage capacity


provisioning process. It is designed to encapsulate configuration settings,
instructions and tasks needed to automate requests such as provisioning

• Service

o It is an instance of a service template that is configured to work with your needs.


When you create a new service, you are creating a copy of the selected template
and reusing the configuration settings, tasks, and processes defined in the
template. Many services can be customized from a service template. Each
Service Instance will generate a task

• Task

o It is an instance of a service. When you submit a service, Ops Center Automator


creates a corresponding task that can be scheduled to run immediately or based
on a schedule


Service and Service Template

Many services can be customized from a Service Template, and each service
instance can be used to execute several tasks. For example, Service
Instance 1 generates Tasks 1-1, 1-2 and 1-3, while Service Instance 2
generates Tasks 2-1, 2-2 and 2-3.

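The template, service, and task relationship maps naturally onto a one-to-many data model. A toy sketch (names illustrative, not from the product) in which each submission of a service instance generates a new task:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """One service instance customized from a template."""
    name: str
    tasks: list = field(default_factory=list)

    def submit(self) -> str:
        """Submitting the service generates a new task, as described above."""
        task_id = f"{self.name}-{len(self.tasks) + 1}"
        self.tasks.append(task_id)
        return task_id
```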

Key Terms

 Infrastructure group
• Organize storage resources and associate them with services and grant
access to users

 Service group
• A collection of services associated with a user group

 User group
• A set of users with a defined level of access to the services in the service
group to which they are associated


• Infrastructure group
o Organize storage resources and enable you to associate them with services and
grant access to users. Resource groups that contain pools for storage are
assigned to infrastructure groups


• Service group

o A service group is a collection of services. A service group is associated with a
user group, and a role is assigned to give the users permission to use the services
in the service group

• User group

o A user group is a set of users with a defined level of access. User groups are
associated with service groups to enable users to access the services in the
service group

Grouping Infrastructure and Access Control

 Grouping infrastructure resources for access control
• Groups of resource groups from multiple Administrator/REST-based APIs
• View capacity per storage profile

 Access control is done by grouping and association with access privileges


Automator Use Cases


In this section, you will find information about some use cases of Automator.

Automator: Key Use Cases

Examples of solution images are:


• Smart Provisioning
• ServiceNow Integration
• 3rd Party Tools Integration
• Cloud Environment Management
• Online Migration


Smart Provisioning

Issue: Before allocation, a storage admin must select the appropriate volume
from many pools in the storage system while considering performance. When a
user requests more capacity, the storage admin must first select a volume
for the allocation from pools with differing utilization (BR = 10% to 70%)
and low free capacity.

Solution: Automator (with AI-assisted decisions) selects the appropriate
resource based on the best practice for optimized usage across the data
center. Automator selects the pool before allocation and performs the
volume allocation, so volume selection time is decreased.


ServiceNow Integration
Issue: The storage admin is responsible for checking tickets issued by the
helpdesk (alerts, capacity, speed) and for solving the problems on those
tickets with many other IT staff members, which requires many settings.

Solution: Instead of IT staff members, ServiceNow executes solution programs
that reside on the Automator, with few settings; the solutions are automated
as a service catalog.

3rd Party Tools Integration


Issue: Each infrastructure management tool has a lot of settings that are
required to manage a cloud environment, and the settings on each tool's
pages are complicated.

Solution: Automate and integrate with 3rd party cloud management platforms
using the Automator REST API: create the workflow as one service via the
REST API (co-creation).


Cloud Environment Management


Issue: The VM admin struggled to respond to a user's complaint ("VM access
is slow") and asked the storage admin for help. Because of communication
difficulties, understanding and resolving the problem took time.

Solution: Analyzer dramatically improves communication between both admins,
since both use it to monitor the storage system and see the same alerts (for
example, a capacity shortage). The storage admin can detect and solve the
problem quickly using Automator integrated with Analyzer, for example by
executing a volume allocation service.

Online Migration
Issue: Users need to configure a lot of settings for data migration. It is
complicated and carries a risk of operational mistakes. The existing
data-migration workflow requires many steps: VSM creation, path setting,
virtualization, mapping, I/O change, migration, unallocation and
unvirtualization.

Solution: Automator provides two templates for data migration, which include
the SAN zoning setting. Executing just two services, GAD setup and online
migration, performs the migration, and SAN zoning can be configured in
addition to the migration.

• In the above slide, GAD is global-active device


Active Learning Exercise: Brainstorming

Topic: Automator as a Solution


GUI Overview
This section shows how to navigate the Automator GUI.

Login Through Ops Center

After logging in to Ops Center, select Automator on the Launcher tab


Login Directly to Automator

Automator URL: https://automation-software-server-address:22016/Automation/login.htm


In a web browser, enter the Ops Center Automator URL:

http://automation-software-server-address:port-number/Automation/login.htm
Where:

• automation-software-server-address is the IP address or host name of the Ops Center


Automator server.

• port-number is the port number of the Ops Center Automator server. The default port
number is 22016.


GUI Components
Search Global tabs area

Tools

Application
pane

Navigation pane

Global monitoring bar

Global task bar


The global task bar is always visible, regardless of which window is active. Its three menus
provide access to high-level actions and online help. The menus are:
• File: Click this menu to close the application or log out
• Tools: Click and choose from the following:
o Service Builder: Open Service Builder; this option is available to Admin and
Develop users
o User Profile: Open the user profile
o Device Manager: Click to open an instance of Device Manager
o Reset Preferences: If you have changed some display settings, such as
customized dashboard layout to display your preferred reports or modified the
column settings in the Services tab, and you want to undo your changes, you
can restore the display setting to the original (default) settings. To do so, select
Tools > Reset Preferences. This action will log you out of the current session.
You need to login again to view the default settings
• Help: Click to select one of the following options:
o Select Online Help and open Help with the navigation pane displayed
o Select About to open the About Automator window to view license information


Global tabs

The Dashboard and Tasks tabs are always visible, regardless of which window is active. Access
to Services, Service Templates, and Administration tabs depends on the user role assigned. The
tabs provide access to services, tasks, and administrative functions.
• Navigation pane – This pane varies with the active tab. From the navigation pane, you
can access resources and frequently used tasks
• Application pane – This pane varies with the active tab. The application pane shows
summary information, resource objects, and details about the current task
• Global monitoring bar – This bar is always visible, regardless of which window is active.
It provides links to information about submitted tasks
• Search – This box is available on the Service, Tasks, and Service Templates tab and
provides keyword and criteria-based search functions

Instructor Demonstration

Topic: Automator Overview



Services Management – Request (Run) a Service


This section covers information on how to request a service.

Request a Service

 Submitting a service
• Runs the service by creating the tasks required to perform the service
• Select the service in Release status and click Create Request


Submit a Service

 Click Submit and View Task to submit the service


In the Submit Service Request window, in the Settings pane, configure the volume, host,
and task settings as required by the service.
• You can click Submit to submit the service immediately
• You can click Submit and View Task to submit the service and go to the Tasks tab


Review a Service

 Select the Tasks tab to review the status of the tasks related to the
service


You can verify the tasks that are associated with the submitted service, listed on the Tasks tab.

Manage Tasks

 Tasks
• Perform the function of the service
• Generated automatically when a service is submitted
• Monitored from the Dashboard tab, Global Monitoring Area, or Tasks tab

 The Tasks tab includes Tasks, History, and Debug tabs


Tasks are generated automatically when a service is submitted. The tasks in Automator
correspond with the tasks that perform functions in Hitachi Command Suite without having to
manually enter the task each time. You can monitor the progress of a task as it executes its
function through completion.
The Tasks tab includes Tasks, History, and Debug tabs:
• Tasks: Display the tasks associated with released services on the Tasks tab
• History: Include tasks that have been archived from the Tasks tab
• Debug: Display tasks that are generated from a service in debug, test, or maintenance
status. Available to users with modify (or higher) role
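Submitted tasks can also be tracked programmatically. The following is a minimal sketch, assuming Automator exposes task objects over its REST API under a path such as `Automation/v1/objects/Tasks` — both that path and the `status` field name are illustrative assumptions, so check the Ops Center Automator REST API reference for the real resource names:

```python
# Sketch of polling a submitted task; the endpoint path and the "status"
# field below are assumptions for illustration, not the documented API.
import json
from urllib.parse import urljoin

def task_status_url(base_url: str, task_id: int) -> str:
    # Build the URL for one task resource (assumed example path)
    return urljoin(base_url, f"Automation/v1/objects/Tasks/{task_id}")

def parse_task_status(body: str) -> str:
    # Pull a status field out of a JSON task representation
    return json.loads(body).get("status", "unknown")
```

An actual client would GET this URL with credentials and loop until the task reports completion or failure.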


Services Management – Create a Service


This section helps you to create a service.

Service Creation

 Create a service (log in with the Service Admin, System Admin, or Developer role)


1. Log in with the Service Admin, System Admin, or Developer role.

2. On the Services tab, in the Services pane, click Create to open the Select Service
Template window.


Create a Service

 Select a template from the Service template tab and click Create Service

3. In Service template view, select a template to open the service template preview.

4. Click Create Service to open the Create Service window.


 Enter service details and click Save and Close


In the Settings pane of the Create Service window, enter the following information, which is
summarized in the General Settings area of the Navigation pane:
• Name of the service.
• Description of the service.
• State: Select Test for new services to allow only users in the Admin, Develop, or Modify
role to submit the service.
• Tags: Specify one or more tags for the service (to a maximum of 256 characters). The
tags you select for the service also apply to the tasks generated by the service.
• Service Group: Select the service group of users who can access the service.
• Service Template: The template on which the service is based. Click the template name to
open the Template Preview, which includes detailed information about the template.
In the Template Preview, you can click View Flow to open the flow window for the
template.
• Expand Advanced Options and select the options you want:
o Scheduling Options:
 Immediate: Run the service when it is submitted
 Scheduled: Run the service once
 Recurrence: Run the service multiple times


o Display Flow Detail for Submit User: Select to show the details of the service to
the service user

• In the Navigation pane, click each settings group and configure the required and
optional parameters. You can also navigate through the settings groups using the links
at the bottom of the Settings pane. You can choose to retain default settings from the
service or template you started with. For Volume Settings, you can choose whether to
allow users to change certain settings or to hide them altogether.

• After configuring the settings, do one of the following:

1. Click Preview to open a view of the service as it would appear to users. Then click
Save and Close to save the service.

2. Click Cancel to close the window without saving any changes.

 Verify the new service in the Services pane under the Services tab
• New services are created in Test status



Instructor Demonstration

Topic: Create a Service


Service Management – Service Builder


This section explains the Service Builder.

Service Builder


 Service templates are based on plug-ins that serve as the building blocks for running scripts
• Modify service templates to fit each customer's data center operations or environment
• Create new plug-ins (which can be used as steps in service templates) from your own existing automation scenarios (implemented in scripts)
• Service templates and plug-ins can be linked together as a sequence of steps that dictate the flow of operations


Automator provides canned, built-in services based on best practices, as well as general
orchestration plug-ins such as calling a REST API, invoking a CLI, transferring a file to a
remote server, and sending an email notification.
Users can also create or customize their own service templates to fit their environment,
operation policy, and workflow by reusing existing homegrown scripts.
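As a sketch of what such a homegrown script might look like, the example below reads its inputs from arguments and reports a result as JSON on stdout. The argument names and the JSON output contract are assumptions for illustration, not an Automator requirement:

```python
# Illustrative homegrown script of the kind a custom plug-in step could wrap.
# Argument names and output shape are assumptions for this sketch.
import json
import sys

def allocate(host: str, size_gb: int) -> dict:
    # Placeholder for the real provisioning work (CLI call, REST call, and so on)
    if size_gb <= 0:
        raise ValueError("size_gb must be positive")
    return {"host": host, "size_gb": size_gb, "result": "simulated"}

if __name__ == "__main__":
    host, size_gb = sys.argv[1], int(sys.argv[2])
    # stdout can be mapped to plug-in output properties; a non-zero
    # exit code would mark the plug-in step as failed
    print(json.dumps(allocate(host, size_gb)))
```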

Automator Video – Create Service Template

Automator video about creating Service Templates (15 minutes)



Module Summary

 In this module, you should have learned to:


• Give an overview of Hitachi Ops Center Automator
• Describe Ops Center Automator key features and architecture
• Discuss various Ops Center Automator use cases
• Recognize how to navigate the Automator dashboard
• Perform various actions through service management


Appendix
Let’s learn more.

Smart Provisioning Overview


This section provides an overview of Smart Provisioning.

Smart Provisioning
 Automator selects provisioning resources based on built-in best practices and user-specified policies for optimized resource usage across the data center

[Figure: Smart provisioning combines user policies and built-in best practices to select data center resources (SAN, main site, DR site) against performance, availability, and optimization requirements.]



Smart Provisioning Overview

 Storage provisioning is based on best practices using an intelligent engine

Example service request:
• Infrastructure Group: Production
• Database Volumes: Tier 1
• Log Volumes: Tier 3
• Host: OrAct_153


• Automates the array and pool selection process during provisioning

o Intelligent engine process

 Requested storage profile

 Current utilization

 Current performance

 Requested set of infrastructure

o Manages and maximizes the utilization of the infrastructure based on best practices

o Removes the need for understanding of infrastructure details


Smart Provisioning (Allocate Volumes)

 Smart provisioning is an intelligent engine for volume allocation based on best practice
• Number of volumes and capacity for each usage, based on best practice for each application
 Example: 1200GB x 6 volumes for DB data + 600GB x 4 volumes for DB log
• Intelligent pool selection (tier, capacity, or performance)
• Supports Microsoft Exchange, Oracle Database, XenDesktop, SQL Server, and Generic Application
• Supports “all-flash” drives
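The allocation example above can be expressed as a small capacity plan. The layout table below is purely illustrative of the 6 x 1200 GB data plus 4 x 600 GB log shape described on the slide, not Automator's internal data:

```python
# Total the example best-practice layout
# (6 x 1200 GB DB data volumes + 4 x 600 GB DB log volumes).
def plan_capacity(layout):
    """Return (total GB, per-usage subtotals) for a volume layout."""
    totals = {usage: count * size_gb for usage, (count, size_gb) in layout.items()}
    return sum(totals.values()), totals

oracle_layout = {"db_data": (6, 1200), "db_log": (4, 600)}
total_gb, by_usage = plan_capacity(oracle_layout)
# total_gb -> 9600; by_usage -> {"db_data": 7200, "db_log": 2400}
```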

12. Migration Capabilities
Module Objectives

 When you complete this module, you should be able to:


• Review the migration capabilities



Migration Capabilities NDM – UVM / GAD


In this module you will review the migration capabilities of nondisruptive migration (NDM) with
Hitachi Universal Volume Manager (UVM) and global-active device (GAD).

Migration Capabilities

• USP VM - Hitachi Universal Storage Platform VM

• VSP – Hitachi Virtual Storage Platform

• HUS VM - Hitachi Unified Storage VM

• UVM - Hitachi Universal Volume Manager

• HCS – Hitachi Command Suite

• HSA - Hitachi Storage Advisor


UVM NDM Migrations

 UVM/HTsM – using HCS/HSA/CLI

 TC/HUR – supported arrays: VSP G1x00, VSP G350/G370/G700/G900, VSP/HUS VM

 Nondisruptive migration (NDM) – source VSP/HUS VM only
• Scripts are available for VSP 5000 NDM and support an ADR target
• Can maintain RPO/RTO if the customer has HUR or TC on the source array. Validate that the bandwidth can support the additional replication, as each migrating host requires double bandwidth to maintain RPO/RTO
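The double-bandwidth point can be illustrated with a rough sizing helper. The write rates below are made-up example numbers; a real assessment would use measured change rates per host:

```python
# Rough illustration: while a host migrates with TC/HUR kept active, its
# writes are replicated twice (existing replication plus the migration copy).
def required_link_mbps(write_mbps_per_host, migrating_hosts):
    """Bandwidth needed while the listed hosts replicate to two targets."""
    steady = sum(write_mbps_per_host.values())
    extra = sum(write_mbps_per_host[h] for h in migrating_hosts)  # second copy
    return steady + extra

writes = {"hostA": 80, "hostB": 40}
# Migrating hostA doubles its 80 Mbps share: 80 + 40 + 80 = 200 Mbps
```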


• HTSM - Hitachi Tiered Storage Manager

• HUR - Hitachi Universal Replicator

• RPO - Recovery Point Objective

• RTO - Recovery Time Objective

• ADR - Adaptive Data Reduction


GAD NDM Migrations

 GAD NDM – VSP G1x00, VSP G370/VSP G700/VSP G900, VSP F370/VSP F700/VSP F900
• Scripts are developed (similar to the UVM NDM process)
• ADR is supported
• Traditional quorum, internal loopback, and quorum-less
• No capability to maintain RPO/RTO for TrueCopy; only HUR is supported (*1)


(*1) – The GAD scripts do not have the functionality to configure HUR replication on migration
target arrays


 GAD HAD/HDID – ITPro is enhancing the HAD template so that it can be used in the field (requires HDID)
• Term keys can be provided for customers who do not have GAD, HAD, or HDID
• ADR (*2) is not supported if the source volume is non-ADR


• (*2) – ITPro is enhancing the template to include ADR

• HAD – Hitachi Automation Director

• HDID – Hitachi Data Instance Director


Module Summary

 In this module, you should have learned to:


• Review the migration capabilities


13. VSP 5000 Series SOM Changes
Module Objectives

 Upon completion of this module, you should be able to:


• Review SOM changes and new SOMs
• Explain the removed SOMs



SOM Changes

 SOMs can be set with raidcom

 Public SOM changes:
• 4 new SOMs
• 2 meaning changes
• 27 removed

 Refer to the document `Virtual Storage Platform 5100/5500 SOM, HMOs, function switch SET UP Guide`

 https://teamsites.sharepoint.hds.com/sites/ECCP/Public/SitePages/Home.aspx
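As a hedged sketch of what setting a SOM from CCI could look like, the helper below only assembles the argument list rather than running anything. The `raidcom modify system_opt` syntax shown is a best-effort recollection, so verify every option name against the current Command Control Interface reference before use:

```python
# Assemble (not run) a raidcom command line to toggle a SOM.
# All option names below are assumptions to be checked against the CCI guide.
def som_command(mode_id: int, enable: bool, instance: int = 0) -> list:
    return [
        "raidcom", "modify", "system_opt",
        "-system_option_mode", "system",
        "-mode_id", str(mode_id),
        "-mode", "enable" if enable else "disable",
        "-I{}".format(instance),  # CCI instance number
    ]

# som_command(1097, True) yields the argument list for enabling SOM 1097
```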


New Default Settings on VSP 5000

VSP – Hitachi Virtual Storage Platform


New SOMs

 SOM1097 (Class: Public, Default: OFF)
• (Already available on HM800 and HM850.)
• To reduce maintenance work, this mode prevents the warning LED from blinking when SIM=452XXX, 462XXX, 3077XY, 4100XX, or 410100 is reported
 ON: The warning LED does not blink
 OFF: The warning LED blinks


 SOM1118 (Class: Public, Default: OFF)
• (Already available on HM800 and HM850.)
• This mode is used to disable the ENC reuse function for customers who prefer replacement rather than reuse when a failure occurs in the expander chip mounted on a controller board (CTLS, CTLSE) or an ENC board
 ON: The reuse function does not work; SIM=CF12XX is reported and the ENC is blocked
 OFF: The reuse function works



New SOM 1168


New SOM 1169



SOM 868 Meaning Changed

 SOM868 (Class: TS, Default: ON)


• The default setting changes from OFF on RAID800 to ON on RAID900 because SOM868 is
set to ON at the factory for newly shipped storage systems, even for R800
• ON:
 If RAID type of a local (internal) VOL is RAID1, the RMF report displays “RAID-10”
 If the VOL is an external VOL, the RMF report displays “RAID-10” as well

• OFF:
 If RAID type of a local (internal) VOL is RAID1, the RMF report displays “RAID-5”
 If the VOL is an external VOL, the RMF report displays “RAID-5” as well



SOM 1115 Meaning Changed

 When this mode is set to ON, data is initialized without using metadata at LDEV format for a virtual
volume with capacity saving enabled

 The default setting changes from OFF on RAID800 to ON on RAID900 because, on R900,
formatting without the metadata is faster than formatting with it

 R800: Only the “Comp only” VOL formats the data without using the metadata
R900: Both the “Comp only” and “Comp/Dedup” VOL formats the data without the metadata

Mode 1115 = ON:


When LDEV format is performed for a virtual volume whose capacity saving setting is Compression, the data is initialized without
using the metadata.

Mode 1115 = OFF (default):


When LDEV format is performed for a virtual volume whose capacity saving setting is Compression, normal formatting is performed,
but if one of the following conditions is met, the data is initialized without using metadata.
- There is a pinned slot.
- The capacity saving status is “Failed”.
- The virtual volume is blocked (Normal restore cannot be performed).

Note:
1. The mode is applied to recover a blocked pool volume in a pool to which a virtual
volume whose capacity saving setting is Compression belongs. For the setting timing,
refer to the procedure for blocked pool volume recovery in the Maintenance Manual.
2. The processing time increases with increase in pool capacity. (*1)
3. Do not change the mode setting during LDEV format for a virtual volume whose
capacity saving setting is Compression. If the setting is changed, the processing cannot
be performed correctly and may end abnormally depending on the timing.
4. The mode is effective only for LDEV format for a virtual volume whose capacity
saving setting is Compression, so there is no side effect on user data, but the
processing may take more time than when the mode is set to OFF, depending on the
pool capacity. Therefore, do not use the mode for cases other than pool volume
blockage recovery.
• *1: Estimate of processing time
• Processing time (minutes) = (pool capacity (TB) / 40) + 5
• If the result of dividing pool capacity by 40 has decimal places, round it up to an
integral number.
• The processing finishes early if there is less capacity of allocated pages.
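The estimate in (*1) can be written out directly, with the division rounded up as specified. It is an upper bound — as noted, processing finishes early when fewer pages are allocated:

```python
# Processing time (minutes) = ceil(pool capacity (TB) / 40) + 5,
# per the estimate above.
import math

def format_time_minutes(pool_capacity_tb: float) -> int:
    return math.ceil(pool_capacity_tb / 40) + 5

# A 100 TB pool: ceil(100 / 40) + 5 = 3 + 5 = 8 minutes
```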


SOMs Removed

 A total of 27 public and TS SOMs were removed
• Some target functions of the SOMs are not available or not needed on RAID900 (for example, FMC, DBV, version down, and so on)
• Some functions are provided by default
• Some functions are available through another UI without SOMs (for example, user SOM, GUI setting)


Removed SOMs

No SOM Function
1 SOM218 <The target function is not available.>
DB Validation Enabler SIM option for limited purpose
2 SOM219 <The target function is not available.>
DB Validation Enabler SIM option for limited purpose
3 SOM292 <The target phenomenon doesn’t occur.>
Issuing OLS when Switching Port:
In case the mainframe host (FICON) is connected with the CNT-made FC switch (FC9000
etc.), and is using along with the TrueCopy S/390 with Open Fibre connection, the
occurrence of Link Incident Report for the mainframe host from the FC switch will be
deterred when switching the CHT port attribute (including automatic switching when
executing CESTPATH and CDELPATH in case of Mode114=ON).
Mode292=ON: When switching the port attribute, issue the OLS (100ms) first, and then
reset the Chip.
Mode292=OFF (default): When switching the port attribute, reset the Chip without issuing
the OLS.


4 SOM448 <This function is available by another UI.>
Mode 448 = ON:
After a physical path failure (such as path disconnection) is detected, a mirror is split (suspended) one
minute after the detection. On MCU side, the mirror is suspended one minute after read journal commands
from RCU stop. On RCU side, the mirror is suspended one minute after read journal commands fail.
Mode 448 = OFF (default):
After a physical path failure (such as path disconnection) is detected, a mirror is split (suspended) if the path
is not restored within the path monitoring time set by the mirror option.
5 SOM449 <This function is available by another UI.>
This mode is used to enable and disable detection of communication failures between MCU and RCU.
The default setting is ON.
Mode 449 = ON
When a physical path failure is detected, the pair is not suspended. On MCU side, checking read journal
command disruption from RCU is disabled, and monitoring read journal command failures is disabled on
RCU side.
Mode 449 = OFF
When a physical path failure is detected, the pair is suspended after the path monitoring time set by the
mirror option has passed or after a minute. Detecting communication failures between MCU and RCU is
enabled. When the mode is set to OFF, the SOM448 setting is enabled.



6 SOM466 <This function is available by another UI.>


Universal Replicator recommends the line band of 100 Mbps or more for the path between Main and Remote.
However, when a customer uses the line band of around 10 Mbps, operation on UR cannot be properly
processed. As a result, many retries occur and UR may suspend. This mode is provided to guarantee the line
band of at least 10 Mbps for proper system operation.
Mode ON: The line band of 10 Mbps or more is available. The JNL read is performed with 4-multiplexed read
size of 256 KB.
Mode OFF: The same as the conventional operation. The line band of 100 Mbps or more is available. The
JNL read is performed with 32-multiplexed read size of 1 MB by default.
7 SOM495 <The target function is not available.>
That the secondary volume where SVol Disable is set means the NAS file system information is imported in
the secondary volume. If the user has to take a step to release the SVol Disable attribute in order to perform
the restore operation, it is against the policy for the guard purpose and the guard logic to have the user
uninvolved. In this case, in the NAS environment, Mode 495 can be used to enable the restore operation.
Mode 495 = ON:
The restore operation (Reverse Copy, Quick Restore) is allowed on the secondary volume where SVol
Disable is set.
Mode 495 = OFF (default):
The restore operation (Reverse Copy, Quick Restore) is not allowed on the secondary volume where SVol
Disable is set.



8 SOM530 <This function is available by another UI.>


When a UR for z/OS pair is in the Duplex state, this option switches the display of Consistency Time (C/T)
between the values at JNL restore completion and at JNL copy completion.
Mode 530 = ON:
C/T displays the value of when JNL copy is completed.
Mode 530 = OFF (default):
C/T displays the value of when JNL restore is completed.
9 SOM664 <This function is provided by default.>
This mode changes SPM upper limit control method for WWN control.
In a host configuration where the queue depth is greater than 8, the multi-path software is used, or the
striping function is used, traffics might be controlled much lower than the upper limit set for non-prioritized
WWNs. The mode can be used to improve the upper limit control method.
Mode 664 = ON:
I/Os are controlled by managing the total number of I/Os (controlling the upper limit by monitoring the
performance of non-prioritized WWN in real time).
Mode 664 = OFF:
The current upper limit control (assigning upper limit in advance based on the non-prioritized WWNs
performance monitored immediately before) is applied for I/O control.
To set an upper limit for the traffic of non-prioritized WWNs for the first time, setting the mode to ON is
recommended.


10 SOM696 <The target function is not available.>
This mode is available to enable or disable the QoS function.
Mode 696 = ON:
QoS is enabled. (In accordance with the Share value set to SM, I/Os are scheduled. The Share value setting
from RMLIB is accepted)
Mode 696 = OFF (default):
QoS is disabled. (The Share value set to SM is cleared. I/O scheduling is stopped. The Share value setting
from host is rejected)
11 SOM791 <This function is available by another UI.>
This mode enables multiple JOBs of ShadowImage Resync--Normal Copy to be executed.
Mode 791 = ON:
Up to 24 JOBs of Resync--Normal Copy are executed at a time.
Depending on ShadowImage option setting, the maximum number of JOBs for a pair varies. For details, see
the "SOM791" sheet.
Mode 791 = OFF (default):
<R600, R700, HM700, R800 earlier than 80-05-41, HM800 earlier than 83-04-41>
A resync copy job is performed for one pair (default).
<R800 80-05-41 and later, HM800 83-04-41 and later>
A resync copy job is performed for one pair (default), but if local replica option #26 is set to ON, resync copy
jobs can be performed with the same multiplicity as those when SOM791 is set to ON. (see the "SOM791"
sheet)


12 SOM857 <The target phenomenon doesn’t occur.>
This option enables or disables to limit the cache allocation capacity per MPB (RAID700/RAID800) or MPU
(HM700/HM800) to within the prescribed capacity (*) except for Cache Residency.
*: 128GB (RAID700/HM700/HM800 H model), 256GB (RAID800), 64GB (HM800 M model), 16GB (HM800 S
model).
Mode 857 = ON:
The cache allocation capacity is limited to within the prescribed capacity.
Mode 857 = OFF (default):
The cache allocation capacity is not limited to within the prescribed capacity.
13 SOM897 <It is difficult to apply this function to RAID900.>
By the combination of SOM897 and 898 setting, the expansion width of Tier Range upper I/O value (IOPH) can
be changed as follows.
Mode 897 = ON:
SOM898 is OFF: 110%+0IO
SOM898 is ON: 110%+2IO
Mode 897 = OFF (Default)
SOM898 is OFF: 110%+5IO (Default)
SOM898 is ON: 110%+1IO
By setting the SOMs to ON to lower the upper limit for each tier, the gray zone between other tiers becomes
narrow and the frequency of page allocation increases.


14 SOM898 <It is difficult to apply this function to RAID900.>
By the combination of SOM898 and 897 setting, the expansion width of Tier Range upper I/O value (IOPH) can
be changed as follows.
Mode 898 = ON:
SOM897 is OFF: 110%+1IO
SOM897 is ON: 110%+2IO
Mode 898 = OFF (Default):
SOM897 is OFF: 110%+5IO (Default)
SOM897 is ON: 110%+0IO
By setting the SOMs to ON to lower the upper limit for each tier, the gray zone between other tiers becomes
narrow and the frequency of page allocation increases.
15 SOM1015 <This function is available by another UI.>
When a delta resync is performed in TC-UR delta configuration, this mode is used to change the pair status to
PAIR directly and to complete the delta resync.
Mode 1015 = ON:
The pair status changes to COPY and then PAIR when a delta resync is performed in TC-UR delta configuration.
Mode 1015 = OFF (default):
The pair status changes directly to PAIR (not via COPY) in TC-UR delta configuration.


16 SOM1046 <The target function is not available.>
To enable connection of Brocade 8G FCSW in mode=3.
Mode 1046 = ON:
Connection of Brocade 8G FCSW with firmware Ver6.X.X series is enabled.
Mode 1046 = OFF (default):
Connection of Brocade 8G FCSW with firmware Ver7.X.X series is enabled.
17 SOM1047 <The target phenomenon doesn’t occur.>
This mode can switch to support or not to support zHPF enhanced functions.
Mode 1047 = ON:
The storage system returns “not supported” for each enhanced function for Read Feature Codes from
channels, but can accept zHPF enhanced I/Os even the mode is ON.
Mode 1047 = OFF
The storage system returns “Support” for each enhanced function.


18 SOM1050 <This function is available by another UI.>
This mode enables creation of pairs using user capacity in excess of 1.8 PB per system by managing
differential BMP in hierarchical memory for pair volumes whose capacity is 4 TB (OPEN) or 262,668 Cyl
(Mainframe) or less.
Mode 1050 = ON:
For pair volumes of 4 TB (OPEN)/262,668 Cyl (Mainframe) or less, differential BMP is managed in
hierarchical memory that performs caching to CM/PM using HDD as a master and enables creation of pairs
using user capacity in excess of 1.8 PB per system.
Mode 1050= OFF (default):
For pair volumes of 4TB (OPEN)/262,668 Cyl (Mainframe) or less, differential BMP is managed in SM as
usual so that the user capacity to create pairs is limited to 1.8 PB per system. Also, differential MPB
management can be switched from the hierarchical memory to SM by performing a resync operation for pairs
whose volume capacity is 4 TB (OPEN)/ 262,668 Cyl (Mainframe) or less.


19 SOM1058 <This function is available by another UI.>
This mode can change differential BMP management from SM to hierarchical memory so that the number of pairs to be created on
a system and user capacity used for pairs increase.
- For Mainframe systems, all pairs can be managed in hierarchical memory so that pairs can be created by all LDEVs.
- For OPEN systems, pairs that can only be managed in SM use SM so that the number of pairs that can be created using non-
DP VOLs increases.
Mode 1058 = ON:
<SOM1050 is set to ON>
- By resynchronizing Mainframe VOLs of 262,668 Cyl or less, the differential BMP management is switched from SM to hierarchical
memory. (Hierarchical memory management remains as is.)
- By resynchronizing Open VOLs (DP-Vols only) of 4 TB or less, the differential BMP management is switched from SM to
hierarchical memory. (Hierarchical memory management remains as is.)
<SOM1050 is set to OFF>
- By resynchronizing Mainframe VOLs of 262,668 Cyl or less, the differential BMP management is switched from hierarchical
memory to SM. (SM management remains as is.)
- By resynchronizing Open VOLs (DP-Vols only) of 4 TB or less, the differential BMP management is switched from hierarchical
memory to SM. (SM management remains as is.)
Mode 1058 = OFF (default):
<SOM1050 is set to ON>
- The differential BMP management does not change by resynchronizing pairs.
<SOM1050 is set to OFF>
- By resynchronizing Mainframe VOLs of 262,668 Cyl or less, the differential BMP management is switched from hierarchical
memory to SM. (SM management remains as is.)
- By resynchronizing Open VOLs (DP-Vols only) of 4 TB or less, the differential BMP management is switched from hierarchical
memory to SM. (SM management remains as is.)


20 SOM1081 <This function is provided by default.>
The value of Initiation Delay Time on PRLO (Process Logout) frame is changed.
Mode 1081 = ON:
"Initiation Delay Time" on PRLO frame sent from RAID800 is 1 sec.
Mode 1081 = OFF (default):
"Initiation Delay Time" on PRLO frame sent from RAID800 is 4 sec.
21 SOM1093 <The target function is not available.>
This mode is used to disable background unmap during microcode downgrade from a version that supports
pool reduction rate correction to a version that does not support the function.
Mode 1093 = ON:
Background unmap cannot work.
Mode 1093 = OFF (default):
Background unmap can work.


22 SOM1108 <The target function is not available.>
This mode is used to extend the processing time for updating metadata managed by the data deduplication and
compression (capacity saving) function.
Mode 1108 = ON:
The internal processing time used to update metadata managed by the data deduplication and compression
function is extended.
Mode 1108 = OFF (default):
The internal processing time used to update metadata managed by the data deduplication and compression
function does not change.
23 SOM1119 <The target phenomenon doesn’t occur.>
The mode is used to disuse the control information added with 80-05-05-00/00 (R800)/ 83-04-03-x0/00 (HM800)
when capacity saving is enabled, so that downgrading the microcode as follows is enabled.
Mode 1119 = ON:
The control information is not used when capacity saving is enabled.
Mode 1119 = OFF (default):
The control information is used when capacity saving is enabled.


24 SOM1120 <The target phenomenon doesn’t occur.>
This system option mode disables TI pair creation with DP pool specified and releases cache management devices
to enable the microcode downgrade with the following versions.
Mode 1120 = ON:
TI pair creation with DP pool specified is disabled. Also, if any cache management devices are reserved while there
is no TI pool on the storage system, all of them are released.
Mode 1120 = OFF (default):
No action
25 SOM1122 <The target function is not available.>
This mode can change the operating speed of BGU.
Mode 1122 = ON:
The BGU speed becomes up to 10GB/s.
Mode 1122 = OFF (default):
The BGU speed is up to 42MB/s.



26 SOM1133 <This function is provided by default.>


By setting the mode, the round-robin method is enabled to select a path to be used to run a WRFBA command.
Mode 1133 = ON:
The round-robin method is used when a path is selected to run a WRFBA command.
Mode 1133 = OFF (default):
The round-robin method is not used when a path is selected to run a WRFBA command.
27 SOM1147 <This function is provided by default.>
This mode is used to prevent the performance of write I/O to a UR PVol from degrading when the path condition is unstable or the RCU site is overloaded in a configuration where high-speed drives (HAF/SSD) are used for journal volumes.
Mode 1147 = ON:
The caching rate of journal data created at a write I/O to a URz PVol is limited to 50% (equal to that in an Open environment).
(The UR pair might be suspended, as journal data tends to accumulate compared to the OFF setting.)
Mode 1147 = OFF (default):
The caching rate of journal data created at a write I/O to a URz PVol is limited to 70% (same as usual).
(Compared to the ON setting, the performance of host write I/Os might be affected when CWP reaches 70%.)
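As a study aid, the SOM entries above can be collected into a small lookup table. The mode IDs, default settings, and ON/OFF effects below are quoted from this module; the dictionary and helper function themselves are illustrative assumptions, not a Hitachi API.

```python
# Study aid only: SOM IDs, defaults, and ON/OFF effects are quoted from
# this module; this table/helper is not a Hitachi API.
SOMS = {
    1119: {"default": "OFF",
           "on": "Control information is not used when capacity saving is enabled",
           "off": "Control information is used when capacity saving is enabled"},
    1120: {"default": "OFF",
           "on": "TI pair creation with a DP pool specified is disabled; "
                 "reserved cache management devices are released",
           "off": "No action"},
    1122: {"default": "OFF",
           "on": "BGU speed up to 10 GB/s",
           "off": "BGU speed up to 42 MB/s"},
    1133: {"default": "OFF",
           "on": "Round-robin path selection is used for WRFBA commands",
           "off": "Round-robin path selection is not used for WRFBA commands"},
    1147: {"default": "OFF",
           "on": "URz journal caching rate limited to 50%",
           "off": "URz journal caching rate limited to 70%"},
}

def som_effect(mode_id: int, enabled: bool) -> str:
    """Return the documented effect of a SOM in its ON or OFF state."""
    entry = SOMS[mode_id]
    return entry["on"] if enabled else entry["off"]
```

For example, `som_effect(1122, True)` returns the documented ON behavior of SOM1122.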


SOMs Converted to Advanced System Settings


Active Learning Exercise: One Minute Paper


Module Review

14. Best Practices and Information Sources
Module Objectives

 Upon completion of this module, you should be able to:


• Discuss best practices for:
 Adaptive Data Reduction (ADR)
 Pool
 Parity Group and Spare Drive
 Replication
 Encryption

• Discuss Hitachi Ops Center information sources


Best Practices ADR



This section explains the best practices for ADR.

ADR Notes

 Read the performance documents in the Salesforce.com (SFDC) Resource Center

 Use CPK to verify performance characteristics with ADR enabled

 Use the ADR calculator to verify the capacity required to meet effective capacity goals

 Not every device is a candidate for ADR

 ADR does have an impact on the array and is a trade-off between performance and capacity reduction


 The use case for the capacity savings option (dedupe and compression) is Office, virtual desktop infrastructure (VDI), and backup. Deduplication is effective due to many identical file copies, OS area cloning, and backups

 The use case for the capacity savings option (compression only) is databases. Deduplication is not effective because each block has unique information

 It is strongly recommended to enable the capacity savings option when the expected savings are 20% or higher
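The workload guidance and the 20% threshold above can be sketched as a tiny decision helper. The workload-to-mode mapping and the threshold come from this module; the names and the function itself are illustrative assumptions, not Hitachi tooling.

```python
# Illustrative helper only: the workload-to-mode mapping and the 20%
# threshold are quoted from this module; names are assumptions.
SAVINGS_MODE = {
    "office": "dedupe+compression",
    "vdi": "dedupe+compression",
    "backup": "dedupe+compression",
    "database": "compression",  # dedupe is ineffective: each block is unique
}

def should_enable_capacity_saving(expected_savings: float) -> bool:
    """Enable the capacity savings option only when expected savings >= 20%."""
    return expected_savings >= 0.20
```

For example, an expected 25% savings would justify enabling the option, while 10% would not.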


Best Practices Pool Recommendations



This section explains the best practices for pools.

Pool Recommendations

 HDP maximum pool capacity: 16.6 PB for open systems and 15 PB for mainframe (MF)

 Pools should expand across all CBX pairs in a multi-CBX-pair configuration. There are exceptions, but spanning CBX pairs provides the best performance and reduces cross-CBX back-end (BE) traffic

[Figure: DP-VOLs mapped to controllers (CTL0–CTL7), DKBs, parity groups, and pools across CBX0–CBX5, contrasting pool(s) defined to a single CBX pair with pool(s) defined across CBX pairs]

Best Practices Parity Group and Spare Drive Recommendations



This section explains parity group and spare drive best practices.

PG and RAID layout – Single Pair Controller Block


 DBS2 and FMD3 are two logical trays in a single physical tray to provide redundancy. Although they do have a common backplane, it is passive and highly reliable

 All RAID levels are protected from a single logical tray failure

[Figure: parity group layout in a single pair of controller blocks — PG #0: 14D+2P, PG #1: (7D+1P) x 2, PG #2: 6D+2P, PG #3: 7D+1P, PG #4: 2D+2D, PG #5: 3D+1P, each spread across the two logical trays]

Fixed PG assignment method (same policy as R800’s spec):
 PG with 16 drives: taking 2 drives from 8 sequential DB#s (Ex: DB#0~#7), starting at an even slot# (Ex: Slot#0 and 1, #2 and 3, ...)
 PG with 8 drives: taking 1 drive from 8 sequential DB#s (Ex: DB#0~#7, DB#8~#15)
 PG with 4 drives: taking 1 drive from 4 sequential even# or odd# DB#s (Ex: DB#0/2/4/6, DB#1/3/5/7)
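The fixed PG assignment rules above can be sketched as a small function. The rules (16-drive, 8-drive, and 4-drive layouts) are quoted from this module; the (DB#, slot#) tuple representation and the function itself are illustrative assumptions, not Hitachi tooling.

```python
# Illustrative sketch of the fixed PG assignment rules above; the
# (DB#, slot#) tuple representation is an assumption, not Hitachi tooling.
def pg_drive_slots(size: int, first_db: int = 0, first_slot: int = 0):
    """Return (DB#, slot#) locations for a parity group of 4, 8 or 16 drives."""
    if size == 16:
        # 2 drives from each of 8 sequential DB#s, starting at an even slot#
        return [(db, first_slot + s)
                for db in range(first_db, first_db + 8) for s in (0, 1)]
    if size == 8:
        # 1 drive from each of 8 sequential DB#s
        return [(db, first_slot) for db in range(first_db, first_db + 8)]
    if size == 4:
        # 1 drive from 4 sequential even# (or odd#) DB#s
        return [(db, first_slot) for db in range(first_db, first_db + 8, 2)]
    raise ValueError("PG size must be 4, 8 or 16")
```

For example, `pg_drive_slots(4, first_db=1)` yields one drive from each of DB#1/3/5/7, matching the odd-numbered example above.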


• RAID configurations: 2D+2D, 3D+1P, 7D+1P, 6D+2P, 14D+2P

Spare Drive Qty

The maximum number of spares is also limited to 8 spares per media chassis


CBX configuration   Max spare drive number per CBX pair (1 CBX pair = 2 CBXs) (*1)      Max spare drive
                    CBX0~CBX1 (Pair 0)   CBX2~CBX3 (Pair 1)   CBX4~CBX5 (Pair 2)        number per system
2 CBXs              64                   --                   --                        64
4 CBXs              64                   64                   --                        128
6 CBXs              64                   64                   64                        192

*1: A spare drive can be assigned for data drives (of the same type) in a different CBX pair (Global Spare).

Recommended quantity:
Drive type      Recommended spare drive quantity
SAS (10k)       1 spare drive for every 32 drives
NL-SAS (7.2k)   1 spare drive for every 16 drives
SSD             1 spare drive for every 32 drives
FMD             1 spare drive for every 24 drives
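The spare-drive sizing guidance above amounts to a simple calculation. The per-type ratios and the 64-spares-per-CBX-pair cap come from this module; the function itself is an illustrative sketch, not a Hitachi tool.

```python
import math

# Illustrative sketch of the spare-drive sizing guidance above. The per-type
# ratios and the 64-spares-per-CBX-pair cap are quoted from this module.
SPARE_RATIO = {"SAS (10k)": 32, "NL-SAS (7.2k)": 16, "SSD": 32, "FMD": 24}
MAX_SPARES_PER_CBX_PAIR = 64

def recommended_spares(drive_type: str, data_drives: int) -> int:
    """1 spare per N data drives of the given type, rounded up and capped."""
    spares = math.ceil(data_drives / SPARE_RATIO[drive_type])
    return min(spares, MAX_SPARES_PER_CBX_PAIR)
```

For example, 33 NL-SAS drives would call for 3 spares under the 1-per-16 rule.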


Best Practices Replication



This section explains the best practices for replication.

Local Replication

 When the same pool has both the PVol and SVol of a ShadowImage pair and ADR is set for both volumes, physically only one copy of the data is saved because deduplication is performed between the PVol and SVol. To protect the data, it is recommended to use separate pools for the PVol and SVol

 The SI/HTI PVol and SVol are assigned to the same microprocessor unit (MPU) prior to paircreate. If they are not on the same MPU, the array will shift the SVol MPU to the same MPU as the PVol

 ADR devices cannot be placed in a dedicated Hitachi Thin Image (HTI) pool


Remote Replication

 PVol data is hydrated prior to replication, which consumes memory and MP utilization

 Do not enable ADR on Hitachi Universal Replicator (HUR) journals

 A replicated device cannot be grown when ADR is enabled or when it is in Hitachi Dynamic Tiering (HDT)

 It is recommended to balance the DPVol and journal group workload among MPUs


Best Practices Encryption Recommendations



This section explains the best practices for encryption.

Encryption Recommended

 Encryption hardware:
• Enabling and disabling DARE is controlled at the parity group level (that is, all
drives in a parity group are either encrypting or non-encrypting)
• While it is possible to have both encrypting and non-encrypting parity groups
configured on an EBEM, it is recommended to encrypt all parity groups on an
EBEM
• It is important to note that different spare drives are used for encrypting and
non-encrypting parity groups


 Backup of encryption keys:
• The creation and secure storage of backup keys must be included as part of your corporate security policy
• It is strongly recommended that you back up each encryption key or group of keys immediately after you create them, and that you schedule regular backups of all encryption keys to ensure data availability
• You are responsible for storing the secondary backup keys securely


Active Learning Exercise: Group Discussion


Topic: What Best Practices have you found?


Information Sources
This section highlights the information sources.

Hitachi Ops Center Information Sources

 https://support.hitachivantara.com
• Documentation
• Hitachi Data Instance Director (HDID) support matrix

 https://community.hitachivantara.com
• Products  Storage Management Community
• Developers  OpsCenter Automator community
• Developers  HDID community

 Internal: https://hitachivantara.sharepoint.com/sites/OpsCenterInfo



Hitachi Ops Center Information Sources

 OpsCenter Info SharePoint site:
• Contains a useful “documents” section:
 SSL Setup Procedures -> Link
 OVA Deployment Guide -> Link



 MS Teams space “Jupiter Service Community of Practice”
• Search for the name in Teams or use this -> Link
• There you find a presentation which contains useful information on challenges found during OpsCenter deployment and how best to deal with them
 TEAMS link: -> Link
 Sharepoint link: -> Link


Hitachi Ops Center


 The Ops Center Analyzer Detail View license must be requested separately before the software is deployed. There is a request form for it on the Cumulus page, and an order number is needed for it

 http://cumulus-systems.com/hdcalicense
• User Name: hdcalic
• Password: hdcalic123
• A confirmation email is sent after submitting the form. The license keys are sent separately; that may take 1-2 days

 Allow time for this process


Ops Center Common Challenges / Issues

 There is a presentation on the MS Teams space Jupiter Service Community of Practice which contains useful information on challenges found during OpsCenter deployment and how best to deal with them
• TEAMS link: -> Here
• Sharepoint link: -> Here


Module Summary


 In this module, you should have learned to:


• Discuss best practices for:
 Adaptive Data Reduction (ADR)
 Pool
 Parity Group and Spare Drive
 Replication
 Encryption

• Discuss Hitachi Ops Center information sources


Your Next Steps


Validate your knowledge and skills with certification.

Check your progress in the Learning Path.

Review the course description for supplemental courses, or register, enroll and view additional course offerings.

Get practical advice and insight with Hitachi Vantara white papers.

Get more knowledge, bulletins, downloads and product documentation.

Join the conversation with your peers in the Hitachi Vantara Community.

Follow us on social media: @HitachiVantara


• Certification: https://www.hitachivantara.com/en-us/services/training-certification.html#certification
• Learning Paths:
o Employees: https://connect.hitachivantara.com/en_us/user/employee-center/my-learning-and-development/global-learning-catalogs.html
o Partners: https://partner.hitachivantara.com/
o Customers: https://www.hitachivantara.com/en-us/pdf/training/global-learning-catalog-customer.pdf
• Hitachi University / Hitachi Vantara Learning Center
o Employees: Hitachi University - https://hitachi.csod.com/client/hitachi/default.aspx
o Partners / Customers: Hitachi Vantara Learning Center - https://hitachi.csod.com/client/hitachi/default.aspx
• Hitachi White Papers: https://www.hitachivantara.com/search?filter=0&q=white%20papers&site=hitachi_insight&client=hitachi_insight&proxystylesheet=hitachi_insight&getfields=content-type
• Hitachi Support Connect: https://support.hitachivantara.com
• Hitachi Vantara Community: https://community.hitachivantara.com/s/
• Hitachi Vantara Twitter: http://www.twitter.com/HitachiVantara

We Value Your Feedback


Communicating in a Virtual Classroom:
Tools and Features
Virtual Classroom Basics
This section covers the basic functions available when communicating in a virtual classroom.

 Chat

 Q&A

 Feedback Options
• Raise Hand
• Yes/No
• Emoticons

 Markup Tools
• Drawing Tools
• Text Tool
© Hitachi Vantara Corporation 2020. All Rights Reserved.

Reminders: Intercall Call-Back Teleconference



Synchronizing Your Audio to the WebEx Session


Feedback Features — Try Them


Raise Hand Yes No Emoticons


Markup Tools (Drawing and Text) – Try Them

Pointer, Text Tool, Writing Tools, Drawing Tools, Highlighter, Annotation Colors, Eraser


Intercall (WebEx) Technical Support


Call 800.374.1852


Evaluating This Course
Please use the online evaluation system to help improve our courses.

1. Sign in to Hitachi University.

https://hitachiuniversity/Web/Main

2. Click on My Learning. The Transcript page will open.


3. On the Transcript page, click the down arrow in the Active menu.

4. In the Active menu, select Completed. Your completed courses will display.

5. Choose the completed course you want to evaluate.

6. Click the down arrow in the View Certificate drop-down menu.

7. Select Evaluate to launch the evaluation form.
