
Dell EMC Integrated Data Protection

Product Guide

July 2021
Rev. 5.0
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2013-2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents
Revision history..........................................................................................................................................................................5

Chapter 1: Introduction................................................................................................................. 8

Chapter 2: Understanding the architecture................................................................................... 9


System overview................................................................................................................................................................. 9
System architecture and components..........................................................................................................................10
Integrated Data Protection and Vscale Architecture.................................................................................................11
Planning for backup and recovery.................................................................................................................................. 11
Advantages of Integrated Data Protection converged backup systems.............................................................. 11
Converged backup systems connectivity overview.................................................................................................. 12
Dedicated deployment in a converged backup system with network switches........................................... 12
Converged backup system connectivity to a single Converged System........................................................12
Converged backup system connectivity to multiple Converged Systems..................................................... 14
Dedicated deployment in a converged backup system without network switches..................................... 19
Shared deployment..................................................................................................................................................... 20

Chapter 3: Backup and recovery.................................................................................................. 21


Converged backup systems components.................................................................................................................... 21
Avamar................................................................................................................................................................................. 22
NetWorker...........................................................................................................................................................................24
Data Domain....................................................................................................................................................................... 25
Data Domain Standalone ................................................................................................................................................ 26
Data Domain Boost for Enterprise Applications ..................................................................................................26
Data Domain and third-party application native backup utilities..................................................................... 26
Data Domain and third-party backup applications.............................................................................................. 26
Data Domain High Availability ........................................................................................................................................27
Data Domain High Availability features.................................................................................................................. 27
Data Domain High Availability architecture........................................................................................................... 27
Data Domain Virtual Edition in the Public Cloud........................................................................................................ 28
Dell EMC Cloud Disaster Recovery...............................................................................................................................29
Dell EMC Cloud Disaster Recovery architecture................................................................................................. 29
Converged backup system sizing...................................................................................................................................31
Recovery types.................................................................................................................................................................. 32
Replication of backup data to a second site...............................................................................................................32
Cloud tier support............................................................................................................................................................. 33
Cloud tiering with Avamar or NetWorker and Data Domain............................................................................. 33
IP-based data backup.......................................................................................................................................................33
NDMP backup.................................................................................................................................................................... 34
Enterprise deployment..................................................................................................................................................... 34
Use Oracle RMAN with converged backup systems................................................................................................ 36
ProtectPoint for VMAX3 and VMAX All Flash arrays with Data Domain ........................................................... 36
Dell EMC PowerProtect Data Manager.......................................................................................................................40
VMware backup and recovery.................................................................................................................................. 41
Performance and scalability.......................................................................................................................................41

Encryption..................................................................................................................................................................... 42
Cyber Recovery................................................................................................................................................................. 42
Cyber Recovery architecture................................................................................................................................... 42
Use Enterprise Hybrid Cloud with converged backup systems............................................................................. 44
Avamar configurations for Enterprise Hybrid Cloud........................................................................................... 44

Chapter 4: Business continuity and disaster recovery..................................................................46


VPLEX.................................................................................................................................................................................. 46
Enterprise Hybrid Cloud and VPLEX for continuous availability...................................................................... 49
RecoverPoint......................................................................................................................................................................49
RecoverPoint components and licensing...............................................................................................................49
RecoverPoint use cases............................................................................................................................................ 50
RecoverPoint replication........................................................................................................................................... 53
RecoverPoint consistency groups...........................................................................................................................55
RecoverPoint storage awareness............................................................................................................................55
RecoverPoint auto-provisioning and auto-matching for XtremIO arrays......................................................56
RecoverPoint multi-site support..............................................................................................................................56
Enterprise Hybrid Cloud and RecoverPoint for disaster recovery........................................................................ 58
RecoverPoint and VMware vCenter Server Site Recovery Manager.................................................................. 59
AMP Protection........................................................................................................................................................... 60
Orchestration and automation................................................................................................................................. 60
RecoverPoint for Virtual Machines............................................................................................................................... 62
RecoverPoint for Virtual Machines architecture................................................................................................. 62
RecoverPoint for VMs multisite support............................................................................................................... 64
Orchestration and automation................................................................................................................................. 65
RecoverPoint for Virtual Machines Cloud Solution.............................................................................................66
RecoverPoint for Virtual Machines cloud solution use cases .......................................................................... 67
RecoverPoint and VPLEX................................................................................................................................................67
MetroPoint..........................................................................................................................................................................69
MetroPoint operation................................................................................................................................................. 70

Chapter 5: Data Protection Management..................................................................................... 74


Data Protection Central................................................................................................................................................... 74
Data Protection Search................................................................................................................................................... 74
VMware disk space size requirements................................................................................................................... 75
Replication.....................................................................................................................................................................76
Data Protection Advisor.................................................................................................................................................. 76
Determine Data Protection Advisor datastore and application server sizing................................................77
Backup of the Data Protection Advisor datastore ............................................................................................. 78
Data Protection Advisor Agents.............................................................................................................................. 78

Revision history
Date | Document revision | Description of changes
July 2021 5.0 Updated the Data Protection Support matrices links in Data Domain and
third-party backup applications.
June 2021 4.9 Document updates include:
● Dell EMC PowerProtect Data Manager 19.7 support
● AMP-3S naming - see the Introduction for details.
March 2021 4.8 Document updates include:
● Metro node support
● Dell EMC PowerProtect Data Manager 19.6 support
December 2020 4.7 Document updates include:
● Linux support for Dell EMC Networker
● Dell EMC RecoverPoint for Virtual Machines 5.3
September 2020 4.6 Updated document for:
● Dell EMC PowerProtect Data Manager 19.4 support
● Data Protection software version 19.3 support
● PowerStore storage arrays support
● Removal of Dell EMC PowerProtect X400 content
June 2020 4.5 Updated document for:
● Dell EMC PowerProtect Data Manager 19.3
● Dell EMC PowerProtect X400
● Data Protection software Version 19.2
March 2020 4.4 Updated for:
● Data Domain models 6900, 9400, and 9900
● Data Domain OS 7.0
December 2019 4.3 Updated for AMP Central and for Dell EMC Cloud Disaster Recovery 19.2.
September 2019 4.2 Updated for RecoverPoint for Virtual Machines 5.2.2 and for Dell EMC
Cyber Recovery.
August 2019 4.1 Added new content to support Cyber Recovery.
June 2019 4.0 Added support for Data Domain Virtual Edition and Dell EMC Cloud Disaster
Recovery.
March 2019 3.9 Added support for Data Domain High Availability
January 2019 3.8 Added support for VPLEX 6.1.1 and Dell EMC PowerMax storage
November 2018 3.7 Added support for Data Protection Suite 18.1

September 2018 3.6 Added support for IPv6 on VxBlock System 1000

August 2018 3.5 Added support for:


● Cisco Nexus 9336C-FX2 switch
● Dell EMC PowerMax 2000 and 8000 arrays
● AMP-3S
Updated labels and connections in connectivity diagrams

May 2018 3.4 Added support for Data Domain 3300

March 2018 3.3 Added support for XtremIO X2

February 2018 3.2 Added support for VxBlock System 1000 and AMP-VX

January 2018 3.1 Added support for RecoverPoint for Virtual Machines 5.1.1.1
December 2017 3.0 Added support for VMware vSphere 6.5
November 2017 2.9 ● Minor updates to support Avamar 7.5 and NetWorker 9.2.
● Minor updates to ProtectPoint section
● Removed reference to VPLEX GeoSynchrony 6.0 SP1 P5 update.
August 2017 2.8 Added support for:
● Data Domain Standalone
● VMAX 950F
Expanded VMware Site Recovery Manager for RecoverPoint content

June 2017 2.7 Added support for Avamar 7.4.1.


May 2017 2.6 Added support for:
● NetWorker 9.1
● Cisco Nexus 93180YC-EX switch
March 2017 2.5 Added support for Vscale.
February 2017 2.4 Added support for:
● VPLEX GeoSynchrony 6.0 and new VPLEX VS6 hardware
● Data Domain OS 6.0 and new Data Domain Controller models: 6300,
6800, 9300, 9800
December 2016 2.3 Added support for ProtectPoint for VMAX3 and VMAX All Flash arrays with
Data Domain.
November 2016 2.2 ● Added support for Avamar Virtual Edition.
● Returned physical planning information previously moved to the
Converged Systems Physical Planning Guide.
September 2016 2.1 ● Added support for Avamar 7.3 software
● Added support for Avamar Gen 4T hardware
May 2016 2.0 ● Added support for NetWorker 9.0.
● Moved physical planning information to the Converged Systems Physical
Planning Guide.
April 2016 1.9 Added support for converged backup systems in a switchless configuration.
February 2016 1.8 ● Added support for RecoverPoint for Virtual Machines
● Added support for Data Domain DS60 storage shelf
● Expanded VPLEX content
December 2015 1.7 ● Added support for XtremIO
● Added support for Enterprise Hybrid Cloud
● Added support for VxRack Systems
September 2015 1.6 ● Added support for VxBlock Systems
● Added support for Data Domain DD2200 Controller and Data Domain
DD9500 Controller
● Added support for MetroPoint
● Added information on the Intelligent Physical Infrastructure appliance
added to cabinets.
March 2015 1.5 Added Avamar 7.1 and Data Domain 5.5 support.
October 2014 1.4 ● Added Oracle RMAN support
● Added Vblock System 240 support

May 2014 1.3 Added a RecoverPoint section
December 2013 1.2 Added multi-node support and Data Domain 4500 physical specifications
November 2013 1.1 Added Data Domain DD4500
October 2013 1.0 Initial release

Chapter 1: Introduction
This document describes the Integrated Data Protection options for Converged Systems.
The target audience for this document includes field personnel, partners, and customers responsible for planning or managing
data protection for a Converged System. This document is designed for people familiar with Dell EMC Data Protection solutions.
References to AMP Central (unless stated otherwise) cover AMP Central with Unity XT for Single System Management
(formerly known as AMP-3S), AMP Central with Unity XT for Multi-System Management, and AMP Central VSAN.
See the Glossary for terms, definitions, and acronyms.

Chapter 2: Understanding the architecture

System overview
This section describes the various features of Integrated Data Protection.
Integrated Data Protection provides:
● Daily backup
● Data replication
● Business continuity
● Workload mobility (flexibility)
● Extended retention of backups

Daily backup
Once-daily backups provide minimal required data insurance by protecting against data corruption, accidental data deletion,
storage component failure, and site disaster. The daily backup process creates fully recoverable, point-in-time copies of
application data.
Successful daily backups ensure that, in a disaster, a business can recover with not more than 24 hours of lost data. The best
practice is to replicate the backup data to a second site to protect against a total loss of data in the event of a full site disaster.
Most daily backups are saved for 30 to 60 days.

Data replication
Most businesses have some datasets that are too valuable to risk losing up to 24 hours of data. Additionally, if disaster strikes,
these more valuable datasets must be recovered quickly.
For datasets that are more valuable, data replication achieves a higher level of data insurance. Multiple snapshots of application
data can be created throughout the day. Snapshots are used to restore data to a point in time, to retrieve an individual file, or to
copy application data to a different server for testing, data mining, and so on.
Retrieving a copy of the data from an offsite location reduces the worst-case data loss from 24 hours to the time since the
last snapshot. Data can be copied synchronously, where the data is updated locally and remotely simultaneously. It can also be
copied asynchronously, where there may be a time lag in updating the data remotely.
Typically, data replication is done in addition to daily backup. Replication cannot always protect against data corruption, because
a corrupted file replicates as a corrupted file. The best level of data protection is achieved by combining daily backup and
continuous replication methodologies.
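The recovery-point trade-off described above can be made concrete with a small worst-case data-loss calculation. This is a minimal sketch, not Dell EMC code; the scheme names and the 4-hour snapshot interval are illustrative assumptions.

```python
from datetime import timedelta

def worst_case_data_loss(protection: str,
                         snapshot_interval_hours: float = 4.0) -> timedelta:
    """Return the worst-case data loss (RPO) for a protection scheme.

    Illustrative only: real RPOs depend on replication lag, backup
    windows, and failure timing.
    """
    if protection == "daily_backup":
        # A once-daily backup can lose up to a full day of changes.
        return timedelta(hours=24)
    if protection == "async_snapshots":
        # Asynchronous replication loses at most the time since the
        # last snapshot reached the remote site.
        return timedelta(hours=snapshot_interval_hours)
    if protection == "sync_replication":
        # Synchronous replication commits locally and remotely together,
        # so no acknowledged data is lost.
        return timedelta(0)
    raise ValueError(f"unknown protection scheme: {protection}")

print(worst_case_data_loss("daily_backup"))          # 1 day, 0:00:00
print(worst_case_data_loss("async_snapshots", 4.0))  # 4:00:00
print(worst_case_data_loss("sync_replication"))      # 0:00:00
```

The combination recommended above — daily backup plus continuous replication — pairs the low worst-case loss of replication with the corruption-resistant point-in-time copies of backup.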

Business continuity
Business continuity provides application availability insurance by ensuring zero data loss and near-zero recovery time for
business-critical data. Data and applications protected with a business continuity product should still use daily backup to provide
multi-day point-in-time copies.

Workload mobility
Workload mobility provides data protection by moving the workload to another site in anticipation of a disaster. For example, if
a tropical storm is heading for a data center and the decision is made to move critical applications to a data center out of the
storm’s path, a good workload mobility design allows that movement to occur easily and with zero downtime.



In addition to data protection purposes, workload mobility is also used for moving applications and data to balance workloads
across Converged Systems.

Extended retention of backups


Most businesses have a subset of data with a copy-retention period that spans multiple years. Those data copies are usually
retained to meet a specific legal compliance. In the past, long-term retention periods were addressed by placing backup copies
on tape and then storing the tapes in an offsite location. Certain Data Domain systems offer a disk-based extended retention
feature that delivers a cost-effective alternative to tape.
If there is a business or legal requirement to occasionally export data backups to tape, both Avamar and NetWorker offer
solutions. Avamar 7.3 and higher no longer support Avamar Extended Retention with an Avamar Media Access Node. As an
alternative, Avamar offers Avamar Data Migration Enabler to export Avamar backups to tape. For more information about
this solution, see the Avamar Data Migration Enabler User Guide in the reference materials section of this document.
NetWorker can perform backup or clones to tape within a native NetWorker deployment. For more information about this
solution, see the NetWorker Administration Guide in the reference materials section of this document.
PowerProtect Data Manager performs backups to Data Domain; it cannot clone to tape devices. For extended retention, backups
can be moved from Data Domain to Cloud Tier.
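The age-based tiering idea described above can be sketched with a small placement function. This is illustrative only, not PowerProtect Data Manager or Data Domain code; the function name and the 90-day threshold are assumptions, not product defaults.

```python
from datetime import date, timedelta

def placement_for(backup_date: date, today: date,
                  cloud_tier_after_days: int = 90) -> str:
    """Pick a storage tier for a backup copy by age (illustrative only).

    Recent copies stay on the Data Domain active tier for fast restores;
    older copies move to Cloud Tier for long-term retention.
    """
    age = today - backup_date
    if age >= timedelta(days=cloud_tier_after_days):
        return "cloud-tier"
    return "active-tier"

today = date(2021, 7, 1)
print(placement_for(date(2021, 6, 15), today))  # active-tier
print(placement_for(date(2021, 1, 10), today))  # cloud-tier
```

A real tiering policy would also consider retention locks and compliance requirements, but the decision shape is the same: copies past a configured age move to the cheaper tier.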

System architecture and components


Integrated Data Protection products are extensions of the Converged System, built by Dell EMC in Dell EMC cabinets.
Integrated Data Protection products are pre-architected, assembled, and supported by Dell EMC. While you are free to use any
data protection system for your Converged System, Integrated Data Protection products provide systems that are faster to
deploy, are optimized for Converged Systems, and come with a single-call support model.
The following descriptions show how Integrated Data Protection products provide backup and recovery, business continuity, and
disaster recovery for the Converged System:

Capability: Backup and recovery
Products: Avamar (including Avamar Virtual Edition), NetWorker, PowerProtect Data Manager, and Data Domain
● Advanced deduplication backup.
● Uses high-speed technology to reduce the backup data storage footprint at the target device by 10 to 30 times. Backup times are shortened while full backups are available for rapid, single-step restores.

Capability: Business continuity (disaster avoidance) and workload mobility providing availability (flexibility)
Product: VPLEX
● Shares, protects, or load-balances infrastructure resources across multiple Converged Systems in the same data center or different data centers within a campus or urban area.
● Moves live VMs between locations, avoiding migration downtime.
● Handles unplanned events automatically with zero data loss and zero to near-zero application recovery time.

Capability: Replication with continuous local and remote protection
Products: RecoverPoint, RecoverPoint for Virtual Machines, and VMware Site Recovery Manager
● Data rollback to any point in time.
● Quickly restores critical applications and data.
● Helpful when migrating data from one virtualized Converged System to another.
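For a sense of scale, the 10 to 30 times deduplication range quoted above translates into physical storage footprints as follows. A minimal arithmetic sketch; the function name and figures are illustrative assumptions, not measured product behavior.

```python
def deduped_footprint_gb(logical_backup_gb: float, dedup_ratio: float) -> float:
    """Physical storage consumed at the target after deduplication.

    Illustrative only; real ratios vary with data type, change rate,
    and retention policy.
    """
    if dedup_ratio <= 0:
        raise ValueError("deduplication ratio must be positive")
    return logical_backup_gb / dedup_ratio

# 10 TB of logical backup data at the quoted 10x and 30x ratios:
print(deduped_footprint_gb(10_000, 10))  # 1000.0 GB stored
print(deduped_footprint_gb(9_000, 30))   # 300.0 GB stored
```

The same arithmetic explains the shorter backup windows: only the deduplicated fraction of the data must traverse the network and land on the target device.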



Integrated Data Protection and Vscale Architecture
Integrated Data Protection can protect systems that are part of a Vscale Architecture.
Vscale Architecture is a framework that enables organizations to build data-center-scale IT systems composed of resources that
are logically connected using a Vscale Fabric to form logical systems.
For more information on Vscale Architecture, refer to the Dell EMC Vscale Architecture Overview.

Planning for backup and recovery


Planning for data backup and recovery is a critical element of the comprehensive effort to deploy a Converged System. The use of
traditional tape to back up and recover data in a Converged System is effective, but this legacy methodology inhibits maximizing the
potential of the Converged System.
The early planning stage of Converged System design is the time to consider new backup methodologies and current best
practices that are specifically designed for a VMware environment. Implementing new backup and recovery processes at this
time offers the following advantages:
● Provides a logical cut-over point. Applications moving into a Converged System can be moved off the old backup
infrastructure at the same time.
● Allows for scaling up the amount of protected data, without increasing administration staff.
● Meets data protection service level agreements (SLAs) with plenty of room in the backup window for continued data
growth.
● Eliminates tape throughput problems, tape management problems, and security issues associated with tapes.
● Leverages advanced VMware features such as Changed Block Tracking, which is a feature specifically designed for data
protection.
In some cases, budget constraints, lease expirations on existing backup components, recent license renewals, or staffing
constraints make it impractical to simultaneously roll out a Converged System and a new backup infrastructure. In those
situations, it is recommended that you continue to investigate new backup methodologies as part of the Converged System
planning efforts. Education during the planning stage allows you to develop a strategy for how to migrate from an older backup
system infrastructure.

Advantages of Integrated Data Protection converged backup systems
This section describes the advantages of using an Integrated Data Protection solution on converged backup systems.
While not mandatory, using an Integrated Data Protection solution provides the following benefits:
● Cost savings
● Workforce savings
● Designed specifically for a Converged System
● Scales painlessly
● Converged System integration accelerates time from delivery to full production use and reduces deployment risk factors
● Offloads the backup network, resulting in reduced labor and a path to retiring the backup network
● Provides the ability to back up more data in less time. Average nightly backups decrease from hours to minutes



Converged backup systems connectivity overview
A converged backup system is a pre-integrated backup system, built and supported by Dell EMC, containing a combination of
Avamar, NetWorker, or PowerProtect Data Manager with Data Domain and Cisco switches.

Dedicated deployment in a converged backup system with network switches
A dedicated deployment provides one converged backup system dedicated to a single Converged System.
The Converged System network extends into the Integrated Data Protection cabinet. Backup data travels directly between the
Converged System and the converged backup system with no impact on the regular flow of data on the customer’s network.
Dell EMC does not support Avamar Virtual Edition with this configuration because it is not a practical option. See Dedicated
deployment in a converged backup system without network switches for supported Avamar Virtual Edition configurations.
For Converged Systems, replicate backup data from the Avamar, NetWorker, or PowerProtect Data Manager and Data Domain
components to a similar converged backup system located in a separate facility. Backup replication traffic traverses the
converged backup system switches to the Converged System switches and uses the Converged System uplinks to the customer
network. Replication data crosses the customer network to the secondary location.
To replicate backup data from a site using Avamar and Data Domain, the destination site must also have Avamar and Data
Domain. To replicate from a site using NetWorker and Data Domain, the destination site requires only Data Domain. If the main
site is unavailable due to an outage, you must restore the NetWorker environment at the secondary site to recover data there.
Likewise, to replicate from a site using PowerProtect Data Manager and Data Domain, the destination site requires only Data
Domain; if the main site is unavailable due to an outage, you must restore the PowerProtect Data Manager environment at the
secondary site to recover data there.

Converged backup system connectivity to a single Converged System

VxBlock System with 10 Gb connections


The following figure shows the connections of a converged backup system connected to a single VxBlock System with 10 Gb
connections:



VxBlock System 1000 with 10 Gb connections
The uplinks from the dedicated data protection network switches are configured for 10 Gb, leveraging the 40 Gb to 10 Gb
breakout capability of the VxBlock System network switches.
The following figure shows the connections of a converged backup system connected to a single VxBlock System 1000 with
10 Gb connections:



Converged backup system connectivity to multiple Converged Systems
The following figure shows the connections of a converged backup system connected to multiple VxBlock Systems with 10 Gb
connections:



Integrated Data Protection for Converged Infrastructure and AMP Central
AMP Central is a platform for managing multiple VxBlock Systems. AMP Central supports the management of the
VxBlock System 1000 and legacy VxBlock Systems such as the VxBlock System 340, VxBlock System 350, VxBlock System 540,
and VxBlock System 740.
Integrated Data Protection for Converged Infrastructure supports backup of multiple VxBlock Systems with a pair of dedicated
network switches, which are required to support both AMP Central stand-alone and AMP Central integrated deployments. In
addition to the dedicated network switches, uplinks from those switches to the external network are required. Uplinks are
needed to support backup of VxBlock System production data or third-party data over Layer 3. The uplinks also provide a means
to replicate backup data while bypassing the VxBlock System and AMP Central stand-alone physical networks.
Data protection for the AMP Central stand-alone core management workload requires uplinks from the dedicated
data protection network switches to the AMP Central stand-alone ToR switches.

Integrated Data Protection network routing topology


Integrated Data Protection configured for a VxBlock System managed by AMP Central supports two routing topology
configurations, depending on where Layer 3 traffic is routed. One configuration uses Layer 3 routing protocols and the other
uses Layer 2 links. The two configurations are explained as follows:
Integrated Data Protection can implement OSPF, EIGRP, or static routing to provide Layer 3 routing services for east-west
traffic within the Cisco Nexus 93180YC-EX Data Protection ToR switches and for north-south traffic leaving the Integrated
Data Protection system. Routing locally reduces the amount of traffic traversing the uplinks to the external network for
Integrated Data Protection inter-VLAN communication and improves overall performance.



Alternatively, the Integrated Data Protection system is uplinked to the external network with Layer 2 links. In this case, routing
for the Integrated Data Protection and external VLANs occurs at the external network. This scenario applies when you need
Layer 2 adjacency between endpoints both inside and outside of the Integrated Data Protection system. It allows workload
mobility without a network overlay, such as VXLAN, because a subnet can be consumed both inside and outside of the
Integrated Data Protection system.

VxBlock Systems managed by AMP Central stand-alone


The following figure shows the connections of a converged backup system that is integrated with multiple VxBlock Systems that
an AMP Central stand-alone system manages:



VxBlock Systems managed by AMP Central integrated
The following figure shows the connections of a converged backup system that is integrated with multiple VxBlock Systems
that an AMP Central integrated system manages:



Dedicated deployment in a converged backup system without
network switches
The dedicated deployment in a converged backup system without network switches is similar to the dedicated deployment with
network switches, except there are no dedicated network switches for the converged backup system.
This configuration allows Avamar and Data Domain to connect directly to the Converged System management switches
(for IPMI or iDRAC connections) and Converged System production switches (for management, backup, and replication
connections). A NetWorker and Data Domain converged backup system requires physical connections to the Converged System
production switches for the Data Domain system only.
The following table shows components used in a converged backup system in a switchless configuration for Avamar, NetWorker,
and PowerProtect Data Manager systems:

Converged backup system      Components
Avamar                       ● Avamar Virtual Edition/M1200/M2400 Single Node (metadata storage only)
                             ● Avamar NDMP Accelerator (optional - physical or virtual)
                             ● A supported Data Domain system
NetWorker                    ● NetWorker
                             ● A supported Data Domain system
Data Domain Standalone       ● ProtectPoint for VMAX
                             ● Data Domain Boost for Enterprise Applications
                             ● Third-party applications and backup applications unsupported by Dell EMC
ProtectPoint for VMAX        ● VMAX3 or VMAX All Flash Array
                             ● A supported Data Domain system
PowerProtect Data Manager    ● PowerProtect Data Manager virtual appliance
                             ● A supported Data Domain system

The switchless configuration is available for all RCM-supported Converged Systems, provided that the number of available ports
is sufficient. For guidance on the number of ports, see the following table:

Converged backup system           BRS OOB (1 GigE) ports on  In-Band mgmt (1 GigE)  In-Band mgmt (10 GigE)  Backup ports on
component                         management plane switch    ports on data plane    ports on data plane     data plane switch
                                                             switch                 switch
DD2200/2500/3300/6300             0 (DD2200/2500)            1                      0                       2
Controller                        1 (DD3300/6300)
DD4200/4500/6800/7200/9300/9500   0 (DD4200/4500/7200)       1                      0                       4
Controller 10 GigE                1 (DD6800/9300/9500)
DD6800/9300 Controller in HA      2                          2                      0                       8
configuration 10 GigE
DD6900/9400/9900 10 GigE          1                          0                      1                       8
DD6900/9400/9900 10 GigE in       2                          0                      2                       16
HA configuration
DD6900/9400/9900 25 GigE          1                          0                      1                       4
DD6900/9400/9900 25 GigE in       2                          0                      2                       8
HA configuration
DD9800 Controller 10 GigE         1                          1                      0                       8
DD9800 Controller in HA           2                          2                      0                       16
configuration 10 GigE
DD9900 100 GigE                   1                          0                      1                       2
DD9900 100 GigE in HA             2                          0                      2                       4
configuration
Avamar metadata-only node         1                          1                      0                       3
10 GigE
Avamar Virtual Edition            0                          0                      0                       0
Each physical NDMP Accelerator    1                          0                      1                       2
10 GigE
To qualify for a switchless configuration with an Avamar environment:


● The converged backup system must back up only a single Converged System.
● The projected 5-year backup requirement must not need more than a single Avamar Node.
● Backup clients must consist of systems that reside only within the Converged System.
● All replication traffic must traverse the Converged System production switches.
To qualify for a switchless configuration with Data Domain standalone:
● The converged backup system must back up only a single Converged System.
● The projected 5-year backup requirement must not need more than a single Data Domain system.
● Backup application and clients must consist of systems that reside only within the Converged System.
● All replication traffic must traverse the Converged System production switches.
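The port counts in the table above can be captured in a small lookup to sanity-check whether a planned component set fits the available switch ports. This is an illustrative sketch, not a Dell EMC tool: only a subset of table rows is shown, the component names are shorthand, and the free-port figures passed in are deployment-specific assumptions.

```python
# Port requirements transcribed from the table above, as tuples of
# (OOB 1 GigE mgmt-plane ports, in-band 1 GigE data-plane ports,
#  in-band 10 GigE data-plane ports, backup data-plane ports).
# Subset of rows shown for illustration only.
PORTS = {
    "DD6900/9400/9900 10 GigE": (1, 0, 1, 8),
    "DD6900/9400/9900 10 GigE HA": (2, 0, 2, 16),
    "DD9900 100 GigE": (1, 0, 1, 2),
    "Avamar metadata-only node 10 GigE": (1, 1, 0, 3),
    "Avamar Virtual Edition": (0, 0, 0, 0),
    "Physical NDMP Accelerator 10 GigE": (1, 0, 1, 2),
}

def ports_needed(components):
    """Sum the four per-switch port requirements across the planned components."""
    totals = [0, 0, 0, 0]
    for name in components:
        for i, count in enumerate(PORTS[name]):
            totals[i] += count
    return tuple(totals)

def fits(components, mgmt_ports_free, data_ports_free):
    """Check a planned switchless layout against free Converged System switch
    ports (the free-port figures are assumptions, not values from this guide)."""
    oob, inband_1g, inband_10g, backup = ports_needed(components)
    return oob <= mgmt_ports_free and (inband_1g + inband_10g + backup) <= data_ports_free
```

For example, a single DD6900 10 GigE controller plus Avamar Virtual Edition needs one management-plane port and nine data-plane ports in this model.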

Shared deployment
A shared deployment provides a backup system that can protect more than the directly connected Converged Systems.
You can back up the following systems by connecting their Avamar, NetWorker, or PowerProtect Data Manager clients to the
converged backup system:
● VxBlock and Vblock Systems 200 series
● Third-party systems
These clients use the connections from the converged backup system to the customer network to communicate with Avamar,
NetWorker, or PowerProtect Data Manager and send data to the backup repository. NetWorker can also perform image-level
backups from virtual environments not on a Converged System by adding VMware Backup Appliances to those environments.



Chapter 3: Backup and recovery
An Integrated Data Protection converged backup system solution uses one of the following arrangements of Dell EMC
technology and components:
● Avamar
● Avamar with Data Domain
● NetWorker with Data Domain
● PowerProtect Data Manager with Data Domain

Converged backup systems components


Dell EMC offers a series of backup and recovery products designed for Converged Systems. The correct product to use depends
on your specific requirements.
Dell EMC combines the following components to provide data backup protection for Converged Systems:

Product Component
Avamar ● Avamar M1200/M2400 Single Node (metadata storage only)
● Avamar M1200/M2400 Data Store (in a grid configuration)
● Avamar Virtual Edition (metadata storage only)
● Avamar NDMP Accelerator Node (Physical and Virtual)
● Avamar VMware Image Backup/FLR Appliance
NetWorker ● NetWorker server
● NetWorker storage node
● NetWorker Management Console server
● NetWorker vProxy Appliance
ProtectPoint for VMAX ● VMAX3 or VMAX All Flash Array
● Data Domain DD6300 or higher
Data Domain ● Data Domain DD2200 Controller
● Data Domain DD2500 Controller
● Data Domain DD3300 Controller
● Data Domain DD4200 Controller
● Data Domain DD4500 Controller
● Data Domain DD6300 Controller
● Data Domain DD6800 Controller
● Data Domain DD6900 Controller
● Data Domain DD7200 Controller
● Data Domain DD9300 Controller
● Data Domain DD9400 Controller
● Data Domain DD9500 Controller
● Data Domain DD9800 Controller
● Data Domain DD9900 Controller
● Data Domain Boost
PowerProtect Data Manager ● PowerProtect Data Manager virtual appliance
● PowerProtect Data Manager VM Direct Protection Engines appliance
● A supported Data Domain system

All converged backup systems include the following features:



● Factory integrated by Dell EMC
● Delivered in Dell EMC cabinets
● Dell EMC support
● Backup data never leaves the Converged System network (does not apply to VxBlock and Vblock Systems 200 series).

Determining which converged backup system to use


Sizing a converged backup system correctly requires an understanding of the backup capacity and the backup and restore
speeds. Proper sizing covers current backup requirements and includes a plan to scale seamlessly to meet capacity
requirements for three to five years. Component selection requires the use of appropriate sizing tools.

Avamar
Avamar provides a data backup and recovery solution with deduplication technology.
Avamar comes in both physical and virtual editions. References to Avamar in this guide apply to both editions. Avamar Virtual
Edition is a single node system that integrates with all supported Data Domain Systems.

Avamar hardware
The Avamar server manages Avamar backups and, depending on the configuration, targets Avamar datastore nodes, a Data
Domain system, or both for backup storage. The metadata that the Avamar software maintains is stored on the configured disk
drives in the Avamar datastores.

Avamar software
Avamar software provides the following features for Converged Systems:
● VMware vSphere Web Client plug-in
● Instant Access VM restore from Data Domain
NOTE: This feature is not supported when integrated with the Data Domain 3300.
● Self-service file restore
● Multiple simultaneous backups per proxy
● 24x7 backups
● Cloud Tier support when integrated with a Cloud Tier-supported and a Cloud Tier-enabled Data Domain System
● Ability to meet high service level agreements (SLAs) expected with applications running on a Converged System
● Avamar Backup and Recovery Manager, which provides real-time monitoring of activities and events, backup reports,
systems, and configurations. Optionally, use Avamar Backup and Recovery Manager to configure basic Avamar replication.

Avamar Virtual Edition with Data Domain


Avamar Virtual Edition with Data Domain is a data backup and recovery solution with deduplication technology. It includes a
VMware virtual appliance and Avamar software.
Avamar Virtual Edition provides the following features:
● One Avamar instance is deployed as a VM on the management cluster in the AMP-VX to protect management workloads.
● May be deployed on the AMP-VX management cluster, as a separate instance, to protect production workloads.
● May be deployed on the VMware vCenter production cluster where AMP-2S is the converged infrastructure management
platform to protect production workloads.
● May be deployed on the VMware vCenter management cluster on an AMP-3S or AMP Central to protect production
workloads.
● Supports the backup of VMs through Avamar clients or one or more Avamar proxies.
● Integrates with all supported Data Domain systems for the Avamar backup target.
● Replicates with one of the following:



○ Another Avamar Virtual Edition with a Data Domain system
○ An Avamar Single Node or Grid with a Data Domain system
Avamar Virtual Edition integrates with the latest version of Avamar software to provide the same features for Converged
Systems as Avamar physical systems provide.

Avamar Virtual Edition Limitations


The following limitations apply to Avamar Virtual Edition:
● As with other Avamar with Data Domain systems, a Dell EMC Professional must size the solution.
● Avamar Virtual Edition with Data Domain is supported for backing up a single Converged System. A single Avamar Virtual
Edition instance cannot be used for backing up multiple Converged Systems.
● Avamar Virtual Edition is not supported for backups of non-Converged Systems, either physical or virtual, outside of the
supported Converged System.
● Avamar Virtual Edition is not scalable to a multinode Avamar server.
● Resizing of the Avamar Virtual Edition VM is not supported.
● Scalability of Avamar Virtual Edition is limited to deploying other separately managed Avamar Virtual Edition VMs. Each
Avamar Virtual Edition instance requires a separate pane of glass for management.
● Avamar Virtual Edition has a single network interface. All management, backup, and replication traffic traverse a single
interface on a single VLAN.

Avamar Virtual Edition Resource Requirements


Planning for Avamar Virtual Edition requires appropriately sizing the virtual environment. See the following table to determine
the minimum system requirements for each Avamar Virtual Edition configuration:

AVE configuration    CPU           Memory   Disk space
0.5 TB               2 x 2 GHz     6 GB     935 GB
1 TB                 2 x 2 GHz     8 GB     1685 GB
2 TB                 2 x 2 GHz     16 GB    3185 GB
4 TB                 4 x 2 GHz     36 GB    6185 GB
8 TB                 8 x 2 GHz     48 GB    12185 GB
16 TB                16 x 2 GHz    96 GB    24185 GB

Use cases for Avamar Virtual Edition with Data Domain

The following table lists use case details for an Avamar Virtual Edition configuration:
Use case       Description
Single site    ● Small customer sites (nonenterprise) with a single Converged System
               ● Backup and recovery of VMs, through Avamar Client or Avamar Proxy
               ● Backup of Converged System bare metal machines through Avamar Client
               ● Lower capacity and lower change rate environments
               ● No resiliency for Avamar Virtual Edition with Data Domain environments
               NOTE: Replicate Avamar Virtual Edition and Data Domain systems to a similar system at a remote location for
               redundancy.
Multisite      ● Remote office or branch office sites
               ● Single Converged System per site
               ● Backup and recovery of VMs, through Avamar Client or Avamar Proxy
               ● Backup of Converged System bare metal machines through Avamar Client
               ● Lower capacity and lower change rate environments
               NOTE: Replicate each remote site to a central location with a physical Avamar and Data Domain system or
               another Avamar Virtual Edition and Data Domain system.

Avamar Fitness Analyzer


Avamar Fitness Analyzer is a portal that provides reports and analytics on Avamar's health and functionality. You can use the
information from Avamar Fitness Analyzer for troubleshooting, system optimization, and planning.
Access Avamar Fitness Analyzer from the following:
● Avamar Web User Interface (AUI)
● Directly through a web browser using this URL: https://<Avamar_Server_FQDN>/eagleeye
Avamar Fitness Analyzer reports on the following:
● Maintenance and backup window optimization
● Capacity utilization
● Replication performance
● Job completion times
● Proxy utilization
● Policy and client organization
● Troubleshooting of system health or configuration issues

NetWorker
NetWorker is a software-based backup and recovery system that runs on either Microsoft Windows Server or CentOS virtual
machines when integrated with VxBlock Systems. NetWorker requires separate storage hardware as a backup target. A
converged backup system with NetWorker uses Data Domain for storage.
NetWorker software provides the following features for Converged Systems:
● NetWorker is deployed as a minimum of three VMs on the management cluster on AMP-VX or AMP Central.
● NetWorker is deployed as a minimum of three VMs on a production cluster on the AMP-2S.
● NetWorker is supported on either Microsoft Windows Server or CentOS Linux operating systems when you configure as
Integrated Data Protection for VxBlock Systems.
● Ability to back up a wide variety of physical and virtual compute environments, applications, and databases, at both image
and file level.
● VMware vSphere Web-Client plug-in
● Instant Access VM restore from Data Domain

NOTE: This feature is not supported when integrated with the Data Domain 3300.

● Self-service file restore


● Multiple simultaneous backups per proxy
● 24x7 backups
● Cloud Tier support when integrated with a Cloud Tier-supported and a Cloud Tier-enabled Data Domain System
● Ability to meet high SLAs expected with applications running on a Converged System
The NetWorker server, storage node, and management console server reside on three separate VMs. The NetWorker vProxy
Appliance is a Linux-based virtual appliance that is deployed to VMware vCenter Server using an OVA.
The following table describes NetWorker components:

Component                              Description
NetWorker server                       Provides the services to back up and recover data. Includes an index database of
                                       backup history.
NetWorker storage node                 Maintains the physical connection to the backup target device (in Integrated Data
                                       Protection, this device is a Data Domain system). The storage node offloads from
                                       the NetWorker server much of the data movement involved in a backup or recovery
                                       operation.
NetWorker Management Console server    Manages, monitors, and provides reports on all NetWorker servers and clients in the
                                       backup environment.
NetWorker vProxy Appliance             Acts as a proxy to perform image-level backups for VMware VMs. Integrates with
                                       VMware vSphere Web Client.

Data Domain
Data Domain systems provide data storage targets for converged backup systems. Data Domain supports deduplication of
backup data before writing it to storage, thus minimizing storage requirements by storing only unique data.

Data Domain Boost


Data Domain Boost (DD Boost) extends the optimization capabilities of Data Domain solutions. It also simplifies disaster
recovery procedures, and increases performance by distributing parts of the deduplication process to the backup server.

Data Domain Extended Retention


The Data Domain Extended Retention software option stores and manages backups that must be kept for long periods.
Retention periods spanning years are commonly required to meet business requirements such as compliance, e-discovery, or
archiving. Data Domain Extended Retention is a two-tiered file system that dedicates storage to active and archive functions.
Backups stored on the Data Domain controller are first placed on the active tier. The data, in the form of complete files, is later
moved to the archive tier, based on user-defined data-movement policies.
NOTE: Data Domain Extended Retention is not supported in Data Domain OS version 7.0 and later.

Data Domain Cloud Tier


Data Domain Cloud Tier sends deduplicated data directly from the Data Domain to a public, private, or hybrid cloud for long-term
retention. The Cloud Tier feature provides the following benefits:
● Scalable, native, automated, and policy-based cloud tiering
● Storage of up to twice the maximum active tier capacity in the cloud for long-term retention
● Sends only unique, deduplicated data to cloud storage for reduced storage footprint and network bandwidth
● Allows migration of offsite storage from tape to the cloud
Supported Cloud services include:
● Elastic Cloud Storage
● Google Cloud Platform
● Microsoft Azure
● Amazon Web Services
● Alibaba
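Policy-based tiering of the kind described above can be pictured as an age-based data-movement rule: backups whose last write is older than a threshold become candidates for the cloud tier. The sketch below is a conceptual model only; the function name and the 90-day threshold are illustrative, and this is not how DD OS implements Cloud Tier internally.

```python
import datetime

def cloud_tier_candidates(files, min_age_days=90, now=None):
    """Return names of backup files old enough to move to the cloud tier.

    files maps a file name to its last-written datetime; min_age_days models
    a user-defined data-movement policy (illustrative values only).
    """
    now = now or datetime.datetime.now()
    cutoff = now - datetime.timedelta(days=min_age_days)
    return sorted(name for name, written in files.items() if written <= cutoff)
```

With a 90-day policy evaluated on 2021-07-01, a backup written on 2021-01-01 is a candidate while one written on 2021-06-20 is not.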

Data Domain Metadata on Flash


The Data Domain Metadata on Flash feature creates caches for the Data Domain file system metadata. The caches are created
on solid-state drives (SSD) and accelerate access to metadata and backup data through low latency and high IOPS. This feature
improves performance for both traditional and random workloads.



Data Domain Standalone
In a converged environment where backups using Avamar, NetWorker, or ProtectPoint are not wanted, the alternative is Data
Domain Standalone.
Data Domain Standalone offers Data Domain without the requirement for Avamar, NetWorker, or ProtectPoint. When Data
Domain is integrated with a Converged System, Dell EMC recommends using a Data Domain virtual Ethernet interface and
LACP for IP network connectivity, except on the DD3300. For Data Domain systems that do not support LACP, such as the
DD3300, use IFGROUPS to provide load balancing across two 10 GbE interfaces. Data Domain Standalone also supports Data
Domain Boost over Fibre Channel with the addition of 8 Gb or 16 Gb Fibre Channel I/O modules. The use cases for this solution
are as follows:
● Data Domain Boost for Enterprise Applications (Dell EMC supported)
○ Backup using Data Domain Boost over IP
○ Backup using Data Domain Boost over FC
● Third-party application Native Backup Utilities (Compatible but not Dell EMC supported). See the Avamar, Data Domain,
DDBEA, NetWorker and ProtectPoint Compatibility Guide for a list of compatible applications.
○ Backup to Data Domain CIFS or NFS Share
○ Microsoft SQL Management Studio
○ Oracle RMAN
○ Other third-party applications that include a native backup utility with the ability to back up to disk (CIFS/NFS)
● Third-Party Backup Applications (Compatible but not Dell EMC supported). See the Avamar, Data Domain, DDBEA,
NetWorker and ProtectPoint Compatibility Guide for a list of compatible applications.
○ Veeam, NetBackup, HP DataProtector
○ Backup using DD Boost over IP
○ Backup using DD Boost over FC
○ Backup to disk (CIFS/NFS)

Data Domain Boost for Enterprise Applications


Perform backups from third-party applications using Data Domain Boost (DDBoost).
In certain environments, a database administrator or application owner can monitor backups and restores with DD Boost for
Enterprise Applications (DDBEA) on the application host. The DDBEA agent integrates with the application's native
management utility for efficient backups from the application host to the Data Domain system using DD Boost. Restores can
be performed in the same fashion. For most applications, these backups and restores are supported over both IP and FC. FC
backups and restores require the following:
● The Data Domain system must contain two 2-port or 4-port FC I/O modules
● The application host must contain two FC HBAs connected to the SAN fabric and zoned correctly
● The Data Domain system must be licensed for the DD Boost feature

Data Domain and third-party application native backup utilities


This topic describes how third-party applications can perform backups to Data Domain.
Database applications such as Oracle or Microsoft SQL Server (as well as other third-party applications that are not supported
with DDBEA) can also perform backups using their native management utilities directly to Data Domain. As long as these
management utilities support backup to disk over IP, these backups can be directed to either CIFS or NFS shares created on
the Data Domain system. These backups are not as efficient as those using the DD Boost protocol and application-side
deduplication, but they can benefit from target-side deduplication on the Data Domain system. If all backups target CIFS or
NFS shares only, a DD Boost feature license is not required. The DD Boost license can be added later if DD Boost backups are
required.
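As a concrete illustration of the disk-target pattern described above, the following sketch runs a native dump command and places its timestamped output on a directory where a Data Domain CIFS or NFS share would be mounted. The mount path, file naming, and dump command are assumptions for illustration only; any native utility that can write to a file system path works the same way.

```python
import datetime
import pathlib
import subprocess

# Hypothetical mount point of a Data Domain NFS/CIFS share (assumption).
DD_SHARE = pathlib.Path("/mnt/datadomain/backups")

def backup_to_share(dump_cmd, name, share=DD_SHARE):
    """Run a native dump command and place its output file on the share.

    dump_cmd is a command list that writes its dump to the path given as its
    last argument -- e.g. a pg_dump or RMAN wrapper (illustrative only).
    Returns the path of the dump file created on the share.
    """
    share.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    target = share / f"{name}-{stamp}.dmp"
    subprocess.run(dump_cmd + [str(target)], check=True)
    return target
```

A scheduler would call this per database; because the share lives on the Data Domain system, the stored dumps still benefit from target-side deduplication.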

Data Domain and third-party backup applications


Third-party backup applications can also use a Data Domain system as a backup target. These backups can be written to disk
using CIFS or NFS shares on the Data Domain system. If the backup application supports DD Boost, it can use that protocol
instead. A backup application that does not use DD Boost still benefits from deduplication if it has its own proprietary
client-side deduplication, from target-side deduplication on the Data Domain system, or both. The Data Domain system must
be licensed for the DD Boost feature if backups are performed using DD Boost.



Several third-party backup applications contain the DD Boost libraries, which support DD Boost backups over both FC and IP.
For more information, see the following:
● Data Protection support matrices here - https://elabnavigator.emc.com/eln/modernHomeDataProtection
● Data Domain support matrices here - https://elabnavigator.emc.com/eln/modernHomeAutomatedTiles?page=Data_Domain

Data Domain High Availability


Data Domain offers a high availability option that supports nearly all currently supported Data Domain features and
functionality, and integrates with all Data Domain supported converged infrastructure systems.
Data Domain High Availability (HA) is also supported in a switchless configuration when integrated with converged
infrastructure systems.
Data Domain HA is supported on the following Data Domain systems:
● DD6800
● DD6900
● DD9300
● DD9400
● DD9800
● DD9900

Data Domain High Availability features


In a Data Domain HA configuration, two identical Data Domain controllers (nodes) are configured as an Active-Standby pair. This
pair allows for redundancy in the event of a system failure, such as loss of power, or system crash of the Active node.
Data Domain HA provides the following features:
● Automatic failover requires no user intervention
● Fully redundant design with no single point of failure within the system
● No loss of performance on failover
● Failover within 10 minutes for most operations (Data Domain Boost applications may take longer than this. CIFS, DD VTL and
NDMP must be manually restarted)
● Ease of management and configuration through DD OS CLIs
● Alerting for hardware failures and malfunctions
● Maintains single-node performance and scalability within an HA configuration in both a healthy and degraded state
● Support of the same feature set as single-node Data Domain systems (with the exception of Data Domain Extended
Retention and vDisk)
● Supports systems with all SAS drives
● No impact to the ability to scale the Data Domain system
● Support for non-disruptive software updates and upgrades

Data Domain High Availability architecture


Data Domain HA functionality is supported with IP and FC protocols. To ensure high availability, the Active and Standby nodes
must have access to the same IP networks and Fibre Channel SANs.
For IP connectivity, the Data Domain HA system uses a floating IP address to ensure data access to the Data Domain HA
system, regardless of which physical node is the Active node.
For FC connectivity, the Data Domain HA system uses NPIV to move the FC WWNs between nodes, so that the FC initiators
can re-establish connections after a failover event.
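From a client's perspective, a failover behind the floating IP looks like a brief window of failed connections followed by recovery against the same address. The generic retry sketch below models that behavior; the retry count, delay, and the connect callable are illustrative assumptions (the defaults loosely match the documented failover window of up to about 10 minutes), not part of any Dell EMC client library.

```python
import time

def connect_with_retry(connect, retries=20, delay=30.0, sleep=time.sleep):
    """Call connect() against the floating IP until it succeeds.

    connect is any callable that raises OSError while the standby node is
    still taking over; 20 tries x 30 s roughly covers a 10-minute failover
    window (illustrative values). Re-raises on the final failed attempt.
    """
    for attempt in range(1, retries + 1):
        try:
            return connect()
        except OSError:
            if attempt == retries:
                raise
            sleep(delay)
```

The same pattern applies to any protocol that reconnects to the floating address; DD Boost applications, CIFS, DD VTL, and NDMP have the additional caveats noted in the feature list above.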
The following figure shows the IP network and FC connectivity for a Data Domain High Availability system:



Data Domain Virtual Edition in the Public Cloud
Data Domain Virtual Edition (DD VE) is a software-based Data Domain system that provides protection storage for any
compatible data protection backup application. DD VE runs the same version of the Data Domain Operating System (DDOS) as
its physical counterpart, so it supports many of the same features and functionalities such as DD Boost, deduplication, and Data
Domain replication.
Data Domain Virtual Edition is not supported for direct deployment to the VxBlock System 1000 or to the Advanced
Management Platform that manages the VxBlock 1000. However, a DD VE instance deployed to the cloud can be leveraged as
a replication partner for a VxBlock 1000 Integrated Data Protection system, which includes a Data Domain system.



Dell EMC Cloud Disaster Recovery
The Dell EMC Cloud Disaster Recovery (CDR) solution integrates with an on-premise Avamar and Data Domain system to copy
Avamar VM image backups to the cloud on the Amazon Web Services (AWS), AWS GovCloud, or Microsoft Azure cloud
platform.
The solution allows for "in the cloud" recovery of one or many on-premise VMs that are protected with CDR.
In addition to full recovery and failover of VMs in the cloud, you can use this solution for disaster recovery testing and complete
failback of VMs to an on-premise VMware vCenter server after a failover to the cloud. Disaster recovery testing and failover
may be performed manually, on a per VM level, through the CDR Server UI or the Avamar UI. Disaster recovery (DR) testing and
failover may also be performed through the creation and execution of a DR plan. During a DR test, the VM backup is rehydrated
in the cloud and ready for disaster recovery validation or failover. If a failover is performed, the VM is rehydrated and once
running in the cloud, is ready for production use.
NOTE: During a failover of a recovered VM in the cloud, ensure the on-premise production VM is shut down to prevent
user access and accidental data loss.
After a failover of a VM, the VM is eligible for failback to an on-premise vCenter. Failback of a VM or VMs that have been failed
over to the cloud, is the automated process of copying the recovered VMs back to an on-premise vCenter system.
CDR can operate in one of two modes when configured with Amazon Web Services (AWS): Standard Mode and Advanced
Mode. When configured with AWS GovCloud or Microsoft Azure, CDR operates only in Standard Mode.
Standard Mode for AWS, AWS GovCloud, and Microsoft Azure allows for crash-consistent recovery in the cloud. To achieve
application-consistent recovery of VMs in the cloud, CDR must operate in Advanced Mode, which is only supported with AWS.
CDR requirements for AWS and AWS GovCloud Standard Mode include:
● Integrated Data Protection with Avamar and Data Domain (on-premise)
● VMware vSphere environment
● Network connectivity between on-premise systems and AWS
● AWS IAM user with appropriate permissions along with Access Key ID and Secret Access Key
● S3 bucket on a supported region
● Protected VMs must meet AWS Import/Export requirements
CDR requirements for AWS Advanced Mode include:
● All requirements for AWS Standard Mode
● Avamar Virtual Edition with integrated Data Domain Virtual Edition running as AWS instances
● VPN connectivity between on-premise Avamar with integrated Data Domain and AWS instance of Avamar Virtual Edition
with integrated Data Domain Virtual Edition
● Avamar replication configured from on-premise to Avamar Virtual Edition AWS instance
CDR requirements for Azure (Standard Mode only) include:
● Integrated Data Protection with Avamar and Data Domain (on-premise)
● VMware vSphere environment
● Network connectivity between on-premise systems and Azure
● Azure subscription, Directory ID, Application ID, and key value
● Storage account on a supported location

Dell EMC Cloud Disaster Recovery architecture


This topic describes the Dell EMC Cloud Disaster Recovery (CDR) architecture for Amazon Web Services (AWS), AWS
GovCloud, and Microsoft Azure using Standard and Advanced modes.

CDR Standard Mode for AWS, AWS GovCloud, and Microsoft Azure
This configuration includes:
● An Integrated Data Protection system consisting of an Avamar system integrated with a Data Domain system, deployed and
configured to perform VM image backups.
● A CDR Add-on VM on the AMP management platform that configures the CDR environment and deploys the CDR Server VM
to the cloud.



● The CDR Add-on VM uploads the image backups to the cloud provider.
● The CDR Server VM converts the VM image backups to the cloud provider's VM instance type.
The following figure displays the architecture of a VxBlock System with Avamar and Data Domain Integrated Data Protection
configured with CDR support for AWS or AWS GovCloud in standard mode:

The following figure displays the architecture of a VxBlock System with Avamar and Data Domain integrated data protection
configured with CDR support for Microsoft Azure in standard mode:

CDR Advanced Mode for AWS


This configuration includes:
● An Integrated Data Protection system consisting of an Avamar system integrated with a Data Domain deployed and
configured to perform VM image backups.



● A CDR Add-on VM deployed on the AMP management platform that configures the CDR environment and deploys the CDR
Server VM to the cloud.
● The CDR Add-on VM uploads the image backups to the cloud provider.
● An Avamar Virtual Edition VM integrated with Data Domain Virtual Edition running on the cloud.
● A VPN tunnel between the Integrated Data Protection Avamar/Data Domain system and the Avamar Virtual Edition/Data
Domain Virtual Edition cloud instance.
● The Integrated Data Protection Avamar system is configured for replication to the Avamar Virtual Edition cloud instance over
the VPN tunnel.
● The CDR Server VM converts the VM image backups to the cloud provider's VM instance type.
● The CDR Server VM triggers Avamar Virtual Edition to restore an application-consistent backup with an Avamar client on the
converted/restored instance.
The following figure displays the architecture of a VxBlock System with Avamar and Data Domain Integrated Data Protection
configured with CDR support for AWS in Advanced Mode:

Converged backup system sizing


The specific configuration of a converged backup system is developed using a sizing process. To ensure a right-sized solution,
trained specialists use a sizing tool designed to factor in how well various data types deduplicate. It is recommended that you
size the backup system to meet current business needs and plan up front for nondisruptive scalability to meet business
requirements for three to five years.
For example, if you require protection for 50 TB of data today but might need to protect as much as 150 TB in 36 months, the
backup system must be able to scale to meet that need. If it could not, the original backup system would have to be replaced,
or a second backup entity added. Proper initial planning positions a single backup entity to add capacity as needed.

Recovery types
Converged backup systems built upon the combination of Avamar or NetWorker and Data Domain deliver several recovery
types.
The following recovery types are available:
● Guest-level recovery
● Image-level recovery
● File-level recovery from an image backup

Guest-level recovery
The process of guest-level recovery is the same in this solution for VMs as for traditional recovery from a backup application.
You can recover directories, files, and applications in the Avamar Administrator GUI or NetWorker Management Console.

Image-level recovery
To recover data from an image-based backup, you can:
● Recover to the original VM
● Recover to an existing VM
● Recover to a new VM
● Use Instant Access Recovery
NOTE: Instant Access Recovery is not supported on the Data Domain 3300.

File-level recovery from an image backup


You can perform file-level recoveries (FLR) from an image-based backup using Avamar and NetWorker.

Replication of backup data to a second site


Converged backup systems support replication of backup data to a second site.

Replication using Avamar


The Avamar management feature set includes replication between primary and secondary Data Domain systems. The replication
policy that is applied to each dataset in the Avamar Administrator controls the replication. Typical Avamar replication scenarios
that are supported for datasets that are targeted to Data Domain include:
● Many-to-one, one-to-many, cascading replication
● Extension of data retention times
● Root-to-root replication
The replication process that is configured in Data Domain is automated in the Avamar framework and is transparent to the
backup administrator. This replication functionality requires a remote Avamar server and a remote Data Domain system.

Replication using NetWorker


The NetWorker clone feature replicates save sets from one Data Domain Boost (DD Boost) device to another storage device.
The clone is a complete and separate copy of the backup data, which is used for recovery operations or for creating other
clones. The clone feature can act on single save sets or an entire volume of a DD Boost device. The original NetWorker browse
and retention policies are maintained in the clone, although the policies can be changed in the clone.
For Converged Systems, it is recommended that you replicate NetWorker backups to another DD Boost device to improve
replication performance and to maintain deduplication at the recovery site.

Replication of Data Domain Standalone with Data Domain Boost for
Enterprise Applications, Application Native Backup Utilities, and third-party
backup applications
Backups from Data Domain with Data Domain Boost for Enterprise Applications (DDBEA) can be replicated using the Data
Domain Replicator. In backups of Microsoft applications, the application agent does not initiate or monitor the replication. The
Microsoft application can restore from the replicated copy by pointing the restore operation to the Data Domain where the
replicated copy resides. DDBEA for Oracle enables replication either through the Data Domain Replicator or through
RMAN-managed file replication, in which RMAN is configured to manage Data Domain replication by defining a backup.cmd
file in RMAN that sets the remote Data Domain system as the replication target. For these solutions, and any other DDBEA
solution, see the Dell EMC documentation for specific and complete details.
Backups made using an application's native backup utilities, when not used with DDBEA, can be replicated using the Data
Domain Replicator by configuring replication of the collection, a directory, or an MTree. Replication of the collection replicates
the entire Data Domain system to another Data Domain. Replication of a directory or MTree replicates only what is defined in
the replication pair.
For replication of backups from a third-party backup application, see the vendor documentation on Data Domain replication
(where available).

Cloud tier support


Converged backup systems support cloud tiering.

Cloud tiering with Avamar or NetWorker and Data Domain


When integrated with a Cloud tier-supported Data Domain system, Avamar and NetWorker provide a secure and seamless
method to tier data to the cloud for long-term storage. The backup administrator can create operations to move backups
from the Data Domain to a public, private, or hybrid cloud service. After backups are in the cloud, administrators can perform
seamless recoveries, as if the backups were located within the Data Domain system.

IP-based data backup


Backup data travels between a converged backup system and a Converged System over the IP links that directly connect the
two systems. This backup data replicates to an alternate data center using the customer's IP infrastructure between the two
sites. The result is a copy of critical data protected locally on the converged backup system and a duplicate copy of the data
safely stored in a secondary location.
The following figure shows the IP-based, data backup with replication to a second data center:

NDMP backup
Converged backup systems use Network Data Management Protocol (NDMP) backups to provide a backup and recovery
solution for IP storage, such as Isilon, PowerMax/VMAX eNAS, VNX, VNXe, or PowerStore products.
The converged backup system reads NDMP data from the IP storage device, deduplicates it, and writes it to a converged
backup system storage device. If the storage device is non-NDMP, such as Data Domain, the backup data is first converted to
a compatible format. Depending on the configuration, the storage device or another component deduplicates the data before
it is written to the storage device.

Related information
IP-based data backup

Enterprise deployment
A corporation with multiple branch offices might have multiple isolated backup systems. The best practice is to deploy a
centrally managed backup architecture. Converged backup systems also protect against site disasters by replicating the daily
backups offsite. This solution is fast and network efficient, and it eliminates the risks and costs associated with tape backups.
Ports are allocated in the Integrated Data Protection cabinet to support up to four directly connected Converged Systems.
Applications and data in remote offices back up to a local converged backup system. Converged backup systems in remote
offices replicate data to the converged backup system in the regional data center. This provides an off-site copy in case of
disaster at the remote office.
Applications and data created in the regional data center are backed up to the converged backup system in the regional data
center. Only the data that was created in the regional data center is replicated to the primary data center. The converged
backup system in the primary data center replicates to the converged backup system in the secondary data center. The
converged backup system in the secondary data center replicates to the converged backup system in the primary data center.
The following figure shows the global deployment of a converged backup system designed to provide rapid local file recovery:

This figure shows:
● The remote offices replicate backups to the regional data center for disaster recovery.
● Only data created in the regional data center is replicated to the primary data center.
● If the primary data center and secondary data center are in an active/active configuration, then the most common
deployment is to cross-replicate backups between the data centers for disaster recovery.
● If the two data centers are in an active/passive configuration, then the backup replication is one way.
● In some countries, laws prevent backups from crossing regional boundaries. This is important to understand when developing
remote backup or replication solutions.
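The replication rules above can be expressed as a small routing function. The site-role names and the active/active flag below are hypothetical labels used only to illustrate the topology; they are not product configuration.

```python
def replication_targets(site_role, active_active=True):
    """Return where a site replicates its locally created backups.

    Mirrors the deployment rules described above:
    - remote offices replicate to their regional data center
    - a regional data center replicates only its own data to the primary
    - primary/secondary cross-replicate when active/active,
      one way (primary -> secondary) when active/passive
    """
    if site_role == "remote-office":
        return ["regional-dc"]
    if site_role == "regional-dc":
        return ["primary-dc"]
    if site_role == "primary-dc":
        return ["secondary-dc"]
    if site_role == "secondary-dc":
        return ["primary-dc"] if active_active else []
    raise ValueError(f"unknown site role: {site_role}")

print(replication_targets("secondary-dc", active_active=False))  # []
```

A real design would add the regional-boundary constraint noted above as a filter on the returned targets.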

Use Oracle RMAN with converged backup systems
There are two common methods for using a converged backup system with Oracle databases.
Use one of these systems to back up Oracle databases:
● An Avamar agent or a NetWorker Module for Databases and Applications (NMDA) to redirect data to Data Domain.
● Oracle Recovery Manager (RMAN) and Data Domain Boost to back up directly to Data Domain.
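As a rough sketch of the second method, the following generates an RMAN command file that allocates an SBT channel through a DD Boost library. The library path, storage-unit name, and Data Domain hostname are placeholder assumptions; confirm the exact PARMS syntax against the Dell EMC application agent documentation for your version.

```python
def rman_ddboost_script(storage_unit, backup_host,
                        sbt_library="/opt/dpsapps/dbappagent/lib/lib64/libddboostora.so"):
    """Build an RMAN command file that backs up directly to Data Domain.

    storage_unit, backup_host, and the SBT library path are placeholder
    assumptions; the real values are site- and agent-version-specific.
    """
    return f"""RUN {{
  ALLOCATE CHANNEL c1 DEVICE TYPE SBT_TAPE
    PARMS 'SBT_LIBRARY={sbt_library},
           ENV=(STORAGE_UNIT={storage_unit},BACKUP_HOST={backup_host})';
  BACKUP DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL c1;
}}"""

# The generated text would be saved as backup.cmd and run as:
#   rman target / cmdfile=backup.cmd
script = rman_ddboost_script("SU_ORACLE", "dd9900.example.com")
print(script.splitlines()[0])  # RUN {
```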

Converged backup systems are directly integrated into the Converged System network. As a result, Oracle backup traffic is
completely offloaded from the customer's backup network.
A converged backup system implemented for other applications already has the network connections between the Converged
System and the Data Domain controller in place.
The following figure shows the high-level connectivity between a Converged System and converged backup system for Oracle
database native backup and recovery directly to Data Domain:

Related information
Converged backup systems connectivity overview

ProtectPoint for VMAX3 and VMAX All Flash arrays with Data Domain

ProtectPoint for VMAX3 and VMAX All Flash arrays with Data Domain is a data protection solution that integrates primary
storage on a VMAX array with protection storage on a Data Domain system.

NOTE: This ProtectPoint section uses VMAX array as a generic term for VMAX3 and VMAX All Flash arrays.

ProtectPoint offers the following features:
● Uses FC to perform block level copies from application source LUNs to a Data Domain to create incremental backups.
● Provides data protection where small or nonexistent backup windows exist.
● Meets extremely demanding recovery time objectives (RTO) or recovery point objectives (RPO), where traditional backups
might not.
● Provides backups with little or no impact on application servers.
● Enables direct backup from VMAX array (primary storage) to Data Domain (protection storage).
● Sends only unique data from primary storage to protection storage, which eliminates the impact on the local area network
and minimizes the impact on the storage area network.
● Uses Data Domain as protection storage, which reduces backup storage requirements.

ProtectPoint components and data flows


The following figure shows the components in a ProtectPoint solution:

The following points describe the ProtectPoint solution components:


● AR Host is the application or recovery host.
○ The application host is where the application being backed up is installed. The data being backed up is on production
storage that is presented to the application host from the VMAX array.
○ The recovery host is where data is restored to. This host can be the application host or a separate host.
● The primary storage software features used for ProtectPoint are SnapVX and FAST.X.
● The primary storage production device is the storage that is presented to the application host from the VMAX array.
● The primary storage backup device is Data Domain storage encapsulated by the VMAX array so that it appears as local VMAX storage.
● Data Domain block service for ProtectPoint enables the efficient creation of a static image of the primary storage device.
● Data Domain static images are the stored images of the ProtectPoint backups.
The following figure shows a detailed backup data flow for ProtectPoint:

The following sequence of events occurs during backup:
● The primary storage feature SnapVX is initiated from the AR Host.
● SnapVX creates a snapshot of the primary storage production device.
● Change Block Tracking (CBT) copies the changed data to an encapsulated Data Domain storage device.
● Data Domain creates and stores a static image of the snapshot.
The following figure shows detailed recovery data flow for ProtectPoint:

There are two types of restore available with ProtectPoint:


● Object-level restore (from VMAX or Data Domain)
● Full-application rollback
The sequence of restore events and possible use cases for each restore type are as follows:

Object-level restore from VMAX
Sequence of restore events:
● The Data Domain writes the static image to the encapsulated storage device, making it available on primary storage using
the FAST.X restore device.
● The FAST.X restore device is mounted to the AR host by the application administrator.
● The application administrator uses operating system and application-specific tools and commands to restore specific objects
to a recovery device on the VMAX array.
Use cases:
● The Data Domain is inaccessible from the AR host.
● Prolonged access to recovered data is required.
● Access to recovered data is required from the production host.

Object-level restore from Data Domain
Sequence of restore events:
● The Data Domain restore device is made available to the AR host.
● The application administrator uses operating system and application-specific tools and commands to restore specific objects
directly to the AR host.
Use cases:
● The VMAX is inaccessible from the AR host.
● Transitory access to recovered data is required.
● Access to recovered data is required from a host other than the production host.
● Instant access to recovered data is required.
● The ability to perform third-party tape out is required.
● The ability to perform in-place data integrity verification is required.

Full-application rollback
Sequence of restore events:
● The Data Domain writes the static image to the encapsulated storage device, making it available on primary storage.
● The encapsulated recovery device is copied to the production device (overwriting the production device).
Use case:
● Full recovery of the production LUN to a specific point in time is required.
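The use cases above amount to a selection rule, sketched below with illustrative condition names (they are not product parameters).

```python
def choose_restore_type(dd_reachable, vmax_reachable,
                        full_rollback_needed, prolonged_access=False):
    """Pick a ProtectPoint restore type from the conditions listed above."""
    if full_rollback_needed:
        # Full recovery of the production LUN to a point in time.
        return "full-application rollback"
    if not dd_reachable:
        # Data Domain inaccessible from the AR host.
        return "object-level restore from VMAX"
    if not vmax_reachable:
        # VMAX inaccessible from the AR host.
        return "object-level restore from Data Domain"
    # Both reachable: prolonged access favors VMAX; transitory or
    # instant access favors restoring from Data Domain.
    return ("object-level restore from VMAX" if prolonged_access
            else "object-level restore from Data Domain")
```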

ProtectPoint workflows
The application administrator initiates ProtectPoint workflows to protect applications and data. Before the workflow is
triggered, the application must be quiesced to ensure that the snapshot on the Data Domain system is application consistent.
ProtectPoint database application agents work with the application being protected to automatically quiesce the application.
The application administrator is also responsible for retaining and replicating copies, restoring data, and recovering applications.
NOTE: When ProtectPoint runs in a VMware virtual environment, only VMs running Microsoft Windows or Linux operating
systems with RDM storage are supported.
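The quiesce-then-snapshot ordering described above can be sketched as a context manager. The quiesce, resume, and create_snapshot callables are hypothetical stand-ins for what the application agent and SnapVX actually perform.

```python
from contextlib import contextmanager

@contextmanager
def quiesced(app, quiesce, resume):
    """Hold the application quiescent for the duration of the snapshot."""
    quiesce(app)
    try:
        yield
    finally:
        resume(app)  # always resume, even if the snapshot fails

def protectpoint_backup(app, quiesce, resume, create_snapshot):
    # The snapshot is taken only while the application is quiesced,
    # which is what makes it application-consistent.
    with quiesced(app, quiesce, resume):
        return create_snapshot(app)

# Usage with stub callables:
events = []
snap = protectpoint_backup(
    "oracle-prod",
    quiesce=lambda a: events.append(f"quiesce {a}"),
    resume=lambda a: events.append(f"resume {a}"),
    create_snapshot=lambda a: events.append(f"snapvx {a}") or f"snap-of-{a}",
)
print(events)  # ['quiesce oracle-prod', 'snapvx oracle-prod', 'resume oracle-prod']
```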

ProtectPoint File system agent


The ProtectPoint File system agent is compatible with the following operating systems:
● Microsoft Windows
● Red Hat Enterprise Linux
● SuSE Linux Enterprise Server
● Oracle Linux
● HP-UX
● Oracle Solaris
● IBM AIX

ProtectPoint Database application agent


ProtectPoint Database application agent is compatible with the following applications:
● Oracle with RMAN
● SAP on Oracle using BR*Tools
● IBM DB2 using Advanced Copy Services (ACS)

Dell EMC PowerProtect Data Manager
PowerProtect Data Manager enables users to protect, manage, and recover data in on-premises, virtualized, or cloud
deployments.
The PowerProtect Data Manager platform provides centralized governance that helps mitigate risk and assures compliance
with SLAs and SLOs through protection workflows.
PowerProtect Data Manager enables automated discovery of virtual machines (VMs) and databases, centralized protection for
databases and containers (Kubernetes), and integrated storage. Key features include:
● Integrated deduplication and replication
● Self-service backup and recovery operations from native applications
● Multi-cloud optimization with integrated cloud tiering
● SaaS-based monitoring and reporting
● Modern services-based architecture for ease of deployment, scaling, and upgrading
● Enables backup administrators of large-scale database environments to schedule backups from a central location on the
PowerProtect Data Manager server
● Provides governed self-service and centralized protection by:
○ Monitoring and enforcing SLOs
○ Identifying violations of RPOs
○ Applying retention locks on backups for all asset types
● Supports PowerProtect Search, which enables backup administrators to quickly search for and restore file copies. The
PowerProtect Search software indexes VM file metadata to enable searches. Once you add indexing to protection
policies, the assets are automatically indexed as they are backed up.
● Supports protection of Kubernetes workloads. Through the PowerProtect Data Manager user interface, you can discover
Kubernetes clusters and protect and recover Kubernetes persistent volumes, containers, namespaces, and storage claims.
● Supports deploying an external VM direct appliance (vProxy)
● Supports manual backups of VMs in the vSphere Client

PowerProtect Data Manager 19.6 enhanced features:

● Deploy in Azure, Azure Government, and AWS GovCloud to protect in-cloud workloads.
● Storage policy-based management for VMware VMs.
● Protect VMware Cloud Foundation infrastructure.
● Support for vRA 8.2.
● Agentless app-consistent protection of PostgreSQL and Cassandra in Kubernetes.
● Protect Kubernetes clusters in multi-cloud environments.
● Back up Kubernetes cluster-level resources.
● Protect Kubernetes in the cloud with Data Manager in AWS and Azure.
● Enhanced resiliency
PowerProtect Data Manager deploys as a VM from an OVA in a VMware vSphere environment, and stores all backups and
data on a Data Domain. The Data Domain can be one you order together with the PowerProtect Data Manager solution, or a
previously purchased Data Domain. The Data Domain Boost (DD Boost) protocol provides advanced integration with backup and
enterprise applications for increased performance. DD Boost distributes parts of the deduplication process to the backup server
or application clients, enabling client-side deduplication for faster and more efficient backup and recovery.
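Client-side deduplication of the kind DD Boost performs can be illustrated with a fixed-size-chunk sketch, in which only chunks whose fingerprints the server has not already stored are sent. Real DD Boost uses variable-size segmentation and its own protocol; this is purely a conceptual model.

```python
import hashlib

def dedup_send(data, server_index, chunk_size=4096):
    """Return the chunks that actually need to travel over the wire.

    server_index is the set of fingerprints already on protection
    storage; duplicate chunks are referenced instead of re-sent.
    """
    to_send = []
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in server_index:
            server_index.add(fp)
            to_send.append(chunk)
    return to_send

index = set()
# Two unique chunks: the duplicate A chunk dedups even within the stream.
first = dedup_send(b"A" * 8192 + b"B" * 4096, index)
# Only the new C chunk must be sent on the second backup.
second = dedup_send(b"A" * 8192 + b"C" * 4096, index)
print(len(first), len(second))  # 2 1
```

The saving in bytes sent is what makes repeated backups of slowly changing data fast and network efficient.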
PowerProtect Data Manager appliance provides the following features for Converged Systems:
● PowerProtect Data Manager deploys as a virtual appliance on the management cluster on AMP Central.
● Ability to back up a wide variety of physical and virtual compute environments, applications, and databases, at image and file
level.
● VMware vSphere Web-Client plug-in
● Instant Access VM restore from Data Domain. Review the PowerProtect Data Manager documentation for the features and
limitations of the Instant Access process.

PowerProtect Data Manager 19.7 features


PowerProtect Data Manager 19.7 includes the following features:
● Enhancements to Data Domain support, including support for Data Domain systems with High Availability (HA) enabled. An
Active-Standby configuration provides redundancy in the event of a system failure. When an active Data Domain HA system
fails over to its standby Data Domain HA system, all in-progress PowerProtect Data Manager operations, including backup,
restore, replication, and cloud tier, continue unaffected.
● The ability to create, configure, and reuse storage units on a Data Domain system for use with multiple protection and
replication policies. When you create or edit a protection policy, PowerProtect Data Manager provides new options to select
a storage unit for the protection or replication target. The PowerProtect Data Manager Administration and User Guide
provides more information about working with storage units, including important limitations and security considerations.
● Support for the manual replication of individual or multiple assets in a protection policy with a replication stage by using the
new Protect Now wizard.
● The PowerProtect Search Engine software supports virtual networks (vLANs). The PowerProtect Data Manager
Administration and User Guide provides more information, including the steps you must take after an upgrade if you add
a virtual network to an environment that already had the search engine configured.
● The Changed Block Tracking (CBT) feature for VM backups is enhanced to provide the option to disable CBT. With high
change rates on a VM, CBT can sometimes cause backups to take longer than expected, so CBT can be disabled for VMs
whose backups take too long to complete. CBT can also be disabled on individual VMs if an issue is encountered with it.
● The quiescing of VMs can be enabled or disabled with APIs.
● Customer-signed certificates are supported by a new user interface.
● A new chapter in the PowerProtect Data Manager Administration and User Guide documents how to use a provided script
to back up VMware Cloud Foundation (VCF) assets.
● A restructured PowerProtect Data Manager Security Configuration Guide provides updated information for port usage,
default credentials, credential changes and expiry, certificate replacement, and account lockout behavior.
● Credentials can be set at the asset level for databases and applications from the user interface.
● Deleted assets that have no copies are automatically cleaned up.
● Enhancements to job statistics and reporting information to include job group metrics and details. This information appears
when viewing the Task Summary in the Jobs window.
● Enhancements to error messages provide more detailed descriptions and, where applicable, resolutions or workarounds.

PowerProtect Data Manager Compatibility


Review the PowerProtect Data Manager compatibility information for all the latest supported environments related to VMware,
applications, databases, file systems, and cloud services: https://elabnavigator.emc.com/eln/modernHomeDataProtection

VMware backup and recovery


Through integration with vSphere vCenter, PowerProtect Data Manager manages, protects, and recovers virtual machines
(VMs). PowerProtect Data Manager automates data retention SLA compliance by ensuring the correct number of backup copies
are stored in the correct location and at the required protection level.

Performance and scalability


The following are the targeted performance and scalability numbers for the PowerProtect Data Manager software:

● VMware Crash/App consistent: 10,000 assets
● VMware App Aware: 10,000 DBs (400 VMs * 25 DBs)
● ADM-SQL Application Direct: 2,500 assets (50 hosts * 50 DBs)
● ADM-Oracle Application Direct: 200 assets (40 hosts * 5 DBs)
● File System: 1,000 file systems with 40 million files
● Blended scale/jobs per day: 100K jobs (including vProxy, ADM, and FS assets)
● Number of vCenter Servers supported with a single PowerProtect Data Manager server: 12
● Number of external VM Direct engines supported with a single PowerProtect Data Manager server: 40
● Number of DD systems supported per PowerProtect Data Manager server: 10
● Number of concurrent NBD + Preferred Hot Add backups per ESXi host: 48
● Concurrent VMDK backups per vCenter Server: 180 (recommended count)
● Number of proxies per vCenter Server: 25 (7 recommended)
● Number of files/directories per file-level recovery: 200,000

Encryption
PowerProtect Data Manager offers three types of encryption.
● Inline encryption of data at rest.
● Encryption of data in flight through DD Boost software using TLS.
● Encryption of data in flight when replicating data over the WAN between sites.

Cyber Recovery
Cyber Recovery (CR) integrates with the Integrated Data Protection backup solution to maintain mission-critical business data
in a secure vault environment for data recovery. Use the management software to create writable copies for data validation and
analytics.
Data Domain replicates MTree data over an air-gapped link to a physically isolated Cyber Recovery Vault (CR Vault)
environment. The CR Vault data can be analyzed to detect whether it has been altered by tampering. If the copied data is
acceptable, it is saved as an independent full backup copy, which can be retention-locked and restored if needed. If this data
must be restored, it can be replicated out of the CR Vault and back to the production environment.
Cyber Recovery enables access to the CR Vault only long enough to replicate the data from the production Data Domain to the
CR Vault Data Domain. Otherwise, the CR Vault Data Domain is secured and off the network.
To minimize time and expedite the replication, deduplication is performed on the production Data Domain.
Cyber Recovery can use the Retention Lock software that is located on the Data Domain inside the CR Vault environment to
provide data immutability for a specific time. Retention Lock is enabled on a per-MTree basis and the retention time is set on a
per-file basis.
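The per-file retention time on a retention-locked MTree is conventionally expressed by setting the file's atime to the desired expiry over NFS or CIFS. The sketch below shows that convention against an ordinary local path standing in for a mounted MTree; on a real Data Domain, the enforcement, the minimum and maximum retention bounds, and the mount path all differ, so treat this as a model of the convention only.

```python
import os
import time

def set_retention(path, days):
    """Express a retention expiry as a future atime on the file.

    On a retention-locked MTree this atime update is what requests the
    lock; here a local temp path stands in for the NFS-mounted MTree.
    """
    expiry = time.time() + days * 86400
    st = os.stat(path)
    os.utime(path, (expiry, st.st_mtime))  # (atime, mtime)
    return expiry

# Stand-in for a backup copy on the mounted MTree:
with open("/tmp/backup-copy.img", "w") as f:
    f.write("backup payload")
expiry = set_retention("/tmp/backup-copy.img", days=30)
print(os.stat("/tmp/backup-copy.img").st_atime > time.time())  # True
```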

Cyber Recovery architecture


The Cyber Recovery (CR) solution replicates backup data from the production Avamar and Data Domain to the CR Vault Avamar
and Data Domain. This process is achieved through a dedicated MTree replication connection.
The following figure provides an overview of the Cyber Recovery solution:

Production environment
The customer's production data is protected and managed by applications such as Avamar, which store the backup data in
MTrees on the Data Domain system. In a CR solution, the production Data Domain replicates the MTree data to a Data Domain
in the CR Vault.

Cyber Recovery Vault environment


The Cyber Recovery Vault environment includes the following:
● Cisco Nexus 93180YC-EX switch
● Avamar Single Node system (required in the CR Vault for the backup administrator to recover data)
● Data Domain
● Cisco UCS C220 M5 server (runs the CR software on a Red Hat Enterprise Linux operating system)
The CR software enables and disables the replication interface on the CR Vault Data Domain. While the replication interface is
enabled, the production Data Domain replicates the MTree data to the CR Vault Data Domain.
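The enable-replicate-disable cycle described above can be sketched as an orchestration loop. The three callables are hypothetical stand-ins for the CR software's control of the vault replication interface, not a real API.

```python
def airgap_replication_cycle(enable, replicate, disable):
    """Open the air gap only for the duration of one replication sync.

    enable/replicate/disable are hypothetical callables standing in for
    the CR software's operations against the vault Data Domain.
    """
    enable()
    try:
        result = replicate()  # MTree replication from the production DD
    finally:
        disable()  # the vault goes back off the network regardless
    return result

log = []
airgap_replication_cycle(
    enable=lambda: log.append("interface up"),
    replicate=lambda: log.append("mtree sync") or "synced",
    disable=lambda: log.append("interface down"),
)
print(log)  # ['interface up', 'mtree sync', 'interface down']
```

The try/finally guarantees the vault interface is disabled even if the sync fails, which matches the requirement that the CR Vault Data Domain is otherwise secured and off the network.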

Physical Security
A Cyber Recovery solution must be designed for sufficient physical security. The Cyber Recovery Vault is a secure environment
where physical security is as important as logical segmentation. The Cyber Recovery Vault is designed to protect against
external attacks and against internal bad actors who might take advantage of weak physical security.
The cabinet for the Cyber Recovery solution contains the Dell EMC Intelligent Physical Infrastructure (IPI) G5 Network
Controller and associated PDUs. The IPI G5 Network controller provides an intelligent gateway to secure the cabinet through
access cards. The IPI G5 Network Controller collects information about power, thermals, and alerts for all components within
each cabinet.

Use Enterprise Hybrid Cloud with converged backup systems
The Enterprise Hybrid Cloud is a solution based on a defined reference architecture, which integrates products and services
from Dell EMC and VMware. The solution is based on a number of Dell EMC technologies and the VMware vRealize Suite.
Two major features of the Enterprise Hybrid Cloud solution are automation and self-service provisioning of virtual machines. By
combining the Dell EMC-provided customized workflows with a converged backup system (built with Avamar and Data Domain
components), Backup as a Service (BaaS) can be implemented to automate and provision backup data protection.
The use of Avamar for backup and recovery of Enterprise Hybrid Cloud provides the following benefits:
● Abstracts and simplifies backup and restore operations for cloud users
● Uses VMware Storage APIs for Data Protection, which provides Changed Block Tracking for faster backup and restore
operations
● Provides full image backups for running virtual machines
● Eliminates the need to manage backup agents for each virtual machine
● Minimizes network traffic by deduplicating and compressing data

Avamar configurations for Enterprise Hybrid Cloud


Several Avamar configuration use cases are possible with Enterprise Hybrid Cloud.

Standard Avamar configuration


The use case details for a standard Avamar configuration are as follows:

Primary use case: Single-site with single VMware vCenter Enterprise Hybrid Cloud deployment.
Alternate use cases:
● Continuous Availability dual-site/single VMware vCenter Enterprise Hybrid Cloud deployment
● Disaster Recovery dual-site/dual VMware vCenter Enterprise Hybrid Cloud deployment
Caveats for the alternate use cases:
● Provides no resiliency for the second site. If the site that hosts the Avamar instances is lost, there is no ability to restore
from backup.
● In a Continuous Availability dual-site/single VMware vCenter environment, virtual machines that reside on the site with no
Avamar instances back up across the WAN.
● In a disaster recovery dual-site/dual VMware vCenter topology, virtual machines that reside on the recovery site (registered
with a different VMware vCenter) have no ability to back up.

Redundant Avamar configuration with single VMware vCenter Server


The use case details for a redundant Avamar configuration with a single VMware vCenter are as follows:

Primary use case: Dual-site/single VMware vCenter Enterprise Hybrid Cloud deployment.
Alternate use case: Single-site topology. This provides a backup infrastructure that can tolerate the loss of a physical Avamar.
NOTE: The redundant Avamar/single VMware vCenter configuration should not be used in a disaster recovery
dual-site/dual VMware vCenter topology. The redundant Avamar/dual VMware vCenter configuration better suits the
disaster recovery dual-site topology without the need for extra components.

Redundant Avamar configuration with dual VMware vCenter Server


The use case details for a redundant Avamar configuration with dual VMware vCenter are as follows:

Primary use case: Dual-site/dual VMware vCenter Enterprise Hybrid Cloud deployment.
Alternate use case: No valid alternate use cases for this configuration.

Refer to the Enterprise Hybrid Cloud: Concepts and Architecture Solution Guide for more information.


4
Business continuity and disaster recovery
Integrated Data Protection provides business continuity services through VPLEX and disaster recovery services through
RecoverPoint, and their related products and technologies.

VPLEX
VPLEX delivers enhanced availability (zero data loss, near-zero downtime) of applications and data. VPLEX also delivers
enhanced mobility of applications and data (in other words, the migration of applications and data between systems without the
burden of work, planning, and downtime associated with traditional migrations).
A VPLEX cluster resides in the data path between the Converged System servers and storage, where it can create data copies.
Copies can be created locally or over distance and can be read and written simultaneously.
VPLEX enables dynamic workload mobility and continuous availability within and between Converged Systems over distance.
VPLEX also provides simultaneous access to storage devices at two sites through the creation of VPLEX distributed virtual
volumes, supported on each side by a VPLEX cluster.
The VPLEX Integrated Data Protection solution for Converged Systems uses the following components and technologies:
● One or more Converged Systems
● VPLEX
● VMware vCenter Server
● VMware vSphere High Availability
● VMware vSphere vMotion
Dell EMC supports the following VPLEX systems for Integrated Data Protection solutions:
● VPLEX Local
● VPLEX Metro
VPLEX Local consists of a single VPLEX cluster, which provides the ability to manage and mirror data between multiple
Converged Systems from a single interface within a single data center.
VPLEX Metro consists of two VPLEX clusters that are connected with intercluster links over distance. VPLEX Metro enables
concurrent read and write access to data by multiple hosts across two locations. By mirroring data between two sites, VPLEX
Metro provides nonstop data access in the event of a component failure or even a site failure.
The VPLEX Witness is an optional component for VPLEX Metro implementations. It can be deployed at a third site to improve
data availability in the presence of cluster failures and intercluster communication loss. The VPLEX Witness deploys as a
virtual machine, and its VMware ESXi host must reside in a separate failure domain from both VPLEX clusters to eliminate the
possibility of a single fault affecting both a cluster and VPLEX Witness.
The VPLEX Witness observes the state of the clusters and can distinguish between an outage of the intercluster link and
a cluster failure. The VPLEX Witness then uses this information, together with the preconfigured detach-rules, to guide the
clusters to either resume or suspend I/O.
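As an illustration of the guidance logic described above, the following Python sketch models how a witness might combine cluster health, link state, and a preconfigured detach-rule. This is a conceptual example only, not the actual VPLEX algorithm; the cluster names and the `preferred` parameter are invented for the illustration.

```python
def witness_guidance(cluster_a_alive, cluster_b_alive, link_up, preferred="A"):
    """Illustrative model of witness guidance (not the real VPLEX algorithm).

    Returns a dict mapping each cluster to "resume" or "suspend".
    """
    if link_up:
        # Clusters can coordinate normally; both continue I/O.
        return {"A": "resume", "B": "resume"}
    if cluster_a_alive and cluster_b_alive:
        # Intercluster link outage only: the preconfigured detach-rule
        # (preferred cluster) wins; the other suspends to avoid split-brain.
        return {c: ("resume" if c == preferred else "suspend") for c in "AB"}
    # One cluster failed: the surviving cluster resumes I/O, regardless of
    # which cluster the detach-rule prefers.
    return {
        "A": "resume" if cluster_a_alive else "suspend",
        "B": "resume" if cluster_b_alive else "suspend",
    }
```

The key point the sketch captures is the third branch: because the witness can distinguish a link outage from a cluster failure, the surviving cluster can resume even when it is not the detach-rule's preferred cluster.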

VPLEX use cases


The following list describes VPLEX use cases:

● Mobility: Enables data mobility and relocation between storage arrays that are located in Converged Systems in a single data center, or between Converged Systems in two remote data centers, without impacting users.
● Collaboration: VPLEX provides efficient real-time data collaboration over distance for big data applications.
● VPLEX Metro HA without cross-connect: Reduces the recovery time objective (RTO) by combining VPLEX Metro HA with VMware HA. Virtual machines automatically restart after a failure.
● VPLEX Metro HA with cross-connect: Eliminates RTO for most failure scenarios. The only single failure scenario that causes a virtual machine restart is a host failure.
● MetroPoint: MetroPoint combines VPLEX Metro and RecoverPoint to create a three-site or four-site solution that provides the best of both products for Converged Systems, including continuous availability, operational and disaster recovery, and continuous data protection.

Metro node
Metro node is the next-generation hardware platform, based on the Dell PowerEdge R640 server, specifically
designed with embedded management services, I/O path simplification, and 32 Gb Fibre Channel connectivity.
Metro node is currently only co-sold as a feature for PowerStore and Unity XT arrays. Customers with PowerMax, VMAX,
XtremIO, and other VPLEX supported arrays are directed to look at the VPLEX VS6.
Metro node supports the same use cases as the VPLEX VS2 and VPLEX VS6 platforms with the following exceptions:
● No support for RecoverPoint.
● No support for MetroPoint.
● No support for the Cluster Witness Server (CWS).
● No IPv6 support.

NOTE: Support for the Cluster Witness Server and IPv6 is scheduled for a future metro node release.

Metro node is available in a two-node cluster only (single engine), whereas the VPLEX VS2 and VPLEX VS6 scale out to dual and
quad engines. Like the VPLEX VS2 and VPLEX VS6, metro node supports both the Local and Metro configurations.

VPLEX hardware models


The VPLEX product is offered in the following models:
● VPLEX VS2
○ The VS2 hardware platform is positioned for cost-conscious customers who expect limited growth in their environment.
The VS2 is the best fit for customers who want the business continuity benefits of VPLEX for small-to-medium hybrid array
environments.
○ The VS2 hardware platform provides front-end and back-end speeds at a maximum of 8 Gbps over FC. It uses 8 Gbps FC
for intracluster communication and offers 10 GbE IP or 8 Gbps FC for WAN replication.
● VPLEX VS6
○ The VS6 is positioned for customers that have a performance intensive environment and are expecting to scale out.
These customers probably run all-flash or a hybrid mix of storage arrays and might want to refresh their VPLEX hardware
from the VS2 to the VS6 platform.
○ The VS6 platform provides front-end and back-end speeds at a maximum of 16 Gbps over FC. It uses 40 Gbps InfiniBand
for intracluster communication and offers 10 GbE IP or 16 Gbps FC for WAN replication.
● VPLEX for All-Flash
○ VPLEX for All-Flash is a solution for All-Flash storage. This includes Dell EMC Unity AF, XtremIO, and VMAX AF.
○ It provides the customer with an unlimited capacity license covering any number of All-Flash Arrays (AFAs).
○ VPLEX for All-Flash can be ordered with both VS2 and VS6 hardware. However, the VS6 platform is created for
All-Flash. Single, dual, and quad engine configurations are supported on both models.
● Metro node
○ Metro node is positioned to replace the aging VPLEX VS2 platform with a new Dell PowerEdge-based server and introduces
critical software and hardware enhancements. Metro node is introduced as a hardware “add-on” feature for the
PowerStore and Unity XT arrays.
○ Metro node has a small footprint (2 RU) and does not require a dedicated Data Protection cabinet. The metro node
nodes are racked in the VxBlock System cabinets.
○ Metro node provides front-end and back-end speeds at a maximum of 32 Gbps over FC and 10 GbE for IP WAN
replication.



○ Metro node is available in a two-node cluster only (single engine), whereas the VPLEX VS2 and VPLEX VS6 scale out to
dual and quad engines.

VPLEX performance and scale


The following list compares the specifications for the VPLEX VS2, VS6, and metro node hardware:

● Processor generation: VS2: Intel Westmere, single quad-core per director. VS6: Intel Haswell, dual 6-core per director. Metro node: dual Intel Xeon Silver 4208 CPUs (8 cores per CPU) per node.
● DRAM capacity: VS2: 36 GB per director. VS6: 128 GB per director. Metro node: 64 GB per node.
● Front end/back end: VS2: 8 Gb FC. VS6: 16 Gb FC. Metro node: 32 Gb FC.
● Interconnect: VS2: 8 Gb FC. VS6: 40 Gb InfiniBand. Metro node: N/A.
● WAN: VS2: 8 Gb FC or 10 GbE. VS6: 16 Gb FC or 10 GbE. Metro node: 10 GbE.
● Management server: VS2: rack-mount 1U server. VS6: integrated internal server module, dual MMCS. Metro node: management services run on the metro node nodes; no separate management server is required.
● Rack configuration: VS2: single, dual, or quad engine. VS6: single, dual, or quad engine. Metro node: single engine.
● Rack density: VS2: 22U for single and dual, 34U for quad. VS6: 20U for single and dual, 25U for quad. Metro node: 2U for single.

VPLEX components
A VPLEX cluster consists of the following hardware components:
● One, two, or four VPLEX engines
○ Each engine contains two directors.
○ Each engine is protected by backup power:
■ VS2: Standby Power Supply (SPS), external to the engine
■ VS6: Battery Backup Unit (BBU), internal to the engine
● One management server
○ VS2: In-rack 1U server external to the VPLEX engine.
○ VS6: Embedded two Management Module and Control Stations (MMCS) in VS6 base engine with internal storage. These
are named MMCS-A and MMCS-B.
● In the case of a dual or quad engine cluster, the cluster also contains:
○ VS2: Two 1U FC switches for communication between the directors in the engines
○ VS6: Two half-width 1U InfiniBand switches for communication between the directors in the engines
○ Two uninterruptible power supplies to provide backup power to:
■ VS2: The FC switches and the management server
■ VS6: The InfiniBand switches
● VPLEX Cluster Witness
○ Resides in a third failure domain up to one second away
○ Lightweight VM
■ 1 vCPU and 1 GB RAM
■ 2.54 GB disk
○ IP connectivity
● Metro node is available in single engine only.
○ Each ‘engine’ contains two nodes.
○ Because Global Cache is removed with metro node, there is no longer a requirement for a dedicated battery backup. The
best practice is to supply backup power to the data center through battery backup or a power generator.



○ With Global Cache removed and the availability of powerful CPUs, a dedicated management server is no longer required.
The management services run on both metro node nodes.
○ Intercluster communication is done over four peer-to-peer connections.
○ The engine concept is no longer used with metro node; it is used in this document only as a reference point.

Enterprise Hybrid Cloud and VPLEX for continuous availability


Enterprise Hybrid Cloud uses VPLEX for continuous availability.
VPLEX is used in a dual-site/single VMware vCenter Enterprise Hybrid Cloud environment when there is a requirement for
continuous availability. Requirements for VPLEX remain the same as for non-Enterprise Hybrid Cloud environments, such as the
requirement for stretched Layer 2 VLANs or support of VXLANs, and that there is latency of less than 10 ms between the two
sites.

Dual-site/single VMware vCenter Enterprise Hybrid Cloud deployment


The following components are used in a dual-site/single VMware vCenter Enterprise Hybrid Cloud deployment:
● VPLEX in Metro configuration
● VMware vSphere HA
● VMware vSphere vMotion
● VMware vSphere Metro Storage Clusters
Refer to Enterprise Hybrid Cloud: Concepts and Architecture Solution Guide for more information.

RecoverPoint
RecoverPoint is a data replication and recovery product that enables customers to roll back data to potentially any point in time.
RecoverPoint is a key component of the Integrated Data Protection product series, enhancing operational recovery, disaster
recovery processes, and reducing potential data loss. RecoverPoint is typically deployed to protect specific business applications
and data that need more data protection than a once-per-day backup provides.
RecoverPoint can protect data through the following methods:

● Local copy (local replication): Data is replicated to a local volume. Use case: operational recovery.
● Remote copy (remote replication): Data is replicated to a remote volume. Use case: disaster recovery.
● Local and remote copy (local and remote replication): Data is replicated concurrently to local and remote volumes. Use case: operational and disaster recovery.

RecoverPoint components and licensing


Details are provided for RecoverPoint components and licensing.
RecoverPoint systems consist of the following hardware and software components:
● RecoverPoint software provides the interface that customers use to manage replication and recovery.
● RecoverPoint Appliances (RPAs) run the RecoverPoint software. RPAs are physical Linux-based servers. RPAs also manage
the replication process.
● RecoverPoint splitter software runs on storage arrays, where it intercepts data writes and splits them into two copies.
One copy is passed to an RPA cluster for transfer to a replica copy volume and the other is written to its locally attached
production copy.
NOTE: XtremIO, VMAX3/All flash drive, and PowerMax are exceptions. XtremIO, VMAX3/All flash drive, and PowerMax
use a splitterless option called snap-based replication.
● RecoverPoint Storage Replication Adapter is an optional software module that integrates RecoverPoint with VMware
vCenter Site Recovery Manager.



RecoverPoint use cases
RecoverPoint has several use cases.
Typically, RecoverPoint implements the following use cases:
● Supplement daily backups
● Operational recovery
● Disaster recovery and disaster recovery testing
● Data repurposing, development, and testing

Supplementing daily backups


RecoverPoint is useful for business applications that cannot tolerate a loss of up to 24 hours of data. For those business
applications, the question is whether to use RecoverPoint to replace or to supplement once-per-day backups.
Use RecoverPoint to supplement once-per-day backups, for the following reasons:
● You commonly keep once-per-day backups for 30-60 days or even archive them indefinitely. A file from any one of those
backups can be recovered. RecoverPoint retains data for short periods and does not support long-term data archiving and
recovery.
● A corrupted file that is replicated is still a corrupted file. If using RecoverPoint alone, recovery from a corrupted file is
possible only if the corruption is detected early enough. The detection of corruptions must occur while a snapshot containing
the uncorrupted file is still in the RecoverPoint journal. Using once-per-day backups with RecoverPoint delivers a level of
insurance above that of either method alone.
The following figure shows the combination of daily backups and RecoverPoint replication on Converged Systems:

Operational recovery
RecoverPoint enables you to quickly recover from operational failures through its ability to roll back to any point-in-time (PiT).
Recovery from an operational disaster using RecoverPoint can occur through the following methods:



● Production recovery: Recovers the entire production dataset from an uncorrupted point in time on the replica copy volume.
● File recovery: Copies single or multiple files from the replica copy volume to replace corrupted files. The replica copy volume is first made accessible by placing it in one of the available image access modes (physical, virtual, or direct).

By default, RecoverPoint snapshots are crash-consistent. To ensure write-order consistency across specific production copy
volumes, add the volumes to the same consistency group. RecoverPoint can provide application consistent snapshots for
compatible applications, such as Microsoft Exchange and Oracle.
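The core of point-in-time recovery is selecting, from the journal, the most recent snapshot taken before the corruption occurred. The following Python sketch illustrates that selection over a simplified journal of timestamped entries; the journal contents are invented for the example, and this is not RecoverPoint's actual implementation.

```python
import bisect

def latest_clean_snapshot(journal, corruption_time):
    """Return the most recent snapshot taken strictly before the corruption.

    `journal` is a list of (timestamp, label) tuples sorted by timestamp,
    mimicking a journal of point-in-time snapshots.
    """
    times = [t for t, _ in journal]
    i = bisect.bisect_left(times, corruption_time)
    if i == 0:
        return None  # corruption predates the whole journal: no clean PiT
    return journal[i - 1]
```

This also illustrates why RecoverPoint supplements rather than replaces daily backups: once the journal's retention window has rolled past the last clean point in time, the function returns nothing, and recovery must fall back to a backup.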

Disaster recovery
The quickest way to resume services after a major infrastructure failure is often to transfer the services to another facility.
Remote replication enables you to maintain up-to-date replica data in remote sites over any distance.
Use synchronous replication when the distance between two Converged Systems is short enough. Data in the disaster recovery
site is fully synchronized with the production site. The disaster recovery site can resume operations should a disaster occur at
the production site. XtremIO requires the use of VPLEX in front of RecoverPoint to provide synchronous replication.
Use asynchronous replication when the distance between Converged Systems is too far to support synchronous replication.
Data in the disaster recovery site is synchronized as closely as possible with the production site. Any data lag depends on
network bandwidth, data change rate, and the distance between the production and disaster recovery sites.
The following figure shows disaster recovery using traditional methods:



When a disaster occurs, corrupted data might be replicated to the replica copy volume. With RecoverPoint, you can choose any
point in time and check it to ensure data integrity before doing a full failover of the production site. Checking beforehand avoids
the delay of having to redo failovers that used corrupted data.
RecoverPoint supports multisite topologies. Consider multisite designs to consolidate or diversify disaster recovery sites.
Disaster recovery plans are typically tested before a real disaster recovery situation to ensure that the plans function correctly.
With RecoverPoint, you can access replica data at the disaster recovery site offline and use it in disaster recovery testing, as
required. There is no impact to production data and no delay in restoring data at the disaster recovery site from a tape backup.
Disaster recovery failover and testing is a manual process when using RecoverPoint alone. RecoverPoint integrates with
VMware vCenter Site Recovery Manager to provide automated disaster recovery operations. You benefit from simplified failover
and testing to any point in time.

Data repurposing, development, and testing


Use replicas of your production data for repurposing, developing, and testing. RecoverPoint facilitates these use cases by
supporting the following features:

● Up to four copies of a single production volume: Having multiple copies means that:
○ You can allocate some copies to recovery and others to alternative activities.
○ The risk of copy data being unavailable for recovery due to other activities being performed on it is minimized.
○ You can work with up-to-date copies of production data.
● Host application access to replica copy volumes: Various image access methods enable customers to access copy data while RecoverPoint continues to buffer production writes on the replica copy journal. You can also roll back changes that are made during image access when image access is suspended (if using logged access). You can perform tests and other actions on copy data with minimal risk to production data, even where only single copies of production data exist.
● Recovery of replica copy volumes to an alternative location: You can restore data from a replica copy volume to any appropriate volume. The restoration enables you to repurpose data for alternative production requirements. Examples include analysis and reporting, or re-creating production environments in a testing or development setting. The data is made available quickly, is up-to-date, and can be manipulated with no impact to production data.



RecoverPoint replication
RecoverPoint supports data replication over any distance.
RecoverPoint replicates data in one of the following replication modes:
● Asynchronous mode
● Synchronous mode (not supported with XtremIO)

Asynchronous mode
Asynchronous mode is the default replication mode for RecoverPoint. The application initiates a write and does not wait for
the acknowledgment from the remote RecoverPoint Appliance (RPA) before initiating the next write. The data of each write
is stored on the production RecoverPoint cluster, which acknowledges the write. Based on the lag policy, system loads, and
available resources, the remote RPA cluster decides when to transfer the writes.
The main advantage of asynchronous replication is its ability to provide synchronous-like replication without degrading the
performance of host applications.
Asynchronous mode might not be the best option for all situations. For example, a high data change rate can increase data that
is stored between transfers and can cause data loss if a disaster occurs.
RecoverPoint replicates asynchronously only in situations in which doing so enables superior host performance without resulting in
an unacceptable level of potential data loss.
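The trade-off described above, where writes are acknowledged locally and shipped later, can be sketched as a toy model. This is purely illustrative (the class and method names are invented, not a RecoverPoint API); the `exposure` value represents the writes that would be lost if a disaster struck before the next transfer.

```python
class AsyncReplicator:
    """Toy model of asynchronous replication: writes are acknowledged
    locally and transferred to the remote copy later, so the remote copy
    can lag behind production. Illustrative only."""

    def __init__(self):
        self.pending = []   # acknowledged but not yet transferred
        self.remote = []    # writes applied at the remote copy

    def write(self, data):
        self.pending.append(data)
        return "ack"        # host continues without waiting on the WAN

    def transfer(self):
        # The remote RPA cluster decides when buffered writes are shipped,
        # based on lag policy, system load, and available resources.
        self.remote.extend(self.pending)
        self.pending.clear()

    def exposure(self):
        # Writes that would be lost if production failed right now.
        return len(self.pending)
```

A high data change rate grows `pending` between transfers, which is exactly the potential-data-loss scenario the text warns about.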

Asynchronous snap-based replication mode (XtremIO is not supported)


RecoverPoint provides VNX users with an alternative asynchronous replication option called snap-based replication mode. This
mode is particularly useful during periods of high load or at user-configured intervals.
During a high load situation, the replication process cannot keep up with the incoming writes, which means that replication
cannot continue and initialization is required. If high load continues during the initialization, RecoverPoint might be unable to
create snapshots and the recovery point objective (RPO) requirements might be jeopardized.
Snap-based replication improves performance under high load periods and uses write-folding to reduce WAN traffic. The longer
the gaps between the snapshots, the higher the bandwidth reduction, and the greater the protection window due to less journal
space being consumed.
In VNX snap-based asynchronous replication mode, the splitter still captures and provides the writes to the RPA journal.
The Dell EMC Unity array does not support snap-based replication.

Asynchronous XtremIO-supported snap-based replication mode


For both non-XtremIO and XtremIO, snap-based replication uses array-based snapshots and replicates the delta between two
snapshots to the remote RecoverPoint cluster. However, snap-based replication for XtremIO is implemented differently from
that of non-XtremIO, which is why both are discussed separately. Snap-based replication for XtremIO is implemented in different
ways, depending on the types of array where the production volume and replica copy reside.
The following list describes how XtremIO snap-based replication is implemented:

● XtremIO production, XtremIO replica: The journal contains only pointers to snapshots and metadata; it does not contain any replicated data. The journal size can therefore be small: 10 GB for regular consistency groups and 40 GB for distributed consistency groups.
● XtremIO production, non-XtremIO replica: A splitter is used at the replica, and XtremIO snapshots are stored in the journal of the target array for distribution and point-in-time access. After the entire snapshot has been replicated, it is bookmarked. This bookmarked snapshot is available for image access.
● Non-XtremIO production, XtremIO replica: Writes coming from the production array splitter are stored as XtremIO snapshots. If snap-based replication is enabled on the production array, a snap is created at the remote XtremIO array upon completion of the production replication snapshot. When continuous replication is configured, snapshots are created at the remote XtremIO array once every minute.

When XtremIO is at the target, image access and recovery time objective (RTO) to any point in time is instantaneous.
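The central idea of snap-based replication, shipping only the delta between two successive snapshots rather than every write, can be sketched as follows. The snapshots are modeled as simple block-address-to-data mappings invented for the example; this is a conceptual illustration, not the array's actual mechanism.

```python
def snapshot_delta(prev, curr):
    """Minimal sketch of snap-based replication: given two snapshots as
    block-address -> data mappings, return only the blocks that changed
    (or were added) since the previous snapshot. Only this delta needs to
    cross the WAN, which is the source of the bandwidth reduction."""
    return {addr: data for addr, data in curr.items()
            if prev.get(addr) != data}

def apply_delta(replica, delta):
    """Apply a received delta to the remote replica copy."""
    replica.update(delta)
    return replica
```

Longer gaps between snapshots fold more overwrites of the same block into a single delta entry (write-folding), which is why the text notes that wider snapshot intervals reduce WAN traffic.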

Asynchronous VMAX3/AF and PowerMax snap-based replication mode


VMAX3/AF snap-based replication uses the native VMAX3/AF snap capabilities to create point-in-time consistent snaps of
consistency group production volumes. It uses the snaps to synchronize the production volumes with the copy volumes.
This solution supports manual, continuous, and periodic snapshots, with a minimum period of 1 minute.
The following list describes the different replication options available:

● VMAX3/AF or PowerMax production, VMAX3/AF or PowerMax replica: RecoverPoint production volumes on VMAX3/AF and PowerMax arrays are replicated using VMAX3/AF and PowerMax snap-based replication without a RecoverPoint write splitter. The copy uses RecoverPoint journaling, enabling point-in-time failover.
● VMAX3/AF or PowerMax production, non-VMAX3/AF or PowerMax replica: RecoverPoint production volumes on VMAX3/AF and PowerMax arrays are replicated using VMAX3/AF and PowerMax snap-based replication without a RecoverPoint write splitter. The copy (except XtremIO) uses RecoverPoint journaling, enabling point-in-time failover.
● Non-VMAX3/AF or PowerMax production, VMAX3/AF or PowerMax replica: The VMAX3/AF or PowerMax copy uses RecoverPoint journaling, enabling point-in-time failover.

When protecting VMAX3/AF and PowerMax arrays, the entire VMAX3/AF or PowerMax storage group is protected rather than
individual devices.
RecoverPoint supports using TimeFinder SnapVX to clone a copy, enabling you to access the clone rather than using
RecoverPoint image access to access the copy directly.
Cloning a copy with TimeFinder SnapVX is recommended in the following situations:
● Image access is needed for an extended time.
● The accessed copy needs to support a heavy write workload.
For RecoverPoint for VMAX3/AF and PowerMax limitations, see the RecoverPoint 5.1 with VMAX3/AF/PowerMax Technical
Notes document on Dell EMC support.

Synchronous mode
In synchronous mode, the application initiates a write, which is replicated to the remote RPA cluster. The write is acknowledged
when it reaches the remote RPA memory. The remote copy is always up-to-date with its production copy. Application
performance may be impacted when the solution is not properly sized: poor WAN, SAN, or array performance, as well as high
RPA load, can cause application performance to degrade.
By default, new consistency groups are created with asynchronous mode enabled, and can be set to replicate synchronously
through the Link policies.
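The synchronous write path described above can be contrasted with the asynchronous sketch earlier: here the host is acknowledged only after the remote side has received the write. The function and journal names below are invented for this toy illustration; it is not a RecoverPoint API.

```python
def sync_write(local_journal, remote_journal, data, remote_up=True):
    """Toy synchronous write: the write is shipped to the remote RPA and
    the host is acknowledged only after the remote side has received it,
    so the remote copy never lags production. Illustrative only."""
    if not remote_up:
        # No remote acknowledgment is possible, so the host write stalls:
        # this is where poor WAN/SAN/array performance hurts applications.
        raise RuntimeError("no ack: remote RPA unreachable")
    remote_journal.append(data)
    local_journal.append(data)
    return "ack"
```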

Dynamic synchronous mode


You can enable dynamic synchronous mode for environments that replicate over longer distances or that experience high
latencies at times on their WAN link. When dynamic sync is enabled, you can configure the replication policy to switch from
synchronous to asynchronous replication when latency or throughput thresholds are exceeded. When the value drops below the
threshold again, RecoverPoint returns to synchronous replication.
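The threshold-based switching just described can be reduced to a one-line policy check. The threshold value below is made up for the example, and the function is a conceptual sketch rather than RecoverPoint's configuration interface.

```python
def replication_mode(latency_ms, threshold_ms=5.0):
    """Illustrative dynamic-sync decision: replicate synchronously while
    the measured WAN latency stays at or below the configured threshold,
    and fall back to asynchronous when the threshold is exceeded."""
    return "async" if latency_ms > threshold_ms else "sync"
```

In a real deployment, the same policy can also be keyed on throughput thresholds, as the text notes.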



The following list compares the two modes:

● Supports local replication: Synchronous: yes. Asynchronous: no.
● Supports remote replication: Synchronous: yes. Asynchronous: yes.
● Distance restrictions: Synchronous: yes. Asynchronous: yes (over IP).
● Transmission mediums: Fibre Channel or IP for both modes.
● Frequency that data is written to disk: Synchronous: immediately for each write. Asynchronous: can store and delay multiple writes until conditions are suitable for writing.
● Write acknowledgment: Synchronous: by the remote RPA when it receives the data (or when the data is written to the remote copy, depending on settings). Asynchronous: by the local RPA when it receives the data.
● Advantages: Synchronous: production and copy data are always fully synchronized. Asynchronous: no performance impact on applications.
● Disadvantages: Synchronous: can impact performance if host applications delay writes pending acknowledgment of previous writes. Asynchronous: if a system failure occurs after writes are acknowledged but before they are written to disk, multiple data writes can be lost.

NOTE: Fibre Channel communications over a wide area network can run over dedicated fiber-optic links or be encapsulated
in an IP network. Fibre Channel carried over IP is called Fibre Channel over IP, or FC-IP.

RecoverPoint consistency groups


RecoverPoint protects data volumes using consistency groups. By replicating changes to production volumes in the correct
write order, consistency groups ensure that copies are always consistent and available for failover or restore.
Each consistency group can have an individual recovery point objective (RPO) and recovery time objective (RTO).
NOTE: An RPO is the maximum tolerable data loss, measured in time, resulting from a data failure. An RTO is the maximum
tolerable time that data can remain offline following a data failure. For example, if data must be restored to within one hour
of when a failure occurs, and it takes two hours to restore the data, then the RPO is one hour and the RTO is two hours.
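The RPO/RTO arithmetic in the note can be written out as a short sketch. The timestamps are invented for the example and mirror the one-hour/two-hour scenario above.

```python
from datetime import datetime, timedelta

def rpo_rto(last_good_copy, failure, service_restored):
    """Compute RPO and RTO from three timestamps, per the definitions in
    the note: RPO is the data lost (failure time minus the last recoverable
    point in time), and RTO is the outage duration (restore time minus
    failure time)."""
    return failure - last_good_copy, service_restored - failure

failure = datetime(2021, 7, 1, 12, 0)
rpo, rto = rpo_rto(failure - timedelta(hours=1), failure,
                   failure + timedelta(hours=2))
# rpo is one hour and rto is two hours, matching the note's example
```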

RecoverPoint storage awareness


RecoverPoint storage awareness provides a unified interface for storage integration and management from the RecoverPoint
cluster.
It enhances RecoverPoint so customers can provision journal volumes on the storage array directly from the RecoverPoint
console.
To enable RecoverPoint snap-based replication, register the VNX array in the RecoverPoint console. Dell EMC Unity arrays
auto-register with RecoverPoint.
Registering the storage or VMware vCenter Server makes it possible to monitor connectivity and removes the requirement to
repeatedly enter credentials to collect system information.



RecoverPoint auto-provisioning and auto-matching for XtremIO
arrays
RecoverPoint 5.1.x and XtremIO version 6.0 and above can automatically provision volumes on XtremIO arrays.
When XtremIO arrays are used at both the production and copy site, RecoverPoint can automatically provision the production
and copy journal volumes, and the copy data volume. The end user must only create the production data volume that is being
replicated by RecoverPoint.

Copy volume auto-matching (from the GUI only)


If an XtremIO array at the replica site contains exposed volumes with the same names and sizes as the production volumes,
RecoverPoint automatically matches these volumes to the production volumes according to name and size.

RecoverPoint multi-site support


RecoverPoint allows a single RecoverPoint cluster to replicate to multiple clusters, or multiple clusters to replicate to a single
cluster.
RecoverPoint supports the following multi-site designs:
● Fan-out replication from a local site to up to four remote copies, or to one local and three remote copies. Each copy contains
data from the same consistency group.
● Fan-in replication from up to four remote sites to one central site. Each copy contains data from a different consistency
group.
NOTE: For fan-in topologies, one remote copy per consistency group can be synchronous and the remaining three (or all
four) can be asynchronous. As each link is independent, customers can access different points in time (PiT) in different
locations. Fan-out replication for XtremIO supports a maximum of three copies, consisting of one production and two
copies. These copies can be local, remote, or a mix of both.
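The fan-out limits stated above can be expressed as a small validation check. This is an illustrative sketch of the documented limits only (four copies in the general case, two copies plus production for XtremIO); the function name and arguments are invented for the example.

```python
def validate_fan_out(copies, xtremio=False):
    """Check a proposed fan-out configuration against the limits described
    above: up to four copies of a consistency group (local, remote, or a
    mix), or at most two copies alongside production for XtremIO."""
    limit = 2 if xtremio else 4
    return len(copies) <= limit
```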
The following figure shows RecoverPoint in a fan-out topology:



The following figure shows RecoverPoint in a fan-in topology:



Uses for multi-site topologies
The following list shows uses for multi-site topologies:

● Multi-level protection: Fan-out topologies allow you to implement multi-level protection for core data. You can choose how many copies of data to make and at how many locations to store those copies. Because each copy in a fan-out topology is independent of other copies, you can allocate copies to different roles. For example, you can allocate one copy to disaster recovery and another to testing and development.
● Disaster recovery site consolidation: Fan-in topologies allow you to reduce the complexity and costs of managing multiple remote sites, by allowing you to replicate multiple remote sites through one RecoverPoint cluster in one central location. You can manage and maintain all replica data in one place and avoid the need for multiple clusters or multiple sites. You can also maintain a single disaster recovery site to protect multiple production sites.

Enterprise Hybrid Cloud and RecoverPoint for disaster recovery
Enterprise Hybrid Cloud uses RecoverPoint for disaster recovery.
RecoverPoint is used in a dual-site/dual vCenter Enterprise Hybrid Cloud environment when there is a requirement for disaster
recovery. Typically, VMware vCenter Site Recovery Manager is part of the disaster recovery solution. RecoverPoint, as with
typical environments, provides continuous data protection for any point in time (PiT) recovery. When paired with VMware
vCenter Site Recovery Manager, it can also provide centralized recovery plans, automated failover and failback, non-disruptive
testing of disaster recovery, and planned migration.

Dual-site/dual VMware vCenter Enterprise Hybrid Cloud deployment


The following components are used in a dual-site/dual VMware vCenter Enterprise Hybrid Cloud deployment:
● RecoverPoint
● VMware vCenter Site Recovery Manager



Refer to Enterprise Hybrid Cloud: Concepts and Architecture Solution Guide for more information.

RecoverPoint and VMware vCenter Server Site Recovery Manager
VMware vCenter Site Recovery Manager (SRM) integrates with VMware vCenter Server and vSphere Web Client. This
integration simplifies disaster recovery (DR) management by automating the testing and orchestration of centralized recovery
plans.
VMware SRM automates the orchestration of the failover process to the recovery site and also the failback to the production
site. Failover and failback automation eliminates both the complexity and the errors inherent in manual processes. This level of
automation enables you to test recovery plans nondisruptively and increases the predictability of recovery time objectives (RTOs).
RecoverPoint is responsible for all data replication between the production and recovery sites. VMware SRM works with
RecoverPoint to automate and orchestrate the processes of migrating, recovering, testing, reprotecting, and failing back virtual
machines. VMware SRM communicates with RecoverPoint through a Storage Replication Adapter (SRA) that is installed on the
VMware SRM server.
The Virtual Storage Integrator (VSI) plug-in provides the additional ability to select recovery from any point in time from
RecoverPoint. The VSI runs on a virtual machine that is deployed through an OVA into the virtual environment.

Use cases
VMware vCenter Site Recovery Manager supports several use cases and provides significant capability and flexibility to Dell
EMC customers.
The following use cases for VMware vCenter Site Recovery Manager with Converged Systems are supported:

Use case Description


Disaster recovery VMware SRM was designed for disaster recovery.
Planned migration/Disaster avoidance Planned migrations and disaster avoidance exercises, which move VMs and applications
between sites, are two common use cases for VMware SRM. VMware SRM
gracefully shuts down the VMs at the protected site and restarts them at the
recovery site in a predefined order. It supports full testing of the migration in a
manner that is nondisruptive to production systems. If any issues are discovered
during the migration, this feature provides an opportunity to correct them.
Upgrade and patch testing The VMware SRM test environment provides an ideal location for operating system
and application upgrade and patch testing. These test environments are complete
copies of the production environments. Test environments exist in an isolated
network, which ensures that testing does not impact production workloads or
replication.

The following figure shows how Converged Systems use VMware SRM and RecoverPoint to provide automated failover with
any-point-in-time (PiT) recovery:



VMware SRM creates recovery plans that automate the failover of resources when a disaster occurs. VMware SRM recovery
plans use protection groups to define specific items to move from a protected site to a recovery site. Align the VMware SRM
protection groups with RecoverPoint consistency groups. When a VMware SRM protection group fails over, the RecoverPoint
datasets that its resources use are automatically included.
The following advantages apply when using RecoverPoint with VMware SRM:
● VMware SRM can use the point-in-time (PiT) recovery abilities of RecoverPoint.
● VMware SRM removes the need to interact with the RecoverPoint console during a DR situation. VMware SRM can
automate the DR workflow to the point of pressing a single button.
● RecoverPoint removes distance limitations between production and DR sites through its ability to replicate data
asynchronously while maintaining write-order consistency.

AMP Protection
VMware vCenter Site Recovery Manager uses RecoverPoint to replicate the datastores on which the virtual machines reside.
RecoverPoint connects to the Converged System storage array over FC. The physical servers that are used in the AMP are not
configured with HBAs and cannot connect to the storage array over FC. Because of this and other reasons, VMware vCenter
Site Recovery Manager cannot protect the AMP. To protect one or more of the AMP element managers, move the VMs to the
VMware ESXi hosts that run on the Cisco blade servers.

Orchestration and automation


VMware vCenter Site Recovery Manager provides the vAdmin with integrated automation and orchestration capabilities to
simplify the recovery of virtual machines. vAdmins can access these capabilities through the VMware vSphere Web Client
snap-in.
The following table shows the automation capabilities and orchestration capabilities available with VMware vCenter Site
Recovery Manager:

Group Capability Description


Automation Inventory mapping Allows resource mapping, folder mapping, and network mapping.
These mappings provide default settings for recovered VMs. For
example, a VM can be connected to Network-A on the protected
site and get automatically connected to Network-B on the recovery
site. Networks for use during testing can also be configured in the
same area.
Automation Protection Groups (Datastore Groups or Storage Policy Protection Groups)
Datastore Groups use a manual process to protect and unprotect
VMs, but offer more flexibility. Storage Policy Protection Groups
enable the automatic protection of VMs that are associated with a
storage policy, but have more limitations than Datastore Groups.

Automation Deep integration with other VMware solutions
VMware vSphere PowerCLI provides Microsoft Windows
PowerShell functions to administer VMware SRM or to create
scripts that automate SRM tasks.
The VMware vRealize Orchestrator plug-in for Site Recovery
Manager enables you to automatically create a VMware SRM
infrastructure. You can add virtual machines to protection groups
and configure recovery settings of virtual machines.

Orchestration Recovery Plans Recovery plans are like an automated run book, controlling all
the steps in the recovery process. A VM can be part of multiple
recovery plans.

Orchestration Priority Groups (Start-up Sequence) There are five priority groups in VMware SRM. VMs in group one
are recovered first and VMs in group five are recovered last. All
VMs in a priority group are started simultaneously and the next
priority group is started only after all the previous VMs are booted
and responding. VMware tools must be installed.
Orchestration Dependencies When more granularity is needed for startup order, you can use
dependencies on a per-VM level.
Orchestration Shutdown and Startup actions Shutdown actions apply to the protected VM at the protected site
during a recovery plan run. Shutdown actions are not used during
recovery plan testing.
Startup actions apply to a VM that is recovered by VMware SRM.
Powering on a recovered VM is the default setting. Sometimes, it
might be desirable to recover a VM but leave it powered off.

Orchestration Pre- and post-power-on steps VMware SRM can run commands or scripts from the VMware SRM
server at the recovery site before and after powering on a VM.
Running a script inside a VM is also supported as a post-power-on
step.
VMware SRM can also display a visual prompt as a pre- or
post-power-on step.

Orchestration IP customization The most commonly modified VM recovery property is the IP
settings. Most companies have different IP networks at the
protected and recovery sites.
IP customization can be used in testing, failover, and failback
operations.
There are three methods to customize network settings:
● Create an IP customization rule that maps one range of IP
addresses to another.
● Configure IP customization manually on each VM.
● Configure IP customization using the DR IP Customizer Tool.
For more information regarding this tool, see the VMware SRM
documentation.

Orchestration Reprotect and Failover VMware SRM can not only fail over VMs to a recovery site, but also
fail them back to their original site. The original protected site must
still be intact for failback, and a reprotect workflow must be run first.
The reprotect workflow reverses replication and sets up the recovery
plan in the opposite direction.

Orchestration Reporting When workflows such as test and recovery plans are run, history
reports are automatically generated. These reports contain items
such as the workflow name, the time the workflow was run, the
duration, successful and failed operations, and any error messages.
These history reports can be exported and used for various
purposes, including internal auditing, proof of disaster, recovery
testing for regulatory requirements, and troubleshooting.
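The priority-group semantics described in the table (five groups, group 1 first, each group waiting for the previous one to finish booting) can be modeled with a small batching function. This is an illustrative sketch only, not VMware SRM code; the VM records and field names are assumptions.

```python
from itertools import groupby

def startup_batches(vms):
    """Model SRM priority groups: VMs in group 1 start first, group 5 last.
    All VMs in a batch start together; the next batch starts only after the
    previous batch's VMs are booted and responding (not modeled here)."""
    ordered = sorted(vms, key=lambda v: v["priority"])           # groups 1..5
    return [[v["name"] for v in batch]
            for _, batch in groupby(ordered, key=lambda v: v["priority"])]
```

For example, a database VM placed in priority group 1 is fully started before the application and web VMs in group 2 are powered on.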

RecoverPoint for Virtual Machines


RecoverPoint for VMs is a software-only product that protects virtual machines that reside in a VMware vSphere environment.
It provides local and remote replication capabilities with virtual machine-level granularity.
NOTE: Due to the requirement for secure boot, RecoverPoint for VMs 5.3 is not supported on new installations of VMware
vSphere 7.0 in VxBlock Systems. This will be addressed in a future release of RecoverPoint for VMs. There are no issues
installing RecoverPoint for VMs 5.3 on vSphere 6.7 U1 and above.
RecoverPoint for VMs targets the VMware administrator, whereas the RecoverPoint product targets the storage administrator.
The RecoverPoint for VMs virtual RecoverPoint Appliances (vRPA) are installed in the VMware vSphere environment. The
appliances then provide the RecoverPoint for VMs plug-in into the VMware vSphere Web Client.
The vRPAs use the IP protocol to communicate with the VMware vSphere datastores. Each VMware vSphere ESXi host
that participates in protecting VMs requires the RecoverPoint for VMs splitter to be installed.
VMware administrators can use the VMware vSphere Web Client to take an active role to protect and recover VMs to any point
in time with integrated orchestration and automation capabilities. RecoverPoint for VMs fully supports all standard RecoverPoint
operations:
● Test Copy
● Recover Production
● Fail Over
RecoverPoint for VMs provides its own automation and orchestration capabilities, so VMware vCenter Site Recovery
Manager (SRM) is not required. Use VMware SRM only with RecoverPoint Classic.

RecoverPoint for Virtual Machines architecture


Describes and illustrates the RecoverPoint for VMs system architecture.
The RecoverPoint for VMs system consists of the following components:
● One or more RecoverPoint for VMs clusters. See the latest documentation for the maximum number of clusters that a
RecoverPoint for VMs system can contain.
● A minimum of one virtual RecoverPoint Appliance (vRPA) and a maximum of eight vRPAs for each RecoverPoint for VMs
cluster. These vRPAs manage all aspects of the data replication process.
NOTE: Even though a single vRPA can be used to create a RecoverPoint for VMs cluster, the best practice for
Converged Systems is to deploy a minimum of two vRPAs.

● RecoverPoint for VMs splitter, which is installed on every VMware ESXi host in the VMware vSphere cluster that hosts
VMs protected by RecoverPoint for VMs. The splitter splits each write coming from the host and sends it to both the
vRPA and the virtual machine VMDK. The vRPAs handle all traffic to the journals and replicas as they do in a physical
RecoverPoint system. All storage traffic between the vRPAs and the VMware vSphere datastores uses the IP protocol.



● VMware vSphere Web Client plug-in, which provides the user interface for managing the RecoverPoint for VMs system. The
vAdmin can manage all RecoverPoint for VMs operations from this interface. With the introduction of RecoverPoint for VMs
5.3, the plug-in comes in two versions:
○ The vSphere Flex plug-in for vSphere 6.7 and earlier.
■ The vSphere Flex plug-in is the user interface for managing the RecoverPoint for VMs system. The plug-in
communicates directly with the vRPA clusters.
○ The vSphere HTML5 plug-in for vSphere 6.7 U1 and later.
■ The vSphere HTML5 plug-in is the user interface for managing the RecoverPoint for VMs system. The plug-in does
not communicate directly with the vRPA clusters. It communicates with the vRPA clusters through the HTML5
plug-in server.
NOTE: With the vSphere 6.7 release, VMware deprecated the vSphere Flex plug-in. The recommendation is to use
the vSphere HTML5 plug-in as the primary client and use the vSphere Flex plug-in for features that are not yet
supported through the vSphere HTML5 plug-in.
● The RecoverPoint for VMs HTML5 plug-in server is a dedicated plug-in server that provides replication management for one
or more RecoverPoint for VMs systems and communicates through the REST API.
The following figure shows the RecoverPoint for VMs system architecture with the vSphere Flex plugin:

The following figure shows the RecoverPoint for VMs system architecture with the vSphere HTML5 plug-in:



vSphere HTML5 plug-in server
The RecoverPoint for VMs HTML5 plug-in server deploys from an OVA and runs on SUSE Linux 12.5. It hosts the HTML5
plug-in and the user interface logic. It serves as a single endpoint for the new REST API, through which it manages all
RecoverPoint for VMs systems running 5.3 or later on the vCenter server. A single HTML5 plug-in server can manage one or
more vRPA clusters per vCenter Server. Every vCenter and HTML5 plug-in server supports a maximum of 50 vRPA clusters.
The plug-in server communicates securely with every registered vCenter Server, and with every vRPA cluster registered with a
vCenter Server.

NOTE: All vRPA clusters registered to the same plug-in server require the same admin password.

Environments running vCenter in Embedded Linked Mode can install either:


● One HTML5 plug-in server per vCenter server where vRPA clusters are registered.
● One HTML5 plug-in server to manage all vRPA clusters registered across the different vCenter servers.
RecoverPoint for VMs supports up to seven vCenter servers using vCenter Enhanced Linked Mode.
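As a rough planning aid, the limits above (a maximum of 50 vRPA clusters per vCenter Server or HTML5 plug-in server, and the two Embedded Linked Mode deployment options) can be sketched as a small helper. This is an illustrative function based only on the limits stated here, not a Dell EMC tool.

```python
MAX_VRPA_CLUSTERS = 50  # per vCenter Server / HTML5 plug-in server (per the guide)

def plugin_servers_needed(clusters_per_vcenter, shared=True):
    """Estimate how many HTML5 plug-in servers an Embedded Linked Mode
    environment needs: one shared server for all vCenter servers, or one
    per vCenter server where vRPA clusters are registered."""
    if any(c > MAX_VRPA_CLUSTERS for c in clusters_per_vcenter):
        raise ValueError("a single vCenter cannot register more than 50 vRPA clusters")
    if shared:
        # one shared server: all clusters count against one 50-cluster limit
        if sum(clusters_per_vcenter) > MAX_VRPA_CLUSTERS:
            raise ValueError("too many clusters for one shared plug-in server")
        return 1
    # one plug-in server per vCenter that actually has vRPA clusters registered
    return sum(1 for c in clusters_per_vcenter if c > 0)
```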

RecoverPoint for VMs multisite support


RecoverPoint for VMs enables a single RecoverPoint for VMs cluster to replicate to multiple clusters, or multiple clusters to
replicate to a single cluster.
RecoverPoint for VMs supports the following multisite designs:
● Fan-out replication from a local site to up to four remote copies, or to one local and three remote copies. Each copy contains
data from the same consistency group.

64 Business continuity and disaster recovery


● Fan-in replication from up to four remote sites to one central site. Each copy contains data from a different consistency
group.
● Starting with RecoverPoint for VMs 5.2.2 and Cloud DR 19.1, the maximum number of noncloud replica copies per
consistency group is two.
NOTE: For fan-in topologies, one remote copy per consistency group can be synchronous and the other can be
asynchronous. As each link is independent, you can access different points in time (PiTs) in different locations.
Besides the multisite options listed earlier, RecoverPoint for VMs also supports a fully connected system, where all sites are
connected with each other.

Orchestration and automation


RecoverPoint for VMs provides the vAdmin with integrated automation and orchestration capabilities to simplify the recovery of
VMs. The vAdmin can access these capabilities using the VMware vSphere Web Client snap-in.
The following table shows the automation capabilities and orchestration capabilities available with RecoverPoint for VMs:

Group Capability Description


Automation VMDK manageability ● Select the type of VMDK to use for the copy VM. The
available options are:
○ Same as the source
○ Thick (provisioned lazy zeroed or eager zeroed)
○ Thin provisioned
● Exclude certain VMDKs, for example, shared or
nonpersistent VMDKs.
● Expand or add a VMDK without losing the journal or
causing a full sweep of the consistency group.
● Enable or disable automatic protection of newly added
VMDKs.
Replication of VM hardware changes VM version, MAC address, CPU, memory, resource
reservations, network adapter status, and network adapter
type are replicated to all copy VMs in the consistency
group.
CAUTION:
When a VMDK is removed from a protected VM,
the corresponding copy VMDK is not removed.
This option protects against accidental changes.
When a protected VM is deleted, the
corresponding copy VMs are not removed. This
option protects against accidental changes.
Replication of the SR-IOV NIC type is not supported. If
the ESXi at a copy does not support the production VM
version, no hardware resources are replicated.

MAC address replication MAC addresses of remote copy VMs on a different
vCenter are automatically replicated.
Application-consistent bookmarks The RecoverPoint for VMs KVSS utility supports
application-consistent bookmarks for Microsoft Windows.
Orchestration Start-up Sequence As a best practice, install VMware Tools on VMs that require
protection by RecoverPoint for VMs.
Both consistency groups and group sets can use the Start-
up Sequence feature to define the startup order (priority)
for the VMs in the consistency groups and the consistency
groups in the group sets.

User prompts You can add user prompts in the Start-up Sequence for a
consistency group for each VM to provide the vAdmin with
configurable messages at certain points in the workflow.
If a time-out is defined, the prompt is automatically
dismissed when the time-out period elapses. If no time-out
is defined and a prompt is not dismissed, the start-up
sequence does not continue until the prompt is dismissed.

User scripts You can add external scripts that run before and after power-up
in the Start-up Sequence of a consistency group for each
VM. Define an external host for each vRPA cluster in the
RecoverPoint for VMs system.
Install an SSH server on each external host. The scripts
run over SSH on the configured external host. Each script
has a mandatory time-out period. Recovery is halted until
the script runs successfully. A prompt is displayed if the script
fails.

Networking enhancements The network settings for a VM can be changed for the copy
VM in three different ways. Which method is used depends
on the number of VMs that require their network settings to
be changed:
● Use the RecoverPoint for VMs GUI to change the
network configuration of a few VMs.
● Use a .CSV file to change the network configuration of
multiple VMs at a copy.
● Use a .CSV file to change the network configuration of
multiple VMs in a system.
If re-IP glue scripts have already been implemented with a
previous version of RecoverPoint for VMs, these glue scripts
can continue to be used.
To ensure that you do not lose your protection VM network
configuration when you fail back to production, edit the copy
network configuration of your production VMs.
For details regarding specific operating systems, see the
RecoverPoint for Virtual Machines Administrator's Guide.
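Two of the re-IP methods above use a .CSV file to change the network configuration of multiple VMs at once. The sketch below shows how such a file might be generated programmatically; the column names are hypothetical assumptions, and the actual schema is defined in the RecoverPoint for Virtual Machines Administrator's Guide.

```python
import csv

def write_copy_network_csv(path, vms):
    """Write per-VM copy network settings for bulk re-IP of multiple VMs.
    NOTE: the field names below are illustrative assumptions, not the
    documented RecoverPoint for VMs CSV schema."""
    fields = ["vm_name", "copy_ip", "copy_netmask", "copy_gateway", "copy_dns"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()       # one header row, then one row per VM
        writer.writerows(vms)
```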

RecoverPoint for Virtual Machines Cloud Solution


RecoverPoint for Virtual Machines 5.2.2 adds the capability to protect VMs directly to Amazon Simple Storage Service (S3)
using proprietary snap-based replication with a Recovery Point Objective (RPO) of 15 minutes.
Use RecoverPoint for VMs to protect production VMs while an instance of the Cloud DR Server (CDRS) is deployed to Amazon
Elastic Compute Cloud (EC2) to manage the cloud copies and run orchestrated recovery flows to native Amazon EC2 instances.
The Cloud DR Add-on is an optional appliance that can be deployed to fail back from Amazon Web Services (AWS) to an
on-premises vCenter server or to VMware Cloud on AWS. If the source VMs/sockets are licensed in the same way as a
non-Cloud Solution, the CDRA is a no-cost option with RecoverPoint for Virtual Machines.
By implementing the RecoverPoint for Virtual Machines Cloud Solution, you can:
● Allow multiple copies on-premises, both local and remote, and to the cloud.
● Test/Failover to AWS EC2.
● Failback to on-premises VMware environments or VMware cloud on AWS (SDDC) by using the Cloud DR appliance (CDRA)
● Orchestrate in-cloud DR.
The HTML5 plug-in for RecoverPoint for VMs 5.3 does not include the ability to configure and manage the RecoverPoint for
VMs Cloud Solution. This capability will be in a future release. For vCenter servers running a vSphere 6.7.x version, the Cloud
Solution can be managed through the RecoverPoint for VMs 5.3 Flex plug-in or the legacy API.



RecoverPoint for Virtual Machines cloud solution use cases
A benefit of replicating to AWS using the RecoverPoint for Virtual Machines cloud solution is cost reduction. The following use
cases show how to accomplish this. RecoverPoint for Virtual Machines replicates data on a consistency group (CG) basis, so
the replication topology is also decided per CG.
It is possible to use a combination of the following use cases:
● Cloud as a DR site
○ No need for an on-premises DR site or secondary site.
○ RPO as low as 15 minutes. RPO of seconds or zero requires RecoverPoint for Virtual Machines replication to a non-cloud
site.
○ Recover to EC2 or VMware cloud on AWS.
● Protection Tiering
○ Tier-1 VMs are protected by replicating to a remote VMware vCenter (DR site).
○ Tier-2 VMs are protected to the cloud.
○ Tier-2 VMs can be recovered directly from the cloud to the DR site.
● Extra layer of protection
○ Production VMs are protected by replicating them to a remote on-premises VMware vCenter (DR site).
○ A secondary copy is replicated to the AWS cloud. The use case would be:
■ Replicate a production VM to a location far enough away that local/regional disasters do not impact the cloud VM.
■ Long-term retention: the RecoverPoint for Virtual Machines Cloud Solution enables you to retain a VM copy for up to
90 days.

RecoverPoint and VPLEX


VPLEX extends services, such as automated disaster recovery (DR), distributed resource scheduling (DRS), high-availability
(HA), and fault tolerance (FT), across data centers. These services were previously confined to single data centers.
RecoverPoint enhances VPLEX by extending disaster recovery to additional sites and over long distances without additional
investment in VPLEX infrastructure. RecoverPoint further enhances VPLEX by allowing data rollback to any point in time (PiT),
meaning that all possible disaster recovery and business continuity requirements are met.
RecoverPoint uses the integrated VPLEX write splitter software, which splits data writes across VPLEX and RecoverPoint
volumes. The VPLEX splitter also allows multiple RecoverPoint clusters to share a single splitter, so that up to four
RecoverPoint clusters can protect a single VPLEX cluster.
NOTE: Dell EMC supports RecoverPoint access to third-party arrays through VPLEX only while customers are migrating
data to an array.

NOTE: Metro node does not include the write splitter and therefore does not support RecoverPoint and MetroPoint.

VPLEX with RecoverPoint Data Protection


By integrating with RecoverPoint, VPLEX can take advantage of RecoverPoint's DVR-like rollback of data. VPLEX volumes can
replicate to local RecoverPoint volumes to provide Continuous Local Protection, or to a remote volume to provide Continuous
Remote Replication. Either scenario allows VPLEX to recover data to any point in time.
The following figure shows how Converged Systems integrate VPLEX and RecoverPoint to provide two active failover sites with
any-point-in-time recovery:



VPLEX with RecoverPoint data protection provides:
● Recovery to any point in time at the local or remote site
● Support for heterogeneous storage arrays
● Protection from data loss and corruption

VPLEX with RecoverPoint Disaster Recovery


VPLEX can extend its disaster recovery capabilities to long distances by using the RecoverPoint asynchronous replication
capability and VMware SRM integration.
The following figure shows how Converged Systems can integrate VPLEX and RecoverPoint to extend disaster recovery
services to asynchronous distances:



VPLEX with RecoverPoint disaster recovery provides:
● The ability to combine short- and long-distance disaster recovery site locations.
● The ability to use VMware vCenter Site Recovery Manager for disaster recovery site operations.
● The extension of VPLEX disaster recovery functionality without the need for additional VPLEX hardware

MetroPoint
MetroPoint combines VPLEX Metro and RecoverPoint to provide enhanced data protection for Converged Systems.
MetroPoint protects writes from both sides of a VPLEX Distributed Device. MetroPoint replicates data from one VPLEX Metro
site in a cluster to a remote Converged System over IP or Fibre Channel networks, using asynchronous, near-synchronous, or
synchronous replication. Additionally, MetroPoint replication allows local copies at all VPLEX Metro sites. Dell EMC customers
benefit from continuous availability of active-active applications across data centers while maintaining operational and disaster
recovery.
MetroPoint can operate in a two-site, three-site, or four-site topology. It can load balance replication data across WAN links and
can use VMware Site Recovery Manager to manage disaster recovery operations.
The following figure shows an overview of MetroPoint operation:



MetroPoint operation
MetroPoint protects Converged System data on both sides of a VPLEX distributed device.

MetroPoint replication topologies


MetroPoint uses consistency groups to configure replication. A MetroPoint consistency group consists of the following copies:
● Two source copies (active production and standby production)
● Up to two local copies
● Up to one remote copy
A fully configured MetroPoint consistency group contains five copies. These copies are distributed as shown in the following
figure:



Two-site MetroPoint topology
In a two-site MetroPoint deployment, each site hosts a Converged System and an Integrated Data Protection cabinet running
one VPLEX Metro site and containing one connected RecoverPoint cluster.
The following figure shows an Integrated Data Protection two-site MetroPoint implementation:

In this two-site MetroPoint topology, the VPLEX Metro cluster hosts a Distributed Volume (DR1) that spans Site A and Site B.
At each site, a RecoverPoint splitter intercepts data writes destined for the Distributed Volume and passes a copy to the
RecoverPoint cluster. The RecoverPoint cluster replicates the data to a local copy and also maintains a journal volume for the
copy.
You can add a third Converged System to the MetroPoint system at a later point and add a remote copy non-disruptively.

Three-site MetroPoint topology


A three-site MetroPoint deployment is similar to a two-site deployment but adds a remote Converged System. The remote
Converged System contains a RecoverPoint cluster and a remote copy. The local copies are optional in a three-site MetroPoint
topology.
The following figure shows an Integrated Data Protection three-site MetroPoint implementation:

This illustration shows a full-meshed RecoverPoint system, which is recommended in a three-site MetroPoint. Replication to the
remote site happens from either site of the VPLEX Metro. In this illustration, Site A is the active source and is replicating to
the remote site. Site B is the standby source and is marking only. The active and standby sources are both replicating the same
VPLEX Distributed Volume (DR1). During a source switchover, replication undergoes a short initialization phase.



Four-site MetroPoint topology
In a four-site MetroPoint topology, the remote site is also a VPLEX Metro site.
NOTE: The replica copy in a four-site topology should be a local device in a MetroPoint consistency group. Using a
distributed device results in a fractured state. The replica device exists only after you enable image access, or after you
fail over to the remote copy and set it as the production copy. In those cases, VPLEX Metro synchronizes the fractured
leg of the distributed device automatically.
The following figure shows an Integrated Data Protection four-site MetroPoint implementation:

In the previous figure, the distributed devices running at Site A and Site B replicate to Site D, while the distributed devices
running at Site C and Site D replicate to Site A.



5
Data Protection Management
The ability to protect critical data and to manage and view system health, performance, and efficiency is imperative in an
enterprise environment.
Dell EMC offers various solutions to make managing the data protection systems easier. The following data protection
management solutions may be integrated with a VxBlock 1000 managed by AMP-VX or AMP Central.
● Data Protection Central
● Data Protection Search
● Data Protection Advisor

Data Protection Central


Data Protection Central is a management console for Avamar, NetWorker, Data Domain, PowerProtect Data Manager, Data
Protection Search, and Data Protection Advisor.
Data Protection Central provides the following features:
● Ability to launch the following administrator consoles from a central location:
○ Avamar Management Console
○ NetWorker Management Console
○ PowerProtect Data Manager
○ Data Domain System Manager
○ Data Protection Search Console
○ Data Protection Advisor Console
● Supports single sign-on authentication for the following:
○ Data Protection Search 18.1 and above
○ Avamar 7.5.0-183 Hotfix HF284113_2 and above
○ NetWorker 18.1 and above
○ Data Domain DD OS 6.2.0.10 and above
○ PowerProtect Data Manager
● Dashboard providing the following Avamar and Data Domain details:
○ Backup Activities
○ Replication Activities
○ Capacity information for Avamar and Data Domain
○ Health
○ Alerts
● Monitoring multiple systems at the job, systems, and alert levels
● Management capabilities for Avamar systems:
○ View, add, edit, and delete policies, retentions, schedules, and datasets.
○ Add clients and proxies to policies.
○ Perform a backup of a policy.
○ Rerun a backup or replication activity
○ View existing clients that are associated with an Avamar system.
● Complex search and recover operations through integration with Data Protection Search
● Reporting capabilities through integration with Data Protection Advisor

Data Protection Search


Data Protection Search is a scalable index and search appliance that integrates with Avamar and NetWorker. Through scheduled
collection activities, backup content of one or more Avamar or NetWorker servers is gathered, indexed, and stored within the
Data Protection Search node. This feature enables users to perform searches across the backup environment, from which the
user can then preview, download, or restore the backup content.
user can then preview, download, or restore the backup content.
The Data Protection Search index capability enables you to do the following:
● Process content from multiple input sources.
● Index only metadata or full content.
● Apply scalable, fault tolerant open-source indexing technology.
Data Protection Search enables you to do the following:
● Perform advanced and powerful searches.
● Perform cross-server, cross-platform searches.
● Preview backup file content without downloading.
● Download backup files locally.
● Restore backups to original or alternate locations.
● Apply visual filters to search results.

VMware disk space size requirements


Each deployment of a search appliance is pre-configured with three virtual disks with a combined total of 180 GB of disk space.
● Disk 1: 40 GB System Disk
● Disk 2: 100 GB Index Data Disk
● Disk 3: 40 GB Temporary Space Disk
To support a larger number of metadata files, it may be necessary to expand the disk space for Disk 2 and Disk 3. You must size
the Search node VMs appropriately before powering on the VM for the first time.

Index Data Disk


The following table details size requirements for the Index Data Disk:

Number of supported files    Required index disk space
200 million                  120 GB
500 million                  300 GB
1 billion                    600 GB
2 billion                    1.2 TB
4 billion                    2.4 TB

When sizing the index disk, consider the following:


● More disk space is required when implementing full-content indexing.
● Add search nodes to increase the total amount of index disk space.
● The total disk space required can be divided by the number of search nodes.
● If data index replication is enabled, twice as much disk space is required.
● File counts are for unique files; a file that is unchanged through several backups remains a single file in the index.
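The figures in the Index Data Disk table scale linearly at roughly 0.6 GB of index disk per million unique files. As a rough planning aid, the following sketch applies that ratio together with the considerations above; the ratio is derived from this guide's table, and the full-content multiplier is an illustrative assumption, not a published figure:

```python
def index_disk_gb(unique_files_millions, nodes=1, replication=False,
                  full_content=False):
    """Estimate index data disk space (GB) per Search node.

    The ~0.6 GB per million unique files ratio is derived from the
    sizing table above; the full-content multiplier is an assumption,
    since this guide does not publish an exact figure for it.
    """
    total = unique_files_millions * 0.6   # metadata-only baseline
    if full_content:
        total *= 2                        # assumed multiplier for full content
    if replication:
        total *= 2                        # data index replication doubles space
    return total / nodes                  # divide total across Search nodes

# 1 billion unique files, metadata only, spread over two nodes:
print(index_disk_gb(1000, nodes=2))       # → 300.0
```

For a single node with 200 million unique files this reproduces the table's 120 GB figure.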

Temporary space disk


The following table details size requirements for the temporary space disk:

Largest backup       Required size for disk 3    Indexing both Avamar and NetWorker
5 million files      40 GB                       80 GB
10 million files     58 GB                       116 GB
20 million files     94 GB                       188 GB
50 million files     202 GB                      404 GB
100 million files    382 GB                      764 GB
200 million files    742 GB                      1.5 TB
500 million files    1.8 TB                      3.6 TB
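The temporary-space figures follow a simple linear pattern: roughly 22 GB of base space plus about 3.6 GB per million files in the largest backup, doubled when indexing both Avamar and NetWorker. The sketch below expresses that fit; it is derived from the table above, not an official formula:

```python
def temp_disk_gb(largest_backup_millions, both_platforms=False):
    """Estimate temporary space disk (disk 3) size in GB.

    Linear fit derived from the sizing table above: ~22 GB base plus
    ~3.6 GB per million files in the largest backup, doubled when
    indexing both Avamar and NetWorker. An approximation only.
    """
    gb = 22 + 3.6 * largest_backup_millions
    return gb * 2 if both_platforms else gb

print(temp_disk_gb(20))                        # → 94.0
print(temp_disk_gb(50, both_platforms=True))   # → 404.0
```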

Sizing and performance guidance for Search nodes


When determining node sizing, consider the following:
● The total number of clients to index across all backup servers
● Whether index replication is required

                                           No replication                                With replication
Total     Estimated      Estimated unique  Estimated      Recommended   Data disk       Estimated      Recommended   Data disk
clients   files          files over 30     total space    nodes         per node (GB)   total space    nodes         per node (GB)
                         days of backups   (GB)                                         (GB)
250       125 million    200 million       112            1             112             224            2             112
500       250 million    400 million       224            2             112             447            4             112
1000      500 million    800 million       447            4             112             894            8             112
2500      1.25 billion   2 billion         1118           8             140             2235           16            140
10000     5 billion      8 billion         4470           24            186             8941           48            186
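The per-node data disk column is simply the estimated total space divided across the recommended nodes, with the total doubled when data index replication is enabled. A minimal sketch of that arithmetic:

```python
def per_node_disk_gb(total_space_gb, nodes, replication=False):
    """Data disk required per Search node.

    Divides the estimated total index space across the cluster's
    nodes; enabling data index replication doubles the requirement,
    per the sizing guidance above.
    """
    if replication:
        total_space_gb *= 2
    return total_space_gb / nodes

# 1000 clients, no replication: 447 GB total across 4 nodes:
print(per_node_disk_gb(447, 4))   # → 111.75
```

This matches the table's 112 GB per node figure after rounding.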

Replication
This section contains information on single and multiple search node cluster replication.

System Index Replication


If the Search cluster contains more than one search node, system indexes are automatically replicated.

Data Index Replication


If the Search cluster contains more than one search node, data index replication can be enabled manually. When data index
replication is enabled, the total amount of disk space required is doubled.
In a multi-node search cluster, it is recommended that Index Replica be set to On; otherwise, failover cannot occur.

Data Protection Advisor


Data Protection Advisor is a platform for performing reporting and analytics on a data protection environment. It provides full
visibility into the health and effectiveness of the data protection strategy, and can monitor technologies including backup
software, storage array replication, servers, databases, and virtual infrastructures.
The Data Protection Advisor reporting engine provides highly customizable reports for the following:
● Highlighting issues within the environment
● Capacity management
● Service level
● Chargeback
● Change management
● Troubleshooting
The Data Protection Advisor Predictive Analysis Engine provides the following:
● Early warnings about potential issues
● Alerts to allow for early resolution of potential issues
● A reduction in negative business impacts

The following systems are discovered and monitored in the initial deployment of Data Protection Advisor:
● Avamar
● NetWorker
● PowerProtect Data Manager
● Data Domain
● VPLEX
● RecoverPoint
● RecoverPoint for Virtual Machines
Other hardware or software that is listed in the Data Protection Advisor Software Compatibility Guide may be added for
monitoring after the initial deployment.

Determine Data Protection Advisor datastore and application server sizing


Use the Data Protection Advisor Sizing Estimator, available in Solution Builder, to determine the sizing for the Data Protection
Advisor datastore server and application server.
The sizing estimator uses the following inputs to size the Data Protection Advisor environment:
● Total number of backup servers to be monitored
● Total number of backup jobs per day
● Storage replication capacity
● Reports and analysis rules run per day
● Expected annual growth
Using these inputs, the sizing estimator provides the following output for both the datastore server and the application server:
● Minimum storage requirement
● Minimum CPU requirement
● Minimum memory requirement
The sizing estimator provides only a general recommendation based on assumptions and expectations. Therefore, the best
practice is to virtualize each of these components to allow the flexibility to expand system resources as necessary.
The Data Protection Advisor datastore and application servers are installed on Microsoft Windows Server VMs, which are hosted
on the AMP-VX or AMP Central.
The memory, CPU, virtual disk, and NIC of each VM have the following resource settings applied:

VM                      CPU Shares   Memory   Memory        Reserve all guest     Virtual Disk Format            vNIC Type
                                     Shares   Reservation   memory (All locked)
Data Protection         High         High     Maximum       Yes                   Thick Provision Eager Zeroed   VMXNET3
Advisor Datastore VM
Data Protection         High         High     Maximum       Yes                   Thick Provision Eager Zeroed   VMXNET3
Advisor Application VM

Backup of the Data Protection Advisor datastore
This section describes important information for safely backing up the Data Protection Advisor datastore.
It is highly recommended that the datastore be exported to a flat file on a regular schedule. This flat file should then be backed
up in the traditional manner with a backup product such as Avamar or NetWorker. This is the only supported method for backing
up the Data Protection Advisor datastore.
NOTE: VMware snapshots and replication solutions such as RecoverPoint for Virtual Machines or RecoverPoint Classic
should not be relied upon for the datastore backup. The only supported method of backup and recovery is the export and
backup of the datastore flat file.

Data Protection Advisor Agents


This topic provides information about Data Protection Advisor Agents.
The Data Protection Advisor (DPA) agents perform data collection from objects monitored by Data Protection Advisor. When
possible, it is highly recommended to install the DPA agent directly on the object being monitored, such as a NetWorker server.
For objects such as Avamar and Data Domain, deploy a DPA proxy agent on a designated host, other than the datastore and
application servers, for data collection services.
When monitoring objects at remote sites, deploy DPA proxy agents at each site for collection duties. The agents typically
communicate with monitored objects using CLI, SNMP, SSH, or a direct database connection, and these connections can be
sensitive to latency, which may cause data collection to fail.
There is no limit to the number of objects that can be monitored from a single DPA agent, and no limit on the number of DPA
agents that may be deployed. There are also no additional licensing costs for the DPA agents themselves, but there may be
additional licensing costs for the operating systems on which the agents operate.
