Data Protection Product Guide
July 2021
Rev. 5.0
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2013 -2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents
Revision history..........................................................................................................................................................................5
Chapter 1: Introduction................................................................................................................. 8
Encryption..................................................................................................................................................................... 42
Cyber Recovery................................................................................................................................................................. 42
Cyber Recovery architecture................................................................................................................................... 42
Use Enterprise Hybrid Cloud with converged backup systems............................................................................. 44
Avamar configurations for Enterprise Hybrid Cloud........................................................................................... 44
Revision history
Date  Document revision  Description of changes
July 2021 5.0 Updated the Data Protection Support matrix links in Data Domain and
third-party backup applications.
June 2021 4.9 Document updates include:
● Dell EMC PowerProtect Data Manager 19.7 support
● AMP-3S naming - see the Introduction for details.
March 2021 4.8 Document updates include:
● Metro node support
● Dell EMC PowerProtect Data Manager 19.6 support
December 2020 4.7 Document updates include:
● Linux support for Dell EMC Networker
● Dell EMC RecoverPoint for Virtual Machines 5.3
September 2020 4.6 Updated document for:
● Dell EMC PowerProtect Data Manager 19.4 support
● Data Protection software version 19.3 support
● PowerStore storage arrays support
● Removal of Dell EMC PowerProtect X400 content
June 2020 4.5 Updated document for:
● Dell EMC PowerProtect Data Manager 19.3
● Dell EMC PowerProtect X400
● Data Protection software Version 19.2
March 2020 4.4 Updated for:
● Data Domain models 6900, 9400, and 9900
● Data Domain OS 7.0
December 2019 4.3 Updated for AMP Central and for Dell EMC Cloud Disaster Recovery 19.2.
September 2019 4.2 Updated for RecoverPoint for Virtual Machines 5.2.2 and for Dell EMC
Cyber Recovery.
August 2019 4.1 Added new content to support Cyber Recovery.
June 2019 4.0 Added support for Data Domain Virtual Edition and Dell EMC Cloud Disaster
Recovery.
March 2019 3.9 Added support for Data Domain High Availability
January 2019 3.8 Added support for VPLEX 6.1.1 and Dell EMC PowerMax storage
November 2018 3.7 Added support for Data Protection Suite 18.1
September 2018 3.6 Added support for IPv6 on VxBlock System 1000
February 2018 3.2 Added support for VxBlock System 1000 and AMP-VX
January 2018 3.1 Added support for RecoverPoint for Virtual Machines 5.1.1.1
December 2017 3.0 Added support for VMware vSphere 6.5
November 2017 2.9 ● Minor updates to support Avamar 7.5 and NetWorker 9.2.
● Minor updates to ProtectPoint section
● Removed reference to VPLEX GeoSynchrony 6.0 SP1 P5 update.
August 2017 2.8 Added support for:
● Data Domain Standalone
● VMAX 950F
Expanded VMware Site Recovery Manager for RecoverPoint content
May 2014 1.3 Added a RecoverPoint section
December 2013 1.2 Added multi-node support and Data Domain 4500 physical specifications
November 2013 1.1 Added Data Domain DD4500
October 2013 1.0 Initial release
1
Introduction
This document describes the Integrated Data Protection options for Converged Systems.
The target audience for this document includes field personnel, partners, and customers responsible for planning or managing
data protection for a Converged System. This document is designed for people familiar with Dell EMC Data Protection solutions.
References to AMP Central (unless stated otherwise) cover AMP Central with Unity XT for Single System Management
(formerly known as AMP-3S), AMP Central with Unity XT for Multi-System Management, and AMP Central VSAN.
See the Glossary for terms, definitions, and acronyms.
2
Understanding the architecture
System overview
This section describes the various features of Integrated Data Protection.
Integrated Data Protection provides:
● Daily backup
● Data replication
● Business continuity
● Workload mobility (flexibility)
● Extended retention of backups
Daily backup
Once-daily backups provide minimal required data insurance by protecting against data corruption, accidental data deletion,
storage component failure, and site disaster. The daily backup process creates fully recoverable, point-in-time copies of
application data.
Successful daily backups ensure that, in a disaster, a business can recover with not more than 24 hours of lost data. The best
practice is to replicate the backup data to a second site to protect against a total loss of data in the event of a full site disaster.
Most daily backups are saved for 30 to 60 days.
Data replication
Most businesses have some datasets that are too valuable to risk losing up to 24 hours of data. Additionally, if disaster strikes,
these more valuable datasets must be recovered quickly.
For datasets that are more valuable, data replication achieves a higher level of data insurance. Multiple snapshots of application
data can be created throughout the day. Snapshots are used to restore data to a point in time, to retrieve an individual file, or to
copy application data to a different server for testing, data mining, and so on.
Retrieving a copy of the data from an offsite location reduces the worst-case data loss from 24 hours to the time since the
last snapshot. Data can be copied synchronously, where the data is updated locally and remotely simultaneously. It can also be
copied asynchronously, where there may be a time lag in updating the data remotely.
Typically, data replication is done in addition to daily backup. Replication cannot always protect against data corruption, because
a corrupted file replicates as a corrupted file. The best level of data protection is achieved by combining daily backup and
continuous replication methodologies.
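The trade-off between once-daily backups and more frequent snapshots comes down to the worst-case recovery point. The following Python sketch is illustrative arithmetic only, not part of any Dell EMC product:

```python
# Worst-case data loss for a copy schedule taken every `interval` hours.
# A failure just before the next copy loses everything since the last one.

def last_copy_before(failure_time: float, interval: float) -> float:
    """Time of the most recent copy, with copies taken at multiples of `interval`."""
    return (failure_time // interval) * interval

def data_loss(failure_time: float, interval: float) -> float:
    """Hours of data lost if a failure occurs at `failure_time`."""
    return failure_time - last_copy_before(failure_time, interval)

# A failure 23.5 hours after the last daily backup:
print(data_loss(23.5, 24))  # once-daily backup  -> 23.5 hours lost
print(data_loss(23.5, 1))   # hourly snapshots   -> 0.5 hours lost
```

This is why replication is typically layered on top of, not instead of, the daily backup.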
Business continuity
Business continuity provides application availability insurance by ensuring zero data loss and near-zero recovery time for
business-critical data. Data and applications protected with a business continuity product should still use daily backup to provide
multi-day point-in-time copies.
Workload mobility
Workload mobility provides data protection by moving the workload to another site in anticipation of a disaster. For example, if
a tropical storm is heading for a data center and the decision is made to move critical applications to a data center out of the
storm’s path, a good workload mobility design allows that movement to occur easily and with zero downtime.
The switchless configuration is available for all RCM-supported Converged Systems, provided that the number of available ports
is sufficient. For guidance on the number of ports see the following table:
Converged backup system components: Data Domain 2200/2500/3300/6300 Controller
● Ports for BRS OOB (1 GigE) on the management plane switch: 0
● Ports for In-Band management (1 GigE) on the data plane switch: 1 (DD2200/2500)
● Ports for In-Band management (10 GigE) on the data plane switch: 1 (DD3300/6300)
● Ports for backup on the data plane switch: 2
Shared deployment
A shared deployment provides a backup system that can protect more than the directly connected Converged Systems.
You can back up the following systems by connecting their Avamar, NetWorker, or PowerProtect Data Manager clients to the
converged backup system:
● VxBlock and Vblock Systems 200 series
● Third-party systems
These clients use the connections from the converged backup system to the customer network to communicate with Avamar,
NetWorker, or PowerProtect Data Manager and send data to the backup repository. NetWorker can also perform image-level
backups from virtual environments not on a Converged System by adding VMware Backup Appliances to those environments.
Product Component
Avamar ● Avamar M1200/M2400 Single Node (metadata storage only)
● Avamar M1200/M2400 Data Store (in a grid configuration)
● Avamar Virtual Edition (metadata storage only)
● Avamar NDMP Accelerator Node (Physical and Virtual)
● Avamar VMware Image Backup/FLR Appliance
NetWorker ● NetWorker server
● NetWorker storage node
● NetWorker Management Console server
● NetWorker vProxy Appliance
ProtectPoint for VMAX ● VMAX3 or VMAX All Flash Array
● Data Domain DD6300 or higher
Data Domain ● Data Domain DD2200 Controller
● Data Domain DD2500 Controller
● Data Domain DD3300 Controller
● Data Domain DD4200 Controller
● Data Domain DD4500 Controller
● Data Domain DD6300 Controller
● Data Domain DD6800 Controller
● Data Domain DD6900 Controller
● Data Domain DD7200 Controller
● Data Domain DD9300 Controller
● Data Domain DD9400 Controller
● Data Domain DD9500 Controller
● Data Domain DD9800 Controller
● Data Domain DD9900 Controller
● Data Domain Boost
PowerProtect Data Manager ● PowerProtect Data Manager virtual appliance
● PowerProtect Data Manager VM Direct Protection Engines appliance
● Supported Data Domain
Avamar
Avamar provides a data backup and recovery solution with deduplication technology.
Avamar comes in both physical and virtual editions. References to Avamar in this guide apply to both editions. Avamar Virtual
Edition is a single node system that integrates with all supported Data Domain Systems.
Avamar hardware
The Avamar Server manages the Avamar backups and, depending on the configuration, targets Avamar datastore nodes or a
Data Domain or both for backup storage. The metadata that Avamar software maintains is stored on the configured disk drives
in the Avamar datastores.
Avamar software
Avamar software provides the following features for Converged Systems:
● VMware vSphere Web Client plug-in
● Instant Access VM restore from Data Domain
NOTE: This feature is not supported when integrated with the Data Domain 3300.
● Self-service file restore
● Multiple simultaneous backups per proxy
● 24x7 backups
● Cloud Tier support when integrated with a Cloud Tier-supported and a Cloud Tier-enabled Data Domain System
● Ability to meet high service level agreements (SLAs) expected with applications running on a Converged System
● Avamar Backup and Recovery Manager, which provides real-time monitoring of activities and events, backup reports,
systems, and configurations. Optionally, use Avamar Backup and Recovery Manager to configure basic Avamar replication.
NOTE: Replicate each remote site to a central location with a physical Avamar and
Data Domain system or another Avamar Virtual Edition and Data Domain system.
NetWorker
NetWorker is a software-based backup and recovery system that runs on either Microsoft Windows Server or CentOS Virtual
Machines when integrated with VxBlock systems. NetWorker requires separate storage hardware as a backup target. A
converged backup system with NetWorker uses Data Domain for storage.
NetWorker software provides the following features for Converged Systems:
● NetWorker is deployed as a minimum of three VMs on the management cluster on AMP-VX or AMP Central.
● NetWorker is deployed as a minimum of three VMs on a production cluster on the AMP-2S.
● NetWorker is supported on either Microsoft Windows Server or CentOS Linux operating systems when configured as
Integrated Data Protection for VxBlock Systems.
● Ability to back up a wide variety of physical and virtual compute environments, applications, and databases, at the image and
file level.
● VMware vSphere Web-Client plug-in
● Instant Access VM restore from Data Domain
NOTE: This feature is not supported when integrated with the Data Domain 3300.
Component Description
NetWorker server: Provides the services to back up and recover data. Includes an index database of backup history.
NetWorker storage node: Maintains the physical connection to the backup target device (in Integrated Data Protection, this
device is a Data Domain system). The storage node offloads data movement from the NetWorker server.
Data Domain
Data Domain systems provide data storage targets for converged backup systems. Data Domain supports deduplication of
backup data before writing it to storage, thus minimizing storage requirements by storing only unique data.
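Deduplication of this kind is commonly implemented by fingerprinting data chunks and keeping each unique chunk only once. The following is a simplified, hypothetical Python model of the idea; it is not Data Domain code, and the class and sizes are invented for illustration:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique chunk is kept once;
    repeated chunks are recorded only as references (fingerprints)."""

    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks: dict[str, bytes] = {}   # fingerprint -> chunk data
        self.logical_bytes = 0               # bytes "backed up" by clients

    def write(self, data: bytes) -> list[str]:
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)    # store only if new
            recipe.append(fp)                    # enough to rebuild the data
            self.logical_bytes += len(chunk)
        return recipe

    def physical_bytes(self) -> int:
        return sum(len(c) for c in self.chunks.values())

store = DedupStore()
backup = b"A" * 8192 + b"B" * 4096      # two identical 4 KiB chunks plus one
store.write(backup)
store.write(backup)                     # "next day": identical data again
print(store.logical_bytes, store.physical_bytes())   # 24576 8192
```

Two full backups of the same data consume the physical space of one, which is the effect that minimizes storage requirements.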
CDR Standard Mode for AWS, AWS GovCloud, and Microsoft Azure
This configuration includes:
● An Integrated Data Protection system consisting of an Avamar system integrated with a Data Domain, deployed and
configured to perform VM image backups.
● A CDR Add-on VM on the AMP management platform configures the CDR environment and deploys the CDR Server VM to
the cloud.
The following figure displays the architecture of a VxBlock System with Avamar and Data Domain integrated data protection
configured with CDR support for Microsoft Azure in standard mode:
Guest-level recovery
The process of guest-level recovery is the same in this solution for VMs as for traditional recovery from a backup application.
You can recover directories, files, and applications in the Avamar Administrator GUI or NetWorker Management Console.
Image-level recovery
To recover data from an image-based backup, you can:
● Recover to the original VM
● Recover to an existing VM
● Recover to a new VM
● Use Instant Access Recovery
NOTE: This feature is not supported on the Data Domain 3300.
Related information
IP-based data backup
Enterprise deployment
A corporation with multiple branch offices might have multiple isolated backup systems. The best practice is to deploy a
centrally-managed backup architecture. Converged backup systems also protect against site disasters by replicating the daily
backups offsite. This solution is fast, network efficient, and eliminates the risks and costs associated with tape backups.
Ports are allocated in the Integrated Data Protection cabinet to support up to four directly-connected Converged Systems.
Applications and data in remote offices back up to a local converged backup system. Converged backup systems in remote
offices replicate data to the converged backup system in the regional data center. This provides an off-site copy in case of
disaster at the remote office.
Applications and data created in the regional data center are backed up to the converged backup system in the regional data
center. Only the data that was created in the regional data center is replicated to the primary data center. The converged
backup system in the primary data center replicates to the converged backup system in the secondary data center. The
converged backup system in the secondary data center replicates to the converged backup system in the primary data center.
The following figure shows the global deployment of a converged backup system designed to provide rapid local file recovery:
Converged backup systems are directly integrated into the Converged System network. As a result, Oracle backup traffic is
completely offloaded from the customer's backup network.
A converged backup system implemented for other applications already has the network connections between the Converged
System and the Data Domain controller in place.
The following figure shows the high-level connectivity between a Converged System and converged backup system for Oracle
database native backup and recovery directly to Data Domain:
Related information
Converged backup systems connectivity overview
NOTE: This ProtectPoint section uses VMAX array as a generic term for VMAX3 and VMAX All Flash arrays.
ProtectPoint workflows
The application administrator initiates ProtectPoint workflows to protect applications and data. Before the workflow is
triggered, the application must be quiesced to ensure that the snapshot on the Data Domain system is application consistent.
ProtectPoint database application agents work with the application being protected to automatically quiesce the application.
The application administrator is also responsible for retaining and replicating copies, restoring data, and recovering applications.
NOTE: When ProtectPoint runs in a VMware virtual environment, only VMs running Microsoft Windows or Linux operating
systems with RDM storage are supported.
● Deploy in Azure, Azure Government, and AWS GovCloud to protect in-cloud workloads.
● Storage policy-based management for VMware VMs.
● Protect VMware Cloud Foundation infrastructure.
● Support for vRA 8.2.
● Agentless, application-consistent protection of PostgreSQL and Cassandra in Kubernetes.
● Protect Kubernetes clusters in multi-cloud environments.
● Back up Kubernetes cluster-level resources.
● Protect Kubernetes in the cloud with Data Manager in AWS and Azure.
● Enhanced resiliency.
PowerProtect Data Manager deploys as a VM from an OVA in a VMware vSphere environment, and stores all backups and
data on a Data Domain. The Data Domain can be one you order together with the PowerProtect Data Manager solution, or a
previously purchased Data Domain. The Data Domain Boost (DD Boost) protocol provides advanced integration with backup and
enterprise applications for increased performance. DD Boost distributes parts of the deduplication process to the backup server
or application clients, enabling client-side deduplication for faster and more efficient backup and recovery.
PowerProtect Data Manager appliance provides the following features for Converged Systems:
● PowerProtect Data Manager deploys as a virtual appliance on the management cluster on AMP Central.
● Ability to back up a wide variety of physical and virtual compute environments, applications, and databases, at the image and
file level.
● VMware vSphere Web-Client plug-in
● Instant Access VM restore from Data Domain. Review PowerProtect Data Manager documentation on the features and
limitations of the Instant Access process.
Encryption
PowerProtect Data Manager offers three types of encryption.
● Inline encryption of data at rest.
● Encryption of data in flight through DD Boost software using TLS.
● Encryption of data in flight for replicating data over the WAN between sites.
Cyber Recovery
Cyber Recovery (CR) integrates with the Integrated Data Protection backup solution to maintain mission-critical business data
in a secure vault environment for data recovery. Use the management software to create writable copies for data validation and
analytics.
Data Domain replicates MTree data over an air-gapped link to a physically isolated Cyber Recovery Vault (CR Vault)
environment. The CR Vault data can be analyzed for evidence of tampering. If the copied data is acceptable, it is saved as an
independent full backup copy that can be retention-locked. If this data must be restored, it can be replicated out of the CR
Vault and back to the production environment.
Cyber Recovery enables access to the CR Vault only long enough to replicate the data from the production Data Domain to the
CR Vault Data Domain. Otherwise, the CR Vault Data Domain is secured and off the network.
To minimize time and expedite the replication, deduplication is performed on the production Data Domain.
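The replication cycle described above reduces to: open the link, replicate, close the link. The sketch below is a conceptual model only; the class and method names are invented and are not Cyber Recovery APIs:

```python
# Hypothetical model of the CR Vault sync cycle: the replication link is
# enabled only for the duration of a sync, then dropped again so the vault
# stays off the network. Illustrative only, not Cyber Recovery software.

class AirGappedVault:
    def __init__(self):
        self.link_up = False
        self.copies = []

    def sync(self, production_data):
        self.link_up = True                       # open the air gap just long enough...
        try:
            self.copies.append(production_data)   # ...to replicate the MTree data
        finally:
            self.link_up = False                  # then isolate the vault again

vault = AirGappedVault()
vault.sync({"mtree": "/data/col1/backups", "bytes": 10**9})
assert not vault.link_up          # between syncs, the vault is off the network
print(len(vault.copies))          # 1
```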
Cyber Recovery can use the Retention Lock software that is located on the Data Domain inside the CR Vault environment to
provide data immutability for a specific time. Retention Lock is enabled on a per-MTree basis and the retention time is set on a
per-file basis.
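The per-MTree enablement and per-file retention time can be pictured with a small model. The following Python sketch is purely illustrative; it does not reflect Data Domain's actual interfaces or command syntax:

```python
# Toy model of retention locking: the lock feature is enabled per MTree,
# each file carries its own retain-until time, and a locked file cannot be
# deleted before that time expires. Not a Data Domain API.

class MTree:
    def __init__(self, retention_lock_enabled: bool):
        self.retention_lock_enabled = retention_lock_enabled
        self.files: dict[str, float] = {}     # path -> retain-until time

    def lock(self, path: str, retain_seconds: float, now: float):
        if not self.retention_lock_enabled:
            raise RuntimeError("Retention Lock is not enabled on this MTree")
        self.files[path] = now + retain_seconds

    def delete(self, path: str, now: float):
        if now < self.files.get(path, 0):
            raise PermissionError(f"{path} is retention-locked")
        self.files.pop(path, None)

mtree = MTree(retention_lock_enabled=True)
mtree.lock("backup-2021-07-01.img", retain_seconds=3600, now=0)
try:
    mtree.delete("backup-2021-07-01.img", now=10)    # still locked -> refused
except PermissionError as err:
    print(err)
mtree.delete("backup-2021-07-01.img", now=4000)      # lock expired -> allowed
```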
Refer to the Enterprise Hybrid Cloud: Concepts and Architecture Solution Guide for more information.
VPLEX
VPLEX delivers enhanced availability (zero data loss, near-zero downtime) of applications and data. VPLEX also delivers
enhanced mobility of applications and data (in other words, the migration of applications and data between systems without the
burden of work, planning, and downtime associated with traditional migrations).
A VPLEX cluster resides in the data path between the Converged System servers and storage, where it can create data copies.
Copies can be created locally or over distance and can be read and written simultaneously.
VPLEX enables dynamic workload mobility and continuous availability within and between Converged Systems over distance.
VPLEX also provides simultaneous access to storage devices at two sites through the creation of VPLEX distributed virtual
volumes, supported on each side by a VPLEX cluster.
The VPLEX Integrated Data Protection solution for Converged Systems uses the following components and technologies:
● One or more Converged Systems
● VPLEX
● VMware vCenter Server
● VMware vSphere High Availability
● VMware vSphere vMotion
Dell EMC supports the following VPLEX systems for Integrated Data Protection solutions:
● VPLEX Local
● VPLEX Metro
VPLEX Local consists of a single VPLEX cluster, which provides the ability to manage and mirror data between multiple
Converged Systems from a single interface within a single data center.
VPLEX Metro consists of two VPLEX clusters that are connected with intercluster links over distance. VPLEX Metro enables
concurrent read and write access to data by multiple hosts across two locations. By mirroring data between two sites, VPLEX
Metro provides nonstop data access in the event of a component failure or even a site failure.
The VPLEX Witness is an optional component for VPLEX Metro implementations. It can be deployed at a third site to improve
data availability in the presence of cluster failures and intercluster communication loss. The VPLEX Witness deploys as a
virtual machine, and its VMware ESXi host must reside in a separate failure domain from both VPLEX clusters to eliminate the
possibility of a single fault affecting both a cluster and VPLEX Witness.
The VPLEX Witness observes the state of the clusters and can distinguish between an outage of the intercluster link and
a cluster failure. The VPLEX Witness then uses this information, together with the preconfigured detach-rules, to guide the
clusters to either resume or suspend I/O.
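The Witness decision described above, distinguishing a link outage from a cluster failure and combining that with the preconfigured detach rules, can be sketched as follows. This is a conceptual model of the behavior described in this section, not VPLEX internals:

```python
# Hedged model of Witness guidance: given cluster health, link health, and a
# detach-rule "preferred" cluster, decide which cluster resumes I/O. The
# function and states are illustrative assumptions, not VPLEX software.

def guide(cluster_a_alive: bool, cluster_b_alive: bool,
          link_alive: bool, preferred: str) -> dict[str, str]:
    clusters = {"A": cluster_a_alive, "B": cluster_b_alive}
    if cluster_a_alive and cluster_b_alive and link_alive:
        return {"A": "resume", "B": "resume"}            # normal operation
    if cluster_a_alive and cluster_b_alive and not link_alive:
        # Inter-cluster link partition: only the preferred cluster keeps
        # serving I/O, preventing split-brain writes on both sides.
        return {c: ("resume" if c == preferred else "suspend") for c in clusters}
    # One cluster failed outright: the survivor resumes regardless of preference.
    return {c: ("resume" if alive else "suspend") for c, alive in clusters.items()}

print(guide(True, True, link_alive=False, preferred="A"))
# {'A': 'resume', 'B': 'suspend'}
print(guide(False, True, link_alive=False, preferred="A"))
# {'A': 'suspend', 'B': 'resume'}
```

The second call shows why the Witness matters: without it, a surviving non-preferred cluster could not safely distinguish a dead peer from a dead link.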
Metro node
Metro node is the next-generation hardware platform, based on the Dell PowerEdge R640 server platform, specifically
designed with embedded management services, I/O path simplification, and 32 Gb Fibre Channel connectivity.
Metro node is currently only co-sold as a feature for PowerStore and Unity XT arrays. Customers with PowerMax, VMAX,
XtremIO, and other VPLEX supported arrays are directed to look at the VPLEX VS6.
Metro node supports the same use cases as the VPLEX VS2 and VPLEX VS6 platforms with the following exceptions:
● No support for RecoverPoint.
● No support for MetroPoint.
● No support for the Cluster Witness Server (CWS).
● No IPv6 support.
NOTE: Support for the Cluster Witness Server and IPv6 is scheduled for a future metro node release.
Metro node is available in a two-node cluster only (single engine), whereas the VPLEX VS2 and VPLEX VS6 scale out to dual
and quad engines. Like the VPLEX VS2 and VPLEX VS6, metro node supports both the Local and Metro configurations.
VPLEX components
A VPLEX cluster consists of the following hardware components:
● One, two, or four VPLEX engines
○ Each engine contains two directors.
○ Each engine is protected by backup power:
■ VS2: Standby Power Supply (SPS), external to the engine
■ VS6: Battery Backup Unit (BBU), internal to the engine
● One management server
○ VS2: In-rack 1U server external to the VPLEX engine.
○ VS6: Two embedded Management Module and Control Stations (MMCS) in the VS6 base engine with internal storage,
named MMCS-A and MMCS-B.
● In the case of a dual or quad engine cluster, the cluster also contains:
○ VS2: Two 1U FC switches for communication between the directors in the engines
○ VS6: Two half-width 1U InfiniBand switches for communication between the directors in the engines
○ Two uninterruptible power supplies to provide backup power to:
■ VS2: The FC switches and the management server
■ VS6: The InfiniBand switches
● VPLEX Cluster Witness
○ Resides in a third failure domain up to one second away
○ Lightweight VM
■ 1 vCPU and 1 GB RAM
■ 2.54 GB disk
○ IP connectivity
● Metro node is available in single engine only.
○ Each ‘engine’ contains two nodes.
○ Because Global Cache is removed with metro node, a dedicated battery backup is no longer required. The best practice
is to supply backup power to the data center through a battery backup or a power generator.
RecoverPoint
RecoverPoint is a data replication and recovery product that enables customers to roll back data to potentially any point in time.
RecoverPoint is a key component of the Integrated Data Protection product series, enhancing operational recovery, disaster
recovery processes, and reducing potential data loss. RecoverPoint is typically deployed to protect specific business applications
and data that need more data protection than a once-per-day backup provides.
RecoverPoint can protect data through the following methods:
Operational recovery
RecoverPoint enables you to quickly recover from operational failures through its ability to roll back to any point-in-time (PiT).
Recovery from an operational disaster using RecoverPoint can occur through the following methods:
By default, RecoverPoint snapshots are crash-consistent. To ensure write-order consistency across specific production copy
volumes, add the volumes to the same consistency group. RecoverPoint can provide application consistent snapshots for
compatible applications, such as Microsoft Exchange and Oracle.
Disaster recovery
The quickest way to resume services after a major infrastructure failure is often to transfer the services to another facility.
Remote replication enables you to maintain up-to-date replica data in remote sites over any distance.
Use synchronous replication when the distance between two Converged Systems is short enough. Data in the disaster recovery
site is fully synchronized with the production site. The disaster recovery site can resume operations should a disaster occur at
the production site. XtremIO requires the use of VPLEX in front of RecoverPoint to provide synchronous replication.
Use asynchronous replication when the distance between Converged Systems is too far to support synchronous replication.
Data in the disaster recovery site is synchronized as closely as possible with the production site. Any data lag depends on
network bandwidth, data change rate, and the distance between the production and disaster recovery sites.
The following figure shows disaster recovery using traditional methods:
Up to four copies of a single production volume: Having multiple copies means that:
● You can allocate some copies to recovery and others to alternative activities.
● The risk of copy data being unavailable for recovery due to other activities being performed on it is minimized.
● You can work with up-to-date copies of production data.
Host application access to replica copy volumes: Various image access methods enable customers to access copy data while
RecoverPoint continues to buffer production writes on the replica copy journal. You can also roll back changes that are made
during image access when image access is suspended (if using logged access). You can perform tests and other actions on
copy data with minimal risk to production data, even where only single copies of production data exist.
Recovery of replica copy volumes to an alternative location: You can restore data from a replica copy volume to any
appropriate volume. The restoration enables you to repurpose data for alternative production requirements. Examples include
analysis and reporting, or re-creating production environments in a testing or development setting. The data is made available
quickly, is up-to-date, and can be manipulated with no impact to production data.
Asynchronous mode
Asynchronous mode is the default replication mode for RecoverPoint. The application initiates a write and does not wait for
the acknowledgment from the remote RecoverPoint Appliance (RPA) before initiating the next write. The data of each write
is stored on the production RecoverPoint cluster, which acknowledges the write. Based on the lag policy, system loads, and
available resources, the remote RPA cluster decides when to transfer the writes.
The main advantage of asynchronous replication is its ability to provide synchronous-like replication without degrading the
performance of host applications.
Asynchronous mode might not be the best option for all situations. For example, a high data change rate can increase data that
is stored between transfers and can cause data loss if a disaster occurs.
RecoverPoint replicates asynchronously only in situations in which doing so enables superior host performance without resulting
in an unacceptable level of potential data loss.
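The relationship between change rate, link bandwidth, and potential data loss can be made concrete with a small calculation. The numbers below are hypothetical, and the function is an illustrative model rather than RecoverPoint behavior:

```python
# In asynchronous mode, writes are acknowledged locally, so the remote copy
# lags by whatever has not yet been transferred. On a disaster, that backlog
# is the potential data loss.

def replication_lag_mb(change_rate_mb_s: float, link_mb_s: float,
                       window_s: float) -> float:
    """Untransferred data after `window_s` seconds of steady writing."""
    backlog = (change_rate_mb_s - link_mb_s) * window_s
    return max(backlog, 0.0)          # link keeps up -> near-zero lag

# Link keeps up with the change rate: near-synchronous protection.
print(replication_lag_mb(change_rate_mb_s=50, link_mb_s=100, window_s=600))   # 0.0
# Change rate exceeds bandwidth: the backlog (and exposure) grows.
print(replication_lag_mb(change_rate_mb_s=50, link_mb_s=20, window_s=600))    # 18000.0
```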
When XtremIO is the replication target, image access and recovery time objective (RTO) to any point in time are instantaneous.
In protecting VMAX3/AF and PowerMax arrays, the entire VMAX3/AF or PowerMax storage group is protected rather than the
individual devices.
RecoverPoint supports using TimeFinder SnapVX to clone a copy, enabling you to access the clone rather than using
RecoverPoint image access to access the copy directly.
Cloning a copy with TimeFinder SnapVX is recommended in the following situations:
● Image access is needed for an extended time.
● The accessed copy needs to support a heavy write workload.
For RecoverPoint for VMAX3/AF and PowerMax limitations, see the RecoverPoint 5.1 with VMAX3/AF/PowerMax Technical
Notes document on Dell EMC Support.
Synchronous mode
In synchronous mode, the application initiates a write, which is replicated to the remote RPA cluster. The write is acknowledged
when it reaches the remote RPA memory. The remote copy is always up-to-date with its production copy. Application
performance may be impacted when the solution is not properly sized. Poor WAN performance, SAN performance, and array
performance, and high RPA load can all cause the application performance to degrade.
By default, new consistency groups are created with asynchronous mode enabled, and can be set to replicate synchronously
through the Link policies.
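The latency difference between the two modes follows directly from where the acknowledgment comes from. A minimal sketch with hypothetical timings, modeling the behavior described above rather than any RecoverPoint internals:

```python
# Synchronous mode: the application write waits for the remote RPA's
# acknowledgment, so every write pays the WAN round trip. Asynchronous
# mode: the local cluster acknowledges at once. Timings are invented.

def write_latency_ms(mode: str, local_ack_ms: float, wan_rtt_ms: float) -> float:
    if mode == "synchronous":
        return local_ack_ms + wan_rtt_ms   # ack only after the remote RPA has the data
    return local_ack_ms                    # async: acknowledged locally

print(write_latency_ms("synchronous", local_ack_ms=0.5, wan_rtt_ms=8.0))    # 8.5
print(write_latency_ms("asynchronous", local_ack_ms=0.5, wan_rtt_ms=8.0))   # 0.5
```

This is why undersized WAN, SAN, or array resources show up directly as application write latency in synchronous mode.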
NOTE: Fibre Channel communications over a wide area network can use dedicated fiber-optic links or an IP network. Fibre
Channel traffic tunneled over an IP network is called Fibre Channel over IP (FCIP).
Multi-level protection: Fan-out topologies allow you to implement multi-level protection for core data. You can choose how
many copies of data to make and at how many locations to store those copies. Because each copy in a fan-out topology is
independent of other copies, you can allocate copies to different roles. For example, you can allocate one copy to disaster
recovery and another to testing and development.
Disaster recovery site consolidation: Fan-in topologies allow you to reduce the complexity and costs of managing multiple
remote sites by allowing you to replicate multiple remote sites through one RecoverPoint cluster in one central location. You
can manage and maintain all replica data in one place and avoid the need for multiple clusters or multiple sites. You can also
maintain a single disaster recovery site to protect multiple production sites.
Use cases
VMware vCenter Site Recovery Manager supports several use cases and provides significant capability and flexibility to Dell
EMC customers.
The following use cases for VMware vCenter Site Recovery Manager with Converged Systems are supported:
The following figure shows how Converged Systems use VMware SRM and RecoverPoint to provide automated failover with
any-point-in-time recovery:
AMP Protection
VMware vCenter Site Recovery Manager uses RecoverPoint to replicate the datastores on which the virtual machines reside.
RecoverPoint connects to the Converged System storage array over FC. The physical servers that are used in the AMP are not
configured with HBAs and cannot connect to the storage array over FC. For this reason, among others, VMware vCenter
Site Recovery Manager cannot protect the AMP. To protect one or more of the AMP element managers, move the VMs to the
VMware ESXi hosts that run on the Cisco blade servers.
Automation: Deep integration with other VMware solutions
VMware vSphere PowerCLI provides Microsoft Windows PowerShell functions to administer VMware SRM or to create scripts
that automate SRM tasks. The VMware vRealize Orchestrator plug-in for Site Recovery Manager enables you to automate the
creation of a VMware SRM infrastructure. You can add virtual machines to protection groups and configure recovery settings
of virtual machines.

Orchestration: Recovery plans
Recovery plans are like an automated run book, controlling all the steps in the recovery process. A VM can be part of multiple
recovery plans.

Orchestration: Priority groups (start-up sequence)
There are five priority groups in VMware SRM. VMs in group one are recovered first and VMs in group five are recovered last.
All VMs in a priority group are started simultaneously, and the next priority group is started only after all the VMs in the
previous group are booted and responding. VMware Tools must be installed.

Orchestration: Dependencies
When more granularity is needed for start-up order, you can use dependencies on a per-VM level.

Orchestration: Shutdown and start-up actions
Shutdown actions apply to the protected VM at the protected site during a recovery plan run. Shutdown actions are not used
during recovery plan testing. Start-up actions apply to a VM that is recovered by VMware SRM. Powering on a recovered VM
is the default setting. Sometimes, it might be desirable to recover a VM but leave it powered off.

Orchestration: Pre- and post-power-on steps
VMware SRM can run commands or scripts from the VMware SRM server at the recovery site before and after powering on a
VM. Running a script inside a VM is also supported as a post-power-on step. VMware SRM can also display a visual prompt as
a pre- or post-power-on step.

Orchestration: Reporting
When workflows such as test and recovery plans are run, history reports are automatically generated. These reports contain
items such as the workflow name, the time the workflow was run, the duration, successful and failed operations, and any error
messages. These history reports can be exported and used for various purposes, including internal auditing, proof of disaster,
recovery testing for regulatory requirements, and troubleshooting.
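The priority-group and dependency ordering described above can be sketched as plain scheduling logic. This is an illustrative model only, not the VMware SRM API; the function and data structures are our own:

```python
# Conceptual sketch (not the VMware SRM API): five priority groups are
# recovered in order, and per-VM dependencies refine the order inside a
# group. The data structures here are hypothetical illustrations.

from graphlib import TopologicalSorter  # Python 3.9+

def recovery_order(vms, deps):
    """vms:  {vm_name: priority_group}, where the group is 1..5.
    deps: {vm_name: set of VMs that must start before it}.
    Returns VM names in a valid start-up order."""
    order = []
    for group in range(1, 6):              # group 1 first, group 5 last
        members = {vm for vm, g in vms.items() if g == group}
        # Within a group, honor per-VM dependencies. Cross-group edges
        # are dropped: groups already gate on the previous group booting.
        graph = {vm: deps.get(vm, set()) & members for vm in members}
        order.extend(TopologicalSorter(graph).static_order())
    return order
```

For example, a database in group one starts before an application server and web tier in group two, and a `web -> app` dependency forces the web VM to wait for the application VM inside that group.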
● The RecoverPoint for VMs splitter is installed on the VMware ESXi hypervisor, on every VMware ESXi host in the VMware
vSphere cluster that hosts VMs protected by RecoverPoint for VMs. The splitter splits each write coming from the host and
sends it to both the vRPA and the virtual machine VMDK. The vRPAs handle all traffic to the journals and replicas as they
do in a physical RecoverPoint system. All storage traffic between the vRPAs and the VMware vSphere datastores uses the
IP protocol.
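The splitting behavior itself is simple to model: every guest write is duplicated, with one copy going to the production VMDK and one to the vRPA. The sketch below is conceptual only, not the actual splitter, and all names are invented for illustration:

```python
# Conceptual sketch (not the RecoverPoint for VMs splitter): each write
# is duplicated to the production VMDK and to the vRPA, which forwards
# it on to the journal and replica. All names are hypothetical.

class WriteSplitter:
    def __init__(self):
        self.vmdk = []          # stands in for the virtual machine VMDK
        self.vrpa_queue = []    # writes handed to the vRPA for replication

    def write(self, offset, data):
        # Duplicate the write: one copy lands on the production disk,
        # the other is queued for the replication path.
        self.vmdk.append((offset, data))
        self.vrpa_queue.append((offset, data))
```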
The following figure shows the RecoverPoint for VMs system architecture with the vSphere HTML5 plug-in:
NOTE: All vRPA clusters registered to the same plug-in server require the same admin password.
User scripts: You can add external scripts that run before and after power-up in the start-up sequence of a consistency group,
for each VM. Define an external host for each vRPA cluster in the RecoverPoint for VMs system, and install an SSH server
on each external host; the scripts run over SSH on the configured external host. Each script has a mandatory time-out period.
Recovery is halted until the script runs successfully, and a prompt is displayed if the script fails.
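The contract described above, a mandatory time-out with recovery halted until the script succeeds, can be sketched as follows. This is not a RecoverPoint for VMs interface; the helper name, retry policy, and example ssh command are assumptions for illustration:

```python
# Conceptual sketch (not a RecoverPoint for VMs interface): run a user
# script on an external host with a mandatory time-out, retrying until
# it succeeds. The command shown in the docstring is a hypothetical
# example of an SSH invocation against a configured external host.

import subprocess

def run_user_script(command, timeout_s, max_attempts=3):
    """Run `command`, e.g. ["ssh", "scripts-host", "/opt/pre_power_on.sh"],
    enforcing a time-out per attempt. Returns True once the script exits
    successfully, False if every attempt fails or times out."""
    for _attempt in range(max_attempts):
        try:
            result = subprocess.run(command, timeout=timeout_s)
        except subprocess.TimeoutExpired:
            continue                    # script exceeded its time-out
        if result.returncode == 0:
            return True                 # recovery may proceed
    # In the real product, a prompt is displayed when the script fails.
    return False
```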
NOTE: Metro node does not include the write splitter and therefore does not support RecoverPoint and MetroPoint.
MetroPoint
MetroPoint combines VPLEX Metro and RecoverPoint to provide enhanced data protection for Converged Systems.
MetroPoint protects writes from both sides of a VPLEX Distributed Device. MetroPoint replicates data from one VPLEX Metro
site in a cluster to a remote Converged System over IP or Fibre Channel networks, using asynchronous, near-synchronous, or
synchronous replication. Additionally, MetroPoint replication allows local copies at all VPLEX Metro sites. Dell EMC customers
benefit from continuous availability of active-active applications across data centers while maintaining operational and disaster
recovery.
MetroPoint can operate in a two-site, three-site, or four-site topology. It can load balance replication data across WAN links and
can use VMware Site Recovery Manager to manage disaster recovery operations.
The following figure shows an overview of MetroPoint operation:
In this two-site MetroPoint topology, the VPLEX Metro cluster hosts a Distributed Volume (DR1) that spans Site A and Site B.
At each site, a RecoverPoint splitter intercepts data writes destined for the Distributed Volume and passes a copy to the local
RPA cluster.
This illustration shows a full-meshed RecoverPoint system, which is recommended in a three-site MetroPoint. Replication to the
remote site happens from either site of the VPLEX Metro. In this illustration, Site A is the active source and is replicating to
the remote site. Site B is the standby source and is marking only. The active and standby sources are both replicating the same
VPLEX Distributed Volume (DR1). During a source switchover, replication undergoes a short initialization phase.
In the previous figure, the distributed devices running at Site A and Site B replicate to Site D, while distributed devices running
at Site C and Site D replicate to Site A.
Largest backup      Required size for disk 3      Indexing both Avamar and NetWorker
5 million files     40 GB                         80 GB
10 million files    58 GB                         116 GB
20 million files    94 GB                         188 GB
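The table rows grow linearly with file count (the spacing works out to 3.6 GB per additional million files between listed rows), and the dual-application column is exactly double the single-application one. A minimal sizing sketch, assuming linear interpolation between the table's rows is acceptable for estimates; the helper name is our own and the table values remain the source of truth:

```python
# Sketch: estimate the disk 3 requirement by linear interpolation
# between the rows of the sizing table above. Interpolating between
# rows is an assumption; consult the table for the supported values.

SIZING_GB = [(5, 40), (10, 58), (20, 94)]  # (millions of files, GB)

def disk3_estimate_gb(millions_of_files, both_apps=False):
    m = millions_of_files
    if m <= SIZING_GB[0][0]:
        gb = SIZING_GB[0][1]                 # clamp to table minimum
    elif m >= SIZING_GB[-1][0]:
        gb = SIZING_GB[-1][1]                # clamp to table maximum
    else:
        for (x0, y0), (x1, y1) in zip(SIZING_GB, SIZING_GB[1:]):
            if x0 <= m <= x1:
                gb = y0 + (y1 - y0) * (m - x0) / (x1 - x0)
                break
    # Indexing both Avamar and NetWorker doubles the requirement.
    return gb * 2 if both_apps else gb
```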
Replication
This section contains information on single and multiple search node cluster replication.
The following systems will be discovered and monitored in the initial release of Data Protection Advisor:
● Avamar
● NetWorker
● PowerProtect Data Manager
● Data Domain
● VPLEX
● RecoverPoint
● RecoverPoint for Virtual Machines
Other hardware or software that is listed in the Data Protection Advisor Software Compatibility Guide may be added for
monitoring after the initial deployment.
Data Protection Advisor Datastore VM:
● CPU Shares: High
● Memory Shares: High
● Memory Reservation: Maximum
● Reserve all guest memory (All locked): Yes
● Virtual Disk Format: Thick Provision Eager Zeroed
● vNIC Type: VMXNET3

Data Protection Advisor Application VM:
● CPU Shares: High
● Memory Shares: High
● Memory Reservation: Maximum
● Reserve all guest memory (All locked): Yes
● Virtual Disk Format: Thick Provision Eager Zeroed
● vNIC Type: VMXNET3