NetWorker 19.5 Cluster Integration Guide
June 2021
Rev. 01
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 1990-2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents
Figures..........................................................................................................................................5
Preface.........................................................................................................................................................................................6
Chapter 1: Introduction................................................................................................................10
Stand-alone application.................................................................................................................................................... 10
Cluster-aware application................................................................................................................................................ 10
Highly available application.............................................................................................................................................. 10
Chapter 3: Configuring Devices for a Highly Available NetWorker Server.....................................33
Configuring an autochanger with shared tape devices............................................................................................33
Configuring an autochanger with non-shared tape devices................................................................................... 34
Configuring the robotics on a stand-alone host........................................................................................................ 35
Preface
As part of an effort to improve product lines, periodic revisions of software and hardware are released. Therefore, all versions of
the software or hardware currently in use might not support some functions that are described in this document. The product
release notes provide the most up-to-date information on product features.
If a product does not function correctly or does not function as described in this document, contact a technical support
professional.
NOTE: This document was accurate at publication time. To ensure that you are using the latest version of this document,
go to the Support website https://www.dell.com/support.
Purpose
This document describes how to uninstall, update, and install the NetWorker software in a cluster environment.
Audience
This document is part of the NetWorker documentation set and is intended for use by system administrators during the
installation and setup of NetWorker software in a cluster environment.
Revision history
Rev. 01, June 2021: initial release of this document.
Related documentation
The NetWorker documentation set includes the following publications, available on the Support website:
● NetWorker E-LAB Navigator
Provides compatibility information, including specific software and hardware configurations that NetWorker supports. To
access E-LAB Navigator, go to https://elabnavigator.emc.com/eln/elnhome.
● NetWorker Administration Guide
Describes how to configure and maintain the NetWorker software.
● NetWorker Network Data Management Protocol (NDMP) User Guide
Describes how to use the NetWorker software to provide data protection for NDMP filers.
● NetWorker Cluster Integration Guide
Contains information related to configuring NetWorker software on cluster servers and clients.
● NetWorker Installation Guide
Provides information on how to install, uninstall, and update the NetWorker software for clients, storage nodes, and servers
on all supported operating systems.
● NetWorker Updating from a Previous Release Guide
Describes how to update the NetWorker software from a previously installed release.
● NetWorker Release Notes
Contains information on new features and changes, fixed problems, known limitations, environment and system requirements
for the latest NetWorker software release.
● NetWorker Command Reference Guide
Provides reference information for NetWorker commands and options.
● NetWorker Data Domain Boost Integration Guide
Provides planning and configuration information on the use of Data Domain devices for data deduplication backup and
storage in a NetWorker environment.
● NetWorker Performance Optimization Planning Guide
Contains basic performance tuning information for NetWorker.
● NetWorker Server Disaster Recovery and Availability Best Practices Guide
Describes how to design, plan for, and perform a step-by-step NetWorker disaster recovery.
● NetWorker Snapshot Management Integration Guide
Describes the ability to catalog and manage snapshot copies of production data that are created by using mirror technologies
on storage arrays.
● NetWorker Snapshot Management for NAS Devices Integration Guide
Describes how to catalog and manage snapshot copies of production data that are created by using replication technologies
on NAS devices.
● NetWorker Security Configuration Guide
Provides an overview of security configuration settings available in NetWorker, secure deployment, and physical security
controls needed to ensure the secure operation of the product.
● NetWorker VMware Integration Guide
Provides planning and configuration information on the use of VMware in a NetWorker environment.
● NetWorker Error Message Guide
Provides information on common NetWorker error messages.
● NetWorker Licensing Guide
Provides information about licensing NetWorker products and features.
● NetWorker REST API Getting Started Guide
Describes how to configure and use the NetWorker REST API to create programmatic interfaces to the NetWorker server.
● NetWorker REST API Reference Guide
Provides the NetWorker REST API specification used to create programmatic interfaces to the NetWorker server.
● NetWorker 19.5 with CloudBoost 19.5 Integration Guide
Describes the integration of NetWorker with CloudBoost.
● NetWorker 19.5 with CloudBoost 19.5 Security Configuration Guide
Provides an overview of security configuration settings available in NetWorker and CloudBoost, secure deployment, and
physical security controls needed to ensure the secure operation of the product.
● NetWorker Management Console Online Help
Describes the day-to-day administration tasks performed in the NetWorker Management Console and the NetWorker
Administration window. To view the online help, click Help in the main menu.
● NetWorker User Online Help
Describes how to use the NetWorker User program, which is the Windows client interface, to connect to a NetWorker
server to back up, recover, archive, and retrieve files over a network.
NOTE: Data Domain is now PowerProtect DD. References to Data Domain or DD systems in this documentation, in the UI,
and elsewhere in the product include PowerProtect DD systems and older Data Domain systems. In many cases the UI has
not yet been updated to reflect this change.
You can use the following resources to find more information about this product, obtain support, and provide feedback.
Knowledgebase
The Knowledgebase contains applicable solutions that you can search for either by solution number (for example, KB000xxxxxx)
or by keyword.
To search the Knowledgebase:
1. Go to https://www.dell.com/support.
2. On the Support tab, click Knowledge Base.
3. In the search box, type either the solution number or keywords. Optionally, you can limit the search to specific products by
typing a product name in the search box, and then selecting the product from the list that appears.
Live chat
To participate in a live interactive chat with a support agent:
1. Go to https://www.dell.com/support.
2. On the Support tab, click Contact Support.
3. On the Contact Information page, click the relevant support, and then proceed.
Service requests
To obtain in-depth help from Licensing, submit a service request. To submit a service request:
1. Go to https://www.dell.com/support.
2. On the Support tab, click Service Requests.
NOTE: To create a service request, you must have a valid support agreement. For details about either an account or
obtaining a valid support agreement, contact a sales representative. To find the details of a service request, in the
Service Request Number field, type the service request number, and then click the right arrow.
Online communities
For peer contacts, conversations, and content on product support and solutions, go to the Community Network at https://www.dell.com/community. Interactively engage with customers, partners, and certified professionals online.
Chapter 1: Introduction
This document describes how to configure and use the NetWorker software in a clustered environment. This guide also provides
cluster-specific information that you need to know before you install NetWorker on a clustered host. You must install the
NetWorker software on each physical node in a cluster.
This guide does not describe how to install the NetWorker software. The NetWorker Installation Guide describes how to install
the NetWorker software on supported operating systems. You can configure the NetWorker software in a cluster in one of the
following ways:
Topics:
• Stand-alone application
• Cluster-aware application
• Highly available application
Stand-alone application
When you install the NetWorker server, storage node, or client software as a stand-alone application, the required daemons
run on each node. When the NetWorker daemons stop on a node, the cluster management software does not restart them
automatically.
In this configuration:
● NetWorker does not know which node owns the shared disk. To ensure that there is always a backup of the shared disks,
configure a NetWorker client resource for each physical node to back up the shared and local disks.
● Shared disk backups will fail for each physical node that does not own or control the shared disk.
● NetWorker writes client file index entries for the shared backup to the physical node that owns the shared disk.
● To recover data from a shared disk backup, you must determine which physical node owned the shared disk at the time of
backup.
NOTE: NMC should always be used as a stand-alone application. NMC is not a cluster-aware or highly-available
application.
Cluster-aware application
On supported operating systems, when you configure a cluster-aware NetWorker client, all required daemons run on each
physical node. When the NetWorker daemons stop on a node, the Cluster Management software does not restart them
automatically.
A cluster-aware NetWorker application determines path ownership of the virtual applications that run in the cluster. This allows
the NetWorker software to back up the shared file system and write the client file index entries for the virtual client.
When you configure a cluster-aware NetWorker application, you must:
● Create a NetWorker client resource for the virtual node in the cluster to back up the shared disk.
● Create a NetWorker client resource for each physical node to back up the local disks.
● Select the virtual node to recover data from a shared disk backup.
Highly available application
When you install the NetWorker Server software as a highly available application, the cluster management software manages the NetWorker Server as a failover resource. In this configuration:
● The active node runs the NetWorker Server daemons and accesses the global /nsr or C:\Program Files\EMC
NetWorker\nsr directory on the shared drive.
● The passive nodes run the NetWorker Client daemon, nsrexecd.
● When a failover occurs, the new active node runs the NetWorker server daemons.
● The NetWorker virtual server uses the IP address and hostname of the NetWorker virtual host, regardless of which cluster
node owns the NetWorker Server application.
● NetWorker determines path ownership of the virtual applications that run in the cluster. This allows the NetWorker software
to back up the shared file system and write the client file index entries for the virtual client.
When you configure a highly available NetWorker Server, you must:
● Create a NetWorker Client resource for the virtual node in the cluster to back up the shared disk.
● Create a NetWorker Client resource for each physical node to back up the local disks.
● Select the virtual node to recover data from a shared disk backup.
The following figure provides an example of a highly available NetWorker Server in a general cluster configuration consisting of
two nodes and one virtual server. In this illustration:
● Node 1, clus_phy1, is a physical node with local disks.
● Node 2, clus_phy2, is a physical node with local disks.
● Virtual Server, clus_vir1:
○ Owns the shared disks. A volume manager manages the shared disk.
○ Can fail over between Node 1 and Node 2. However, the NetWorker Server software only runs on one node at a time.
Chapter 2: Configuring the Cluster
This chapter describes how to prepare for a NetWorker installation on a cluster and how to configure NetWorker on each
cluster. Perform these steps after you install the NetWorker software on each physical node.
The steps to install and update the NetWorker software in a clustered environment are the same as the steps to install
and update the software in a non-clustered environment. The NetWorker Installation Guide describes how to install
NetWorker on each supported operating system.
Topics:
• Prepare to install NetWorker on a cluster
• Microsoft Failover Cluster Server
• SLES High Availability Extension
• Red Hat Enterprise Linux High Availability
• Sun Cluster and Oracle Solaris Cluster
• AIX HACMP/PowerHA SystemMirror
• HP MC/ServiceGuard
• VERITAS Cluster Server
• Troubleshooting configuration
NOTE: This section does not apply when you install NetWorker as a stand-alone application.
regsvr32 /u nsrdresex.dll
● To back up a host that is a member of multiple domains, an Active Directory (AD) domain, and a DNS domain, you must
define the AD domain name in:
○ The host file on the NetWorker Server.
○ The Alias attribute for the Client resource on the NetWorker Server.
● The WINDOWS ROLES AND FEATURES save set includes the MSFCS database. When you back up the WINDOWS ROLES
AND FEATURES save set, NetWorker automatically backs up the cluster configuration. The cluster maintains the MSFCS
database synchronously on two nodes; as a result, the database backup on one node might not reflect changes made on the
other node.
● The NetWorker Server and Client software supports backup and recovery of file system data on Windows 2019, Windows
2016, Windows Server 2012, and Windows Server 2012 R2 File Servers configured for Windows Continuous Availability
with Cluster Shared Volumes (CSV). Support for CSV and deduplicated CSV backups includes the Full, Incremental, and
incr_synth_full levels. NetWorker supports CSV and deduplicated CSV backups with the following restrictions:
○ The volume cannot be a critical volume.
○ NetWorker cannot shadow copy a CSV and local disks that are in the same volume shadow copy set.
NOTE: The NetWorker software does not protect the Microsoft application data stored on a CSV or deduplicated
CSV, such as SQL databases or Hyper-V virtual machines. To protect Microsoft application data use the NetWorker
Module for Microsoft (NMM) software. The NMM documentation provides more information about specific backup and
recovery instructions of Microsoft application data.
The section Windows Optimized Deduplication in the NetWorker Administration Guide provides more information about
performing a backup and recovery of deduplicated CSV volumes.
a. For Windows 2012, 2016, and 2019, on the Select Role page, select Other Server, and then click Next.
7. On the Client Access Point page, specify a hostname that does not exist in the ID and an available IP address, and then
click Next.
8. On the Select Storage page, select the shared storage volume for the shared nsr directory, and then click Next.
9. In the Select Resource Type list, select the NetWorker Server resource type, and then click Next.
10. On the Confirmation page, review the resource configurations, and then click Next. The High Availability Wizard creates
the resource components and the group.
When the Summary page appears, a message similar to the following appears, which you can ignore:
The clustered role will not be started because the resources may need additional
configuration. Finish configuration, and then start the clustered role.
f. Click OK.
NOTE: Do not create multiple NetWorker server resources. Creating more than one instance of a NetWorker Server
resource interferes with how the existing NetWorker Server resources function.
A dependency is set between the NetWorker server resource and the shared disk.
13. Right-click the NetWorker cluster resource and select Start Role.
The NetWorker server resource starts.
14. Confirm that the state of the NetWorker Server resource changes to Online.
5. Restart the Cluster Name resource by using the Start-ClusterResource "Cluster Name" command.
NOTE: This section does not apply when you install NetWorker as a stand-alone application.
export OCF_ROOT=/usr/lib/ocf
Update your profile file to make the change persistent across reboots.
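For example, a minimal sketch that appends the setting to the root user's profile (assuming a Bourne-style login shell that reads /root/.profile):
echo 'export OCF_ROOT=/usr/lib/ocf' >> /root/.profile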
6. On one node, create the required resource groups for the NetWorker resources:
a. Start the crm tool by typing:
crm configure
b. Create a file system resource for the nsr directory. For example, type:
primitive fs ocf:heartbeat:Filesystem \
operations $id="fs-operations" \
op monitor interval="20" timeout="40" \
params device="/dev/sdb1" directory="/share1" fstype="ext3"
c. Create an IP address resource for the NetWorker Server name. For example, type:
primitive ip ocf:heartbeat:IPaddr \
operations $id="ip-operations" \
op monitor interval="5s" timeout="20s" \
params ip="10.5.172.250" cidr_netmask="255.255.254.0" nic="eth1"
e. Define the NetWorker Server resource group that contains the file system, NetWorker Server, and IP address resources.
For example, type:
group NW_group fs ip nws
f. Commit the changes by typing:
commit
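After you commit the changes, you can confirm that the cluster started the resources. A minimal sketch from the shell prompt (the output format varies by release):
crm status
The status output should list the NW_group resources as started on the active node.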
7. For SLES 11 SP4 only, perform the following steps:
a. Open the Pacemaker GUI.
b. Connect to the highly available cluster server by clicking Login to cluster, type the username and password, and then
click OK.
c. Expand Configuration in the left navigation pane, and then click Resources.
The NetWorker Installation Guide describes how to install the NetWorker software.
The configuration script creates the nw_redhat file and the lcmap file.
9. Create a service group:
a. Connect to the Conga web interface.
b. On the Service tab, click Add.
c. In the Service Name field, specify a name for the resource. For example, rg1.
10. Add an LVM resource for the shared volume to the service group:
a. Click Add resource.
b. From the Global Resources drop down, select HA LVM.
c. In the Name field, specify the name of the resource. For example, ha_lvm_vg1.
d. In the Volume Group Name field, specify the name of the volume group for the shared disk that contains the /nsr
directory. For example, vg1.
e. In the Logical Volume Name field, specify the logical volume name. For example, vg1_lv.
11. Add a file system resource for the shared file system to the service group.
a. After the HA LVM Resource section, click Add Child Resource.
b. From the Global Resources drop down, select Filesystem.
c. In the Name field, specify the name of the file system. For example, ha_fs_vg1.
d. In the Mount point field, specify the mount point. For example: /vg1.
e. In the Device, FS label or UUID field, specify the device information. For example, /dev/vg1/vg1_lv.
12. Add an IP address resource to the group:
a. After the Filesystem section, click Add Child Resource.
b. From the Global Resources drop down, select IP Address.
c. In the IP Address field, specify the IP address of the virtual NetWorker server.
d. Optionally, in the Netmask field, specify the netmask that is associated with IP address.
13. Add a script resource to the group:
a. After the IP address section, click Add Child Resource.
b. From the Global Resources drop down, select Script.
c. In the Name field, specify the name for the script resource. For example, nwserver.
d. In the Path field, specify the path to the script file. For example, /usr/sbin/nw_redhat.
14. Click Submit.
export OCF_ROOT=/usr/lib/ocf
6. On one node, create the required resource groups for the NetWorker resources:
a. Create a file system resource for the nsr directory. For example, type:
NOTE: --group NW_group adds the file system resource to the resource group.
b. Create an IP address resource for the NetWorker Server name. For example, type:
NOTE: --group NW_group adds the IP address resource to the resource group.
c. Create a NetWorker Server resource in the same resource group.
NOTE: --group NW_group adds the NetWorker Server resource to the resource group.
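The example commands for these sub-steps did not survive extraction. The following is a minimal sketch of equivalent pcs commands, assuming the same device, mount point, and IP address values as the SLES example earlier in this chapter; the NetWorker Server resource itself is created with the NetWorker-supplied resource agent and is not shown here:
pcs resource create fs ocf:heartbeat:Filesystem device="/dev/sdb1" directory="/share1" fstype="ext3" op monitor interval=20s timeout=40s --group NW_group
pcs resource create ip ocf:heartbeat:IPaddr2 ip=10.5.172.250 cidr_netmask=23 nic=eth1 op monitor interval=5s timeout=20s --group NW_group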
7. If any resource fails to start, confirm that the shared volume is mounted. If the shared volume is not mounted, manually
mount the volume, and then reset the status by typing the following command:
pcs resource cleanup nws
NOTE: This section does not apply when you install NetWorker as a stand-alone application.
b. Add the logical hostname resource type to the new resource group:
clreslogicalhostname create -g resource_group_name logical_name
For example, when the logical hostname is clus_vir1, type:
clreslogicalhostname create -g backups clus_vir1
c. Optionally, to create an instance of the SUNW.HAStoragePlus resource type:
● Determine if the HAStoragePlus resource type is registered within the cluster:
clresourcetype list
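If the list does not include SUNW.HAStoragePlus, register the resource type before you create the resource. A minimal sketch, assuming default cluster privileges:
clresourcetype register SUNW.HAStoragePlus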
NOTE: This section does not apply when you install NetWorker as a stand-alone application.
HP MC/ServiceGuard
This section describes how to prepare the HP MC/ServiceGuard cluster before you install the NetWorker software. This section
also describes how to configure the NetWorker client as a cluster-aware application, after you install the NetWorker software
on each physical node of the cluster.
The NetWorker Installation Guide describes how to install the NetWorker software.
NOTE: This section does not apply when you install NetWorker as a stand-alone application.
touch /etc/cmcluster/NetWorker.clucheck
touch /etc/cmcluster/.nsr_cluster
NOTE: Ensure that everyone has read permission for the .nsr_cluster file.
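For example, a minimal sketch that grants world-read permission (assuming root ownership of the file is acceptable):
chmod 644 /etc/cmcluster/.nsr_cluster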
2. Define the mount points that the MC/ServiceGuard or MC/LockManager package owns in the .nsr_cluster file. Include
the NetWorker shared mount point.
For example:
pkgname:published_ip_address:owned_path [:...]
where:
● pkgname is the name of the package.
● published_ip_address is the IP address assigned to the package that owns the shared disk. Enclose IPv6 addresses in
square brackets. You can enclose IPv4 addresses in square brackets, but it is not necessary.
● owned_path is the path to the mount point. Separate additional paths with a colon.
For example:
● IPv6 address:
client:[3ffe:80c0:22c:74:6eff:fe4c:2128]:/share/nw
● IPv4 address:
client:192.168.109.10:/share/nw
NOTE: An HP-UX MC/ServiceGuard package that does not contain a disk resource does not require an entry in
the .nsr_cluster file. If an online diskless package is the only package on that cluster node, cmgetconf messages
may appear in the /var/admin file during a backup.
To avoid these messages, allocate a mounted file system to a mount point, then add this mount point, the package
name, and the IP address to the .nsr_cluster file. The NetWorker software does not back up the file system.
However, you can mount the file system on each cluster node that the diskless package might fail over to.
3. Copy the NetWorker.clucheck and .nsr_cluster file to the /etc/cmcluster directory, on each passive node.
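For example, a minimal sketch that copies both files to a passive node named nodeB (a hypothetical hostname):
scp /etc/cmcluster/NetWorker.clucheck /etc/cmcluster/.nsr_cluster nodeB:/etc/cmcluster/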
NOTE: This section does not apply when you install NetWorker as a stand-alone application.
○ VCS_CONF
The default directory is /etc/VRTSvcs.
● Ensure that the PATH environment variable includes the /usr/sbin and $VCS_HOME/bin directories. The default
$VCS_HOME/bin directory is /opt/VRTSvcs/bin.
NOTE:
● Create mount points on all nodes. Example: On linux, /dg1vol1 should be created on all nodes.
● When configuring vxfs, use dsk instead of rdsk for block device. Example: /dev/vx/dsk/dg2/dg2vol1
The following example shows main.cf entries for a highly available NetWorker Server group on a UNIX or Linux cluster:
group networker (
SystemList = { arrow = 0, canuck = 1 }
)
Application nw_server (
StartProgram = "/usr/sbin/nw_vcs start"
StopProgram = "/usr/sbin/nw_vcs stop"
CleanProgram = "/usr/sbin/nw_vcs stop_force"
MonitorProgram = "/usr/sbin/nw_vcs monitor"
MonitorProcesses = {"/usr/sbin/nsrd -k Virtual_server_hostname"}
)
DiskGroup dg1 (
DiskGroup = dg1
)
IP NW_IP (
Device = eth0
Address = "137.69.104.104"
)
Mount NW_Mount (
MountPoint = "/mnt/share"
BlockDevice = "/dev/sdc3"
FSType = ext2
FsckOpt = "-n"
)
NW_Mount requires dg1
NW_IP requires NW_Mount
nw_server requires NW_IP
// resource dependency tree
//
// group networker
// {
// Application nw_server
// {
// IP NW_IP
// {
// Mount NW_Mount
// {
// DiskGroup dg1
// }
// }
// }
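After you edit main.cf, you can check the syntax before starting VCS. A minimal sketch, assuming the default configuration directory:
hacf -verify /etc/VRTSvcs/conf/config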
The following example shows main.cf entries for a highly available NetWorker Server group on a Windows cluster:
group networker (
SystemList = { BU-ZEUS32 = 0, BU-HERA32 = 1 }
)
IP NWip1 (
Address = "10.5.163.41"
SubNetMask = "255.255.255.0"
MACAddress @BU-ZEUS32 = "00-13-72-5A-FC-06"
MACAddress @BU-HERA32 = "00-13-72-5A-FC-1E"
)
MountV NWmount1 (
MountPath = "S:\\"
VolumeName = SharedVolume1
VMDGResName = NWdg_1
)
Process NW_1 (
Enabled = 0
StartProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe start"
StopProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe stop"
CleanProgram = "D:\\Program Files\\EMC NetWorker\\nsr\\bin\\nw_vcs.exe
stop_force"
NOTE: To change the configuration at a later time, run lc_config.exe with the -r option, and then run lc_config.exe
again.
2. To stop the VCS software on all nodes and leave the resources available, type:
hastop -all -force
4. Copy the NWClient resource definition file to the VCS configuration directory:
● For UNIX systems, type:
cp /etc/VRTSvcs/conf/NWClient.cf /etc/VRTSvcs/conf/config/NWClient.cf
● For Windows systems, type:
cp "C:\Program Files\Veritas\cluster server\conf\NWClient.cf" "C:\Program Files\Veritas\cluster server\conf\config\NWClient.cf"
5. To add the NWClient resource type and the NWClient resource type instances to the main.cf file:
a. Type the following command:
include "NWClient.cf"
g. Add a NWClient resource instance for the service groups that require the resource.
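After the NWClient resource instances are added, restart the cluster software on each node and confirm the group state. A minimal sketch using standard VCS commands:
hastart
hagrp -state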
Slow backups
The lcmap program queries cluster nodes and creates a map that includes information such as path ownership of resource
groups. In large cluster configurations, lcmap may take a long time to complete and thus slow down certain operations. This is
most often noticed in very long backup times.
In these situations, consider adjusting cluster cache timeout. This attribute specifies a time, in seconds, in which to cache the
cluster map information on a NetWorker client.
Edit the cluster cache timeout attribute with caution. Values for the attribute can vary from several minutes to several days and
depend on the following factors:
● How often the cluster configuration changes.
● The possibility of resource group failover.
● The frequency of NetWorker operations.
If you set the value too large, then an out-of-date cluster map can result and cause incorrect path resolution. For example, if the
cluster cache timeout value is set to 86400 (one day), then any changes to the cluster map will not be captured for up to one
day. If cluster map information changes before the next refresh period, then some paths may not resolve correctly.
NOTE: If you set the value too small, then cache updates can occur too frequently, which negatively affects performance.
Experiment with one physical cluster node to find a satisfactory timeout value. If you cannot obtain a significant
improvement in performance by adjusting this attribute, then reset the attribute value to 0 (zero). When the attribute
value is 0, NetWorker does not use the attribute.
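The first step of this procedure did not survive extraction; it presumably starts an nsradmin session against the NetWorker client database on the physical node. A minimal sketch, assuming the standard invocation:
nsradmin -p nsrexec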
2. Display the current settings for attributes in the NSRLA resource. For example, type:
print type:NSRLA
3. Change the value of the cluster cache timeout attribute. For example, type:
update cluster cache timeout: value
where value is the timeout value in seconds. A value of 0 (zero) specifies that the cache is not used.
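For example, a minimal sketch that caches the cluster map for one hour (3600 is a hypothetical value; tune it for your environment):
update cluster cache timeout: 3600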
6. To make the timeout value take effect immediately, delete the cache file on the physical node:
● UNIX: /tmp/lcmap.out
● Windows: NetWorker_install_path\nsr\bin\lcmap.out
06/08/00 10:00:11 nsrmon #217: connect to nsrexec prog 390113 vers 1 on `uranus' failed:
RPC error: Remote system error
06/08/00 10:00:11 nsrd: media notice: check storage node: uranus (RPC error: Remote
system error)
06/08/00 10:00:11 nsrd: media info: restarting nsrmmd #1 on uranus in 2 minute(s)
06/08/00 10:02:12 nsrd: media info: restarting nsrmmd #1 on uranus now
06/08/00 10:02:42 nsrmon #183: connect to nsrexec prog 390113 vers 1 on `
The error also appears when the nsrexecd daemon on a UNIX host or the NetWorker Remote Exec service on a Windows host
is not running on the storage node.
To resolve this issue, start the nsrexecd process on UNIX or the NetWorker Remote Exec service on Windows.
To resolve this issue, add the following to the Remote access list:
For Windows:
administrator@clus_phy1_IP
administrator@clus_phy1_shortname
administrator@clus_phy1_FQDN
administrator@clus_phy2_IP
administrator@clus_phy2_shortname
administrator@clus_phy2_FQDN
system@clus_phy1_IP
system@clus_phy1_shortname
system@clus_phy1_FQDN
system@clus_phy2_IP
system@clus_phy2_shortname
system@clus_phy2_FQDN
For Linux:
root@clus_phy1_IP
root@clus_phy1_shortname
root@clus_phy1_FQDN
root@clus_phy2_IP
root@clus_phy2_shortname
root@clus_phy2_FQDN
NOTE: Replace IP, shortname, and FQDN with the values for your environment.
2. Zone the robotic arm and all drives to each physical node in the cluster.
3. Configure the same path (bus, target, and LUN) to the robotics and tape drives on each node.
4. If you configured the bridge with node device-reassignment reservation commands, then add these commands to the nsrrc
startup script on the NetWorker virtual server. The NetWorker Administration Guide describes how to modify the nsrrc
script.
5. Install the cluster vendor-supplied special device file for the robotic arm on each physical node. The special device file
creates a link to the tape or autochanger device driver. Ensure that the name that is assigned to the link is the same on each
node for the same device. If you do not have matching special device files across cluster nodes, you might be required to
install fibre HBAs in the same PCI slots on all the physical nodes within the cluster.
The following figure provides a graphical view of this configuration option.
6. To configure the autochanger and devices by using the NMC device configuration wizard, specify the hostname of the
virtual server, clus_vir1, when prompted for the storage node name and the prefix name. The NetWorker Administration
Guide describes how to use NMC to configure autochangers and devices.
7. To configure the autochanger and devices by using the jbconfig command, run jbconfig -s clus_vir1 on the
physical node that owns the NetWorker server resource.
a. When prompted for the hostname to use as a prefix, specify the virtual server name, clus_vir1.
b. When prompted to configure shared devices, select Yes.
The NetWorker Administration Guide describes how to use NMC to configure autochangers and devices.
8. The storage node attribute value for each host is as follows:
● clus_phys1: clus_phys1
● clus_phys2: clus_phys2
● clus_vir1: nsrserverhost
Configuring backup and recovery describes how to configure the Client resource for each cluster node.
9. When a failover occurs, NetWorker relocates and restarts savegroup operations that were in progress on the failover node.
However, standard autochanger operations (for example, performing an inventory, labeling, mounting, or unmounting a
volume) do not automatically restart on the new failover node.
1. To configure the autochanger and devices by using the NMC device configuration wizard, specify the hostname of the
virtual server, clus_vir1, when prompted for the storage node name and the prefix name. The NetWorker Administration
Guide describes how to use NMC to configure autochangers and devices.
2. To configure the autochanger and devices by using the jbconfig command, run jbconfig -s clus_vir1 on the
physical node that owns the NetWorker server resource.
● When prompted for the hostname to use as a prefix, specify the virtual server name, clus_vir1.
● When prompted to configure shared devices, select Yes. The NetWorker Administration Guide describes how to use
jbconfig to configure autochangers and devices.
3. The storage node attribute value for each host is as follows:
● clus_phys1: nsrserverhost
● clus_phys2: nsrserverhost
● clus_vir1: nsrserverhost
The "Configuring backup and recovery" chapter describes how to configure the Client resource for each cluster node.
In this example, use the following procedure to configure a stand-alone storage node:
● The NetWorker virtual server uses local device AFTD1 to back up the bootstrap and indexes.
● To configure the autochanger and devices by using the NMC device configuration wizard, specify the hostname of the
stand-alone host, ext_SN, when prompted for the storage node name and the prefix name.
● To configure the autochanger and devices by using the jbconfig command, run jbconfig -s clus_vir1 on the
ext_SN. The NetWorker Administration Guide describes how to use jbconfig to configure autochangers and devices.
○ When prompted for the hostname to use as a prefix, specify the external storage node, ext_SN.
○ When prompted to configure shared devices, select Yes.
● The Storage nodes attribute value in the Client resource for each host is as follows:
○ clus_phys1: clus_phys1
○ clus_phys2: clus_phys2
○ clus_vir1: nsrserverhost
The "Configuring backup and recovery" chapter describes how to configure the Client resource for each cluster node.
Configuring Backup and Recovery
Topics:
• Setting NetWorker environment variables in a cluster
• Limiting NetWorker server access to a client
• Configuring the NetWorker virtual server
• Creating client resources for physical node backups
• Creating a client resource for virtual client backups
• Configuring a backup device for the NetWorker virtual server
• Performing manual backups of a cluster node
• Troubleshooting backups
• Recovering data
• Troubleshooting recovery
● On Windows, stop the NetWorker Remote Exec service. This also stops the NetWorker Backup and Recover service on a
NetWorker server.
When the servers file does not contain any hosts, any NetWorker Server can back up or perform a directed recovery to
the host.
5. On the node with access to the shared disk, edit the global servers file.
NOTE: Ensure that the hostnames that are defined in the global servers file are the same as the local servers file on
each physical node.
6. For Linux only, edit the NetWorker boot-time startup file, /etc/init.d/networker and delete any nsrexecd -s
arguments that exist.
For example, when the /etc/init.d/networker file contains the following entry:
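The example entry did not survive extraction. A minimal sketch of a startup line that contains the -s argument, assuming a hypothetical server name clus_vir1:
nsrexecd -s clus_vir1
After you delete the -s argument, the line contains only the daemon name:
nsrexecd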
4. Click OK.
5. For NetWorker Server configured to use the lockbox only:
a. In the left navigation pane, select Clients.
b. Right-click the client resource for the NetWorker virtual service and select Modify Client Properties.
c. On the Globals (2 of 2) tab specify the name of each cluster node in the Remote Access field.
● For RHEL cluster nodes, specify the name of the host that appears when you use the hostname command.
● For Windows cluster nodes, use the full computer name that appears in the Control Panel > System > Computer
name field.
6. Click OK.
NOTE: When you configure the NetWorker Server to use a lockbox, you must update the Remote Access field before
the virtual node fails over to another cluster node. If you do not update the Remote Access field before failover, you
must delete and create the lockbox resource. The NetWorker Security Configuration Guide describes how to configure
the lockbox resource.
7. On the Apps and Modules tab, in the Application Information field, specify environment variables, as required.
● For Snapshot Management backups only, use the NSR_PS_SHARED_DIR variable to specify the share directory. For
example:
NSR_PS_SHARED_DIR=P:\share
The NetWorker Snapshot Management Integration Guide describes how to configure Snapshot backups.
● For Windows Server 2012 and Windows 2012 R2 CSV and deduplicated CSV backups only:
As part of a deduplicated CSV backup, the preferred node tries to move ownership of the CSV volume to itself. If
the ownership move succeeds, then NetWorker performs a backup locally. If the ownership move fails, then NetWorker
performs the backup over SMB. When the CSV ownership moves, NetWorker restores the ownership to the original node
after the backup completes.
You can optionally specify the preferred cluster node to perform the backup. To specify the preferred server, use the
NetWorker client Preferred Server Order List (PSOL) variable NSR_CSV_PSOL.
When you do not specify a PSOL, NetWorker performs the backup by using the Current Host Server node (virtual node).
Review the following information before you specify a PSOL:
○ The save.exe process uses the first available server in the list to start the CSV backup. The first node that is
available and responds becomes the preferred backup host. If none of the specified nodes in the PSOL are available,
then NetWorker tries the backup on the Current Host Server node.
○ The Remote access list attribute on the NetWorker client must contain the identified cluster nodes.
○ Use the NetBIOS name when you specify the node names. You cannot specify the IP address or FQDN of the node.
To specify the PSOL, include a key/value pair in the client resource Application information attribute. Specify the
key/value pair in the following format:
NSR_CSV_PSOL=MachineName1,MachineName2,MachineName3...
For example, physical node clus_phy2 owns the cluster resources for virtual node clus_vir1. By default, clus_vir1 runs
the backup request.
To offload operations, define clus_phy1 as the preferred node to start the save operation. If clus_phy1 is unavailable, then
NetWorker should try to use clus_phy2 to start the save operation.
The NSR_CSV_PSOL variable in the clus_vir1 client resource is set to:
NSR_CSV_PSOL=clus_phy1,clus_phy2
8. For deduplicated CSV backups only, to configure an unoptimized deduplication backup, specify
VSS:NSR_DEDUP_NON_OPTIMIZED=yes in the Save operations attribute.
9. Define the remaining attributes in the Client properties window, as required, and then click OK.
1. Edit the properties of the client resource for the NetWorker virtual server by using NMC.
2. Select Globals (2 of 2).
3. In the Storage nodes attribute, specify the hostnames of each physical cluster node followed by nsrserverhost.
NOTE:
MSFCS does not support shared tapes. You cannot configure the NetWorker virtual server with tape devices connected
to a shared bus. MSFCS supports disk devices connected to a shared bus. It is recommended that you do not use file
type devices connected to a shared bus.
nodeA
nodeB
● As the root user on each node in the cluster, edit or create the /etc/cmcluster/cmclnodelist file and add the
following information to the file:
nodeA user_name
nodeB user_name
NOTE: If the cmclnodelist file exists, the cluster software ignores any .rhosts file.
where:
● client is the virtual hostname to back up shared disk data or the physical node hostname to back up data that is local to the
node on which you run the save command.
● save_set specifies the path to the backup data.
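The save command line itself did not survive extraction. A minimal sketch of a manual backup, assuming the hypothetical virtual hostname clus_vir1 and save set /share1:
save -c clus_vir1 /share1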
Troubleshooting backups
This section provides resolutions for the following common backup and configuration errors.
Recovering data
This section describes how to recover data from shared disks that belong to a virtual client.
NOTE:
To recover Windows clusters, the chapter Windows Bare Metal Recovery (BMR) in the NetWorker Administration Guide
provides more information.
To recover data that is backed up from a shared disk that belongs to a virtual client, perform the following steps:
1. Ensure that you have correctly configured remote access to the virtual client:
a. Edit the properties of the virtual client resource in NMC.
b. On the Globals (2 of 2) tab, ensure that the Remote Access attribute contains an entry for the root or Administrator
user for each physical cluster node.
2. To recover a CSV backup for a client that uses the NSR_CSV_PSOL variable, ensure that the system account for each
host in the preferred server order list is a member of the NetWorker Operators User Group.
For example, if you configure the virtual node client resource that specifies the CSV volumes with the following variable:
NSR_CSV_PSOL=clu_virt1, clu_virt2, specify the following users in the NetWorker Operators User Group:
system@clu_virt1
system@clu_virt2
regcnsrd -u
2. Remove the NetWorker Server resource from MSFCS by running the following command from any cluster node:
regcnsrd -d
c. Uninstall the NetWorker software. The NetWorker Installation Guide provides complete instructions.
2. Uninstall the NetWorker software. The NetWorker Installation Guide provides complete instructions.
A
administrator
Person who normally installs, configures, and maintains software on network computers, and who adds users and defines user
privileges.
attribute
Name or value property of a resource.
authorization code
Unique code that in combination with an associated enabler code unlocks the software for permanent use on a specific host
computer. See license key.
B
backup
1. Duplicate of database or application data, or an entire computer system, stored separately from the original, which can be
used to recover the original if it is lost or damaged.
2. Operation that saves data to a volume for use as a backup.
backup group
, See group.
BMR
Windows Bare Metal Recovery, formerly known as Disaster Recovery. For more information on BMR, refer to the Windows Bare
Metal Recovery chapter in the NetWorker Administration Guide.
boot address
The address used by a node name when it boots up, but before HACMP/PowerHA for AIX starts.
bootstrap
Save set that is essential for disaster recovery procedures. The bootstrap consists of three components that reside on the
NetWorker server: the media database, the resource database, and a server index.
C
client
Host on a network, such as a computer, workstation, or application server whose data can be backed up and restored with the
backup server software.
Client resource
NetWorker server resource that identifies the save sets to be backed up on a client. The Client resource also specifies
information about the backup, such as the schedule, browse policy, and retention policy for the save sets.
cluster client
A NetWorker client within a cluster; this can be either a virtual client, or a NetWorker Client resource that backs up the private
data that belongs to one of the physical nodes.
Console server
, See NetWorker Management Console (NMC).
D
database
1. Collection of data arranged for ease and speed of update, search, and retrieval by computer software.
2. Instance of a database management system (DBMS), which in a simple case might be a single file containing many records,
each of which contains the same set of fields.
datazone
Group of clients, storage devices, and storage nodes that are administered by a NetWorker server.
device
1. Storage folder or storage unit that can contain a backup volume. A device can be a tape device, optical drive, autochanger,
or disk connected to the server or storage node.
2. General term that refers to storage hardware.
3. Access path to the physical drive, when dynamic drive sharing (DDS) is enabled.
device-sharing infrastructure
The hardware, firmware, and software that permit several nodes in a cluster to share access to a device.
disaster recovery
Restore and recovery of data and business operations in the event of hardware failure or software corruption.
E
enabler code
Unique code that activates the software:
● Evaluation enablers or temporary enablers expire after a fixed period of time.
● Base enablers unlock the basic features for software.
● Add-on enablers unlock additional features or products, for example, library support.
, See license key.
F
failover
A means of ensuring application availability by relocating resources in the event of a hardware or software failure. Two-node
failover capability allows operations to switch from one cluster node to the other. Failover capability can also be used as a
resource management tool.
failover cluster
Windows high-availability clusters, also known as HA clusters or failover clusters, are groups of computers that support server
applications that can be reliably utilized with a minimum of down-time. They operate by harnessing redundant computers in
groups or clusters that provide continued service when system components fail.
G
group
One or more client computers that are configured to perform a backup together, according to a single designated schedule or
set of conditions.
H
Highly available application
An application that is installed in a cluster environment and configured for failover capability. On an MC/ServiceGuard cluster
this is called a highly-available package.
host
Computer on a network.
host ID
Eight-character alphanumeric number that uniquely identifies a computer.
hostname
Name or address of a physical or virtual host computer that is connected to a network.
L
license key
Combination of an enabler code and authorization code for a specific product release to permanently enable its use. Also called
an activation key.
M
managed application
Program that can be monitored or administered, or both from the Console server.
media index
Database that contains indexed entries of storage volume location and the life cycle status of all data and volumes managed by
the NetWorker server. Also known as media database.
N
networker_install_path
The path or directory where the installation process places the NetWorker software.
● AIX: /usr/sbin
● Linux: /usr/bin
● Solaris: /usr/sbin
● HP-UX: /opt/networker/bin
● Windows (New installs): C:\Program Files\EMC NetWorker\nsr\bin
● Windows (Updates): C:\Program Files\Legato\nsr\bin
NetWorker server
Computer on a network that runs the NetWorker server software, contains the online indexes, and provides backup and restore
services to the clients and storage nodes on the same network.
node
A physical computer that is a member of a cluster.
node name
The HACMP/PowerHA for AIX defined name for a physical node.
P
pathname
Set of instructions to the operating system for accessing a file:
● An absolute pathname indicates how to find a file by starting from the root directory and working down the directory tree.
● A relative pathname indicates how to find a file by starting from the current location.
physical client
The client associated with a physical node. For example the / and /usr file systems belong to the physical client.
private disk
A local disk on a cluster node. A private disk is not available to other nodes within the cluster.
R
recover
To restore data files from backup storage to a client and apply transaction (redo) logs to the data to make it consistent with a
given point-in-time.
remote device
1. Storage device that is attached to a storage node that is separate from the NetWorker server.
2. Storage device at an offsite location that stores a copy of data from a primary storage device for disaster recovery.
resource
Software component whose configurable attributes define the operational properties of the NetWorker server or its clients.
Clients, devices, schedules, groups, and policies are all NetWorker resources.
resource database
NetWorker database of information about each configured resource.
S
save
NetWorker command that backs up client files to backup media volumes and makes data entries in the online index.
save set
1. Group of files or a file system copied to storage media by a backup or snapshot rollover operation.
2. NetWorker media database record for a specific backup or rollover.
scheduled backup
Type of backup that is configured to start automatically at a specified time for a group of one or more NetWorker clients. A
scheduled backup generates a bootstrap save set.
service address
The address used by highly-available services in an HACMP/PowerHA for AIX environment.
shared disk
Storage disk that is connected to multiple nodes in a cluster.
stand-alone server
A NetWorker server that is running within a cluster, but not configured as a highly-available application. A stand-alone server
does not have failover capability.
storage device
, See device.
storage node
Computer that manages physically attached storage devices or libraries, whose backup operations are administered from the
controlling NetWorker server. Typically a “remote” storage node that resides on a host other than the NetWorker server.
V
virtual client
A NetWorker Client resource that backs up data that belongs to a highly-available service or application within a cluster. Virtual
clients can fail over from one cluster node to another. For HACMP/PowerHA for AIX, the virtual client is the client associated
with a highly-available resource group. The file system defined in a resource group belongs to a virtual client. The virtual client
uses the service address. The HACMP/PowerHA for AIX resource group must contain an IP service label to be considered a
NetWorker virtual client.