Interplay | Engine Failover Guide
Version 2018.11
Legal Notices
Product specifications are subject to change without notice and do not represent a commitment on the part of Avid Technology, Inc.
This product is subject to the terms and conditions of a software license agreement provided with the software. The product may only be
used in accordance with the license agreement.
This product may be protected by one or more U.S. and non-U.S patents. Details are available at www.avid.com/patents.
This guide is protected by copyright. This guide is for your personal use and may not be reproduced or distributed, in whole or in part,
without permission of Avid. Reasonable care has been taken in preparing this guide; however, it may contain omissions, technical
inaccuracies, or typographical errors. Avid Technology, Inc. disclaims liability for all losses incurred through the use of this document.
Product specifications are subject to change without notice.
Copyright © 2019 Avid Technology, Inc. and its licensors. All rights reserved.
The following disclaimer is required by Sam Leffler and Silicon Graphics, Inc. for the use of their TIFF library:
Copyright © 1988–1997 Sam Leffler
Copyright © 1991–1997 Silicon Graphics, Inc.
Permission to use, copy, modify, distribute, and sell this software [i.e., the TIFF library] and its documentation for any purpose is hereby
granted without fee, provided that (i) the above copyright notices and this permission notice appear in all copies of the software and
related documentation, and (ii) the names of Sam Leffler and Silicon Graphics may not be used in any advertising or publicity relating to
the software without the specific, prior written permission of Sam Leffler and Silicon Graphics.
THE SOFTWARE IS PROVIDED “AS-IS” AND WITHOUT WARRANTY OF ANY KIND, EXPRESS, IMPLIED OR OTHERWISE,
INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
IN NO EVENT SHALL SAM LEFFLER OR SILICON GRAPHICS BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT OR
CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER OR NOT ADVISED OF THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
This Software may contain components licensed under the following conditions:
Copyright (c) 1989 The Regents of the University of California. All rights reserved.
Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are
duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use
acknowledge that the software was developed by the University of California, Berkeley. The name of the University may not be used to
endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED ``AS
IS'' AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in
supporting documentation. This software is provided "as is" without express or implied warranty.
Permission to use, copy, modify, distribute, and sell this software for any purpose is hereby granted without fee, provided that the above
copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation,
and that the name of Daniel Dardailler not be used in advertising or publicity pertaining to distribution of the software without specific,
written prior permission. Daniel Dardailler makes no representations about the suitability of this software for any purpose. It is provided "as
is" without express or implied warranty.
Modifications Copyright 1999 Matt Koss, under the same license as above.
Permission to use, copy, modify, and distribute this software for any purpose without fee is hereby granted, provided that this entire notice
is included in all copies of any software which is or includes a copy or modification of this software and in all copies of the supporting
documentation for such software.
THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED WARRANTY. IN PARTICULAR, NEITHER
THE AUTHOR NOR AT&T MAKES ANY REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY
OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
This product includes software developed by the University of California, Berkeley and its contributors.
“This software contains V-LAN ver. 3.0 Command Protocols which communicate with V-LAN ver. 3.0 products developed by Videomedia,
Inc. and V-LAN ver. 3.0 compatible products developed by third parties under license from Videomedia, Inc. Use of this software will allow
“frame accurate” editing control of applicable videotape recorder decks, videodisc recorders/players and the like.”
The following disclaimer is required by Altura Software, Inc. for the use of its Mac2Win software and Sample Source
Code:
©1993–1998 Altura Software, Inc.
This product includes portions of the Alloy Look & Feel software from Incors GmbH.
This product includes software developed by the Apache Software Foundation (http://www.apache.org/).
© DevelopMentor
This product may include the JCifs library, for which the following notice applies:
JCifs © Copyright 2004, The JCIFS Project, is licensed under LGPL (http://jcifs.samba.org/). See the LGPL.txt file in the Third Party
Software directory on the installation CD.
Avid Interplay contains components licensed from LavanTech. These components may only be used as part of and in connection with Avid
Interplay.
Trademarks
Avid, the Avid Logo, Avid Everywhere, Avid DNXHD, Avid DNXHR, Avid NEXIS, AirSpeed, Eleven, EUCON, Interplay, iNEWS, ISIS, Mbox,
MediaCentral, Media Composer, NewsCutter, Pro Tools, ProSet and RealSet, Maestro, PlayMaker, Sibelius, Symphony, and all related
product names and logos, are registered or unregistered trademarks of Avid Technology, Inc. in the United States and/or other countries.
The Interplay name is used with the permission of the Interplay Entertainment Corp. which bears no responsibility for Avid products. All
other trademarks are the property of their respective owners. For a full list of Avid trademarks, see: http://www.avid.com/US/about-avid/
legal-notices/trademarks.
Footage
Eco Challenge Morocco — Courtesy of Discovery Communications, Inc.
News material provided by WFTV Television Inc.
Ice Island — Courtesy of Kurtis Productions, Ltd.
Interplay | Engine Failover Guide • Created July 19, 2019 • This document is distributed by Avid in online (electronic)
form only, and is not available for purchase in printed form.
Contents
Revision History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Symbols and Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
If You Need Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Avid Training Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Chapter 1 Automatic Server Failover Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Server Failover Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
How Server Failover Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Server Failover Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Server Failover Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Installing the Failover Hardware Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Slot Locations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Failover Cluster Connections: Redundant-Switch Configuration . . . . . . . . . . . . . . . . . . . 16
Failover Cluster Connections, Dual-Connected Configuration . . . . . . . . . . . . . . . . . . . . 18
HPE MSA Reference Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
HPE MSA Storage Management Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
HPE MSA Command Line Interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
HPE MSA Support Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Clustering Technology and Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Chapter 2 Creating a Microsoft Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Server Failover Installation Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Before You Begin the Server Failover Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Requirements for Domain User Accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
List of IP Addresses and Network Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Active Directory and DNS Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Preparing the Server for the Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Configuring the ATTO Fibre Channel Card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Changing Windows Server Settings on Each Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Configuring Local Software Firewalls. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Renaming the Local Area Network Interface on Each Node . . . . . . . . . . . . . . . . . . . . . . 35
Configuring the Private Network Adapter on Each Node . . . . . . . . . . . . . . . . . . . . . . . . . 38
Configuring the Binding Order Networks on Each Node . . . . . . . . . . . . . . . . . . . . . . . . . 40
Configuring the Public Network Adapter on Each Node . . . . . . . . . . . . . . . . . . . . . . . . . 41
Configuring the Cluster Shared-Storage RAID Disks on Each Node. . . . . . . . . . . . . . . . 42
Configuring the Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Joining Both Servers to the Active Directory Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Installing the Failover Clustering Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Creating the Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Renaming the Cluster Networks in the Failover Cluster Manager . . . . . . . . . . . . . . . . . . 58
Renaming the Quorum Disk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Adding a Second IP Address to the Cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Testing the Cluster Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Chapter 3 Installing the Interplay | Engine for a Failover Cluster . . . . . . . . . . . . . . . . . . . . 66
Disabling Any Web Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Installing the Interplay | Engine on the First Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Preparation for Installing on the First Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Bringing the Shared Database Drive Online if Necessary . . . . . . . . . . . . . . . . . . . . . . . . 67
Installing the Interplay Engine Software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Checking the Status of the Cluster Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Adding a Second IP Address (Dual-Connected Configurations only) . . . . . . . . . . . . . . . 75
Changing the Resource Name of the Avid Workgroup Server (if applicable) . . . . . . . . . 79
Installing the Interplay | Engine on the Second Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Bringing the Interplay | Engine Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
After Installing the Interplay | Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Creating an Interplay | Production Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Testing the Complete Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Installing a Permanent License. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Updating a Clustered Installation (Rolling Upgrade). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Uninstalling the Interplay Engine or Archive Engine on a Clustered System . . . . . . . . . . . . . 87
Chapter 4 Automatic Server Failover Tips and Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Appendix A Configuring the HPE MSA 2050 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Before You Begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Creating a Disk Group and Disk Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Appendix B Expanding the Database Volume for an Interplay Engine Cluster . . . . . . . . . . 97
Before You Begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Task 1: Add Drives to the MSA Storage Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Task 2: Expand the Databases Volume Using the HPE SMU (Version 2) . . . . . . . . . . . . . . . 98
Task 2: Expand the Databases Volume Using the HPE SMU (Version 3) . . . . . . . . . . . . . . 103
Task 3: Extend the Databases Volume in Windows Disk Management . . . . . . . . . . . . . . . . 109
Appendix C Adding Storage for File Assets for an Interplay Engine Cluster . . . . . . . . . . . 112
Before You Begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Task 1: Add Drives to the MSA Storage Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Task 2: Create a Disk and Volume Using the HPE SMU V3 . . . . . . . . . . . . . . . . . . . . . . . . 114
Task 3: Initialize the Volume in Windows Disk Management . . . . . . . . . . . . . . . . . . . . . . . . 120
Task 4: Add the Disk to the Failover Cluster Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Task 5: Copy the File Assets to the New Drive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Task 6: Mount the FileAssets Partition in the _Master Folder . . . . . . . . . . . . . . . . . . . . . . . 127
Task 7: Create Cluster Dependencies for the New Disk. . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Using This Guide
Congratulations on the purchase of Interplay | Production, a powerful system for managing media in
a shared storage environment.
This guide is intended for all Interplay Production administrators who are responsible for installing,
configuring, and maintaining an Interplay | Engine with the Automatic Server Failover module
integrated. The information in this guide applies to Avid MediaCentral Production Management
v2018.11 and later, running Windows Server 2012 R2 or Windows Server 2016.
Revision History
July 2019: This update adds references to the Dell PowerEdge R640 and the HPE ProLiant DL360 Gen10 servers.
Symbols and Conventions
This guide uses the following symbols and conventions:
n  A note provides important related information, reminders, recommendations, and strong suggestions.
c  A caution means that a specific action you take could cause harm to your computer or cause you to lose data.
w  A warning describes an action that could cause you physical harm. Follow the guidelines in this document or on the unit itself when handling electrical equipment.
>  This symbol indicates menu commands (and subcommands) in the order you select them. For example, File > Import means to open the File menu and then select the Import command.
t  This symbol indicates a single-step procedure. Multiple arrows in a list indicate that you perform one of the actions listed.
(Windows), (Windows only), (Macintosh), or (Macintosh only)  This text indicates that the information applies only to the specified operating system, either Windows or Macintosh OS X.
Bold font  Bold font is primarily used in task instructions to identify user interface items and keyboard sequences.
Italic font  Italic font is used to emphasize certain words and to indicate variables.
Courier Bold font  Courier Bold font identifies text that you type.
Ctrl+key or mouse action  Press and hold the first key while you press the last key or perform the mouse action. For example, Command+Option+C or Ctrl+drag.
| (pipe character)  The pipe character is used in some Avid product names, such as Interplay | Production. In this document, the pipe is used in product names when they are in headings or at their first use in text.
Avid Training Services
For information on courses/schedules, training centers, certifications, courseware, and books, please visit www.avid.com/support and follow the Training links, or call Avid Sales at 800-949-AVID (800-949-2843).
1 Automatic Server Failover Introduction
The Interplay implementation of server failover uses Microsoft® clustering technology. For
background information on clustering technology and links to Microsoft clustering information, see
“Clustering Technology and Terminology” on page 23.
The failover cluster is a system made up of two server nodes and a shared-storage device connected over Fibre Channel. Because both nodes need direct access to the shared-storage device, they must be deployed in the same location. The cluster uses the concept of a "virtual server" to specify groups of resources that fail over together. This virtual server is referred to as a "cluster application" in the failover cluster user interface.
The following diagram illustrates the components of a cluster group, including sample IP addresses.
For a list of required IP addresses and node names, see “List of IP Addresses and Network Names”
on page 28.
(Figure: components of a cluster group, showing the two cluster nodes, the intranet connection, and the Fibre Channel connection to shared storage, with sample IP addresses)
How Server Failover Works
n If you are already using clusters, the Avid Interplay Engine will not interfere with your current setup.
When the Microsoft Cluster service is running on both systems and the server is deployed in cluster
mode, the Interplay Engine and its accompanying services are exposed to users as a virtual server. To
clients, connecting to the clustered virtual Interplay Engine appears to be the same process as
connecting to a single, physical machine. The user or client application does not know which node is
actually hosting the virtual server.
When the server is online, the resource monitor regularly checks its availability and automatically
restarts the server or initiates a failover to the other node if a failure is detected. The exact behavior
can be configured using the Failover Cluster Manager. Because clients connect to the virtual network
name and IP address, which are also taken over by the failover node, the impact on the availability of
the server is minimal.
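Once the Failover Cluster tools described later in this guide are installed, you can observe this behavior from Windows PowerShell. The following is a minimal sketch, assuming the Interplay Engine role appears under the name Avid Workgroup Server and using a hypothetical node name; check the actual names with Get-ClusterGroup first.
# List the clustered roles and the node that currently owns each of them
Get-ClusterGroup
# Move the Interplay Engine role to the other node to watch a controlled failover
Move-ClusterGroup -Name "Avid Workgroup Server" -Node "engine-node2"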
Avid supports a configuration that uses connections to two public networks (VLAN 10 and VLAN 20) on a single switch. The cluster monitors both networks. If one fails, the cluster application stays online and can still be reached over the other network. If the switch fails, both networks monitored by the cluster fail simultaneously and the cluster application goes offline.
Server Failover Configurations
For a high degree of protection against network outages, Avid supports a configuration that uses two
network switches, each connected to a shared primary network (VLAN 30) and protected by a
failover protocol. If one network switch fails, the virtual server remains online through the other
VLAN 30 network and switch.
This document describes a cluster configuration that uses the cluster application supplied with Windows Server 2012 R2 and Windows Server 2016. The process for creating a cluster is similar in these two versions of Microsoft Windows. Any variations in Avid processes related to these two operating systems are noted in this document.
For information about Microsoft Windows Server 2016 Failover Clustering, see:
https://docs.microsoft.com/en-us/windows-server/failover-clustering/failover-clustering-overview
For information about Microsoft Windows Server 2012 Failover Clustering, see:
https://technet.microsoft.com/en-us/library/hh831579.aspx
These configurations refer to multiple virtual networks (VLANS) that are used with ISIS 7000/7500
shared-storage systems. ISIS 5000/5500 and Avid NEXIS® systems typically do not use multiple
VLANS. You can adapt these configurations for use in ISIS 5000/5500 or Avid NEXIS
environments.
Redundant-Switch Configuration
The following diagram illustrates the failover cluster architecture for an Avid ISIS environment that
uses two layer-3 switches. These switches are configured for failover protection through either HSRP
(Hot Standby Router Protocol) or VRRP (Virtual Router Redundancy Protocol). The cluster nodes
are connected to one subnet (VLAN 30), each through a different network switch. If one of the
VLAN 30 networks fails, the virtual server remains online through the other VLAN 30 network and
switch.
n This guide does not describe how to configure redundant switches for an Avid shared-storage network. Configuration information is included in the ISIS Qualified Switch Reference Guide and the Avid NEXIS Network and Switch Guide, which are available for download from the Avid Customer Support Knowledge Base at www.avid.com/onlinesupport.
(Figure: redundant-switch configuration, showing Interplay editing clients, the Interplay Engine cluster nodes, the private network for the heartbeat, the cluster-storage RAID array, and the two layer-3 switches; legend: 1 GB Ethernet connection, Fibre Channel connection)
The following table describes what happens in the redundant-switch configuration as a result of an
outage:
• Hardware failure (CPU, network adapter, memory, cable, or power supply): The cluster detects the outage and triggers failover to the remaining node. The Interplay Engine is still accessible.
• Network switch 1 (VLAN 30) fails: The external switches running VRRP/HSRP detect the outage and make the gateway available as needed. The Interplay Engine is still accessible.
• Network switch 2 (VLAN 30) fails: The external switches running VRRP/HSRP detect the outage and make the gateway available as needed. The Interplay Engine is still accessible.
Dual-Connected Configuration
The following diagram illustrates the failover cluster architecture for an Avid ISIS environment. In
this environment, each cluster node is “dual-connected” to the network switch: one network interface
is connected to the VLAN 10 subnet and the other is connected to the VLAN 20 subnet. If one of the
subnets fails, the virtual server remains online through the other subnet.
(Figure: dual-connected configuration, showing Interplay editing clients, Interplay Engine cluster nodes 1 and 2, the private network for the heartbeat, and the cluster-storage RAID array; legend: 1 GB Ethernet connection, Fibre Channel connection)
The following table describes what happens in the dual-connected configuration as a result of an
outage:
• Hardware failure (CPU, network adapter, memory, cable, or power supply): The cluster detects the outage and triggers failover to the remaining node. The Interplay Engine is still accessible.
• Left ISIS VLAN (VLAN 10) fails: The Interplay Engine is still accessible through the right network.
• Right ISIS VLAN (VLAN 20) fails: The Interplay Engine is still accessible through the left network.
Server Failover Requirements
Hardware
The automatic server failover system was qualified with the following hardware:
• Two servers functioning as nodes in a failover cluster. Avid has qualified a Dell™ server and an
HPE® server with minimum specifications, their equivalent, or better. For more information, see
the Avid MediaCentral | Production Management Dell and HPE Server Support or the Avid
Interplay | Production Dell and HP Server Support documents on the Avid Knowledge Base at:
http://avid.force.com/pkb/articles/en_US/readme/Avid-Interplay-Production-Documentation
On-board network interface connectors (NICs) for these servers are qualified. There is no
requirement for an Intel network card.
• Two Fibre Channel host adapters (one for each server in the cluster).
The ATTO Celerity FC-81EN is qualified for these servers. Other Fibre Channel adapters might
work but have not been qualified. Before using another Fibre Channel adapter, contact the
vendor to check compatibility with the server host, the storage area network (SAN), and most
importantly, a Microsoft failover cluster.
• One of the following
- One Infortrend® S12F-R1440 storage array. For more information, see the Infortrend
EonStor®DS S12F-R1440 Installation and Hardware Reference Manual.
- One HPE MSA 2040 SAN storage array. For more information, see the HPE MSA 2040
Quick Start Instructions, available here:
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03792354
Also see “HPE MSA Reference Information” on page 22.
- One HPE MSA 2050 SAN storage array. For more information, see the HPE MSA 2050/
2052 Quick Start Instructions, available here:
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-a00017714en_us
Also see “HPE MSA Reference Information” on page 22.
The servers in a cluster are connected using one or more cluster shared-storage buses and one or
more physically independent networks acting as a heartbeat.
Server Software
The automatic failover system was qualified on the following operating systems:
• Windows Server 2012 R2 Standard
• Windows Server 2016 Standard
Starting with Interplay Production v3.3, new licenses for Interplay components are managed through
software activation IDs. One license is used for both nodes in an Interplay Engine failover cluster.
For installation information, see “Installing a Permanent License” on page 84.
Starting with Avid MediaCentral Production Management v2018.11, the software installation is
deployed using a different method from prior releases. If you are familiar with prior versions of this
guide, pay close attention to the changes included in this document.
Space Requirements
The default disk configuration for the shared RAID array is as follows:
Antivirus Software
You can run antivirus software on a cluster, if the antivirus software is cluster-aware. For information
about cluster-aware versions of your antivirus software, contact the antivirus vendor. If you are
running antivirus software on a cluster, make sure you exclude these locations from the virus
scanning: Q:\ (Quorum disk), C:\Windows\Cluster, and S:\Workgroup_Databases (database).
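For example, if the nodes run Windows Defender (which is cluster-aware), the exclusions can be added from an elevated PowerShell prompt. This is a sketch for that case only; for other antivirus products, follow the vendor's instructions.
# Exclude the quorum disk, the cluster folder, and the Interplay database folder
Add-MpPreference -ExclusionPath "Q:\"
Add-MpPreference -ExclusionPath "C:\Windows\Cluster"
Add-MpPreference -ExclusionPath "S:\Workgroup_Databases"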
Before you set up a cluster in an Avid Interplay environment, you should be familiar with the
following functions:
• Microsoft Windows Active Directory domains and domain users
• Microsoft Windows clustering for Windows Server (see “Clustering Technology and
Terminology” on page 23)
• Disk configuration (format, partition, naming)
• Network configuration
For information about Avid Networks and Interplay Production, see “Network Requirements for
ISIS/NEXIS” on the Avid Knowledge Base at http://avid.force.com/pkb/articles/en_US/
compatibility/en244197.
Installing the Failover Hardware Components
Slot Locations
Each server requires a fibre channel host adapter to connect to the shared-storage RAID array.
For more information on Avid qualified servers, see the Avid Audio and Video Compatibility Charts
on the Avid Knowledge Base at the following link:
http://avid.force.com/pkb/articles/en_US/compatibility/Avid-Video-Compatibility-Charts
The Avid qualified Dell PowerEdge R630 and R640 servers include three PCIe slots. Avid recommends installing the Fibre Channel host adapter in slot 2, as shown in the following example illustration of a Dell PowerEdge R630.
n The Dell system is designed to detect what type of card is in each slot and to negotiate optimum
throughput. As a result, using slot 2 for the fibre channel host adapter is recommended but not
required. For more information, see the Dell PowerEdge Owner’s Manual.
The Avid qualified HPE ProLiant DL360 Gen9 and Gen10 servers include two or three slots. Avid recommends installing the Fibre Channel host adapter in slot 2, as shown in the following example illustration of an HPE ProLiant DL360 Gen9.
Failover Cluster Connections: Redundant-Switch Configuration
The following illustrations show these connections. The illustrations use the Dell PowerEdge R630
as cluster nodes.
n This configuration refers to a virtual network (VLAN) that is used with ISIS 7000/7500 shared-
storage systems. ISIS 5000/5500 and Avid NEXIS systems typically do not use multiple VLANS. You
can adapt this configuration for use in ISIS 5000/5500 or Avid NEXIS environments.
(Figures: redundant-switch cluster connections for the Infortrend RAID array and for the HPE MSA RAID array, showing the private-network Ethernet connection to node 2, the Fibre Channel connection from each node to the RAID array, and the Dell PowerEdge R630 back panels; legend: 1 GB Ethernet connection, Fibre Channel connection)
Failover Cluster Connections, Dual-Connected Configuration
• First cluster node:
- Network interface connector 2 to the ISIS left subnet (VLAN 10 public network)
- Network interface connector 4 to the ISIS right subnet (VLAN 20 public network)
- Network interface connector 3 to the bottom-left network interface connector on the second
cluster node (private network for heartbeat)
- Fibre Channel connector on the ATTO Celerity FC-81EN card to Fibre Channel connector
Port 1 (top left) on the Infortrend RAID array or the HPE MSA RAID array.
• Second cluster node:
- Network interface connector 2 to the ISIS left subnet (VLAN 10 public network)
- Network interface connector 4 to the ISIS right subnet (VLAN 20 public network)
- Network interface connector 3 to the bottom-left network interface connector on the first
cluster node (private network for heartbeat)
- Fibre Channel connector on the ATTO Celerity FC-81EN card to Fibre Channel connector
Port 2 (bottom, second from left) on the HPE MSA RAID array.
The following illustrations show these connections. The illustrations use the Dell PowerEdge R630
as cluster nodes.
n This configuration refers to virtual networks (VLANs) that are used with ISIS 7000/7500 shared-
storage systems. ISIS 5000/5500 and Avid NEXIS systems typically do not use multiple VLANS. You
can adapt this configuration for use in ISIS 5000/5500 or Avid NEXIS environments.
(Figures: dual-connected cluster connections for the Infortrend RAID array and for the HPE MSA RAID array, showing the private-network Ethernet connection to node 2, the Fibre Channel connection from each node to the RAID array, and the Dell PowerEdge R630 back panels; legend: 1 GB Ethernet connection, Fibre Channel connection)
HPE MSA Reference Information
n As of November 1, 2015, HP servers, storage, and networking products are supported by Hewlett Packard Enterprise (HPE).
Default IP Settings
• Management Port IP Address:
- 10.0.0.2 (controller A)
- 10.0.0.3 (controller B)
• IP Subnet Mask: 255.255.255.0
• Gateway IP Address: 10.0.0.1
You can change these settings to match local networks through the SMU, the Command Line
Interface (CLI), or the MSA Device Discovery Tool DVD that ships with the array.
Hostnames
Hostnames are predefined using the MAC address of the controller adapter, using the following
syntax:
• http://hp-msa-storage-<last 6 digits of mac address>
For example:
• http://hp-msa-storage-1dfcfc
You can find the MAC address through the SMU. Go to Enclosure Overview and click the Network
port. The hostname itself is not displayed in the SMU and cannot be changed.
HPE MSA Support Documentation
Documentation for the HPE MSA 2040 is located on the HPE support site:
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03820042
Documentation for the HPE MSA 2050 is located on the HPE support site:
https://support.hpe.com/hpsc/doc/public/display?docLocale=en_US&docId=emr_na-a00017812en_us
Clustering Technology and Terminology
Here is a brief summary of the major concepts and terms, adapted from the Microsoft Windows Server web site:
• failover cluster: A group of independent computers that work together to increase the availability
of clustered roles (formerly called clustered applications and services). The clustered servers
(called nodes) are connected by physical cables and by software. If one of the nodes fails,
another node begins to provide services (a process known as failover).
• Cluster service: The essential software component that controls all aspects of server cluster or
failover cluster operation and manages the cluster configuration database. Each node in a failover
cluster owns one instance of the Cluster service.
• cluster resources: Cluster components (hardware and software) that are managed by the cluster
service. Resources are physical hardware devices such as disk drives, and logical items such as
IP addresses and applications.
• clustered role: A collection of resources that are managed by the cluster service as a single,
logical unit and that are always brought online on the same node.
• quorum: The quorum for a cluster is determined by the number of voting elements that must be
part of active cluster membership for that cluster to start properly or continue running. By
default, every node in the cluster has a single quorum vote. In addition, a quorum witness (when
configured) has an additional single quorum vote. A quorum witness can be a designated disk
resource or a file share resource.
An Interplay Engine failover cluster uses a disk resource, named Quorum, as a quorum witness.
2 Creating a Microsoft Failover Cluster
This chapter describes the processes for creating a Microsoft failover cluster for automatic server failover. It is crucial that you follow the instructions in this chapter completely; otherwise, automatic server failover will not work.
Instructions for installing the Interplay Engine are provided in “Installing the Interplay | Engine for a
Failover Cluster” on page 66.
n Do not install any other software on the cluster machines except the Interplay Engine. For example,
Media Indexer software needs to be installed on a different server. For complete installation
instructions, see the Interplay | Production Software Installation and Configuration Guide.
Before You Begin the Server Failover Installation
b Make sure all cluster hardware connections are correct. See "Installing the Failover Hardware Components" on page 15.
b Make sure that the site has a network that is qualified to run Active Directory and DNS services (facility staff).
b Create or select domain user accounts for creating and administering the cluster. See "Requirements for Domain User Accounts" on page 27.
b Reserve static IP addresses for all network interfaces and host names. See "List of IP Addresses and Network Names" on page 28.
b If necessary, download the ATTO Configuration Utility. See "Changing Default Settings for the ATTO Card on Each Node" on page 32.
b Make sure the time settings for both nodes are in sync. If not, you must synchronize the times or you will not be able to add both nodes to the cluster. You should also sync the shared-storage array. You can use the Network Time Protocol (NTP). See the operating system documentation and A Guide to Time Synchronization for Avid Interplay Systems on the Avid Knowledge Base.
b Make sure the Remote Registry service is started and is enabled for Automatic startup. Open Server Management and select Configuration > Services > Remote Registry. See the operating system documentation and the PowerShell sketch after this checklist.
b Install a permanent license. A temporary license is installed with the Interplay Engine software. After the installation is complete, install the permanent license. Permanent licenses are supplied in one of two ways:
• As a hardware license that is activated through an application key (dongle).
• As a software license using the Application Manager.
See "Installing a Permanent License" on page 84.
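Two of the checklist items, time synchronization and the Remote Registry service, can be spot-checked from an elevated PowerShell prompt on each node; a minimal sketch:
# Check the Windows Time service synchronization status
w32tm /query /status
# Make sure the Remote Registry service starts automatically and is running
Set-Service -Name RemoteRegistry -StartupType Automatic
Start-Service -Name RemoteRegistry
Get-Service -Name RemoteRegistry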
Requirements for Domain User Accounts
n The tool that allows you to change the Server Execution User has changed for 2018.11. See the Interplay 2018.11 ReadMe for details.
• Cluster installation account: Create or select a domain user account to use during the
installation and configuration process. There are special requirements for the account that you
use for the Microsoft cluster installation and creation process (described below).
- If your site allows you to use an account with the required privileges, you can use this
account throughout the entire installation and configuration process.
- If your site does not allow you to use an account with the required privileges, you can work
with the site’s IT department to use a domain administrator’s account only for the Microsoft
cluster creation steps. For other tasks, you can use a domain user account without the
required privileges.
In addition, the account must have administrative permissions on the servers that will become
cluster nodes. You can do this by adding the account to the local Administrators group on each of
the servers that will become cluster nodes.
Requirements for Microsoft cluster creation: To create a user with the necessary rights for
Microsoft cluster creation, you need to work with the site’s IT department to access Active
Directory (AD). Depending on the account policies of the site, you can grant the necessary rights
for this user in one of the following ways:
- Create computer objects for the failover cluster (virtual host name) and the Interplay Engine
(virtual host name) in the Active Directory (AD) and grant the user Full Control on them. In
addition, the failover cluster object needs Full Control over the Interplay Engine object. For
examples, see “List of IP Addresses and Network Names” on page 28.
The accounts for these objects must be disabled so that, when the Create Cluster wizard and the Interplay Engine installer run, they can confirm that the account to be used for the cluster is not currently in use by an existing computer or cluster in the domain. The cluster creation process then enables the entry in the AD. (A sketch for pre-creating these disabled objects from PowerShell follows this list.)
- Make the user a member of the Domain Administrators group. There are fewer manual steps
required when using this type of account.
- Grant the user the permissions “Create Computer objects” and “Read All Properties” in the
container in which new computer objects get created, such as the computer’s Organizational
Unit (OU).
For more information, see the Avid Knowledge Base article “How to Prestage Cluster Name
Object and Virtual Interplay Engine Name” at http://avid.force.com/pkb/articles/en_US/
How_To/How-to-prestage-cluster-name-object-and-virtual-Interplay-Engine-name. This article
references the Microsoft article “Failover Cluster Step-by-Step Guide: Configuring Accounts in
Active Directory” at http://technet.microsoft.com/en-us/library/cc731002%28WS.10%29.aspx
• Cluster administration account: Create or select a user account for logging in to and
administering the failover cluster server. Depending on the account policies of your site, this
account could be the same as the cluster installation account, or it can be a different domain user
account with administrative permissions on the servers that will become cluster nodes.
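If your IT department pre-stages the computer objects, the disabled objects can also be created with the Active Directory PowerShell module. The following is a minimal sketch using hypothetical virtual host names and an example OU; the Full Control permissions described above still have to be granted in Active Directory Users and Computers.
Import-Module ActiveDirectory
# Create disabled computer objects for the failover cluster and the Interplay Engine
# virtual host names (example names and OU; substitute your own)
New-ADComputer -Name "iecluster" -Path "OU=Interplay,DC=example,DC=com" -Enabled $false
New-ADComputer -Name "ieengine" -Path "OU=Interplay,DC=example,DC=com" -Enabled $false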
n Make sure that these IP addresses are outside of the range that is available to DHCP so they cannot
automatically be assigned to other machines.
n All names must be valid and unique network host names. A hostname must comply with RFC 952 standards. For example, you cannot use an underscore in a hostname. For more information, see "Naming Conventions in Active Directory for Computers, Domains, Sites, and OUs" on the Microsoft Support Knowledge Base.
The following table provides a list of example names that you can use when configuring the cluster
for a redundant-switch configuration. You can fill in the blanks with your choices to use as a
reference during the configuration process.
The following table provides a list of example names that you can use when configuring the cluster for a dual-connected configuration. Fill in the blanks to use as a reference.
a. Entries are dynamically added to the DNS when the node logs on to Active Directory.
b. If you manually created Active Directory entries for the Microsoft failover cluster and Interplay Engine cluster
role, make sure to disable the entries in Active Directory in order to build the Microsoft failover cluster (see
“Requirements for Domain User Accounts” on page 27).
c. Add reverse static entries only. Forward entries are dynamically added by the failover cluster. Static entries
must be exempted from scavenging rules.
Preparing the Server for the Failover Cluster
The tasks in this section do not require the administrative privileges needed for Microsoft cluster creation (see "Requirements for Domain User Accounts" on page 27).
n The ATTO Celerity FC-81EN is qualified for Dell and HPE servers. Other Fibre Channel adapters
supported by Dell and HPE are also supported for an Interplay Engine cluster. This guide does not
contain information about the configuration of these cards; the default factory settings should work
correctly. If the SAN drives are accessible on both nodes, and if the failover cluster validation
succeeds, the adapters are configured correctly.
You need to download the ATTO drivers and the ATTO Configuration Tool from the ATTO web site and install them on the server. You must register to download tools and drivers.
To download and install the ATTO Configuration Tool for the FC-81EN card:
1. Go to the 8Gb Celerity HBAs Downloads page and download the ATTO Configuration Tool:
https://www.attotech.com/downloads/70/
Scroll down several pages to find the Windows ConfigTool (currently version 4.22).
2. Double-click the downloaded file win_app_configtool_422.exe, then click Run.
3. Extract the files.
4. Locate the folder to which you extracted the files and double-click ConfigTool_422.exe.
5. Follow the system prompts for a Full Installation.
Then locate, download and install the appropriate driver. The current version for the Celerity FC-
81EN is version 1.85.
You need to use the ATTO Configuration Tool to change some default settings on each node in the
cluster.
3. Type the user name and password for a local administrator account and click Login.
4. In the Device Listing tree, navigate to the appropriate channel on your host adapter.
5. Click the NVRAM tab.
n No other Windows server settings need to be changed. Later, you need to add features for clustering.
See “Installing the Failover Clustering Features” on page 49.
3. In the Advanced tab, in the Performance section, click the Settings button.
4. In the Performance Options dialog box, click the Advanced tab.
5. In the Processor scheduling section, for “Adjust for best performance of,” select Programs.
6. Click OK.
7. In the System Properties dialog box, click OK.
Configuring Local Software Firewalls
Make sure any local software firewalls used in a failover cluster, such as Symantec Endpoint Protection (SEP), are configured to allow IPv6 communication and IPv6 over IPv4 communication.
n The Windows Firewall service must be enabled for proper operation of a failover cluster. Note that enabling the service is different from enabling or disabling the firewall itself or its firewall rules.
Currently the SEP Firewall does not support IPv6. Allow this communication in the SEP Manager by editing the corresponding built-in firewall rules.
Renaming the Local Area Network Interface on Each Node
c Avid recommends that both nodes use identical network interface names. Although you can use any name for the network connections, Avid suggests that you use the naming conventions provided in the table in the following procedure.
n One way to find out which hardware port matches which Windows device name is to plug in
sequentially one network cable into each physical port and check in the Network Connections dialog
which device becomes connected.
(Figure: Dell PowerEdge R630 back panel showing the network connector locations for a dual-connected configuration)
(Table: each network connector as labeled on the server, the new name for a redundant-switch configuration, the new name for a dual-connected configuration, and the corresponding Windows device name)
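If you prefer to rename the interfaces from PowerShell instead of the Network Connections dialog, the following sketch shows the pattern for a dual-connected node; the original adapter names are examples and differ from server to server, so list them first.
# Show the current adapter names, descriptions, link state, and MAC addresses
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, Status, MacAddress
# Rename the adapters to the suggested names (original names are examples)
Rename-NetAdapter -Name "Ethernet 2" -NewName "Left"
Rename-NetAdapter -Name "Ethernet 4" -NewName "Right"
Rename-NetAdapter -Name "Ethernet 3" -NewName "Private"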
Configuring the Private Network Adapter on Each Node
6. On the General tab of the Internet Protocol (TCP/IP) Properties dialog box:
a. Select “Use the following IP address.”
b. IP address: type the IP address for the Private network connection for the node you are
configuring. See “List of IP Addresses and Network Names” on page 28.
n When performing this procedure on the second node in the cluster, make sure you assign a static private IP address unique to that node. In this example, node 1 uses 192.168.100.1 and node 2 uses 192.168.100.2.
n Make sure you use a completely different IP address scheme from the one used for the public
network.
d. Make sure the “Default gateway” and “Use the Following DNS server addresses” text boxes
are empty.
7. Click Advanced.
The Advanced TCP/IP Settings dialog box opens.
8. On the DNS tab, make sure no values are defined and that the “Register this connection’s
addresses in DNS” and “Use this connection’s DNS suffix in DNS registration” are not selected.
9. On the WINS tab, do the following:
t Make sure no values are defined in the WINS addresses area.
t Make sure “Enable LMHOSTS lookup” is selected.
t Select “Disable NetBIOS over TCP/IP.”
10. Click OK.
A message might be displayed stating "This connection has an empty primary WINS address. Do you want to continue?" Click Yes.
11. Repeat this procedure on node 2, using the static private IP address for that node.
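The same settings can be applied from PowerShell. The following is a minimal sketch for node 1, assuming the private adapter has been renamed Private and the private network uses a 255.255.255.0 subnet mask; node 2 would use 192.168.100.2.
# Assign the static private (heartbeat) address with no gateway and no DNS servers
New-NetIPAddress -InterfaceAlias "Private" -IPAddress 192.168.100.1 -PrefixLength 24
# Do not register the private connection in DNS
Set-DnsClient -InterfaceAlias "Private" -RegisterThisConnectionsAddress $false
# Disable NetBIOS over TCP/IP on the private adapter (2 = disable)
$guid = (Get-NetAdapter -Name "Private").InterfaceGuid
(Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "SettingID='$guid'").SetTcpipNetbios(2)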
Configuring the Binding Order Networks on Each Node
5. In the Connections area, use the arrow controls to position the network connections in the
following order:
- For a redundant-switch configuration, use the following order:
- Public
- Private
- For a dual-connected configuration, use the following order, as shown in the illustration:
- Left
- Right
- Private
6. Click OK.
7. Repeat this procedure on node 2 and make sure the configuration matches on both nodes.
Configuring the Public Network Adapter on Each Node
Avid recommends that you disable IPv6 for the public network adapters.
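A minimal PowerShell sketch, assuming the public adapters use the names suggested earlier (Public for a redundant-switch configuration, or Left and Right for a dual-connected configuration):
# Unbind IPv6 from a public adapter; repeat for each public adapter on each node
Disable-NetAdapterBinding -Name "Public" -ComponentID ms_tcpip6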
Configuring the Cluster Shared-Storage RAID Disks on Each Node
The first procedure describes how to configure disks for the Infortrend array, which contains three
disks. The second procedure describes how to configure disks for the HPE MSA array, which
contains two disks.
3. If the disks are offline, right-click Disk 1 (in the left column) and select Online. Repeat this
action for Disk 3. Do not bring Disk 2 online.
4. If the disks are not already initialized, right-click Disk 1 (in the left column) and select Initialize
Disk.
The Initialize Disk dialog box opens.
Select Disk 1 and Disk 3 and make sure that MBR is selected. Click OK.
5. Use the New Simple Volume wizard to configure the disks as partitions. Right-click each disk,
select New Simple Volume, and follow the instructions in the wizard.
Use the following names and drive letters, depending on your storage array:
n If you need to change the drive letter after running the wizard, right-click the drive letter in the right column and select Change Drive Letter or Path. A warning might tell you that some programs that rely on drive letters might not run correctly and ask if you want to continue; click Yes.
The following illustration shows Disk 1 and Disk 3 with the required names and drive letters for
the Infortrend S12F-R1440:
6. Verify you can access the disk and that it is working by creating a file and deleting it.
7. Shut down the first node and start the second node.
8. On the second node, bring the disks online and assign drive letters. You do not need to initialize
or format the disks.
a. Open the Disk Management tool, as described in step 2.
b. Bring Disk 1 and Disk 3 online, as described in step 3.
c. Right-click a partition, select Change Drive Letter, and enter the appropriate letter.
t Right-click Start, click search, type Disk, and select “Create and format hard disk
partitions.”
The Disk Management window opens. The following illustration shows the shared storage drives
labeled Disk 1 and Disk 2. In this example they are initialized and formatted, but offline.
3. If the disks are offline, right-click Disk 1 (in the left column) and select Online. Repeat this
action for Disk 2.
4. If the disks are not already initialized, right-click Disk 1 (in the left column) and select Initialize
Disk.
The Initialize Disk dialog box opens.
Select Disk 1 and Disk 2 and make sure that MBR is selected. Click OK.
5. Use the New Simple Volume wizard to configure the disks as partitions. Right-click each disk,
select New Simple Volume, and follow the instructions in the wizard.
(Table: disk names and drive letters for the HPE MSA 2040 and MSA 2050)
n If you need to change the drive letter after running the wizard, right-click the drive letter in the right column and select Change Drive Letter or Path. A warning might tell you that some programs that rely on drive letters might not run correctly and ask if you want to continue; click Yes.
The following illustration shows Disk 1 and Disk 2 with the required names and drive letters.
6. Verify you can access the disk and that it is working by creating a file and deleting it.
7. Shut down the first node and start the second node.
8. On the second node, bring the disks online and assign drive letters. You do not need to initialize
or format the disks.
a. Open the Disk Management tool, as described in step 2.
b. Bring Disk 1 and Disk 2 online, as described in step 3.
c. Right-click a partition, select Change Drive Letter, and enter the appropriate letter.
d. Repeat these actions for the other partitions.
9. Boot the first node.
10. Open the Disk Management tool to make sure that the disks are still online and have the correct
drive letters assigned.
At this point, both nodes should be running.
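The disk preparation can also be scripted with the Windows storage cmdlets. The following is a minimal sketch for the first node of an HPE MSA configuration; the disk numbers, drive letters, and volume labels are examples based on the antivirus-exclusion paths earlier in this guide, so confirm them in Disk Management before adapting anything like this.
# Bring the shared disks online and clear the read-only flag
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false
Set-Disk -Number 2 -IsOffline $false
Set-Disk -Number 2 -IsReadOnly $false
# Initialize each disk as MBR, create one partition, and format it with a label
Initialize-Disk -Number 1 -PartitionStyle MBR
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter Q |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum"
Initialize-Disk -Number 2 -PartitionStyle MBR
New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter S |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Databases"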
Configuring the Failover Cluster
1. Join both servers to the Active Directory domain. See "Joining Both Servers to the Active Directory Domain" on page 49.
2. Install the Failover Clustering features. See "Installing the Failover Clustering Features" on page 49.
3. Create the failover cluster. See "Creating the Failover Cluster" on page 53.
4. Rename the cluster networks. See "Renaming the Cluster Networks in the Failover Cluster Manager" on page 58.
5. Rename the Quorum disk. See “Renaming the Quorum Disk” on page 60.
6. For a dual-connected configuration, add a second IP address. See “Adding a Second IP Address
to the Cluster” on page 61.
7. Test the failover. See “Testing the Cluster Installation” on page 65.
c Creating the failover cluster requires an account with particular administrative privileges. For
more information, see “Requirements for Domain User Accounts” on page 27.
Installing the Failover Clustering Features
6. Make sure “Select a server from the server pool” is selected. Then select the server on which you
are working and click Next.
The Server Roles screen is displayed. Two File and Storage Services are installed. No additional
server roles are needed. Make sure that “Application Server” is not selected.
9. Make sure “Include management tools (if applicable)” is selected, then click Add Features.
The Features screen is displayed again.
10. Verify that the following two features have been added:
- Failover Cluster Management Tools
- Failover Cluster Module for Windows PowerShell
n In previous releases you were instructed to add the Failover Cluster Command Interface feature.
Microsoft has deprecated the feature and it is no longer needed by the installation.
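If you prefer to script this step, the same feature set can be installed with a single PowerShell command, run on each node; a minimal sketch:
# Install Failover Clustering plus its management tools
# (Failover Cluster Manager and the PowerShell module)
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools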
Creating the Failover Cluster
The Create Cluster Wizard opens with the Before You Begin window.
5. Review the information and click Next (you will validate the cluster in a later step).
6. In the Select Servers window, type the simple computer name of node 1 and click Add. Then
type the computer name of node 2 and click Add. The Cluster Wizard checks the entries and, if
the entries are valid, lists the fully qualified domain names in the list of servers, as shown in the
following illustration:
c If you cannot add the remote node to the cluster, and receive an error message “Failed to
connect to the service manager on <computer-name>,” check the following:
- Make sure that the time settings for both nodes are in sync.
- Make sure that the login account is a domain account with the required privileges.
- Make sure the Remote Registry service is enabled.
For more information, see “Before You Begin the Server Failover Installation” on page 25.
7. Click Next.
The Validation Warning window opens.
8. Select Yes and click Next several times. When you can select a testing option, select Run All
Tests.
The automatic cluster validation tests begin. The tests take approximately five minutes. After
running these validation tests and receiving notification that the cluster is valid, you are eligible
for technical support from Microsoft.
The following tests display warnings, which you can ignore:
- List Software Updates (Windows Update Service is not running)
- Validate Storage Spaces Persistent Reservation
- Validate All Drivers Signed
- Validate Software Update Levels (Windows Update Service is not running)
9. In the Access Point for Administering the Cluster window, type a name for the cluster, then click
in the Address text box and enter an IP address. This is the name you created in the Active
Directory (see “Requirements for Domain User Accounts” on page 27).
If you are configuring a dual-connected cluster, you need to add a second IP address after
renaming and deleting cluster disks. This procedure is described in “Adding a Second IP Address
to the Cluster” on page 61.
10. Click Next.
A message informs you that the system is validating settings. At the end of the process, the
Confirmation window opens.
11. Review the information. Make sure “Add all eligible storage to the cluster” is selected. If all
information is correct, click Next.
The Create Cluster Wizard creates the cluster. At the end of the process, a Summary window
opens and displays information about the cluster.
You can click View Report to see a log of the entire cluster creation.
12. Click Finish.
Now when you open the Failover Cluster Manager, the cluster you created and information about
its components are displayed, including the networks available to the cluster (cluster networks).
To view the networks, select Networks in the list on the left side of the window.
The following illustration shows components of a cluster in a redundant-switch environment.
Cluster Network 1 is a public network (Cluster and Client) connecting to one of the redundant
switches, and Cluster Network 2 is a private, internal network for the heartbeat (Cluster only).
If you are configuring a dual-connected cluster, three networks are listed. Cluster Network 1 and
Cluster Network 2 are external networks connected to VLAN 10 and VLAN 20 on Avid ISIS,
and Cluster Network 3 is a private, internal network for the heartbeat.
n This configuration refers to the virtual networks (VLANs) that are used with ISIS 7000/7500 shared-storage systems. ISIS 5000/5500 and Avid NEXIS systems typically do not use multiple VLANs. You can adapt this configuration for use in ISIS 5000/5500 or Avid NEXIS environments.
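You can also list the cluster networks from PowerShell; a minimal sketch:
# Show each cluster network with its role (1 = cluster only, 3 = cluster and client) and subnet.
Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask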
n The installer asks for this name later in the installation process, so make a note of the name. Avid recommends that you use the suggested names to make it easier to upgrade or troubleshoot the system at a later date.
5. Click OK.
6. If you are configuring a dual-connected cluster, rename Cluster Network 2, using the name Right. For this network, keep the option “Allow clients to connect through this network” selected. Click OK.
7. Rename the other network Private. This network is used for the heartbeat. For this private
network, leave the option “Allow clients to connect through this network” unchecked. Click OK.
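The renaming and the client-access settings can also be scripted. The following sketch uses the network names from the steps above; adjust the “Cluster Network n” numbers to match your configuration:
# Rename the networks (dual-connected example; a redundant-switch configuration has only two networks).
(Get-ClusterNetwork "Cluster Network 2").Name = "Right"
(Get-ClusterNetwork "Cluster Network 3").Name = "Private"
# Role 3 allows cluster and client traffic; Role 1 restricts the network to heartbeat (cluster-only) traffic.
(Get-ClusterNetwork "Right").Role = 3
(Get-ClusterNetwork "Private").Role = 1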
2. Right-click the disk assigned to “Disk Witness in Quorum” and select Properties.
4. Click OK.
If a network is not enabled, right-click the network, select Properties, and select “Allow clients to
connect through this network.”
2. In the Failover Cluster Manager, select the failover cluster by clicking on the Cluster name in the
left column.
3. In the Actions panel (right column), select Properties in the Name section.
5. Click Apply.
A confirmation box asks you to confirm that all cluster nodes need to be restarted. You will
restart the nodes later in this procedure, so select Yes.
6. Click the Dependencies tab and check if the new IP address was added with an OR conjunction.
If the second IP address is not there, click “Click here to add a dependency.” Select “OR” from
the list in the AND/OR column and select the new IP address from the list in the Resource
column.
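The dependency can also be inspected or set from PowerShell; the resource names below are the typical defaults and might differ on your cluster:
# Show the current dependency expression for the cluster name resource.
Get-ClusterResourceDependency -Resource "Cluster Name"
# Make the name depend on either IP address (OR), substituting your actual IP address resource names.
Set-ClusterResourceDependency -Resource "Cluster Name" -Dependency "[Cluster IP Address] or [Cluster IP Address 2]"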
3 Installing the Interplay | Engine for a
Failover Cluster
After you set up and configure the cluster, you need to install the Interplay Engine software on both
nodes. The following topics describe installing the Interplay Engine and other related tasks:
• Disabling Any Web Servers
• Installing the Interplay | Engine on the First Node
• Installing the Interplay | Engine on the Second Node
• Bringing the Interplay | Engine Online
• After Installing the Interplay | Engine
• Creating an Interplay | Production Database
• Testing the Complete Installation
• Installing a Permanent License
• Updating a Clustered Installation (Rolling Upgrade)
• Uninstalling the Interplay Engine or Archive Engine on a Clustered System
The tasks in this chapter require local administrator rights to the Interplay Engine servers. Unlike the
process for “Requirements for Domain User Accounts” on page 27, domain administrator privileges
are not required.
n In a standard installation, you should not be required to take any action as IIS is disabled by default
in Windows Server 2012 R2 and Windows Server 2016.
n When installing the Interplay Engine for the first time on a machine with a failover cluster, you are
asked to verify the type of installation: cluster or single-server. When you install a cluster, the
installation on the second node reuses the configuration information from the first node without
allowing you to change the cluster-specific settings. In other words, it is not possible to change the
cluster configuration settings without uninstalling the Interplay Engine.
If the shared database drive is not under cluster control, do the following:
• Use the first procedure to attempt to add the shared database drive to the cluster via the Cluster Manager.
• If that is not successful, use the second procedure to make the S: drive available in Windows Disk Management. The shared database drive will then not be under cluster control when the Engine installer starts, but the Engine installer will try to add it to the cluster.
Using the Cluster Manager to add the shared database drive to the cluster:
1. In the Failover Cluster Manager select cluster_name > Storage > Disks.
2. Click on “Add Disk” in the Actions tab.
3. In the “Add Disks to Cluster” dialog, the database disk should be offered (it will have a Resource Name like “Cluster Disk <number>”) and should already be checked, as shown in the following illustration. Click OK to add it.
If an Infortrend storage array is used, the dialog might offer two disks. In that case, select only the database disk (the larger one).
4. The disk will then be displayed in the Failover Cluster Manager and it should already be online.
If it is not online, select it and click “Bring Online” in the Actions menu.
Note that if this procedure does not bring the shared database drive online, use the alternate
procedure described below.
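The same steps can be performed from PowerShell; a sketch, assuming the database disk is the only disk the cluster offers:
# List disks that are visible to the cluster but not yet under cluster control.
Get-ClusterAvailableDisk
# Add the disk to the cluster and bring the new disk resource online.
$disk = Get-ClusterAvailableDisk | Add-ClusterDisk
$disk | Start-ClusterResource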
Using the Windows Disk Management to bring the shared database drive online:
1. Note that this procedure is only necessary if the first procedure did not bring the shared database
drive online.
2. On the first node, open Disk Management by doing one of the following:
t Right-click This PC and select Manage. From the Tools menu, select Computer
Management. In the Computer Management list, select Storage > Disk Management.
t Right-click Start, click search, type Disk, and select “Create and format hard disk
partitions.”
The Disk Management window opens. The following illustration shows the shared storage drives
labeled Disk 1 and Disk 2. Disk 1 is online, and Disk 2 is offline.
4. Make sure the drive letter is correct (S:) and the drive is named Database. If not, you must
change it now. Right-click the disk name and letter (right column) and select Change Drive
Letter or Path.
If you attempt to change the drive letter, you receive a warning that tells you that some programs that rely on drive letters might not run correctly and asks if you want to continue. Click Yes.
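The Disk Management steps can also be scripted with the Storage cmdlets. This is a sketch only; the disk number 2 is an example and must be replaced with the number that Disk Management shows for the shared database disk:
# Bring the shared disk online and clear the read-only flag.
Set-Disk -Number 2 -IsOffline $false
Set-Disk -Number 2 -IsReadOnly $false
# Assign the S: drive letter to its data partition and label the volume Database.
Get-Partition -DiskNumber 2 | Where-Object { $_.Type -ne "Reserved" } | Set-Partition -NewDriveLetter S
Set-Volume -DriveLetter S -NewFileSystemLabel "Database"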
Also be aware that the Avid software installer does not verify values after you enter them. If you enter the wrong subnet for the cluster, for example, the installer allows you to proceed. The one exception to this rule is the password for the Server Execution User. The installer prompts you for the SEU password twice and verifies that you entered the same password in both instances.
n If you are installing MediaCentral Production Management v2018.11 or later on Windows Server
2012, the installer prompts you to install one or more prerequisite Windows components. Follow the
prompts to install these prerequisites and reboot the server when prompted.
The installer opens a new PowerShell command window and displays the Production
Management Engine Installer Welcome screen. During the installation, the command window
displays information about the installation process.
c Avoid marking (highlighting) any text in the PowerShell command window. If you mark any
text in this window, you will pause the installation process. If you accidentally mark text, you
must click once anywhere inside the command window to resume the installation.
The Server Execution User is the Windows domain user that runs the Interplay Engine. This
account is automatically added to the Local Administrators group on the server. See “Before You
Begin the Server Failover Installation” on page 25.
c When typing the domain name do not use the full DNS name such as mydomain.company.com,
because the DCOM part of the server will be unable to start. You should use the NetBIOS
name, for example, mydomain.
15. Type the password for the cluster account (Server Execution User) specified above and click
Next.
16. Retype the password for the cluster account and click Next.
The Production installer verifies that this password matches the password that you entered in the
previous step. If it does not match, you are returned to the previous step where you must enter
and reconfirm your password again.
17. In the following screen, the installer prompts you to specify the path for the Production database.
Accept the default path and click Next.
The default path is: S:\Workgroup_Databases
This folder must reside on the shared drive that is owned by the cluster role of the server. You
must use this shared drive resource so that it can be monitored and managed by the Cluster
service. The drive must be assigned to the physical drive resource that is mounted under the same
drive letter on both nodes.
18. The installer asks if you want to enable the Interplay SNMP (Simple Network Management
Protocol) service.
t If you do not need to enable SNMP, keep the default selection of No and click Next.
t If you need to enable SNMP, click the Yes button and click Next.
The installer verifies that the local SNMP service is installed on the engine. The installer
does not verify that the SNMP service is configured or running, only that it is installed. If
you click Yes and Windows SNMP is not installed, a second window appears to confirm that
you still wish to install the Interplay SNMP service.
For more information on configuring Production Management with SNMP, contact Avid
Customer Care.
19. The installer asks if you want to install the Sentinel USB Dongle Driver.
t If your Production Engine is licensed using a software license only (no dongle), keep the
default selection of No and click Next.
t If your Production Engine is licensed using USB dongles that are attached directly to each
Production Engine, click the Yes button and click Next.
The USB driver is installed automatically for you during the Production Engine installation
process.
20. The installer presents a confirmation window that details the information that you specified in
the steps above.
t If you see an error, click the Cancel button to exit the installer.
In this case, you must restart the installation process from the beginning.
t If the information is correct, click the Start button to begin the installation process.
As shown in the following illustration, the PowerShell command window that was opened
when you first initiated the installation process begins to provide feedback about the
installation tasks.
If you see any errors during the installation process, you can review the logs under
<drive>\<path to Production installer>\Engineinstaller\Logs for more information.
n If the system displays the following warning message, you can ignore the message and continue with
the installation.
WARNING: The properties were stored, but not all changes will take effect until Avid
Workgroup Disk is taken offline and then online again.
21. At the end of the installation process, you should see an “Installation finished” message as in the
following illustration.
Click inside the command window and press any key to close the window.
The Avid Workgroup Disk resources, Server Name, and File Server should be online and all
other resources offline. S$ and WG_Database$ should be listed in the Shares tab.
Take one of the following steps:
- If you are setting up a redundant-switch configuration, leave this node running so that it
maintains ownership of the cluster role and proceed to “Installing the Interplay | Engine on
the Second Node” on page 81.
- If you are setting up a dual-connected configuration, proceed to “Adding a Second IP
Address (Dual-Connected Configurations only)” on page 75.
n Avid does not recommend starting the server at this stage, because it is not installed on the other
node and a failover would be impossible.
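You can confirm the resource states and the shares from PowerShell on the active node; the names below are the defaults created by the installer:
# Check which Avid Workgroup resources are online.
Get-ClusterResource | Where-Object Name -like "Avid Workgroup*" | Format-Table Name, State, OwnerGroup
# Confirm that the S$ and WG_Database$ shares exist.
Get-SmbShare -Name 'S$','WG_Database$'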
c Note that the Resource Name is listed as “Avid Workgroup Name.” Make sure to check the Resource Name after adding the second IP address and bringing the resources online in step 9.
If the Kerberos Status is offline, you can continue with the procedure. After bringing the server
online, the Kerberos Status should be OK.
6. Click the Add button below the IP Addresses list.
The IP Address dialog box opens.
8. Check that you entered the IP address correctly, then click Apply.
9. Click the Dependencies tab and check that the second IP address was added, with an OR in the
AND/OR column.
11. Bring the Name, both IP addresses, and the File Server resource online by doing one of the
following:
- Right-click the resource and select “Bring Online.”
- Select the resources and select “Bring Online” in the Actions panel.
The following illustration shows the resources online.
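Step 11 can also be performed from PowerShell; the role and resource names below are the defaults and might differ:
# List the resources in the Avid Workgroup Server role and their state.
Get-ClusterGroup -Name "Avid Workgroup Server" | Get-ClusterResource | Format-Table Name, State
# Bring the server name resource online; repeat Start-ClusterResource for the IP address and File Server resources.
Start-ClusterResource -Name "Avid Workgroup Name"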
The Resource Name must be listed as “Avid Workgroup Name.” If it is not, see “Changing the
Resource Name of the Avid Workgroup Server (if applicable)” on page 79.
13. Leave this node running so that it maintains ownership of the cluster role and proceed to
“Installing the Interplay | Engine on the Second Node” on page 81.
Changing the Resource Name of the Avid Workgroup Server (if applicable)
If you find that the resource name of the Avid Workgroup Server application is not “Avid Workgroup
Name” (as displayed in the properties for the Server Name), you need to change the name in the
Windows registry.
c If you are installing a dual-connected cluster, make sure to edit the “Cluster” key. Do not edit
other keys that include the word “Cluster,” such as the “0.Cluster” key.
2. Browse through the GUID named subkeys looking for the one subkey where the value “Type” is
set to “Network Name” and the value “Name” is set to <incorrect_name>.
3. Change the value “Name” to “Avid Workgroup Name.”
4. Do the following to shut down the cluster:
c Make sure you have edited the registry entry before you shut down the cluster.
a. In the Failover Cluster Manager tree (left panel) select the cluster. In the following example,
the cluster name is muc-vtlasclu1.VTL.local.
b. In the context menu or the Actions panel on the right side, select “More Actions > Shutdown
Cluster.”
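A read-only PowerShell sketch for locating the subkey described in step 2; it assumes the cluster database is mounted at the default HKLM:\Cluster path and does not change any values:
# List each resource subkey (GUID) with its Type and Name values and filter for the network name resource.
Get-ChildItem "HKLM:\Cluster\Resources" |
    Get-ItemProperty |
    Where-Object { $_.Type -eq "Network Name" } |
    Select-Object PSChildName, Name, Type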
Installing the Interplay | Engine on the Second Node
c Do not move the cluster role over to the second node, and do not shut down the first node while the second node is up, until the installation is completed on the second node.
c Do not attempt to initiate a failover before installation is completed on the second node and you
create an Interplay database. See “Testing the Complete Installation” on page 84.
2. Perform the installation procedure for the second node as described in “Installing the Interplay |
Engine on the First Node” on page 66 and note the following differences:
- When you are prompted to select either Cluster or Standalone mode, select Cluster.
If you select the Standalone option and you are on a cluster, the Engine installer detects that
you have a partially installed cluster configuration and prevents you from proceeding with
the single-server (standalone) installation.
- After you click Next in the Specify Installation Mode window, the installer pulls all configuration information from the first node and displays a confirmation window as shown in the following illustration.
c Make sure that you specify the same values for the second node as you entered on the first node.
Using different values results in a corrupted installation.
c If you receive a message that the Avid Workgroup Name resource was not found, you need to
check the registry. See “Changing the Resource Name of the Avid Workgroup Server (if
applicable)” on page 79.
Bringing the Interplay | Engine Online
After Installing the Interplay | Engine
n If you cannot log in or connect to the Interplay Engine, make sure the database share
WG_Database$ exists. You might get the following error message when you try to log in: “The
network name cannot be found (0x80070043).”
n If this is a completely fresh installation (without a pre-existing database), the only database user is "Administrator" with an empty password.
2. In the Database section of the Interplay Administrator window, click the Create Database icon.
The Create Database view opens.
3. In the New Database Information area, leave the default “AvidWG” in the Database Name text
box. For an archive database, leave the default “AvidAM.” These are the only two supported
database names.
4. Type a description for the database in the Description text box, such as “Main Production
Server.”
5. Select “Create default Avid Interplay structure.”
After the database is created, a set of default folders within the database are visible in Interplay
Access and other Interplay clients. For more information about these folders, see the
Interplay | Access User’s Guide.
6. Keep the root folder for the New Database Location (Meta Data).
The metadata database must reside on the Interplay Engine server.
7. Keep the root folder for the New Data Location (Assets).
8. Click Create to create directories and files for the database.
The Interplay database is created.
Testing the Complete Installation
n If you want to test the Microsoft cluster failover process again, see “Testing the Cluster Installation”
on page 65.
n A failure of a resource does not necessarily initiate failover of the complete Avid Workgroup Server
role.
3. You might also want to experiment by terminating the Interplay Engine manually using the
Windows Task Manager (NxNServer.exe). This is also a good way to get familiar with the
failover settings which can be found in the Properties dialog box of the Avid Workgroup Server
and on the Policies tab in the Properties dialog box of the individual resources.
4. Look at the related settings of the Avid Workgroup Server. If you need to change any
configuration files, make sure that the Avid Workgroup Disk resource is online; the configuration
files can be found on the resource drive in the Workgroup_Data folder.
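As an alternative to the Windows Task Manager mentioned in step 3, the engine process can be terminated from PowerShell; treat this as a test-only sketch for a non-production system:
# Forcibly end the Interplay Engine server process to observe the restart and failover policies.
Stop-Process -Name "NxNServer" -Force
# Watch the resource states while the cluster reacts.
Get-ClusterResource | Where-Object Name -like "Avid Workgroup*" | Format-Table Name, State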
Starting with Interplay Production v3.3, new licenses for Interplay components are managed through
software activation IDs. In previous versions, licenses were managed through hardware application
keys (dongles). Dongles continue to be supported for existing licenses, but new licenses require
software licensing.
There are no special requirements to activate or deactivate a node before licensing. Log in
directly to each node and use the local version of the Avid License Control application or Avid
Application Manager (for Interplay Engine v3.8 and later) to install the license.
• As a file with the extension .nxn on a USB flash drive or another delivery mechanism
For hardware licensing (dongle), these permanent licenses must match the Hardware ID of the
dongle. After installation, the license information is stored in a Windows registry key. Licenses
for an Interplay Engine failover cluster are associated with two Hardware IDs.
n You can copy the license file from the USB flash drive. The advantage of copying the license file to a
server is that you have easy access to installer files if you should ever need them in the future.
For more information on managing licenses, see the Interplay | Engine and Interplay | Archive
Engine Administration Guide.
n For information about updating specific versions of the Interplay Engine and a cluster, see the Avid
Interplay ReadMe. The ReadMe describes an alternative method of updating a cluster, in which you
lock and deactivate the database before you begin the update.
Updating a Clustered Installation (Rolling Upgrade)
Starting in 2018.11, there is no longer a Typical installation mode. When updating a clustered
installation, Avid recommends that you use the default settings presented by the installer. These
settings represent the values that your system administrator entered during the original installation or
previous upgrade. Avid highly recommends that some settings remain unchanged – such as the name
of the database folder. However, if you need to change other settings such as your cluster account
(Server Execution User) or your SNMP selection, now would be a good time to do so. If you decide
to change any settings, you must make sure that the same information is entered on both nodes.
Make sure you follow the procedure in this order; otherwise, you might end up with a corrupted installation.
To update a cluster:
1. On either node, determine which node is active:
a. Right-click My Computer and select Manage. The Server Manager window opens.
b. In the Server Manager list, open Features and click Failover Cluster Manager.
c. Click Roles.
d. On the Summary tab, check the name of the Owner Node.
n Starting with Interplay Production v2018.11, you are not required to restart the node following the
software upgrade. However if you have another reason to reboot your upgraded node at this time, it
is safe to do so.
c Do not move the Avid Workgroup Server to the second node yet.
3. Make sure that the first node is active. Run the Interplay Engine installer to update the installation on the first node. Accept the parameters suggested by the installer so that all values are reused.
4. The installer displays a dialog box that displays the following message:
“To proceed with the installation, the installer will now trigger a failover to the offline node."
5. Click OK in the dialog box to continue.
After completing the above steps, your entire clustered installation is updated to the new version.
Should you encounter any complications or face a specialized situation, contact Avid Support as
instructed in “If You Need Help” on page 8.
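Step 1 can also be answered from PowerShell on either node; the role name below is the default created by the installer:
# Show which node currently owns the Avid Workgroup Server role.
Get-ClusterGroup -Name "Avid Workgroup Server" | Select-Object Name, OwnerNode, State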
Uninstalling the Interplay Engine or Archive Engine on a Clustered System
4 Automatic Server Failover Tips and Rules
This chapter provides some important tips and rules to use when configuring the automatic server
failover.
Don't access the Interplay Engine database directly through the individual machines (nodes) of the
cluster. Use the virtual network name or IP address that has been assigned to the Interplay Engine
resource group (see “List of IP Addresses and Network Names” on page 28).
The Interplay Engine must be installed on the local disk of the cluster nodes and not on a shared
resource. This is because local changes are also necessary on both machines. Also, with independent
installations you can later use a rolling upgrade approach, upgrading each node individually without
affecting the operation of the cluster. The Microsoft documentation also strongly advises against installing on shared disks.
If you edit the registry on the offline node or on the online node while the Avid Workgroup Monitor
is offline, you will lose your changes. This is something that most likely will happen to you since it is
very easy to forget the implications of the registry replication. Remember that the registry is restored
by the resource monitor before the process is put online, thereby wiping out any changes that you
made while the resource (the server) was offline. Only changes that take place while the resource is
online are accepted.
If you are performing changes that could make the Avid Interplay Engine fail, consider disabling
failover. The default behavior is to restart the server twice (threshold = 3) and then initiate the
failover, with the entire procedure repeating several times before final failure. This can take quite a
while.
You can change your Central Configuration Server (CCS) using the Interplay Administrator tool. Alternatively, if you cannot log in to the database, you can use the following procedure to change the CCS via Registry settings.
If you specify the wrong CCS, you can change the setting later on the server machine in the Windows Registry under:
The string value CMS specifies the server. Make sure to set the CMS to a valid entry while the
Interplay Engine is online, otherwise your changes to the registry won't be effective. After the
registry is updated, stop and restart the server using the Cluster Administrator (in the Administration
Tools folder in Windows).
Specifying an incorrect CCS can prevent login. See “Troubleshooting Login Problems” in the
Interplay | Engine and Interplay | Archive Engine Administration Guide.
For more information, see “Understanding the Central Configuration Server” in the
Interplay | Engine and Interplay | Archive Engine Administration Guide.
A Configuring the HPE MSA 2050
Use these instructions to configure the HPE MSA 2050 storage array. To configure the array, use the
HPE Storage Management Utility (SMU).
n For complete information about the SMU, see the HPE document MSA 1050/2050 SMU Reference
Guide, located here:
https://support.hpe.com/hpsc/doc/public/display?docId=a00017707en_us
2. Type the user name and password and click Sign In.
Creating a Disk Group and Disk Volumes
7. Click Add.
A progress bar is displayed. At the end of the process, a success message is displayed. Click OK.
4. To add information for the second volume, click Add Row and specify the following:
a. For Volume Name, enter “Quorum”.
b. For Size, enter “10 GB”.
5. Click OK.
A progress bar is displayed. At the end of the process, a success message is displayed. Click OK.
The Map button changes to Reset and the mapped Quorum volume is listed.
5. Make sure the Mode is set to read-write, the LUN is set to 0, and all four Ports are selected. Then
click OK.
A confirmation message is displayed. Click Yes. A progress bar is displayed. At the end of the
process, a success message is displayed. Click OK.
6. To map the Databases volume, specify the following information:
a. In the left column, click “All Other Initiators”
b. In the right column, click “Databases.”
7. Click the Map button.
The Map button changes to Reset and the mapped Databases volume is listed.
8. Make sure the Mode is set to read-write, the LUN is set to 1, and all four Ports are selected. Click
OK.
A confirmation message is displayed. Click Yes. A progress bar is displayed. At the end of the
process, a success message is displayed. Click OK.
Both volumes are displayed as mapped.
B Expanding the Database Volume for an
Interplay Engine Cluster
This document describes how to add drives to the HPE MSA 2040 storage array to expand the drive
space available for the Interplay Production database. The procedure is described in the following
topics:
• Before You Begin
• Task 1: Add Drives to the MSA Storage Array
• Task 2: Expand the Databases Volume Using the HPE SMU (Version 2)
• Task 2: Expand the Databases Volume Using the HPE SMU (Version 3)
• Task 3: Extend the Databases Volume in Windows Disk Management
n You can adapt these instructions for the HPE MSA 2050. Use Version 3 of the Storage Management
Utility.
The HPE MSA firmware includes two different versions of the SMU (version 2 and version 3).
This document includes instructions for using either version:
- “Task 2: Expand the Databases Volume Using the HPE SMU (Version 2)” on page 98
- “Task 2: Expand the Databases Volume Using the HPE SMU (Version 3)” on page 103
Task 2: Expand the Databases Volume Using the HPE SMU (Version 2)
3. Supply the user name and password and click Sign In.
4. In the Configuration View, select Physical > Enclosure 1.
The following illustration shows the five additional drives, labeled AVAIL.
5. In the Configuration View, right-click the Vdisk (named dg01 in the illustration) and select
Tools > Expand Vdisk.
Another message box tells you that expansion of the Vdisk was started. Click OK.
The process of adding the new paired drives to the RAID 10 Vdisk begins. This process can take
approximately 2.5 hours. When the process is complete, the SMU displays the additional space
as unallocated (green in the following illustration).
9. In the Configuration View, right-click Volume Databases and select Tools > Expand Volume.
10. On the Expand Volume page, select the entire amount of available space, then click Expand
Volume.
At the end of the process, the expanded Vdisk and Databases Volume are displayed.
Task 2: Expand the Databases Volume Using the HPE SMU (Version 3)
The following illustration shows the five additional drives, labeled SAS but without the gray
highlight.
The dialog box enlarges to show the disk group and the available disks.
The SMU automatically creates two new mirrored sub-groups, named RAID1-4 and RAID1-5.
8. For each new RAID group, assign two of the available disks:
t For RAID1-4, click the first two side-by-side disks.
t For RAID1-5, click the next two side-by-side disks.
Leave one disk as a spare.
The following illustration shows these assignments.
9. Click Modify.
A message box describes how the expansion can take a significant amount of time and asks you
to confirm the operation. Click Yes.
Another message box tells you that the disk group was successfully modified. Click OK.
The process of adding the new paired drives to the RAID 10 Vdisk begins. This process can take
approximately 2.5 hours. You can track the progress on the Pools page, in the Related Disk
Groups section, under Current Job.
When the process is complete, the SMU displays the additional space as available. Note the
amount of available space, which you will need to enter in the Modify Volume dialog box.
12. In the Modify Volume dialog box, type the available space exactly as displayed on the Pools
page (in this example, 599.4GB) and click OK.
At the end of the process, the new size of the expanded Databases Volume is displayed on the
Volumes page.
The new size of the disk group is also displayed on the Pools page.
Task 3: Extend the Databases Volume in Windows Disk Management
4. Click Next.
The Completing page is displayed.
5. Click Finish.
The Database volume is extended.
6. Close the Disk Management window and the Computer Management window.
7. Perform a cluster failover.
The expansion is complete and the Interplay Database has the new space available. You can
check the size of the Database disk (Avid Workgroup Disk) in the Failover Cluster Manager.
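The extension can also be performed with the Storage cmdlets instead of the Disk Management wizard; a sketch, run on the node that owns the Avid Workgroup Disk:
# Determine the maximum size the S: partition can grow to, then extend it to that size.
$max = (Get-PartitionSupportedSize -DriveLetter S).SizeMax
Resize-Partition -DriveLetter S -Size $max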
C Adding Storage for File Assets for an
Interplay Engine Cluster
This document describes how to add drives to the HPE MSA 2040 storage array to expand the drive
space available for the Interplay Production database’s file assets. The procedure is described in the
following topics:
• Before You Begin
• Task 1: Add Drives to the MSA Storage Array
• Task 2: Create a Disk and Volume Using the HPE SMU V3
• Task 3: Initialize the Volume in Windows Disk Management
• Task 4: Add the Disk to the Failover Cluster Manager
• Task 5: Copy the File Assets to the New Drive
• Task 6: Mount the FileAssets Partition in the _Master Folder
• Task 7: Create Cluster Dependencies for the New Disk
n You can adapt these instructions for the HPE MSA 2050.
The new disk group uses 2 drives at RAID Level 1.
• Schedule a convenient time to perform the expansion. You need to bring the Interplay Engine
offline during the process, so this procedure is best performed during a maintenance window.
The configuration itself will take approximately one hour, with the engine offline for
approximately 5 to 15 minutes. In addition, allow time for the copying of file assets, which
depends on the number of file assets in the database.
• Decide if you want to allocate the entire drive space to file assets, or reserve space for snapshots
or future expansion. See “Task 2: Create a Disk and Volume Using the HPE SMU V3” on
page 114.
• Make sure you have the following complete, recent backups:
- Interplay database, created through the Interplay Administrator
- _Master folder (file assets), created through a backup utility.
The Interplay Administrator does not have a backup mechanism for the _Master folder.
• Make sure you can access the HPE Storage Management Utility (SMU).
The HPE SMU is a web-based application. To access the SMU, the storage array must be connected to a LAN through at least one of its Ethernet ports. The following are default settings for accessing the application:
- IP address: http://10.0.0.2
- User name: manage
- Password: !manage
Check if these settings have been changed by an administrator.
The HPE MSA firmware includes two different versions of the SMU (version 2 and version 3).
This document includes instructions for using SMU version 3.
Task 2: Create a Disk and Volume Using the HPE SMU V3
3. In the navigation bar on the left side of the screen, click System and select View System.
The following illustration shows the three additional drives, labeled MDL, which is an HPE
name for a “midline” drive. Click the drive to display disk information.
d. Click Add.
A progress bar is displayed. At the end of the process, a success message is displayed. Click
OK. The new disk group is displayed. If you select the name, information is displayed in the
Related Disk Groups section.
d. Click OK.
A progress bar is displayed. At the end of the process, a success message is displayed. Click OK.
The new volume is added to the list of volumes.
4. Click Apply. A confirmation dialog is displayed. Click Yes. At the end of the process, a success
message is displayed. Click OK.
The Volumes page shows the new volume fully configured.
Task 3: Initialize the Volume in Windows Disk Management
5. Use the New Simple Volume wizard to configure the volume as a partition.
a. Right-click the new disk.
b. Select New Simple Volume.
The New Simple Volume wizard opens with the Specify Volume Size page.
9. Click Finish.
At the end of the process the new disk is named and online.
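The New Simple Volume wizard steps can also be scripted; the disk number, drive letter, and label below are examples and must match what Disk Management shows on your system:
# Find the new, uninitialized disk.
Get-Disk | Where-Object PartitionStyle -eq "RAW"
# Initialize it, create one partition that uses all available space, and format it.
Initialize-Disk -Number 3 -PartitionStyle GPT
New-Partition -DiskNumber 3 -UseMaximumSize -DriveLetter L |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "FileAssets"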
Task 4: Add the Disk to the Failover Cluster Manager
n The disk does not need to be initialized, because the initialization was done on Node 1.
e. Click OK.
A confirmation box asks if you want to continue. Click Yes.
The new disk is now named and online on Node 2.
11. Close Disk Management on Node 2.
c This task and all remaining tasks must be performed on the online node.
The new disk, named Avid Workgroup File Assets, is listed as storage for the Avid Workgroup
Server.
10. Keep Failover Cluster Manager open for use in Tasks 6 and 7.
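The same result can be reached from PowerShell; the disk and role names below follow the guide's conventions but should be verified against your cluster:
# Add the new disk to the cluster, rename the resource, and move it into the Avid Workgroup Server role.
$disk = Get-ClusterAvailableDisk | Add-ClusterDisk
$disk.Name = "Avid Workgroup File Assets"
Move-ClusterResource -Name "Avid Workgroup File Assets" -Group "Avid Workgroup Server"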
Task 5: Copy the File Assets to the New Drive
c This task and all remaining tasks must be performed on the online node.
Copying the existing file assets folder is likely to be the most time-consuming part of the
configuration process, depending on the size of the folder. The Interplay Engine can remain running during the copying process, but best practice is to perform this copy during a maintenance
window. The following illustration shows the contents of the _Master folder, which holds the file
assets.
Avid recommends using a copy program such as Robocopy, which preserves timestamps for the file assets and, through the /E parameter, copies subfolders including empty ones. The following procedure uses Robocopy, executed from a command line.
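The exact command used in the procedure is not reproduced here. As a minimal sketch, assuming the file assets live under S:\Workgroup_Databases\AvidWG\_Master (adjust to your database path) and the new FileAssets volume is temporarily lettered L:, a Robocopy call could look like this:
# /E copies subfolders, including empty ones; /COPY:DAT and /DCOPY:T preserve file and folder timestamps.
robocopy "S:\Workgroup_Databases\AvidWG\_Master" "L:\" /E /COPY:DAT /DCOPY:T /R:3 /W:5 /LOG:C:\Temp\robocopy_master.log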
Task 6: Mount the FileAssets Partition in the _Master Folder
c You must take the Engine services offline for this task. The other resources, especially the disk
resources, must stay online.
c This task and all remaining tasks must be performed on the online node.
7. In the Change Drive letter and Paths dialog box, select drive letter L: and click Remove.
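The drive-letter removal and the mount point can also be handled with the Storage cmdlets. This sketch assumes the FileAssets partition currently has the letter L: and uses an example path for the _Master folder; the mount-point folder must already exist and be empty:
# Capture the FileAssets partition while it still has the L: drive letter.
$part = Get-Partition -DriveLetter L
# Remove the temporary drive letter, then mount the partition in the _Master folder.
$part | Remove-PartitionAccessPath -AccessPath "L:\"
$part | Add-PartitionAccessPath -AccessPath "S:\Workgroup_Databases\AvidWG\_Master"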
Task 7: Create Cluster Dependencies for the New Disk
9. Now bring the cluster back online. Select Avid Workgroup Server, and in the Actions list, click
Start Role.
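You can verify the role's resources and bring it back online from PowerShell as well; the role name is the default created by the installer:
# List the resources in the role and their state, then start the role (equivalent to Start Role in step 9).
Get-ClusterGroup -Name "Avid Workgroup Server" | Get-ClusterResource | Format-Table Name, State
Start-ClusterGroup -Name "Avid Workgroup Server"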