Interplay® | Engine

Failover Guide
Version 2018.11
Legal Notices
Product specifications are subject to change without notice and do not represent a commitment on the part of Avid Technology, Inc.

This product is subject to the terms and conditions of a software license agreement provided with the software. The product may only be
used in accordance with the license agreement.

This product may be protected by one or more U.S. and non-U.S. patents. Details are available at www.avid.com/patents.

This guide is protected by copyright. This guide is for your personal use and may not be reproduced or distributed, in whole or in part,
without permission of Avid. Reasonable care has been taken in preparing this guide; however, it may contain omissions, technical
inaccuracies, or typographical errors. Avid Technology, Inc. disclaims liability for all losses incurred through the use of this document.
Product specifications are subject to change without notice.

Copyright © 2019 Avid Technology, Inc. and its licensors. All rights reserved.

The following disclaimer is required by Apple Computer, Inc.:


APPLE COMPUTER, INC. MAKES NO WARRANTIES WHATSOEVER, EITHER EXPRESS OR IMPLIED, REGARDING THIS
PRODUCT, INCLUDING WARRANTIES WITH RESPECT TO ITS MERCHANTABILITY OR ITS FITNESS FOR ANY PARTICULAR
PURPOSE. THE EXCLUSION OF IMPLIED WARRANTIES IS NOT PERMITTED BY SOME STATES. THE ABOVE EXCLUSION MAY
NOT APPLY TO YOU. THIS WARRANTY PROVIDES YOU WITH SPECIFIC LEGAL RIGHTS. THERE MAY BE OTHER RIGHTS THAT
YOU MAY HAVE WHICH VARY FROM STATE TO STATE.

The following disclaimer is required by Sam Leffler and Silicon Graphics, Inc. for the use of their TIFF library:
Copyright © 1988–1997 Sam Leffler
Copyright © 1991–1997 Silicon Graphics, Inc.

Permission to use, copy, modify, distribute, and sell this software [i.e., the TIFF library] and its documentation for any purpose is hereby
granted without fee, provided that (i) the above copyright notices and this permission notice appear in all copies of the software and
related documentation, and (ii) the names of Sam Leffler and Silicon Graphics may not be used in any advertising or publicity relating to
the software without the specific, prior written permission of Sam Leffler and Silicon Graphics.

THE SOFTWARE IS PROVIDED “AS-IS” AND WITHOUT WARRANTY OF ANY KIND, EXPRESS, IMPLIED OR OTHERWISE,
INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

IN NO EVENT SHALL SAM LEFFLER OR SILICON GRAPHICS BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT OR
CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER OR NOT ADVISED OF THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

The following disclaimer is required by the Independent JPEG Group:


This software is based in part on the work of the Independent JPEG Group.

This Software may contain components licensed under the following conditions:
Copyright (c) 1989 The Regents of the University of California. All rights reserved.

Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are
duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use
acknowledge that the software was developed by the University of California, Berkeley. The name of the University may not be used to
endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED ``AS
IS'' AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

Copyright (C) 1989, 1991 by Jef Poskanzer.

Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in
supporting documentation. This software is provided "as is" without express or implied warranty.

Copyright 1995, Trinity College Computing Center. Written by David Chappell.

Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in
supporting documentation. This software is provided "as is" without express or implied warranty.

Copyright 1996 Daniel Dardailler.

Permission to use, copy, modify, distribute, and sell this software for any purpose is hereby granted without fee, provided that the above
copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation,
and that the name of Daniel Dardailler not be used in advertising or publicity pertaining to distribution of the software without specific,
written prior permission. Daniel Dardailler makes no representations about the suitability of this software for any purpose. It is provided "as
is" without express or implied warranty.

Modifications Copyright 1999 Matt Koss, under the same license as above.

Copyright (c) 1991 by AT&T.

Permission to use, copy, modify, and distribute this software for any purpose without fee is hereby granted, provided that this entire notice
is included in all copies of any software which is or includes a copy or modification of this software and in all copies of the supporting
documentation for such software.

THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED WARRANTY. IN PARTICULAR, NEITHER
THE AUTHOR NOR AT&T MAKES ANY REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY
OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.

This product includes software developed by the University of California, Berkeley and its contributors.

The following disclaimer is required by Paradigm Matrix:


Portions of this software licensed from Paradigm Matrix.

The following disclaimer is required by Ray Sauers Associates, Inc.:


“Install-It” is licensed from Ray Sauers Associates, Inc. End-User is prohibited from taking any action to derive a source code equivalent of
“Install-It,” including by reverse assembly or reverse compilation, Ray Sauers Associates, Inc. shall in no event be liable for any damages
resulting from reseller’s failure to perform reseller’s obligation; or any damages arising from use or operation of reseller’s products or the
software; or any other damages, including but not limited to, incidental, direct, indirect, special or consequential Damages including lost
profits, or damages resulting from loss of use or inability to use reseller’s products or the software for any reason including copyright or
patent infringement, or lost data, even if Ray Sauers Associates has been advised, knew or should have known of the possibility of such
damages.

The following disclaimer is required by Videomedia, Inc.:


“Videomedia, Inc. makes no warranties whatsoever, either express or implied, regarding this product, including warranties with respect to
its merchantability or its fitness for any particular purpose.”

“This software contains V-LAN ver. 3.0 Command Protocols which communicate with V-LAN ver. 3.0 products developed by Videomedia,
Inc. and V-LAN ver. 3.0 compatible products developed by third parties under license from Videomedia, Inc. Use of this software will allow
“frame accurate” editing control of applicable videotape recorder decks, videodisc recorders/players and the like.”

The following disclaimer is required by Altura Software, Inc. for the use of its Mac2Win software and Sample Source
Code:
©1993–1998 Altura Software, Inc.

The following disclaimer is required by Interplay Entertainment Corp.:


The “Interplay” name is used with the permission of Interplay Entertainment Corp., which bears no responsibility for Avid products.

This product includes portions of the Alloy Look & Feel software from Incors GmbH.

This product includes software developed by the Apache Software Foundation (http://www.apache.org/).

© DevelopMentor

This product may include the JCifs library, for which the following notice applies:
JCifs © Copyright 2004, The JCIFS Project, is licensed under LGPL (http://jcifs.samba.org/). See the LGPL.txt file in the Third Party
Software directory on the installation CD.

Avid Interplay contains components licensed from LavanTech. These components may only be used as part of and in connection with Avid
Interplay.

Attn. Government User(s). Restricted Rights Legend


U.S. GOVERNMENT RESTRICTED RIGHTS. This Software and its documentation are “commercial computer software” or “commercial
computer software documentation.” In the event that such Software or documentation is acquired by or on behalf of a unit or agency of the
U.S. Government, all rights with respect to this Software and documentation are subject to the terms of the License Agreement, pursuant
to FAR §12.212(a) and/or DFARS §227.7202-1(a), as applicable.

Trademarks
Avid, the Avid Logo, Avid Everywhere, Avid DNXHD, Avid DNXHR, Avid NEXIS, AirSpeed, Eleven, EUCON, Interplay, iNEWS, ISIS, Mbox,
MediaCentral, Media Composer, NewsCutter, Pro Tools, ProSet and RealSet, Maestro, PlayMaker, Sibelius, Symphony, and all related
product names and logos, are registered or unregistered trademarks of Avid Technology, Inc. in the United States and/or other countries.
The Interplay name is used with the permission of the Interplay Entertainment Corp. which bears no responsibility for Avid products. All
other trademarks are the property of their respective owners. For a full list of Avid trademarks, see: http://www.avid.com/US/about-avid/legal-notices/trademarks.

Footage
Eco Challenge Morocco — Courtesy of Discovery Communications, Inc.
News material provided by WFTV Television Inc.
Ice Island — Courtesy of Kurtis Productions, Ltd.

Interplay | Engine Failover Guide • Created July 19, 2019 • This document is distributed by Avid in online (electronic)
form only, and is not available for purchase in printed form.

Contents

Revision History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Symbols and Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
If You Need Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Avid Training Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Chapter 1 Automatic Server Failover Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Server Failover Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
How Server Failover Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Server Failover Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Server Failover Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Installing the Failover Hardware Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Slot Locations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Failover Cluster Connections: Redundant-Switch Configuration . . . . . . . . . . . . . . . . . . . 16
Failover Cluster Connections, Dual-Connected Configuration . . . . . . . . . . . . . . . . . . . . 18
HPE MSA Reference Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
HPE MSA Storage Management Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
HPE MSA Command Line Interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
HPE MSA Support Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Clustering Technology and Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Chapter 2 Creating a Microsoft Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Server Failover Installation Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Before You Begin the Server Failover Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Requirements for Domain User Accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
List of IP Addresses and Network Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Active Directory and DNS Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Preparing the Server for the Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Configuring the ATTO Fibre Channel Card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Changing Windows Server Settings on Each Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Configuring Local Software Firewalls. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Renaming the Local Area Network Interface on Each Node . . . . . . . . . . . . . . . . . . . . . . 35
Configuring the Private Network Adapter on Each Node . . . . . . . . . . . . . . . . . . . . . . . . . 38
Configuring the Binding Order Networks on Each Node . . . . . . . . . . . . . . . . . . . . . . . . . 40
Configuring the Public Network Adapter on Each Node . . . . . . . . . . . . . . . . . . . . . . . . . 41
Configuring the Cluster Shared-Storage RAID Disks on Each Node. . . . . . . . . . . . . . . . 42
Configuring the Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Joining Both Servers to the Active Directory Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Installing the Failover Clustering Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Creating the Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Renaming the Cluster Networks in the Failover Cluster Manager . . . . . . . . . . . . . . . . . . 58
Renaming the Quorum Disk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Adding a Second IP Address to the Cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Testing the Cluster Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Chapter 3 Installing the Interplay | Engine for a Failover Cluster . . . . . . . . . . . . . . . . . . . . 66
Disabling Any Web Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Installing the Interplay | Engine on the First Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Preparation for Installing on the First Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Bringing the Shared Database Drive Online if Necessary . . . . . . . . . . . . . . . . . . . . . . . . 67
Installing the Interplay Engine Software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Checking the Status of the Cluster Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Adding a Second IP Address (Dual-Connected Configurations only) . . . . . . . . . . . . . . . 75
Changing the Resource Name of the Avid Workgroup Server (if applicable) . . . . . . . . . 79
Installing the Interplay | Engine on the Second Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Bringing the Interplay | Engine Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
After Installing the Interplay | Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Creating an Interplay | Production Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Testing the Complete Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Installing a Permanent License. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Updating a Clustered Installation (Rolling Upgrade). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Uninstalling the Interplay Engine or Archive Engine on a Clustered System . . . . . . . . . . . . . 87
Chapter 4 Automatic Server Failover Tips and Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Appendix A Configuring the HPE MSA 2050 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Before You Begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Creating a Disk Group and Disk Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Appendix B Expanding the Database Volume for an Interplay Engine Cluster . . . . . . . . . . 97
Before You Begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Task 1: Add Drives to the MSA Storage Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Task 2: Expand the Databases Volume Using the HPE SMU (Version 2) . . . . . . . . . . . . . . . 98
Task 2: Expand the Databases Volume Using the HPE SMU (Version 3) . . . . . . . . . . . . . . 103
Task 3: Extend the Databases Volume in Windows Disk Management . . . . . . . . . . . . . . . . 109
Appendix C Adding Storage for File Assets for an Interplay Engine Cluster . . . . . . . . . . . 112
Before You Begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Task 1: Add Drives to the MSA Storage Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Task 2: Create a Disk and Volume Using the HPE SMU V3 . . . . . . . . . . . . . . . . . . . . . . . . 114

Task 3: Initialize the Volume in Windows Disk Management . . . . . . . . . . . . . . . . . . . . . . . . 120
Task 4: Add the Disk to the Failover Cluster Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Task 5: Copy the File Assets to the New Drive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Task 6: Mount the FileAssets Partition in the _Master Folder . . . . . . . . . . . . . . . . . . . . . . . 127
Task 7: Create Cluster Dependencies for the New Disk. . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

Using This Guide

Congratulations on the purchase of Interplay | Production, a powerful system for managing media in
a shared storage environment.

This guide is intended for all Interplay Production administrators who are responsible for installing,
configuring, and maintaining an Interplay | Engine with the Automatic Server Failover module
integrated. The information in this guide applies to Avid MediaCentral Production Management
v2018.11 and later, running Windows Server 2012 R2 or Windows Server 2016.

Revision History
Date Revised: July 2019
Changes Made: This update adds references to the Dell PowerEdge R640 and the HPE ProLiant DL360 Gen10 servers.

c Although this document includes information on Windows Server 2012 R2 and Windows Server 2016, these servers are supported with Windows Server 2016 only. For more information, see the Production Server and Operating System Support document on the Avid Knowledge Base at: http://avid.force.com/pkb/articles/en_US/compatibility/Avid-Video-Compatibility-Charts.

Date Revised: December 2018
Changes Made: First publication for Interplay 2018.11.

The Avid software installation process described in this document differs from that found in prior versions of the Interplay Engine Failover Guide. See the Interplay 2018.11 ReadMe for a list of changes. If you are familiar with prior versions of this guide, pay close attention to changes in this document to ensure a successful system installation.

Symbols and Conventions


Avid documentation uses the following symbols and conventions:

Symbol or Convention: Meaning or Action

n: A note provides important related information, reminders, recommendations, and strong suggestions.

c: A caution means that a specific action you take could cause harm to your computer or cause you to lose data.

w: A warning describes an action that could cause you physical harm. Follow the guidelines in this document or on the unit itself when handling electrical equipment.

>: This symbol indicates menu commands (and subcommands) in the order you select them. For example, File > Import means to open the File menu and then select the Import command.

(arrow symbol): This symbol indicates a single-step procedure. Multiple arrows in a list indicate that you perform one of the actions listed.

(Windows), (Windows only), (Macintosh), or (Macintosh only): This text indicates that the information applies only to the specified operating system, either Windows or Macintosh OS X.

Bold font: Bold font is primarily used in task instructions to identify user interface items and keyboard sequences.

Italic font: Italic font is used to emphasize certain words and to indicate variables.

Courier Bold font: Courier Bold font identifies text that you type.

Ctrl+key or mouse action: Press and hold the first key while you press the last key or perform the mouse action. For example, Command+Option+C or Ctrl+drag.

| (pipe character): The pipe character is used in some Avid product names, such as Interplay | Production. In this document, the pipe is used in product names when they are in headings or at their first use in text.

If You Need Help


If you are having trouble using your Avid product:
1. Retry the action, carefully following the instructions given for that task in this guide. It is
especially important to check each step of your workflow.
2. Check the latest information that might have become available after the documentation was
published. You should always check online for the most up-to-date release notes or ReadMe
because the online version is updated whenever new information becomes available. To view
these online versions, select ReadMe from the Help menu, or visit the Knowledge Base at
www.avid.com/support.
3. Check the documentation that came with your Avid application or your hardware for
maintenance or hardware-related issues.
4. Visit the online Knowledge Base at www.avid.com/support. Online services are available 24
hours per day, 7 days per week. Search this online Knowledge Base to find answers, to view
error messages, to access troubleshooting tips, to download updates, and to read or join online
message-board discussions.

Avid Training Services


Avid makes lifelong learning, career advancement, and personal development easy and convenient.
Avid understands that the knowledge you need to differentiate yourself is always changing, and Avid
continually updates course content and offers new training delivery methods that accommodate your
pressured and competitive work environment.

For information on courses/schedules, training centers, certifications, courseware, and books, please
visit www.avid.com/support and follow the Training links, or call Avid Sales at 800-949-AVID
(800-949-2843).

1 Automatic Server Failover Introduction

This chapter covers the following topics:


• Server Failover Overview
• How Server Failover Works
• Server Failover Configurations
• Server Failover Requirements
• Installing the Failover Hardware Components
• HPE MSA Reference Information
• Clustering Technology and Terminology

Server Failover Overview


The automatic server failover mechanism in Avid Interplay maintains client access to the Interplay
Engine in the event of failures or during maintenance, with minimal impact on availability. A
failover server is activated in the event of application, operating system, or hardware failures. The
server can be configured to notify the administrator about such failures by email.

The Interplay implementation of server failover uses Microsoft® clustering technology. For
background information on clustering technology and links to Microsoft clustering information, see
“Clustering Technology and Terminology” on page 23.

c Additional monitoring of the hardware and software components of a high-availability solution
is always required. Avid delivers Interplay preconfigured, but additional attention on the customer
side is required to prevent an outage (for example, when a private network fails, a RAID disk fails,
or a power supply loses power). In a mission-critical environment, monitoring tools and tasks are
needed to ensure there are no silent failures. If an unmonitored component fails, only an event is
generated; while this does not interrupt availability, it might go unnoticed and lead to problems
later. Additional software that reports such issues to the IT administration lowers downtime risk.

The failover cluster is a system made up of two server nodes and a shared-storage device connected
over Fibre Channel. Both nodes must be deployed in the same location because they share access to
the storage device. The cluster uses the concept of a “virtual server” to specify groups of resources
that fail over together. This virtual server is referred to as a “cluster application” in the failover
cluster user interface.

The following diagram illustrates the components of a cluster group, including sample IP addresses.
For a list of required IP addresses and node names, see “List of IP Addresses and Network Names”
on page 28.
[Diagram: Cluster Group. The resource groups (the cluster application) consist of the Failover Cluster (sample intranet address 11.22.33.200) and the Interplay Server (11.22.33.201). Clustered services run on Node #1 (Intranet: 11.22.33.44, Private: 10.10.10.10) and Node #2 (Intranet: 11.22.33.45, Private: 10.10.10.11), which are connected to the intranet and joined by the private network. The disk resources (shared disks), the Quorum disk and the Database disk, are attached over Fibre Channel.]

n If you are already using clusters, the Avid Interplay Engine will not interfere with your current setup.

How Server Failover Works


Server failover works on three different levels:
• Failover in case of hardware failure
• Failover in case of network failure
• Failover in case of software failure

Hardware Failover Process

When the Microsoft Cluster service is running on both systems and the server is deployed in cluster
mode, the Interplay Engine and its accompanying services are exposed to users as a virtual server. To
clients, connecting to the clustered virtual Interplay Engine appears to be the same process as
connecting to a single, physical machine. The user or client application does not know which node is
actually hosting the virtual server.

When the server is online, the resource monitor regularly checks its availability and automatically
restarts the server or initiates a failover to the other node if a failure is detected. The exact behavior
can be configured using the Failover Cluster Manager. Because clients connect to the virtual network
name and IP address, which are also taken over by the failover node, the impact on the availability of
the server is minimal.
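
The following is a minimal sketch (not part of the Avid software) of the client-side view of this behavior: a client resolves and connects to the virtual network name, so after a failover the same name resolves to whichever node currently hosts the role. The host name and port shown here are placeholders, not the names or ports used by an actual Interplay Engine installation.

    import socket
    import time

    VIRTUAL_NAME = "interplay-engine"   # placeholder: the engine's virtual network name
    PORT = 80                           # placeholder port, not an actual Interplay service port

    def connect_to_engine(retries=5, delay=10):
        """Resolve the virtual name and open a TCP connection, retrying across a failover."""
        for attempt in range(1, retries + 1):
            try:
                address = socket.gethostbyname(VIRTUAL_NAME)  # resolves to the active node
                with socket.create_connection((address, PORT), timeout=5):
                    print(f"Connected to {VIRTUAL_NAME} ({address}) on attempt {attempt}")
                    return True
            except OSError as error:
                print(f"Attempt {attempt} failed ({error}); retrying in {delay} seconds")
                time.sleep(delay)
        return False

    if __name__ == "__main__":
        connect_to_engine()

Because the retries target the name rather than a physical node, a brief failover appears to the client as nothing more than a delayed connection.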

Network Failover Process

Avid supports a configuration that uses connections to two public networks (VLAN 10 and VLAN
20) on a single switch. The cluster monitors both networks. If one fails, the cluster application stays
online and can still be reached over the other network. If the switch fails, both networks monitored
by the cluster will fail simultaneously and the cluster application will go offline.


For a high degree of protection against network outages, Avid supports a configuration that uses two
network switches, each connected to a shared primary network (VLAN 30) and protected by a
failover protocol. If one network switch fails, the virtual server remains online through the other
VLAN 30 network and switch.

These configurations are described in the next section.

Windows Server Versions

This document describes a cluster configuration that uses the cluster application supplied with
Windows Server 2012 R2 and Windows Server 2016. The process to create a cluster in these two
versions of Microsoft Windows is similar. Any variations to Avid processes related to these two
operating systems are detailed in this document.

For information about Microsoft Windows Server 2016 Failover Clustering, see: https://docs.microsoft.com/en-us/windows-server/failover-clustering/failover-clustering-overview

For information about Microsoft Windows Server 2012 Failover Clustering, see: https://technet.microsoft.com/en-us/library/hh831579.aspx

Server Failover Configurations


The following sections describe two supported configurations for integrating a failover cluster into an
existing network:
• A cluster in an Avid ISIS® environment that is integrated into the intranet through two layer-3
switches (VLAN 30 in Zone 3). This “redundant-switch” configuration protects against both
hardware and network outages and thus provides a higher level of protection than the dual-
connected configuration.
• A cluster in an Avid ISIS environment that is integrated into the intranet through two public
networks (VLAN 10 and VLAN 20 in Zone 1). This “dual-connected” configuration protects
against hardware outages and network outages. If one network fails, the cluster application stays
online and can be reached over the other network.

These configurations refer to multiple virtual networks (VLANS) that are used with ISIS 7000/7500
shared-storage systems. ISIS 5000/5500 and Avid NEXIS® systems typically do not use multiple
VLANS. You can adapt these configurations for use in ISIS 5000/5500 or Avid NEXIS
environments.

Redundant-Switch Configuration

The following diagram illustrates the failover cluster architecture for an Avid ISIS environment that
uses two layer-3 switches. These switches are configured for failover protection through either HSRP
(Hot Standby Router Protocol) or VRRP (Virtual Router Redundancy Protocol). The cluster nodes
are connected to one subnet (VLAN 30), each through a different network switch. If one of the
VLAN 30 networks fails, the virtual server remains online through the other VLAN 30 network and
switch.

n This guide does not describe how to configure redundant switches for an Avid shared-storage
network. Configuration information is included in the ISIS Qualified Switch Reference Guide and
the Avid NEXIS Network and Switch Guide, which are available for download from the Avid
Customer Support Knowledge Base at www.avid.com/onlinesupport.


Two-Node Cluster in an Avid ISIS Environment (Redundant-Switch Configuration)

[Diagram: Interplay Engine cluster node 1 and cluster node 2 each connect over 1 GB Ethernet to a different Avid network switch (switch 1 and switch 2), both running VRRP or HSRP on VLAN 30 and serving the Interplay editing clients. The two nodes are joined by a private network for heartbeat, and each node connects over Fibre Channel to the cluster-storage RAID array.]

The following table describes what happens in the redundant-switch configuration as a result of an
outage:

Type of Outage: Hardware (CPU, network adapter, memory, cable, power supply) fails
Result: The cluster detects the outage and triggers failover to the remaining node. The Interplay Engine is still accessible.

Type of Outage: Network switch 1 (VLAN 30) fails
Result: External switches running VRRP/HSRP detect the outage and make the gateway available as needed. The Interplay Engine is still accessible.

Type of Outage: Network switch 2 (VLAN 30) fails
Result: External switches running VRRP/HSRP detect the outage and make the gateway available as needed. The Interplay Engine is still accessible.

Dual-Connected Configuration

The following diagram illustrates the failover cluster architecture for an Avid ISIS environment. In
this environment, each cluster node is “dual-connected” to the network switch: one network interface
is connected to the VLAN 10 subnet and the other is connected to the VLAN 20 subnet. If one of the
subnets fails, the virtual server remains online through the other subnet.


Two-Node Cluster in an Avid ISIS Environment (Dual-Connected Configuration)

[Diagram: Interplay Engine cluster node 1 and cluster node 2 each connect over 1 GB Ethernet to both the VLAN 10 and VLAN 20 subnets on Avid network switch 1 (running VRRP or HSRP), which also serves the Interplay editing clients. The two nodes are joined by a private network for heartbeat, and each node connects over Fibre Channel to the cluster-storage RAID array.]

The following table describes what happens in the dual-connected configuration as a result of an
outage:

Type of Outage: Hardware (CPU, network adapter, memory, cable, power supply) fails
Result: The cluster detects the outage and triggers failover to the remaining node. The Interplay Engine is still accessible.

Type of Outage: Left ISIS VLAN (VLAN 10) fails
Result: The Interplay Engine is still accessible through the right network.

Type of Outage: Right ISIS VLAN (VLAN 20) fails
Result: The Interplay Engine is still accessible through the left network.
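
As an illustration of the dual-connected behavior summarized above, the following sketch pings an address for the virtual server on each public subnet and reports which paths respond. The addresses are placeholders (not values defined by this guide); substitute the addresses reserved for your installation, and note that the ping syntax shown is for Windows.

    import subprocess

    # Placeholder addresses for the virtual server on each public subnet.
    ENGINE_ADDRESSES = {
        "VLAN 10": "192.168.10.50",
        "VLAN 20": "192.168.20.50",
    }

    def is_reachable(address):
        """Return True if a single Windows ping to the address succeeds."""
        result = subprocess.run(
            ["ping", "-n", "1", "-w", "2000", address],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    for network, address in ENGINE_ADDRESSES.items():
        state = "reachable" if is_reachable(address) else "NOT reachable"
        print(f"{network} ({address}): {state}")

If only one subnet reports as reachable, the cluster application is still available over that path, which matches the outage behavior described in the table above.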

Server Failover Requirements


You should make sure the server failover system meets the following requirements.

Hardware

The automatic server failover system was qualified with the following hardware:
• Two servers functioning as nodes in a failover cluster. Avid has qualified a Dell™ server and an
HPE® server with minimum specifications, their equivalent, or better. For more information, see
the Avid MediaCentral | Production Management Dell and HPE Server Support or the Avid
Interplay | Production Dell and HP Server Support documents on the Avid Knowledge Base at:
http://avid.force.com/pkb/articles/en_US/readme/Avid-Interplay-Production-Documentation
On-board network interface connectors (NICs) for these servers are qualified. There is no
requirement for an Intel network card.


• Two Fibre Channel host adapters (one for each server in the cluster).
The ATTO Celerity FC-81EN is qualified for these servers. Other Fibre Channel adapters might
work but have not been qualified. Before using another Fibre Channel adapter, contact the
vendor to check compatibility with the server host, the storage area network (SAN), and most
importantly, a Microsoft failover cluster.
• One of the following
- One Infortrend® S12F-R1440 storage array. For more information, see the Infortrend
EonStor®DS S12F-R1440 Installation and Hardware Reference Manual.
- One HPE MSA 2040 SAN storage array. For more information, see the HPE MSA 2040
Quick Start Instructions, available here:
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03792354
Also see “HPE MSA Reference Information” on page 22.
- One HPE MSA 2050 SAN storage array. For more information, see the HPE MSA 2050/
2052 Quick Start Instructions, available here:
https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-a00017714en_us
Also see “HPE MSA Reference Information” on page 22.

The servers in a cluster are connected using one or more cluster shared-storage buses and one or
more physically independent networks acting as a heartbeat.

Server Software

The automatic failover system was qualified on the following operating systems:
• Windows Server 2012 R2 Standard
• Windows Server 2016 Standard

Starting with Interplay Production v3.3, new licenses for Interplay components are managed through
software activation IDs. One license is used for both nodes in an Interplay Engine failover cluster.
For installation information, see “Installing a Permanent License” on page 84.

Starting with Avid MediaCentral Production Management v2018.11, the software installation is
deployed using a different method from prior releases. If you are familiar with prior versions of this
guide, pay close attention to the changes included in this document.

Space Requirements

The default disk configuration for the shared RAID array is as follows:

Infortrend S12F-R1440
Disk 1: Quorum disk, 10 GB
Disk 2: (not used), 10 GB
Disk 3: Database disk, 814 GB or larger

HPE MSA 2040, HPE MSA 2050
Disk 1: Quorum disk, 10 GB
Disk 2: Database disk, 870 GB or larger

Antivirus Software

You can run antivirus software on a cluster, if the antivirus software is cluster-aware. For information
about cluster-aware versions of your antivirus software, contact the antivirus vendor. If you are
running antivirus software on a cluster, make sure you exclude these locations from the virus
scanning: Q:\ (Quorum disk), C:\Windows\Cluster, and S:\Workgroup_Databases (database).

See also “Configuring Local Software Firewalls” on page 35.

Functions You Need To Know

Before you set up a cluster in an Avid Interplay environment, you should be familiar with the
following functions:
• Microsoft Windows Active Directory domains and domain users
• Microsoft Windows clustering for Windows Server (see “Clustering Technology and
Terminology” on page 23)
• Disk configuration (format, partition, naming)
• Network configuration
For information about Avid Networks and Interplay Production, see “Network Requirements for ISIS/NEXIS” on the Avid Knowledge Base at http://avid.force.com/pkb/articles/en_US/compatibility/en244197.

Installing the Failover Hardware Components


The following topics provide information about installing the failover hardware components for the
supported configurations:
• “Slot Locations” on page 15
• “Failover Cluster Connections: Redundant-Switch Configuration” on page 16
• “Failover Cluster Connections, Dual-Connected Configuration” on page 18

Slot Locations
Each server requires a Fibre Channel host adapter to connect to the shared-storage RAID array.

For more information on Avid qualified servers, see the Avid Audio and Video Compatibility Charts
on the Avid Knowledge Base at the following link:

http://avid.force.com/pkb/articles/en_US/compatibility/Avid-Video-Compatibility-Charts


Dell PowerEdge Servers

The Avid qualified Dell PowerEdge R630 and R640 servers include three PCIe slots. Avid recommends
installing the Fibre Channel host adapter in slot 2, as shown in the following example illustration of a
Dell PowerEdge R630.

Dell PowerEdge R630 (Rear View)

Adapter card in PCIe slot 2

n The Dell system is designed to detect what type of card is in each slot and to negotiate optimum
throughput. As a result, using slot 2 for the Fibre Channel host adapter is recommended but not
required. For more information, see the Dell PowerEdge Owner’s Manual.

HPE ProLiant Servers

The Avid qualified HPE ProLiant DL360 Gen 9 and Gen 10 servers include two or three PCIe slots. Avid
recommends installing the Fibre Channel host adapter in slot 2, as shown in the following example
illustration of an HPE ProLiant DL360 Gen 9.

HPE ProLiant DL360 Gen 9 (Rear View)

Adapter card in PCIe slot 2

Failover Cluster Connections: Redundant-Switch Configuration


Make the following cable connections to add a failover cluster to an Avid ISIS environment, using
the redundant-switch configuration:
• First cluster node:
- Network interface connector 2 to layer-3 switch 1 (VLAN 30)
- Network interface connector 3 to network interface connector 3 on the second cluster node
(private network for heartbeat)
- Fibre Channel connector on the ATTO Celerity FC-81EN card to Fibre Channel connector
Port 1 (top left) on the Infortrend RAID array or the HPE MSA RAID array.
• Second cluster node:
- Network interface connector 2 to layer-3 switch 2 (VLAN 30)
- Network interface connector 3 to the bottom-left network interface connector on the first
cluster node (private network for heartbeat)
- Fibre Channel connector on the ATTO Celerity FC-81EN card to the Fibre Channel
connector Port 2 (bottom, second from left) on the Infortrend RAID array or the HPE MSA
RAID array.


The following illustrations show these connections. The illustrations use the Dell PowerEdge R630
as cluster nodes.

n This configuration refers to a virtual network (VLAN) that is used with ISIS 7000/7500 shared-
storage systems. ISIS 5000/5500 and Avid NEXIS systems typically do not use multiple VLANS. You
can adapt this configuration for use in ISIS 5000/5500 or Avid NEXIS environments.

Failover Cluster Connections: Avid ISIS, Redundant-Switch Configuration, Infortrend

[Diagram: Back-panel views of Interplay Engine cluster node 1 and node 2 (Dell PowerEdge R630) and the Infortrend RAID array. Node 1 connects over Ethernet to Avid network switch 1 and node 2 connects to Avid network switch 2; the two nodes are joined by the private network; and each node connects over Fibre Channel to the RAID array.]


Failover Cluster Connections: Avid ISIS, Redundant-Switch Configuration, HPE MSA

[Diagram: Back-panel views of Interplay Engine cluster node 1 and node 2 (Dell PowerEdge R630) and the HPE MSA RAID array. Node 1 connects over Ethernet to Avid network switch 1 and node 2 connects to Avid network switch 2; the two nodes are joined by the private network; and each node connects over Fibre Channel to the RAID array.]

Failover Cluster Connections, Dual-Connected Configuration


Make the following cable connections to add a failover cluster to an Avid ISIS environment as a dual-
connected configuration:
• First cluster node:
- Network interface connector 2 to the ISIS left subnet (VLAN 10 public network)
- Network interface connector 4 to the ISIS right subnet (VLAN 20 public network)


- Network interface connector 3 to the bottom-left network interface connector on the second
cluster node (private network for heartbeat)
- Fibre Channel connector on the ATTO Celerity FC-81EN card to Fibre Channel connector
Port 1 (top left) on the Infortrend RAID array or the HPE MSA RAID array.
• Second cluster node:
- Network interface connector 2 to the ISIS left subnet (VLAN 10 public network)
- Network interface connector 4 to the ISIS right subnet (VLAN 20 public network)
- Network interface connector 3 to the bottom-left network interface connector on the first
cluster node (private network for heartbeat)
- Fibre Channel connector on the ATTO Celerity FC-81EN card to Fibre Channel connector
Port 2 (bottom, second from left) on the Infortrend RAID array or the HPE MSA RAID array.

The following illustrations show these connections. The illustrations use the Dell PowerEdge R630
as cluster nodes.

n This configuration refers to virtual networks (VLANs) that are used with ISIS 7000/7500 shared-
storage systems. ISIS 5000/5500 and Avid NEXIS systems typically do not use multiple VLANS. You
can adapt this configuration for use in ISIS 5000/5500 or Avid NEXIS environments.


Failover Cluster Connections, Avid ISIS, Dual-Connected Configuration, Infortrend

[Diagram: Back-panel views of Interplay Engine cluster node 1 and node 2 (Dell PowerEdge R630) and the Infortrend RAID array. Each node connects over Ethernet to the ISIS left subnet and the ISIS right subnet; the two nodes are joined by the private network; and each node connects over Fibre Channel to the RAID array.]


Failover Cluster Connections, Avid ISIS, Dual-Connected Configuration, HPE MSA

[Diagram: Back-panel views of Interplay Engine cluster node 1 and node 2 (Dell PowerEdge R630) and the HPE MSA RAID array. Each node connects over Ethernet to the ISIS left subnet and the ISIS right subnet; the two nodes are joined by the private network; and each node connects over Fibre Channel to the RAID array.]


HPE MSA Reference Information


The following topics provide information about components of the MSA 2040 and MSA 2050, with
references to additional documentation.

n As of November 1, 2015, servers, storage, and networking products are supported by Hewlett
Packard Enterprise (HPE).

HPE MSA Storage Management Utility


The HPE MSA is packaged with a Storage Management Utility (SMU). The SMU is a browser-based
tool that lets you configure, manage, and view information about the HPE MSA. Each controller in
the HPE MSA has a default IP address and host name for connecting over a network.

Default IP Settings
• Management Port IP Address:
- 10.0.0.2 (controller A)
- 10.0.0.3 (controller B)
• IP Subnet Mask: 255.255.255.0
• Gateway IP Address: 10.0.0.1

You can change these settings to match local networks through the SMU, the Command Line
Interface (CLI), or the MSA Device Discovery Tool DVD that ships with the array.
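
The following sketch (not an HPE utility) checks whether the default controller management addresses listed above answer on the common web ports, which is a quick way to confirm that your workstation is on the 10.0.0.x subnet before opening the SMU in a browser. The port numbers are an assumption; verify the ports actually used in the SMU Reference Guide.

    import socket

    # Default management addresses of the two MSA controllers (see the settings above).
    CONTROLLERS = {"controller A": "10.0.0.2", "controller B": "10.0.0.3"}
    WEB_PORTS = (80, 443)  # assumption: the SMU answers on HTTP and/or HTTPS

    for name, address in CONTROLLERS.items():
        open_ports = []
        for port in WEB_PORTS:
            try:
                with socket.create_connection((address, port), timeout=3):
                    open_ports.append(port)
            except OSError:
                pass
        if open_ports:
            print(f"{name} ({address}): responding on port(s) {open_ports}")
        else:
            print(f"{name} ({address}): no response; check cabling and local IP settings")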

Hostnames

Hostnames are predefined from the MAC address of the controller adapter, using the following syntax:
• http://hp-msa-storage-<last 6 digits of mac address>

For example:
• http://hp-msa-storage-1dfcfc

You can find the MAC address through the SMU. Go to Enclosure Overview and click the Network
port. The hostname itself is not displayed in the SMU and cannot be changed.
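
As a worked example of the syntax above, the following sketch builds the default host name from a controller MAC address. The MAC address shown is invented for illustration; only its last six hexadecimal digits matter.

    def default_msa_hostname(mac_address):
        """Build the default SMU host name from a controller MAC address."""
        digits = mac_address.replace(":", "").replace("-", "").lower()
        return "hp-msa-storage-" + digits[-6:]

    # An example MAC address ending in 1D:FC:FC yields the host name shown above.
    print(default_msa_hostname("00:11:22:1D:FC:FC"))  # prints: hp-msa-storage-1dfcfc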

Default User Names, Passwords, and Roles

The following are the default user names/passwords and roles:


• monitor / !monitor – Can monitor the system, with some functions disabled. For example, the
Tools Menu allows log saving, but not Shut Down or Restart of controllers.
• manage / !manage – Can manage the system, with all functions available.

For More Information

See the following HPE documents:


• HPE MSA 1040/2040 SMU Reference Guide
• HPE MSA 1050/2050 SMU Reference Guide
• HPE MSA Event Descriptions Reference Guide


HPE MSA Command Line Interface


The HPE MSA is packaged with a Command Line Interface (CLI). To use the CLI, you need to do
the following:
• Install a Windows USB driver from HPE. Search for the driver on the HPE support site at https://support.hpe.com.
• HPE ships two USB cables with the HPE MSA. Use a USB cable to connect a server to each
controller in the HPE MSA.
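
The following is a minimal connectivity sketch, not an HPE procedure: once the USB driver is installed and the cable is connected, the controller appears as a virtual COM port, and opening that port should produce a CLI login prompt. It requires the third-party pyserial package, and the COM port number and serial settings are assumptions to verify against the HPE CLI documentation referenced below.

    # Requires the pyserial package (pip install pyserial) and the HPE USB driver noted above.
    import serial

    PORT = "COM3"        # assumption: the COM port Windows assigned to the MSA controller
    BAUD_RATE = 115200   # assumption: verify the serial settings in the HPE CLI documentation

    with serial.Serial(PORT, BAUD_RATE, timeout=2) as cli:
        cli.write(b"\r\n")  # wake the console; a login prompt should follow
        print(cli.read(1024).decode(errors="replace"))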

For More Information

See the following HPE documents:


• For more information about connecting to the CLI, see Chapter 5 of the HPE MSA 2040 User
Guide or the HPE MSA 2050 User Guide.
• For information about commands, see the HPE MSA 1040/2040 CLI Reference Guide or the
HPE MSA 1050/2050 CLI Reference Guide.

HPE MSA Support Documentation


Documentation for the HPE MSA 2040 is located on the HPE support site:

https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c03820042

Documentation for the HPE MSA 2050 is located on the HPE support site:

https://support.hpe.com/hpsc/doc/public/display?docLocale=en_US&docId=emr_na-a00017812en_us

Clustering Technology and Terminology


Clustering can be complicated, so it is important that you get familiar with the technology and
terminology of failover clusters before you start. A good source of information is the Windows
Server Failover Clustering pages. For more information, see:
• Microsoft Windows Server 2016 Failover Clustering, see: https://docs.microsoft.com/en-us/windows-server/failover-clustering/failover-clustering-overview
• Microsoft Windows Server 2012 Failover Clustering, see: https://technet.microsoft.com/en-us/library/hh831579.aspx

Here is a brief summary of the major concepts and terms, adapted from the Microsoft Windows
Server web site:
• failover cluster: A group of independent computers that work together to increase the availability
of clustered roles (formerly called clustered applications and services). The clustered servers
(called nodes) are connected by physical cables and by software. If one of the nodes fails,
another node begins to provide services (a process known as failover).
• Cluster service: The essential software component that controls all aspects of server cluster or
failover cluster operation and manages the cluster configuration database. Each node in a failover
cluster owns one instance of the Cluster service.


• cluster resources: Cluster components (hardware and software) that are managed by the cluster
service. Resources are physical hardware devices such as disk drives, and logical items such as
IP addresses and applications.
• clustered role: A collection of resources that are managed by the cluster service as a single,
logical unit and that are always brought online on the same node.
• quorum: The quorum for a cluster is determined by the number of voting elements that must be
part of active cluster membership for that cluster to start properly or continue running. By
default, every node in the cluster has a single quorum vote. In addition, a quorum witness (when
configured) has an additional single quorum vote. A quorum witness can be a designated disk
resource or a file share resource.
An Interplay Engine failover cluster uses a disk resource, named Quorum, as a quorum witness.
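
For a two-node Interplay Engine cluster with a disk witness, the vote arithmetic works out as in the following sketch: three votes in total, so the cluster keeps running as long as any two voters (for example, one node plus the Quorum disk) remain in contact.

    # Quorum arithmetic for a two-node cluster with a disk witness (the Quorum disk).
    node_votes = 2                          # one vote per cluster node
    witness_votes = 1                       # the quorum witness adds one vote
    total_votes = node_votes + witness_votes

    majority_needed = total_votes // 2 + 1  # votes required for the cluster to keep running
    print(f"Total votes: {total_votes}, majority needed: {majority_needed}")
    # With 3 votes, the cluster tolerates the loss of any single voter
    # (one node or the witness disk) and still has the 2 votes it needs.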

2 Creating a Microsoft Failover Cluster

This chapter describes the processes for creating a Microsoft failover cluster for automatic server
failover. It is crucial that you follow the instructions given in this chapter completely; otherwise, the
automatic server failover will not work.

This chapter covers the following topics:


• Server Failover Installation Overview
• Before You Begin the Server Failover Installation
• Preparing the Server for the Failover Cluster
• Configuring the Failover Cluster

Instructions for installing the Interplay Engine are provided in “Installing the Interplay | Engine for a
Failover Cluster” on page 66.

Server Failover Installation Overview


Installation and configuration of the automatic server failover consists of the following major tasks:
• Make sure that the network is correctly set up and that you have reserved IP host names and IP
addresses (see “Before You Begin the Server Failover Installation” on page 25).
• Prepare the servers for the failover cluster (see “Preparing the Server for the Failover Cluster” on
page 31). This includes configuring the nodes for the network and formatting the drives.
• Install the Failover Cluster feature and configure the failover cluster (see “Configuring the
Failover Cluster” on page 48).
• Install the Interplay Engine on both nodes (see “Installing the Interplay | Engine for a Failover
Cluster” on page 66).
• Test the complete installation (see “Testing the Complete Installation” on page 84).

n Do not install any other software on the cluster machines except the Interplay Engine. For example,
Media Indexer software needs to be installed on a different server. For complete installation
instructions, see the Interplay | Production Software Installation and Configuration Guide.

Before You Begin the Server Failover Installation


Use the following checklist to help you prepare for the server failover installation.

Cluster Installation Preparation Check List

b Make sure all cluster hardware connections are correct. (See “Installing the Failover Hardware Components” on page 15.)

b Make sure that the site has a network that is qualified to run Active Directory and DNS services. (Facility staff)

b Make sure the network includes an Active Directory domain. (Facility staff)

b Determine the subnet mask, the gateway, DNS, and WINS server addresses on the network. (Facility staff)

b Create or select domain user accounts for creating and administering the cluster. (See “Requirements for Domain User Accounts” on page 27.)

b Reserve static IP addresses for all network interfaces and host names. (See “List of IP Addresses and Network Names” on page 28.)

b If necessary, download the ATTO Configuration Utility. (See “Changing Default Settings for the ATTO Card on Each Node” on page 32.)

b Make sure the time settings for both nodes are in sync. If not, you must synchronize the times or you will not be able to add both nodes to the cluster. You should also sync the shared storage array. You can use the Network Time Protocol (NTP); one way to check a node is shown in the sketch after this list. (Operating system documentation; A Guide to Time Synchronization for Avid Interplay Systems on the Avid Knowledge Base)

b Make sure the Remote Registry service is started and is enabled for Automatic startup. Open Server Management and select Configuration > Services > Remote Registry. The sketch after this list also queries this service. (Operating system documentation)

b Create an Avid shared-storage user account with read and write privileges. This account is not needed for the installation of the Interplay Engine, but is required for the operation of the Interplay Engine (for example, media deletion from shared storage). The user name and password must exactly match the user name and password of the Server Execution User. (Avid shared-storage documentation)

b Install and set up an Avid shared-storage client on both servers. Check if shared-storage setup requires an Intel® driver update. Avid recommends installing and setting up the shared-storage client before creating the cluster and installing the Interplay Engine. This avoids a driver update after the server failover cluster is running. (Avid shared-storage documentation)

b Install a permanent license. A temporary license is installed with the Interplay Engine software. After the installation is complete, install the permanent license. Permanent licenses are supplied in one of two ways: as a hardware license that is activated through an application key (dongle), or as a software license using the Application Manager. (See “Installing a Permanent License” on page 84.)
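
The following is a minimal sketch (not an Avid tool) that scripts two of the checks in the list above on a single node: it queries the Remote Registry service state and the Windows Time service status. It assumes Python is available on the node and only reads status; it does not change any settings.

    import subprocess

    def run(command):
        """Run a Windows command and return its combined output text."""
        completed = subprocess.run(command, capture_output=True, text=True)
        return completed.stdout + completed.stderr

    # Remote Registry service state (the Windows service name is RemoteRegistry).
    print(run(["sc", "query", "RemoteRegistry"]))

    # Windows Time service status, one way to confirm the node's clock is synchronized.
    print(run(["w32tm", "/query", "/status"]))

Run the same checks on both prospective cluster nodes and compare the reported time sources and offsets before creating the cluster.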

Requirements for Domain User Accounts


Before beginning the cluster installation process, you need to select or create the following user
accounts in the domain that includes the cluster:
• Server Execution User: Create or select an account that is used by the Interplay Engine services
(listed as the Avid Workgroup Engine Monitor and the Avid Workgroup TCP COM Bridge in the
list of Windows services). This account must be a domain user. The procedures in this document
use sqauser as an example of a Server Execution User. This account is automatically added to the
Local Administrators group on each node by the Interplay Engine software during the
installation process.
The Server Execution User is critical to the operation of the Interplay Engine. If necessary, you
can change the name of the Server Execution User after the installation. For more information,
see “Troubleshooting the Server Execution User Account” and “Re-creating the Server
Execution User” in the Interplay | Engine and Interplay | Archive Engine Administration Guide
and the Interplay Help.

n The tool that allows you to change the Server Execution User has changed for 2018.11. See the
Interplay 2018.11 ReadMe for details.

• Cluster installation account: Create or select a domain user account to use during the
installation and configuration process. There are special requirements for the account that you
use for the Microsoft cluster installation and creation process (described below).
- If your site allows you to use an account with the required privileges, you can use this
account throughout the entire installation and configuration process.
- If your site does not allow you to use an account with the required privileges, you can work
with the site’s IT department to use a domain administrator’s account only for the Microsoft
cluster creation steps. For other tasks, you can use a domain user account without the
required privileges.
In addition, the account must have administrative permissions on the servers that will become
cluster nodes. You can do this by adding the account to the local Administrators group on each of
the servers that will become cluster nodes.


Requirements for Microsoft cluster creation: To create a user with the necessary rights for
Microsoft cluster creation, you need to work with the site’s IT department to access Active
Directory (AD). Depending on the account policies of the site, you can grant the necessary rights
for this user in one of the following ways:
- Create computer objects for the failover cluster (virtual host name) and the Interplay Engine
(virtual host name) in the Active Directory (AD) and grant the user Full Control on them. In
addition, the failover cluster object needs Full Control over the Interplay Engine object. For
examples, see “List of IP Addresses and Network Names” on page 28.
The account for these objects must be disabled so that when the Create Cluster wizard and
the Interplay Engine installer are run, they can confirm that the account to be used for the
cluster is not currently in use by an existing computer or cluster in the domain. The cluster
creation process then enables the entry in the AD.
- Make the user a member of the Domain Administrators group. There are fewer manual steps
required when using this type of account.
- Grant the user the permissions “Create Computer objects” and “Read All Properties” in the
container in which new computer objects get created, such as the computer’s Organizational
Unit (OU).
For more information, see the Avid Knowledge Base article “How to Prestage Cluster Name
Object and Virtual Interplay Engine Name” at http://avid.force.com/pkb/articles/en_US/
How_To/How-to-prestage-cluster-name-object-and-virtual-Interplay-Engine-name. This article
references the Microsoft article “Failover Cluster Step-by-Step Guide: Configuring Accounts in
Active Directory” at http://technet.microsoft.com/en-us/library/cc731002%28WS.10%29.aspx

n Roaming profiles are not supported in an Interplay Production environment.

• Cluster administration account: Create or select a user account for logging in to and
administering the failover cluster server. Depending on the account policies of your site, this
account could be the same as the cluster installation account, or it can be a different domain user
account with administrative permissions on the servers that will become cluster nodes.
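
If your IT department prefers to script the prestaging of the computer objects described above under “Requirements for Microsoft cluster creation,” a minimal sketch using the ActiveDirectory PowerShell module follows. The names SECLUSTER and SEENGINE are the example names used in this guide; granting Full Control to the installing user and to the cluster object still has to be done in Active Directory Users and Computers or with a separate ACL script.

  # Run as a user with rights to create computer objects in the target container/OU
  Import-Module ActiveDirectory

  # Create the objects disabled so the Create Cluster wizard and the Interplay Engine
  # installer can confirm the accounts are not in use; they are enabled during cluster creation
  New-ADComputer -Name "SECLUSTER" -Enabled $false   # Microsoft failover cluster (virtual host name)
  New-ADComputer -Name "SEENGINE" -Enabled $false    # Interplay Engine cluster role (virtual host name)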

List of IP Addresses and Network Names


You need to reserve IP host names and static IP addresses on the in-network DNS server before you
begin the installation process. The number of IP addresses you need depends on your configuration:
• An environment with a redundant-switch configuration requires 4 public IP addresses and 2
private IP addresses
• An environment with a dual-connected configuration requires 8 public IP addresses and 2 private
IP addresses

n Make sure that these IP addresses are outside of the range that is available to DHCP so they cannot
automatically be assigned to other machines.

n All names must be valid and unique network host names. A hostname must comply with RFC 952
standards. For example, you cannot use an underscore in a hostname. For more information, see
“Naming Conventions in Active Directory for Computers, Domains, Sites, and OUs” on the
Microsoft Support Knowledge Base.

The following table provides a list of example names that you can use when configuring the cluster
for a redundant-switch configuration. You can fill in the blanks with your choices to use as a
reference during the configuration process.


IP Addresses and Node Names: Redundant-Switch Configuration

Cluster node 1 (example name: SECLUSTER1; where used: see “Creating the Failover Cluster” on page 53)
• 1 host name: _____________________
• 1 shared-storage IP address - public: _____________________
• 1 IP address - private (Heartbeat): _____________________

Cluster node 2 (example name: SECLUSTER2; where used: see “Creating the Failover Cluster” on page 53)
• 1 host name: _____________________
• 1 shared-storage IP address - public: _____________________
• 1 IP address - private (Heartbeat): _____________________

Microsoft failover cluster (example name: SECLUSTER; where used: see “Creating the Failover Cluster” on page 53)
• 1 network name (virtual host name): _____________________
• 1 shared-storage IP address - public (virtual IP address): _____________________

Interplay Engine cluster role (example name: SEENGINE)
• 1 network name (virtual host name): _____________________
• 1 shared-storage IP address - public (virtual IP address): _____________________

The following table provides a list of example names that you can use when configuring the cluster
for a dual-connected configuration. Fill in the blanks to use as a reference.


IP Addresses and Node Names: Dual-Connected Configuration

Cluster node 1 (example name: SECLUSTER1; where used: see “Creating the Failover Cluster” on page 53)
• 1 host name: ______________________
• 2 shared-storage IP addresses - public: (left) __________________ (right) _________________
• 1 IP address - private (Heartbeat): ______________________

Cluster node 2 (example name: SECLUSTER2; where used: see “Creating the Failover Cluster” on page 53)
• 1 host name: ______________________
• 2 shared-storage IP addresses - public: (left) __________________ (right) _________________
• 1 IP address - private (Heartbeat): ______________________

Microsoft failover cluster (example name: SECLUSTER; where used: see “Creating the Failover Cluster” on page 53)
• 1 network name (virtual host name): ______________________
• 2 shared-storage IP addresses - public (virtual IP addresses): (left) __________________ (right) __________________

Interplay Engine cluster role (example name: SEENGINE)
• 1 network name (virtual host name): ______________________
• 2 shared-storage IP addresses - public (virtual IP addresses): (left) __________________ (right) _________________

Active Directory and DNS Requirements


Use the following table to help you add Active Directory accounts for the cluster components to your
site’s DNS.


Windows Server 2012: DNS Entries

Component                        Computer Account in Active Directory   Dynamic DNS Entry (a)   Static Entry
Cluster node 1                   node_1_name                            Yes                     No
Cluster node 2                   node_2_name                            Yes                     No
Microsoft failover cluster       cluster_name (b)                       Yes                     Yes (c)
Interplay Engine cluster role    ie_name (b)                            Yes                     Yes (c)

a. Entries are dynamically added to the DNS when the node logs on to Active Directory.
b. If you manually created Active Directory entries for the Microsoft failover cluster and Interplay Engine cluster role, make sure to disable the entries in Active Directory in order to build the Microsoft failover cluster (see “Requirements for Domain User Accounts” on page 27).
c. Add reverse static entries only. Forward entries are dynamically added by the failover cluster. Static entries must be exempted from scavenging rules.

Preparing the Server for the Failover Cluster


Before you configure the failover cluster, you need to complete the tasks in the following procedures:
• “Downloading the ATTO Driver and Configuration Tool” on page 32
• “Changing Default Settings for the ATTO Card on Each Node” on page 32
• “Changing Windows Server Settings on Each Node” on page 34
• “Configuring Local Software Firewalls” on page 35
• “Renaming the Local Area Network Interface on Each Node” on page 35
• “Configuring the Private Network Adapter on Each Node” on page 38
• “Configuring the Binding Order Networks on Each Node” on page 40
• “Configuring the Public Network Adapter on Each Node” on page 41
• “Configuring the Cluster Shared-Storage RAID Disks on Each Node” on page 42

The tasks in this section do not require the administrative privileges needed for Microsoft cluster
creation (see “Requirements for Domain User Accounts” on page 27).

Configuring the ATTO Fibre Channel Card


The following topics describe steps necessary to prepare the ATTO fibre channel card. This card is
installed in each server in a cluster and is used to communicate with the storage array.
• “Downloading the ATTO Driver and Configuration Tool” on page 32
• “Changing Default Settings for the ATTO Card on Each Node” on page 32

n The ATTO Celerity FC-81EN is qualified for Dell and HPE servers. Other Fibre Channel adapters
supported by Dell and HPE are also supported for an Interplay Engine cluster. This guide does not
contain information about the configuration of these cards; the default factory settings should work
correctly. If the SAN drives are accessible on both nodes, and if the failover cluster validation
succeeds, the adapters are configured correctly.


Downloading the ATTO Driver and Configuration Tool

You need to download the ATTO drivers and the ATTO Configuration Tool from the ATTO web site
and install them on the server. You must register to download tools and drivers.

To download and install the ATTO Configuration Tool for the FC-81EN card:
1. Go to the 8Gb Celerity HBAs Downloads page and download the ATTO Configuration Tool:
https://www.attotech.com/downloads/70/
Scroll down several pages to find the Windows ConfigTool (currently version 4.22).
2. Double-click the downloaded file win_app_configtool_422.exe, then click Run.
3. Extract the files.
4. Locate the folder to which you extracted the files and double-click ConfigTool_422.exe.
5. Follow the system prompts for a Full Installation.

Then locate, download and install the appropriate driver. The current version for the Celerity FC-
81EN is version 1.85.

Changing Default Settings for the ATTO Card on Each Node

You need to use the ATTO Configuration Tool to change some default settings on each node in the
cluster.

To change the default settings for the ATTO card:


1. On the first node, click Start, and select Programs > ATTO ConfigTool > ATTO ConfigTool.
The ATTO Configuration Tool dialog box opens.
2. In the Device Listing tree (left pane), click the expand box for “localhost.”
A login screen is displayed.

3. Type the user name and password for a local administrator account and click Login.
4. In the Device Listing tree, navigate to the appropriate channel on your host adapter.
5. Click the NVRAM tab.


6. Change the following settings if necessary:


- Boot driver: Disabled
- Execution Throttle: 128
- Device Discovery: Port WWN
- Data Rate:
- For connection to Infortrend, select 4 Gb/sec.
- For connection to HPE MSA, select 8 Gb/sec.
- Interrupt Coalesce: Low
- Spinup Delay: 0
You can keep the default values for the other settings.
7. Click Commit.
8. Reboot the system.
9. Open the Configuration tool again and verify the new settings.
10. On the other node, repeat steps 1 through 9.


Changing Windows Server Settings on Each Node


On each node, set the processor scheduling for best performance of programs.

n No other Windows server settings need to be changed. Later, you need to add features for clustering.
See “Installing the Failover Clustering Features” on page 49.

To change the processor scheduling:


1. Select Control Panel > System and Security > System.
2. In the list on the left side of the System dialog box, click “Advanced system settings.”

3. In the Advanced tab, in the Performance section, click the Settings button.
4. In the Performance Options dialog box, click the Advanced tab.
5. In the Processor scheduling section, for “Adjust for best performance of,” select Programs.

6. Click OK.
7. In the System Properties dialog box, click OK.


Configuring Local Software Firewalls


Note that the 2018.11 Engine installer adds the following rules to the Firewall:
• Allow all incoming TCP traffic for the "Workgroup Server Browser" service
• Allow all incoming TCP traffic for the apache.exe process

Make sure any local software firewalls used in a failover cluster, such as Symantec End Point (SEP),
are configured to allow IPv6 communication and IPv6 over IPv4 communication.

n The Windows Firewall service must be enabled for proper operation of a failover cluster. Note that enabling the service is different from enabling or disabling the firewall itself and firewall rules.
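
As a quick check on each node that the Windows Firewall service itself (service name MpsSvc) is enabled and running, you can use the following sketch; it does not change any firewall rules:

  # Ensure the Windows Firewall service starts automatically and is running
  Set-Service -Name MpsSvc -StartupType Automatic
  Start-Service -Name MpsSvc
  Get-Service -Name MpsSvc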

Currently the SEP Firewall does not support IPv6. Allow this communication in the SEP Manager.
Edit the rules shown in the following illustrations:

Renaming the Local Area Network Interface on Each Node


You need to rename the LAN interface on each node to appropriately identify each network.

c Avid recommends that both nodes use identical network interface names. Although you can use
any name for the network connections, Avid suggests that you use the naming conventions
provided in the table in the following procedure.

To rename the local area network connections:


1. On node 1, click Start > Control Panel > Network and Sharing Center.
The Network and Sharing Center window opens.
2. Click “Change adapter settings” on the left side of the window.
The Network Connections window opens. On a Dell PowerEdge, the Name shows the number of
the hardware (physical) port as it is labeled on the computer. The Device Name shows the name
of the network interface card. Note that the number in the Device Name does not necessarily
match the number of the hardware port.


n One way to find out which hardware port matches which Windows device name is to plug one network cable into each physical port in sequence and check in the Network Connections dialog which device becomes connected.

3. Right-click a network connection and select Rename.


4. Depending on your Avid network and the device you selected, type a new name for the network
connection and press Enter.
Use the following illustration and table for reference. The illustration uses connections on a Dell
PowerEdge computer in both redundant and dual-connected configurations as an example.

Redundant Switch Configuration

[Illustration: Dell PowerEdge R630 back panel. Connector 2 to Avid network switch 1 (public network); Connector 3 to node 2 (private network); Fibre Channel to RAID array.]

Dual-Connected Configuration

[Illustration: Dell PowerEdge R630 back panel. Connector 2 to ISIS left subnet (public network); Connector 4 to ISIS right subnet (public network); Connector 3 to node 2 (private network); Fibre Channel to RAID array.]


Naming Network Connections (Using Dell PowerEdge)

Connector 1 (device name: Broadcom NetXtreme Gigabit Ethernet #4)
• New name (redundant-switch configuration): Not used
• New name (dual-connected configuration): Not used

Connector 2 (device name: Broadcom NetXtreme Gigabit Ethernet)
• New name (redundant-switch configuration): Public. This is a public network connected to a network switch.
• New name (dual-connected configuration): Right. This is a public network connected to a network switch. You can include the subnet number of the interface, for example, Right-10.

Connector 3 (device name: Broadcom NetXtreme Gigabit Ethernet #2)
• New name (redundant-switch configuration): Private. This is a private network used for the heartbeat between the two nodes in the cluster.
• New name (dual-connected configuration): Private. This is a private network used for the heartbeat between the two nodes in the cluster.

Connector 4 (device name: Broadcom NetXtreme Gigabit Ethernet #3)
• New name (redundant-switch configuration): Not used
• New name (dual-connected configuration): Left. This is a public network connected to a network switch. You can include the subnet number of the interface, for example, Left-20.

5. Repeat steps 3 and 4 for each network connection.


The following Network Connections window shows the new names used in a redundant-switch
environment.

6. Close the Network Connections window.


7. Repeat this procedure on node 2, using the same names that you used for node 1.
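
If you prefer to script the renaming, the same result can be achieved with PowerShell. This is a sketch for a redundant-switch configuration; the original adapter names (“Ethernet 2”, “Ethernet 3”) are placeholders and will differ on your servers, so list the adapters first:

  # List adapters with their device names to identify the physical ports
  Get-NetAdapter | Format-Table Name, InterfaceDescription, Status

  # Example renames; replace the -Name values with the adapters identified above
  Rename-NetAdapter -Name "Ethernet 2" -NewName "Public"
  Rename-NetAdapter -Name "Ethernet 3" -NewName "Private"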


Configuring the Private Network Adapter on Each Node


Repeat this procedure on each node.

To configure the private network adapter for the heartbeat connection:


1. On node 1, click Start > Control Panel > Network and Sharing Center.
The Network and Sharing Center window opens.
2. Click “Change adapter settings” on the left side of the window.
The Network Connections window opens.
3. Right-click the Private network connection (Heartbeat) and select Properties.
The Private Properties dialog box opens.
4. On the Networking tab, click the following check box:
- Internet Protocol Version 4 (TCP/IPv4)
Uncheck all other components.


5. Select Internet Protocol Version 4 (TCP/IPv4) and click Properties.


The Internet Protocol Version 4 (TCP/IPv4) Properties dialog box opens.



6. On the General tab of the Internet Protocol (TCP/IP) Properties dialog box:
a. Select “Use the following IP address.”
b. IP address: type the IP address for the Private network connection for the node you are
configuring. See “List of IP Addresses and Network Names” on page 28.

n When performing this procedure on the second node in the cluster, make sure you assign a static
private IP address unique to that node. In this example, node 1 uses 192.168.100.1 and node 2 uses
192.168.100.2.

c. Subnet mask: type the subnet mask address

n Make sure you use a completely different IP address scheme from the one used for the public
network.

d. Make sure the “Default gateway” and “Use the Following DNS server addresses” text boxes
are empty.
7. Click Advanced.
The Advanced TCP/IP Settings dialog box opens.


8. On the DNS tab, make sure no values are defined and that the “Register this connection’s
addresses in DNS” and “Use this connection’s DNS suffix in DNS registration” are not selected.
9. On the WINS tab, do the following:
t Make sure no values are defined in the WINS addresses area.
t Make sure “Enable LMHOSTS lookup” is selected.
t Select “Disable NetBIOS over TCP/IP.”
10. Click OK.
A message might be displayed stating “This connection has an empty primary WINS address.
Do you want to continue?” Click Yes.
11. Repeat this procedure on node 2, using the static private IP address for that node.
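
The address assignment and DNS registration parts of this procedure can also be scripted. The following sketch assumes the interface has already been renamed Private, uses the example heartbeat address for node 1, and assumes a /24 subnet mask; the WINS and NetBIOS settings are still changed in the dialog boxes as described above:

  # Assign the static heartbeat address on node 1 (node 2 would use 192.168.100.2)
  # /24 mask assumed; adjust to your private subnet
  New-NetIPAddress -InterfaceAlias "Private" -IPAddress 192.168.100.1 -PrefixLength 24

  # Do not register the heartbeat interface in DNS
  Set-DnsClient -InterfaceAlias "Private" -RegisterThisConnectionsAddress $false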

Configuring the Binding Order Networks on Each Node


Repeat this procedure on each node and make sure the configuration matches on both nodes.

To configure the binding order networks:


1. On node 1, click Start > Control Panel > Network and Sharing Center.
The Network and Sharing Center window opens.
2. Click “Change adapter settings” on the left side of the window.
The Network Connections window opens.
3. Press the Alt key to display the menu bar.
4. Select the Advanced menu, then select Advanced Settings.
The Advanced Settings dialog box opens.


5. In the Connections area, use the arrow controls to position the network connections in the
following order:
- For a redundant-switch configuration, use the following order:
- Public
- Private
- For a dual-connected configuration, use the following order, as shown in the illustration:
- Left
- Right
- Private
6. Click OK.
7. Repeat this procedure on node 2 and make sure the configuration matches on both nodes.

Configuring the Public Network Adapter on Each Node


Make sure you configure the IP address network interfaces for the public network adapters as you
normally would. For examples of public network settings, see “List of IP Addresses and Network
Names” on page 28.

Avid recommends that you disable IPv6 for the public network adapters, as shown in the following
illustration:


n Disabling IPv6 completely is not recommended.
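
To disable IPv6 only on the public adapter (without disabling IPv6 system-wide), you can unbind it from that connection; a sketch, assuming the adapter has been renamed Public:

  # Unbind IPv6 from the public adapter only; IPv6 itself remains available on the system
  Disable-NetAdapterBinding -Name "Public" -ComponentID ms_tcpip6

  # Verify that the binding is now disabled
  Get-NetAdapterBinding -Name "Public" -ComponentID ms_tcpip6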

Configuring the Cluster Shared-Storage RAID Disks on Each Node


Both nodes must have the same configuration for the cluster shared-storage RAID disks. When you
configure the disks on the second node, make sure the disks match the disk configuration you set up
on the first node.

n Make sure the disks are Basic and not Dynamic.

The first procedure describes how to configure disks for the Infortrend array, which contains three
disks. The second procedure describes how to configure disks for the HPE MSA array, which
contains two disks.

To configure the Infortrend RAID disks on each node:


1. Shut down the server node you are not configuring at this time.
2. Open the Disk Management tool in one of the following ways:
t Right-click This PC and select Manage. From the Tools menu, select Computer
Management. In the Computer Management list, select Storage > Disk Management.
t Right-click Start, click search, type Disk, and select “Create and format hard disk
partitions.”
The Disk Management window opens. The following illustration shows the shared storage drives
labeled Disk 1, Disk 2, and Disk 3. In this example they are offline, not initialized, and
unformatted.


3. If the disks are offline, right-click Disk 1 (in the left column) and select Online. Repeat this
action for Disk 3. Do not bring Disk 2 online.
4. If the disks are not already initialized, right-click Disk 1 (in the left column) and select Initialize
Disk.
The Initialize Disk dialog box opens.

Select Disk 1 and Disk 3 and make sure that MBR is selected. Click OK.


5. Use the New Simple Volume wizard to configure the disks as partitions. Right-click each disk,
select New Simple Volume, and follow the instructions in the wizard.

Use the following names and drive letters, depending on your storage array:

Disk      Name and Drive Letter    Infortrend S12F-R1440
Disk 1    Quorum (Q:)              10 GB
Disk 3    Database (S:)            814 GB or larger

n Do not assign a name or drive letter to Disk 2.

n If you need to change the drive letter after running the wizard, right-click the drive letter in the right column and select Change Drive Letter or Path. If you receive a warning telling you that some programs that rely on drive letters might not run correctly and asking if you want to continue, click Yes.

The following illustration shows Disk 1 and Disk 3 with the required names and drive letters for
the Infortrend S12F-R1440:


6. Verify you can access the disk and that it is working by creating a file and deleting it.
7. Shut down the first node and start the second node.
8. On the second node, bring the disks online and assign drive letters. You do not need to initialize
or format the disks.
a. Open the Disk Management tool, as described in step 2.
b. Bring Disk 1 and Disk 3 online, as described in step 3.
c. Right-click a partition, select Change Drive Letter, and enter the appropriate letter.

c You must assign the same drive letters on each node.

d. Repeat these actions for the other partitions.


9. Boot the first node.
10. Open the Disk Management tool to make sure that the disks are still online and have the correct
drive letters assigned.
At this point, both nodes should be running.
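
For reference, the disk preparation on the first node can also be done with PowerShell. This is a sketch for the Infortrend layout described above (Disk 1 as Quorum Q:, Disk 3 as Database S:); the disk numbers are examples, so confirm them with Get-Disk first, and remember that on the second node you only bring the disks online and assign the drive letters:

  # Identify the shared storage disks before making any changes
  Get-Disk

  # Bring the shared disks online and clear the read-only flag
  Set-Disk -Number 1 -IsOffline $false
  Set-Disk -Number 1 -IsReadOnly $false
  Set-Disk -Number 3 -IsOffline $false
  Set-Disk -Number 3 -IsReadOnly $false

  # Initialize as MBR and create a single formatted volume on each disk
  Initialize-Disk -Number 1 -PartitionStyle MBR
  New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter Q |
      Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum"
  Initialize-Disk -Number 3 -PartitionStyle MBR
  New-Partition -DiskNumber 3 -UseMaximumSize -DriveLetter S |
      Format-Volume -FileSystem NTFS -NewFileSystemLabel "Database"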

To configure the HPE MSA RAID disks on each node:


1. Shut down the server node you are not configuring at this time.
2. Open the Disk Management tool in one of the following ways:
t Right-click This PC and select Manage. From the Tools menu, select Computer
Management. In the Computer Management list, select Storage > Disk Management.


t Right-click Start, click search, type Disk, and select “Create and format hard disk
partitions.”
The Disk Management window opens. The following illustration shows the shared storage drives
labeled Disk 1 and Disk 2. In this example they are initialized and formatted, but offline.

3. If the disks are offline, right-click Disk 1 (in the left column) and select Online. Repeat this
action for Disk 2.
4. If the disks are not already initialized, right-click Disk 1 (in the left column) and select Initialize
Disk.
The Initialize Disk dialog box opens.

Select Disk 1 and Disk 2 and make sure that MBR is selected. Click OK.
5. Use the New Simple Volume wizard to configure the disks as partitions. Right-click each disk,
select New Simple Volume, and follow the instructions in the wizard.


Use the following names and drive letters.

Disk      Name and Drive Letter    HPE MSA 2040, MSA 2050
Disk 1    Quorum (Q:)              10 GB
Disk 2    Database (S:)            870 GB or larger

n If you need to change the drive letter after running the wizard, right-click the drive letter in the right column and select Change Drive Letter or Path. If you receive a warning telling you that some programs that rely on drive letters might not run correctly and asking if you want to continue, click Yes.

The following illustration shows Disk 1 and Disk 2 with the required names and drive letters.


6. Verify you can access the disk and that it is working by creating a file and deleting it.
7. Shut down the first node and start the second node.
8. On the second node, bring the disks online and assign drive letters. You do not need to initialize
or format the disks.
a. Open the Disk Management tool, as described in step 2.
b. Bring Disk 1 and Disk 2 online, as described in step 3.
c. Right-click a partition, select Change Drive Letter, and enter the appropriate letter.
d. Repeat these actions for the other partitions.
9. Boot the first node.
10. Open the Disk Management tool to make sure that the disks are still online and have the correct
drive letters assigned.
At this point, both nodes should be running.

Configuring the Failover Cluster


Take the following steps to configure the failover cluster:
1. Add the servers to the domain. See “Joining Both Servers to the Active Directory Domain” on
page 49.
2. Install the Failover Clustering feature. See “Installing the Failover Clustering Features” on
page 49.
3. Start the Create Cluster Wizard on the first node. See “Creating the Failover Cluster” on page 53.
This procedure creates the failover cluster for both nodes.
4. Rename the cluster networks. See “Renaming the Cluster Networks in the Failover Cluster
Manager” on page 58.


5. Rename the Quorum disk. See “Renaming the Quorum Disk” on page 60.
6. For a dual-connected configuration, add a second IP address. See “Adding a Second IP Address
to the Cluster” on page 61.
7. Test the failover. See “Testing the Cluster Installation” on page 65.

c Creating the failover cluster requires an account with particular administrative privileges. For
more information, see “Requirements for Domain User Accounts” on page 27.

Joining Both Servers to the Active Directory Domain


After configuring the network information described in the previous topics, join the two servers to
the Active Directory domain. Each server requires a reboot to complete this process. At the login
window, use the domain administrator account (see “Requirements for Domain User Accounts” on
page 27).
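
Joining each server can also be done from PowerShell; a sketch, where the domain name is a placeholder and the credential prompt expects the domain account described above:

  # Join this node to the Active Directory domain and reboot (domain name is a placeholder)
  Add-Computer -DomainName "wavd.example.com" -Credential (Get-Credential) -Restart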

Installing the Failover Clustering Features


Windows Server requires you to add the following features:
• Failover Clustering (with Failover Cluster Management Tools and Failover Cluster Module for
Windows PowerShell)

You need to install these on both servers.
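
If you prefer to add the feature from PowerShell instead of the Add Roles and Features wizard described below, the equivalent command (run on both servers) is:

  # Installs Failover Clustering plus the management tools and the PowerShell module
  Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools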

To install the Failover Clustering features:


1. Open the Server Manager window (for example, right-click This PC and select Manage).
2. In the Server Manager window, select Local Server.
3. From the menu bar, select Manage > Add Roles and Features.
The Add Roles and Features Wizard opens.
4. Click Next.
The Installation Type screen is displayed.


5. Select “Role-based or feature-based installation” and click Next.


The Server Selection screen is displayed.


6. Make sure “Select a server from the server pool” is selected. Then select the server on which you
are working and click Next.
The Server Roles screen is displayed. Two File and Storage Services are installed. No additional
server roles are needed. Make sure that “Application Server” is not selected.

7. Click Next. The Features screen is displayed.


8. Select Failover Clustering.


The Failover Clustering dialog box is displayed.

9. Make sure “Include management tools (if applicable)” is selected, then click Add Features.
The Features screen is displayed again.
10. Verify that the following two features have been added:
- Failover Cluster Management Tools
- Failover Cluster Module for Windows PowerShell

n In previous releases you were instructed to add the Failover Cluster Command Interface feature.
Microsoft has deprecated the feature and it is no longer needed by the installation.

11. Click Next.


The Confirmation screen is displayed.
12. Click Install.
The installation program starts. At the end of the installation, a message states that the
installation succeeded.


13. Click Close.


14. Repeat this procedure on the other server.

Creating the Failover Cluster


To create the failover cluster:
1. Make sure all storage devices are turned on.
2. Log in to the operating system using the cluster installation account (see “Requirements for
Domain User Accounts” on page 27).
3. On the first node, open Failover Cluster Manager. There are several ways to open this window.
For example,
a. On the desktop, right-click This PC and select Manage.
The Server Manager window opens.
b. In the Server Manager list, click Tools and select Failover Cluster Manager.
The Failover Cluster Manager window opens.
4. In the Management section, click Create Cluster.


The Create Cluster Wizard opens with the Before You Begin window.
5. Review the information and click Next (you will validate the cluster in a later step).
6. In the Select Servers window, type the simple computer name of node 1 and click Add. Then
type the computer name of node 2 and click Add. The Cluster Wizard checks the entries and, if
the entries are valid, lists the fully qualified domain names in the list of servers, as shown in the
following illustration:


c If you cannot add the remote node to the cluster, and receive an error message “Failed to
connect to the service manager on <computer-name>,” check the following:
- Make sure that the time settings for both nodes are in sync.
- Make sure that the login account is a domain account with the required privileges.
- Make sure the Remote Registry service is enabled.
For more information, see “Before You Begin the Server Failover Installation” on page 25.

7. Click Next.
The Validation Warning window opens.
8. Select Yes and click Next several times. When you can select a testing option, select Run All
Tests.
The automatic cluster validation tests begin. The tests take approximately five minutes. After
running these validation tests and receiving notification that the cluster is valid, you are eligible
for technical support from Microsoft.
The following tests display warnings, which you can ignore:
- List Software Updates (Windows Update Service is not running)
- Validate Storage Spaces Persistent Reservation
- Validate All Drivers Signed
- Validate Software Update Levels (Windows Update Service is not running)
9. In the Access Point for Administering the Cluster window, type a name for the cluster, then click
in the Address text box and enter an IP address. This is the name you created in the Active
Directory (see “Requirements for Domain User Accounts” on page 27).


If you are configuring a dual-connected cluster, you need to add a second IP address after
renaming and deleting cluster disks. This procedure is described in “Adding a Second IP Address
to the Cluster” on page 61.
10. Click Next.
A message informs you that the system is validating settings. At the end of the process, the
Confirmation window opens.


11. Review the information. Make sure “Add all eligible storage to the cluster” is selected. If all
information is correct, click Next.
The Create Cluster Wizard creates the cluster. At the end of the process, a Summary window
opens and displays information about the cluster.

You can click View Report to see a log of the entire cluster creation.
12. Click Finish.
Now when you open the Failover Cluster Manager, the cluster you created and information about
its components are displayed, including the networks available to the cluster (cluster networks).
To view the networks, select Networks in the list on the left side of the window.
The following illustration shows components of a cluster in a redundant-switch environment.
Cluster Network 1 is a public network (Cluster and Client) connecting to one of the redundant
switches, and Cluster Network 2 is a private, internal network for the heartbeat (Cluster only).

If you are configuring a dual-connected cluster, three networks are listed. Cluster Network 1 and
Cluster Network 2 are external networks connected to VLAN 10 and VLAN 20 on Avid ISIS,
and Cluster Network 3 is a private, internal network for the heartbeat.


n This configuration refers to virtual networks (VLAN) that are used with ISIS 7000/7500 shared-
storage systems. ISIS 5000/5500 and Avid NEXIS systems typically do not use multiple VLANs. You
can adapt this configuration for use in ISIS 5000/5500 or Avid NEXIS environments.
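
The validation and cluster creation steps above can also be performed with the Failover Clustering PowerShell module; a sketch using the example node names from this guide, with a placeholder static address for the cluster:

  # Run the full cluster validation tests against both nodes
  Test-Cluster -Node SECLUSTER1, SECLUSTER2

  # Create the failover cluster using the reserved name and static IP address (example values)
  New-Cluster -Name SECLUSTER -Node SECLUSTER1, SECLUSTER2 -StaticAddress 192.168.10.11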

Renaming the Cluster Networks in the Failover Cluster Manager


You can more easily manage the cluster by renaming the networks that are listed under the Failover
Cluster Manager.

To rename the networks:


1. Right-click This PC and select Manage.
The Server Manager window opens.
2. In the Failover Cluster Manager, select cluster_name > Networks.
3. In the Networks window, right-click Cluster Network 1 and select Properties.

The Properties dialog box opens.


4. Click in the Name text box, and type a meaningful name, for example, a name that matches the
name you used in the TCP/IP properties. For a redundant-switch configuration, use Public, as
shown in the following illustration. For a dual-connected configuration, use Left. For this
network, keep the option “Allow clients to connect through this network.”


n The installer will ask for this name later in the installation process, so make a note of the name. Avid recommends that you use the suggested names to make it easier for someone to upgrade or troubleshoot the system at a later date.

5. Click OK.
6. If you are configuring a dual-connected cluster configuration, rename Cluster Network 2, using
Right. For this network, keep the option “Allow clients to connect through this network.” Click
OK.
7. Rename the other network Private. This network is used for the heartbeat. For this private
network, leave the option “Allow clients to connect through this network” unchecked. Click OK.

The following illustration shows networks for a redundant-switch configuration.

The following illustration shows networks for a dual-connected configuration.
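
The same renaming can be done from PowerShell; a sketch for a redundant-switch configuration (for a dual-connected configuration, rename the two public networks Left and Right instead):

  # Rename the cluster networks to match the adapter names used earlier
  (Get-ClusterNetwork -Name "Cluster Network 1").Name = "Public"
  (Get-ClusterNetwork -Name "Cluster Network 2").Name = "Private"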


Renaming the Quorum Disk


You can more easily manage the cluster by renaming the disk that is used as the Quorum disk.

To rename the Quorum disk:


1. In the Failover Cluster Manager, select cluster_name > Storage > Disks.
The Disks window opens. Check to make sure the smaller disk is labeled “Disk Witness in
Quorum.” This disk most likely has the number 1 in the Disk Number column.

2. Right-click the disk assigned to “Disk Witness in Quorum” and select Properties.

The Properties dialog box opens.


3. In the Name text box, type a name for the cluster disk. In this case, Cluster Disk 2 is the Quorum disk, so type Quorum as the name.


4. Click OK.
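
From PowerShell, the equivalent is to rename the physical disk resource that is acting as the witness; the resource name “Cluster Disk 2” below is an example and may differ on your cluster:

  # List the cluster resources to identify the disk shown as "Disk Witness in Quorum"
  Get-ClusterResource

  # Rename that disk resource
  (Get-ClusterResource -Name "Cluster Disk 2").Name = "Quorum"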

Adding a Second IP Address to the Cluster


If you are configuring a dual-connected cluster, you need to add a second IP address for the failover
cluster.

To add a second IP address to the cluster:


1. In the Failover Cluster Manager, select cluster_name > Networks.
Make sure that Cluster Use is enabled as “Cluster and Client” for both networks.

If a network is not enabled, right-click the network, select Properties, and select “Allow clients to
connect through this network.”


2. In the Failover Cluster Manager, select the failover cluster by clicking on the Cluster name in the
left column.

3. In the Actions panel (right column), select Properties in the Name section.


The Properties dialog box opens.


4. In the General tab, do the following:


a. Click Add.
b. Type the IP address for the other network.
c. Click OK.
The General tab shows the IP addresses for both networks.

5. Click Apply.
A confirmation box asks you to confirm that all cluster nodes need to be restarted. You will
restart the nodes later in this procedure, so select Yes.


6. Click the Dependencies tab and check if the new IP address was added with an OR conjunction.

If the second IP address is not there, click “Click here to add a dependency.” Select “OR” from
the list in the AND/OR column and select the new IP address from the list in the Resource
column.
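
The dependency check can also be done from PowerShell; a sketch, where “Cluster Name” is the usual name of the core network name resource and the two IP address resource names are placeholders that will differ on your cluster:

  # Show the current dependency expression for the cluster network name
  Get-ClusterResourceDependency -Resource "Cluster Name"

  # Example of setting an OR dependency on both cluster IP address resources
  Set-ClusterResourceDependency -Resource "Cluster Name" -Dependency "[Cluster IP Address] or [Cluster IP Address 2]"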

Testing the Cluster Installation


At this point, test the cluster installation to make sure that the failover process is working.

To test the failover:


1. Make sure both nodes are running.
2. Determine which node is the active node (the node that owns the quorum disk). Open the
Failover Cluster Manager and select cluster_name > Storage > Disks. The server that owns the
Quorum disk is the active node.
3. Reboot the active node (node 1).
4. Monitor the activity in the Failover Cluster Manager to ensure that the second node (node 2) becomes the active node and that all resources are online.
5. After node 1 is back up and fully online, reboot node 2.
6. Monitor the activity in the Failover Cluster Manager to ensure that node 1 becomes the active node and that all resources are online.
7. Make sure that node 2 rejoins the cluster as expected.
Configuration of the failover cluster on all nodes is now complete and the cluster is fully
operational. You can now install the Interplay Engine software.
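
During this test, you can also check from PowerShell which node currently owns the cluster groups (the owner of the group containing the quorum disk is the active node):

  # Show each cluster group with its current owner node and state
  Get-ClusterGroup | Format-Table Name, OwnerNode, State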

3 Installing the Interplay | Engine for a
Failover Cluster

After you set up and configure the cluster, you need to install the Interplay Engine software on both
nodes. The following topics describe installing the Interplay Engine and other related tasks:
• Disabling Any Web Servers
• Installing the Interplay | Engine on the First Node
• Installing the Interplay | Engine on the Second Node
• Bringing the Interplay | Engine Online
• After Installing the Interplay | Engine
• Creating an Interplay | Production Database
• Testing the Complete Installation
• Installing a Permanent License
• Updating a Clustered Installation (Rolling Upgrade)
• Uninstalling the Interplay Engine or Archive Engine on a Clustered System

The tasks in this chapter require local administrator rights to the Interplay Engine servers. Unlike the
process for “Requirements for Domain User Accounts” on page 27, domain administrator privileges
are not required.

Disabling Any Web Servers


The Interplay Engine uses an Apache web server that can only be registered as a service if no other web server (for example, IIS) is serving port 80 (or 443). Stop and disable, or uninstall, any other HTTP services before you start the installation of the server. You must perform this procedure on both nodes.

n In a standard installation, you should not be required to take any action as IIS is disabled by default
in Windows Server 2012 R2 and Windows Server 2016.
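
To confirm on each node that nothing is already listening on the ports the Apache web server needs, you can run a quick check like the following sketch:

  # Report any process already listening on port 80 or 443
  Get-NetTCPConnection -State Listen -LocalPort 80, 443 -ErrorAction SilentlyContinue |
      Select-Object LocalAddress, LocalPort, OwningProcess

  # Check whether the IIS web service (W3SVC) is present and running
  Get-Service -Name W3SVC -ErrorAction SilentlyContinue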

Installing the Interplay | Engine on the First Node


The following sections provide procedures for installing the Interplay Engine on the first node. For a
list of example entries, see “List of IP Addresses and Network Names” on page 28.
• “Preparation for Installing on the First Node” on page 67
• “Bringing the Shared Database Drive Online if Necessary” on page 67
• “Installing the Interplay Engine Software” on page 70
• “Checking the Status of the Cluster Role” on page 73

• “Adding a Second IP Address (Dual-Connected Configurations only)” on page 75


• “Changing the Resource Name of the Avid Workgroup Server (if applicable)” on page 79

Preparation for Installing on the First Node


You are ready to start installing the Interplay Engine on the first node. During setup you must enter
the following cluster-related information:
• Microsoft failover cluster virtual IP address and virtual host name. The virtual host name and IP address are how users and services connect to the cluster. For a list of example names, see “List of IP Addresses and Network Names” on page 28.
• Subnet Mask: the subnet mask on the local network.
• Public Network: the name of the public network connection.
- For a redundant-switch configuration, type Public, or whatever name you assigned in
“Renaming the Local Area Network Interface on Each Node” on page 35.
- For a dual-connection configuration, type Left-subnet or whatever name you assigned in
“Renaming the Cluster Networks in the Failover Cluster Manager” on page 58. For a dual-
connection configuration, you set the other public network connection after the installation.
See “Checking the Status of the Cluster Role” on page 73.
To check the public network connection on the first node, open the Networks view in the
Failover Cluster Manager and look up the name there.
• Shared Drive: the letter for the shared drive that holds the database. Use S: for the shared drive
letter. You need to make sure this drive is online on the first node. See “Bringing the Shared
Database Drive Online if Necessary” on page 67.
• Cluster Account User and Password (Server Execution User): the domain account that is used to
run the clustered engine. See “Before You Begin the Server Failover Installation” on page 25.

n When installing the Interplay Engine for the first time on a machine with a failover cluster, you are
asked to verify the type of installation: cluster or single-server. When you install a cluster, the
installation on the second node reuses the configuration information from the first node without
allowing you to change the cluster-specific settings. In other words, it is not possible to change the
cluster configuration settings without uninstalling the Interplay Engine.

Bringing the Shared Database Drive Online if Necessary


At this point in the installation process the database drive should be under cluster control and the S:
drive should be online and available on one of the nodes. This node will be referred to as “the first
node” in the following installation procedures.

If the shared database drive is not under cluster control, do the following:
• Use the first procedure to attempt to add the shared database drive to the cluster via the Cluster Manager.
• If that is not successful, use the second procedure to make the S: drive available in Windows Disk Management. The shared database drive will then not be under cluster control when the Engine installer starts, but the Engine installer will try to add it to the cluster.


Using the Cluster Manager to add the shared database drive to the cluster:
1. In the Failover Cluster Manager select cluster_name > Storage > Disks.
2. Click on “Add Disk” in the Actions tab.
3. In the “Add Disks to Cluster” dialog, the database disk should be offered (it will have a Resource Name like “Cluster Disk <number>”) and should already be checked, as shown in the following illustration. Click OK to add it.

If an Infortrend storage array is used, the dialog might offer 2 disks. In that case only select the
database disk (which is the larger one).
4. The disk will then be displayed in the Failover Cluster Manager and it should already be online.
If it is not online, select it and click “Bring Online” in the Actions menu.
Note that if this procedure does not bring the shared database drive online, use the alternate
procedure described below.
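
The Add Disk step above has a PowerShell equivalent, which adds any shared disk that is visible to the nodes but not yet under cluster control; a sketch:

  # Add available shared disks (for example, the Database disk) to the cluster
  Get-ClusterAvailableDisk | Add-ClusterDisk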

Using the Windows Disk Management to bring the shared database drive online:
1. Note that this procedure is only necessary if the first procedure did not bring the shared database
drive online.
2. On the first node, open Disk Management by doing one of the following:
t Right-click This PC and select Manage. From the Tools menu, select Computer
Management. In the Computer Management list, select Storage > Disk Management.
t Right-click Start, click search, type Disk, and select “Create and format hard disk
partitions.”
The Disk Management window opens. The following illustration shows the shared storage drives
labeled Disk 1 and Disk 2. Disk 1 is online, and Disk 2 is offline.


3. Right-click Disk 2 and select Online.

4. Make sure the drive letter is correct (S:) and the drive is named Database. If not, you must
change it now. Right-click the disk name and letter (right-column) and select Change Drive
Letter or Path.


If you attempt to change the drive letter, you receive a warning telling you that some programs that rely on drive letters might not run correctly and asking if you want to continue. Click Yes.

Installing the Interplay Engine Software


The Interplay Engine installer does not offer you the ability to return to a previous window. If you
make a mistake and click Next before correcting the error, you must cancel the installation process
and start over.

Also be aware that the Avid software installer does not verify values after you enter them. If you
enter the wrong subnet for the cluster for example, the installer will allow you to proceed. The one
exception to this rule is the password for the Server Execution User. The installer prompts you for the
SEU password — twice, and it verifies that you entered the same password in both instances.

To install the Avid Interplay Engine:


1. Close all other applications before proceeding with the installation.
2. Download and unzip the MediaCentral Production Management installer from the Avid
Download Center.
In the following step, the “first node” is the node on which the database drive (S:) is online.
3. On the first node, launch the Avid Interplay installer by double-clicking autorun.exe.
A start screen appears that provides you with options to install multiple products.
4. Select the following from the Interplay Server Installer Main Menu:
Servers > Interplay Engine > Interplay Engine

n If you are installing MediaCentral Production Management v2018.11 or later on Windows Server
2012, the installer prompts you to install one or more prerequisite Windows components. Follow the
prompts to install these prerequisites and reboot the server when prompted.

The installer opens a new PowerShell command window and displays the Production
Management Engine Installer Welcome screen. During the installation, the command window
displays information about the installation process.

c Avoid marking (highlighting) any text in the PowerShell command window. If you mark any
text in this window, you will pause the installation process. If you accidentally mark text, you
must click once anywhere inside the command window to resume the installation.

5. Read the information in Welcome screen and click Next.


The License Agreement dialog box opens.
6. Read the license agreement and click Accept to continue.


The Specify Installation Mode dialog box opens.

7. Select the Cluster option and click Next.


8. Type the virtual host name that you assigned to the Avid Workgroup Server and click Next.
For example: wavd-vie
This is the public name that is used by clients to connect to the server. For a list of examples, see
“List of IP Addresses and Network Names” on page 28.
9. Type the virtual IPv4 address that you assigned to the Avid Workgroup Server and click Next.
For example: 192.168.10.20
This is the Interplay Engine service IP Address, not the failover cluster IP address. For a list of
examples, see “List of IP Addresses and Network Names” on page 28.
For a dual-connected configuration, you set the other public network connection after the
installation. See “Adding a Second IP Address (Dual-Connected Configurations only)” on
page 75.
10. Type the IPv4 subnet mask of the network IP address that you entered in the previous step and
click Next.
For example: 255.255.255.0
11. Type the name of the public network that you assigned to the cluster and click Next. This must
be the cluster resource name.
For example: Public
If necessary, you can verify the name of the public network on the first node through the
Networks view of the Failover Cluster Manager. For more information, see “Renaming the Local
Area Network Interface on Each Node” on page 35 and “Renaming the Cluster Networks in the
Failover Cluster Manager” on page 58.
12. Type the letter of the shared drive that is used to store the Production database, followed by a
colon, and click Next. In most cases, this will be the S: drive.
For example: S:
13. In the following screen, the installer prompts you to specify the path in which to install the
Production software. Accept the default path and click Next.
The default path is: C:\Program Files\Avid\Production Management Engine
14. Type the name of your domain and the name of the cluster account (Server Execution User) used
to run the Avid Interplay Engine, and then click Next.
For example: wavd\nxnuser


The Server Execution User is the Windows domain user that runs the Interplay Engine. This
account is automatically added to the Local Administrators group on the server. See “Before You
Begin the Server Failover Installation” on page 25.

c When typing the domain name do not use the full DNS name such as mydomain.company.com,
because the DCOM part of the server will be unable to start. You should use the NetBIOS
name, for example, mydomain.

15. Type the password for the cluster account (Server Execution User) specified above and click
Next.
16. Retype the password for the cluster account and click Next.
The Production installer verifies that this password matches the password that you entered in the
previous step. If it does not match, you are returned to the previous step where you must enter
and reconfirm your password again.
17. In the following screen, the installer prompts you to specify the path for the Production database.
Accept the default path and click Next.
The default path is: S:\Workgroup_Databases
This folder must reside on the shared drive that is owned by the cluster role of the server. You
must use this shared drive resource so that it can be monitored and managed by the Cluster
service. The drive must be assigned to the physical drive resource that is mounted under the same
drive letter on both nodes.
18. The installer asks if you want to enable the Interplay SNMP (Simple Network Management
Protocol) service.
t If you do not need to enable SNMP, keep the default selection of No and click Next.
t If you need to enable SNMP, click the Yes button and click Next.
The installer verifies that the local SNMP service is installed on the engine. The installer
does not verify that the SNMP service is configured or running, only that it is installed. If
you click Yes and Windows SNMP is not installed, a second window appears to confirm that
you still wish to install the Interplay SNMP service.
For more information on configuring Production Management with SNMP, contact Avid
Customer Care.
19. The installer asks if you want to install the Sentinel USB Dongle Driver.
t If your Production Engine is licensed using a software license only (no dongle), keep the
default selection of No and click Next.
t If your Production Engine is licensed using USB dongles that are attached directly to each
Production Engine, click the Yes button and click Next.
The USB driver is installed automatically for you during the Production Engine installation
process.
20. The installer presents a confirmation window that details the information that you specified in
the steps above.
t If you see an error, click the Cancel button to exit the installer.
In this case, you must restart the installation process from the beginning.
t If the information is correct, click the Start button to begin the installation process.


As shown in the following illustration, the PowerShell command window that was opened
when you first initiated the installation process begins to provide feedback about the
installation tasks.

If you see any errors during the installation process, you can review the logs under
<drive>\<path to Production installer>\Engineinstaller\Logs for more information.

n If the system displays the following warning message, you can ignore the message and continue with
the installation.

WARNING: The properties were stored, but not all changes will take effect until Avid
Workgroup Disk is taken offline and then online again.
21. At the end of the installation process, you should see an “Installation finished” message as in the
following illustration.

Click inside the command window and press any key to close the window.

Checking the Status of the Cluster Role


After installing the Interplay Engine, you should check the status of the resources in the Avid
Workgroup Server cluster role. This is an optional, but recommended step.


To check the status of the cluster role:


1. After the installation is complete, right-click My Computer and select Manage.
The Server Manager window opens.
2. In the Server Manager list, open Features > Failover Cluster Manager > cluster_name.
3. Click Roles.
The Avid Workgroup Server role is displayed.
4. Click the Resources tab.
The list of resources should look similar to those in the following illustration.

The Avid Workgroup Disk resources, Server Name, and File Server should be online and all
other resources offline. S$ and WG_Database$ should be listed in the Shares tab.
Take one of the following steps:
- If you are setting up a redundant-switch configuration, leave this node running so that it
maintains ownership of the cluster role and proceed to “Installing the Interplay | Engine on
the Second Node” on page 81.
- If you are setting up a dual-connected configuration, proceed to “Adding a Second IP
Address (Dual-Connected Configurations only)” on page 75.

n Avid does not recommend starting the server at this stage, because it is not installed on the other
node and a failover would be impossible.
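
You can confirm the same state from PowerShell by listing the resources in the Avid Workgroup Server role:

  # List the resources of the Avid Workgroup Server cluster role with their state
  Get-ClusterGroup -Name "Avid Workgroup Server" | Get-ClusterResource |
      Format-Table Name, ResourceType, State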


Adding a Second IP Address (Dual-Connected Configurations only)


If you are setting up a dual-connected configuration, you need to use the Failover Cluster Manager to
add a second IP address.

To add a second IP address:


1. Right-click My Computer and select Manage.
The Server Manager window opens.
2. In the Server Manager list, open Features > Failover Cluster Manager > cluster_name.
3. Select Avid Workgroup Server and click the Resources tab.
4. Bring the Name, IP Address, and File Server resources offline by doing one of the following:
- Right-click the resource and select “Take Offline.”
- Select all resources and select “Take Offline” in the Actions panel of the Server Manager
window.
The following illustration shows the resources offline.

5. Right-click the Name resource and select Properties.


The Properties dialog box opens.


c Note that the Resource Name is listed as “Avid Workgroup Name.” Make sure to check the
Resource Name again after you add the second IP address and bring the resources back online (step 12).

If the Kerberos Status is offline, you can continue with the procedure. After bringing the server
online, the Kerberos Status should be OK.
6. Click the Add button below the IP Addresses list.
The IP Address dialog box opens.

The second sub-network and a static IP Address are already displayed.


7. Type the second Interplay Engine service IP address. See “List of IP Addresses and Network
Names” on page 28. Click OK.
The Properties dialog box is displayed with two networks and two IP addresses.


8. Check that you entered the IP address correctly, then click Apply.
9. Click the Dependencies tab and check that the second IP address was added, with an OR in the
AND/OR column.

10. Click OK.


The Resources screen should look similar to the following illustration.


11. Bring the Name, both IP addresses, and the File Server resource online by doing one of the
following:
- Right-click the resource and select “Bring Online.”
- Select the resources and select “Bring Online” in the Actions panel.
The following illustration shows the resources online.

12. Right-click the Name resource and select Properties.


The Resource Name must be listed as “Avid Workgroup Name.” If it is not, see “Changing the
Resource Name of the Avid Workgroup Server (if applicable)” on page 79.
13. Leave this node running so that it maintains ownership of the cluster role and proceed to
“Installing the Interplay | Engine on the Second Node” on page 81.
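As an optional cross-check after step 9, you can display the same dependency information from
PowerShell. This is a sketch only; it assumes the resource is named “Avid Workgroup Name,” as in
this guide.

Import-Module FailoverClusters

# The dependency expression should combine the two IP address resources with OR
Get-ClusterResourceDependency -Resource "Avid Workgroup Name"

# List the IP Address resources in the role and their configured addresses
Get-ClusterGroup "Avid Workgroup Server" | Get-ClusterResource |
    Where-Object { $_.ResourceType -like "IP Address" } |
    ForEach-Object { $_ | Get-ClusterParameter -Name Address, SubnetMask }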

Changing the Resource Name of the Avid Workgroup Server (if applicable)
If you find that the resource name of the Avid Workgroup Server application is not “Avid Workgroup
Name” (as displayed in the properties for the Server Name), you need to change the name in the
Windows registry.

To change the resource name of the Avid Workgroup Server:


1. On the node hosting the Avid Workgroup Server (the active node), open the registry editor and
navigate to the key HKEY_LOCAL_MACHINE\Cluster\Resources.

c If you are installing a dual-connected cluster, make sure to edit the “Cluster” key. Do not edit
other keys that include the word “Cluster,” such as the “0.Cluster” key.

2. Browse through the GUID named subkeys looking for the one subkey where the value “Type” is
set to “Network Name” and the value “Name” is set to <incorrect_name>.
3. Change the value “Name” to “Avid Workgroup Name.”
4. Do the following to shut down the cluster:


c Make sure you have edited the registry entry before you shut down the cluster.

a. In the Failover Cluster Manager tree (left panel) select the cluster. In the following example,
the cluster name is muc-vtlasclu1.VTL.local.

b. In the context menu or the Actions panel on the right side, select “More Actions > Shutdown
Cluster.”

5. Do the following to bring the cluster online:


a. In the Failover Cluster Manager tree (left panel) select the cluster.
b. In the context menu or the Actions panel on the right side, select “Start Cluster.”
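If you prefer to inspect and correct the value from PowerShell rather than the registry editor, the
following sketch shows the idea. It only reads and writes the same Name value described above; the
<GUID> placeholder stands for the subkey you identify, and you must still shut down and restart the
cluster afterward (you can also do that with the Stop-Cluster and Start-Cluster cmdlets).

# List every Network Name resource entry in the cluster hive with its current Name value
$base = 'HKLM:\Cluster\Resources'
Get-ChildItem $base | ForEach-Object {
    $p = Get-ItemProperty $_.PSPath
    if ($p.Type -eq 'Network Name') {
        [pscustomobject]@{ Key = $_.PSChildName; Name = $p.Name }
    }
}

# After identifying the subkey with the incorrect name, correct it
# (replace <GUID> with the subkey name found above)
Set-ItemProperty -Path "$base\<GUID>" -Name Name -Value 'Avid Workgroup Name'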


Installing the Interplay | Engine on the Second Node


To install the Interplay Engine on the second node:
1. Leave the first node running so that it maintains ownership of the cluster role.

c Do not move the cluster role to the second node, and do not shut down the first node while the
second node is running, until the installation is completed on the second node.

c Do not initiate a failover until installation is completed on the second node and you have created
an Interplay database. See “Testing the Complete Installation” on page 84.

2. Perform the installation procedure for the second node as described in “Installing the Interplay |
Engine on the First Node” on page 66 and note the following differences:
- When you are prompted to select either Cluster or Standalone mode, select Cluster.
If you select the Standalone option and you are on a cluster, the Engine installer detects that
you have a partially installed cluster configuration and prevents you from proceeding with
the single-server (standalone) installation.
- After you click Next in the Specify Installation Mode window, the installer pulls all
configuration information from the first node and displays a confirmation window as shown
in the following illustration.

3. Review the information and click Continue.


4. The installer presents the same installation dialog boxes that you saw on the first node. Enter the
required information and allow the installation to proceed.

c Make sure that you specify the same values for the second node as you entered on the first node.
Using different values results in a corrupted installation.

c If you receive a message that the Avid Workgroup Name resource was not found, you need to
check the registry. See “Changing the Resource Name of the Avid Workgroup Server (if
applicable)” on page 79.


Bringing the Interplay | Engine Online


To bring the Interplay Engine online:
1. Open the Failover Cluster Manager and select cluster_name > Roles.
The Avid Workgroup Server role is displayed.

2. Right-click on the Avid Workgroup Server and select Start Role.


After a few moments, all resources are started on the node that owns the role, as shown in the following
illustration. To view the resources, click the Resources tab.
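If you prefer the command line, the following optional PowerShell sketch starts the role and lists the
resource states; it assumes the role name used in this guide.

Import-Module FailoverClusters

# Start the Avid Workgroup Server role and confirm that every resource comes online
Start-ClusterGroup -Name "Avid Workgroup Server"
Get-ClusterGroup "Avid Workgroup Server" | Get-ClusterResource | Format-Table Name, State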


After Installing the Interplay | Engine


After you install the Interplay Engine, install the following applications on both nodes:
• Interplay Access: From the Interplay Server Installer Main Menu, select Servers > Avid Interplay
Engine > Avid Interplay Access.
• Avid shared-storage client (if not already installed).

n If you cannot log in or connect to the Interplay Engine, make sure the database share
WG_Database$ exists. You might get the following error message when you try to log in: “The
network name cannot be found (0x80070043).”

Then create an Interplay database, as described in “Creating an Interplay | Production Database” on
page 83.

Creating an Interplay | Production Database


Before testing the failover cluster, you need to create a database. The following procedure describes
basic information about creating a database. For complete information, see the Interplay | Engine
and Interplay | Archive Engine Administration Guide.

To create an Interplay database:


1. Start the Interplay Administrator and log in.

n If this is a completely fresh installation (without a pre-existing database), then the only database user is
"Administrator" with an empty password.

2. In the Database section of the Interplay Administrator window, click the Create Database icon.
The Create Database view opens.
3. In the New Database Information area, leave the default “AvidWG” in the Database Name text
box. For an archive database, leave the default “AvidAM.” These are the only two supported
database names.
4. Type a description for the database in the Description text box, such as “Main Production
Server.”
5. Select “Create default Avid Interplay structure.”
After the database is created, a set of default folders within the database is visible in Interplay
Access and other Interplay clients. For more information about these folders, see the
Interplay | Access User’s Guide.
6. Keep the root folder for the New Database Location (Meta Data).
The metadata database must reside on the Interplay Engine server.
7. Keep the root folder for the New Data Location (Assets).
8. Click Create to create directories and files for the database.
The Interplay database is created.


Testing the Complete Installation


After you complete all the previously described steps, you are ready to test the installation.
Familiarize yourself with the Failover Cluster Manager and review the different failover-related
settings.

n If you want to test the Microsoft cluster failover process again, see “Testing the Cluster Installation”
on page 65.

To test the complete installation:


1. Start Interplay Access and add some files to the database.
At this time you can either use the default license for testing, or install a permanent license using
the process described in “Installing a Permanent License” on page 84.
2. In the Failover Cluster Manager, initiate a failover by selecting Avid Workgroup Server and then
selecting Move > Best Possible Node from the Actions menu. Select another node.
After the move is complete, all resources should remain online and the target node should be the
current owner.
You can also simulate a failure by right-clicking a resource and selecting More Actions >
Simulate Failure.

n A failure of a resource does not necessarily initiate failover of the complete Avid Workgroup Server
role.

3. You might also want to experiment by terminating the Interplay Engine manually using the
Windows Task Manager (NxNServer.exe). This is also a good way to get familiar with the
failover settings which can be found in the Properties dialog box of the Avid Workgroup Server
and on the Policies tab in the Properties dialog box of the individual resources.
4. Look at the related settings of the Avid Workgroup Server. If you need to change any
configuration files, make sure that the Avid Workgroup Disk resource is online; the configuration
files can be found on the resource drive in the Workgroup_Data folder.
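You can also drive the failover test from PowerShell. The following sketch moves the role to another
node and then lists the resources so you can confirm they came back online; it assumes the role name
used in this guide.

Import-Module FailoverClusters

# Check which node currently owns the role
(Get-ClusterGroup -Name "Avid Workgroup Server").OwnerNode

# Move the role to the best possible node and verify that all resources are online again
Move-ClusterGroup -Name "Avid Workgroup Server"
Get-ClusterGroup "Avid Workgroup Server" | Get-ClusterResource |
    Format-Table Name, State, OwnerNode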

Installing a Permanent License


During Interplay Engine installation a temporary license for one user is activated automatically so
that you can administer and install the system. There is no time limit for this license.

Starting with Interplay Production v3.3, new licenses for Interplay components are managed through
software activation IDs. In previous versions, licenses were managed through hardware application
keys (dongles). Dongles continue to be supported for existing licenses, but new licenses require
software licensing.

A set of permanent licenses is provided by Avid in one of two ways:


• As a software license
For a clustered engine, Avid supplies a single license that should be used for both nodes in the
cluster. If you are licensing a clustered engine, follow the published procedure to activate the
license on each node of the cluster, using the same System ID and Activation ID for each node.


There are no special requirements to activate or deactivate a node before licensing. Log in
directly to each node and use the local version of the Avid License Control application or Avid
Application Manager (for Interplay Engine v3.8 and later) to install the license.
• As a file with the extension .nxn on a USB flash drive or another delivery mechanism
For hardware licensing (dongle), these permanent licenses must match the Hardware ID of the
dongle. After installation, the license information is stored in a Windows registry key. Licenses
for an Interplay Engine failover cluster are associated with two Hardware IDs.

To install a permanent license through software licensing:


t Use the Avid License Control application or Avid Application Manager (for Interplay Engine
v3.8 and later).
See “Software Licensing for Interplay Production” in the Interplay | Production Software
Installation and Configuration Guide.

To install a permanent license by using a dongle:


1. Make sure a dongle is connected to a USB port on each server.
2. Make a folder for the license file on the root directory (C:\) of an Interplay Engine server or
another server. For example:
C:\Interplay_Licenses
3. Connect the USB drive containing the license file and access the drive:
a. Double-click the computer icon on the desktop.
b. Double-click the USB flash drive icon.
4. Copy the license file (*.nxn) into the new folder you created.

n You can also import the license file directly from the USB flash drive. The advantage of copying the
license file to a server is that you have easy access to it if you should ever need it in the future.

5. Start and log in to the Interplay Administrator.


6. In the Server section of the Interplay Administrator window, click the Licenses icon.
7. Click the Import license button.
8. Browse for the *.nxn file.
9. Select the file and click Open.
You see information about the permanent license in the License Types area.

For more information on managing licenses, see the Interplay | Engine and Interplay | Archive
Engine Administration Guide.

Updating a Clustered Installation (Rolling Upgrade)


A major benefit of a clustered installation is that you can perform “rolling upgrades.” You can keep
one node in production while you update the installation on the other node, then move the cluster
role over and update the second node as well.

n For information about updating specific versions of the Interplay Engine and a cluster, see the Avid
Interplay ReadMe. The ReadMe describes an alternative method of updating a cluster, in which you
lock and deactivate the database before you begin the update.


Starting in 2018.11, there is no longer a Typical installation mode. When updating a clustered
installation, Avid recommends that you use the default settings presented by the installer. These
settings represent the values that your system administrator entered during the original installation or
previous upgrade. Avid highly recommends that some settings remain unchanged, such as the name
of the database folder. However, if you need to change other settings such as your cluster account
(Server Execution User) or your SNMP selection, now would be a good time to do so. If you decide
to change any settings, you must make sure that the same information is entered on both nodes.

Make sure you follow the procedure in the order shown; otherwise, you might end up with a corrupted
installation.

To update a cluster:
1. On either node, determine which node is active:
a. Right-click My Computer and select Manage. The Server Manager window opens.
b. In the Server Manager list, open Features and click Failover Cluster Manager.
c. Click Roles.
d. On the Summary tab, check the name of the Owner Node.

Consider this the active node or the first node.


2. Run the Interplay Engine installer to update the installation on the non-active node (second
node). Select the values suggested by the installer to reuse values set during the previous
installation on that node.

n Starting with Interplay Production v2018.11, you are not required to restart the node following the
software upgrade. However if you have another reason to reboot your upgraded node at this time, it
is safe to do so.

c Do not move the Avid Workgroup Server to the second node yet.

3. Make sure that the first node is active. Run the Interplay Engine installer to update the installation on
the first node. Accept the parameters suggested by the installer so that all values are reused.
4. The installer displays a dialog box with the following message:
“To proceed with the installation, the installer will now trigger a failover to the offline node.”
5. Click OK in the dialog box to continue.

After completing the above steps, your entire clustered installation is updated to the new version.
Should you encounter any complications or face a specialized situation, contact Avid Support as
instructed in “If You Need Help” on page 8.
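As a shortcut for step 1, you can also read the owner node from PowerShell on either node; this
optional sketch assumes the role name used in this guide.

Import-Module FailoverClusters

# The OwnerNode of the Avid Workgroup Server role is the active (first) node
Get-ClusterGroup -Name "Avid Workgroup Server" | Select-Object Name, OwnerNode, State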


Uninstalling the Interplay Engine or Archive Engine on a Clustered System

To uninstall the Avid Interplay Engine or the Avid Archive Production Engine, use the Avid Interplay
Engine uninstaller, first on the inactive node, then on the active node. Note that compared to previous
releases, the uninstall procedure is simpler starting with Interplay 2018.11.

To uninstall the Interplay Engine or Archive Production Engine:


1. If you plan to reinstall the Interplay Engine or Archive Engine and reuse the existing database,
create a complete backup of the AvidWG (or AvidAM) database and the _InternalData database
in S:\Workgroup_Databases. Store the backup folders/files in a safe location (that is, not on the
cluster shared drive). For information about creating a backup, see “Creating and Restoring
Database Backups” in the Interplay | Engine and Interplay | Archive Engine Administration
Guide.
2. Uninstall the Interplay Engine or Archive Engine software on the offline node. Use Programs
and Features to perform the uninstall as you would install any other software package.
Select the appropriate item from Programs and Features:
- For the Interplay Engine, select Avid Production Management Engine
- For the Archive Engine, select Avid Archive Production Engine
3. In the Failover Cluster Manager, select the “Avid Workgroup Server” role and select Remove
from the Actions menu. It does not matter which node you do this from.
This removes the Interplay Engine from the cluster.
4. Uninstall the Interplay Engine or Archive Engine software on the online node as described
above.

4 Automatic Server Failover Tips and Rules

This chapter provides some important tips and rules to use when configuring the automatic server
failover.

Don't Access the Interplay Engine Through Individual Nodes

Don't access the Interplay Engine database directly through the individual machines (nodes) of the
cluster. Use the virtual network name or IP address that has been assigned to the Interplay Engine
resource group (see “List of IP Addresses and Network Names” on page 28).

Do Not Install the Interplay Engine Server on a Shared Disk

The Interplay Engine must be installed on the local disk of the cluster nodes and not on a shared
resource, because local changes are necessary on both machines. Also, with independent
installations you can later use a rolling upgrade approach, upgrading each node individually without
affecting the operation of the cluster. The Microsoft documentation also strongly advises against
installing on shared disks.

Do Not Edit the Registry While the Server is Offline

If you edit the registry on the offline node, or on the online node while the Avid Workgroup Monitor
is offline, you will lose your changes. This is an easy mistake to make, because it is easy to forget the
implications of registry replication. Remember that the registry is restored by the resource monitor
before the process is brought online, which wipes out any changes you made while the resource (the
server) was offline. Only changes made while the resource is online are kept.

Consider Disabling Failover When Experimenting

If you are performing changes that could make the Avid Interplay Engine fail, consider disabling
failover. The default behavior is to restart the server twice (threshold = 3) and then initiate the
failover, with the entire procedure repeating several times before final failure. This can take quite a
while.

Changing the CCS

You can change your CCS using the Interplay Administrator tool. Alternatively, if you cannot log
in to the database, you can use the following procedure to change the CCS through registry settings.

If you specify the wrong Central Configuration Server (CCS), you can change the setting later on the
server machine in the Windows Registry under:

(64-bit OS) HKEY_LOCAL_MACHINE\Software\Avid Technology\Workgroup\DatabaseServer

The string value CMS specifies the server. Make sure to set CMS to a valid entry while the
Interplay Engine is online; otherwise, your registry changes will not take effect. After the
registry is updated, stop and restart the server using the Failover Cluster Manager.
Specifying an incorrect CCS can prevent login. See “Troubleshooting Login Problems” in the
Interplay | Engine and Interplay | Archive Engine Administration Guide.

For more information, see “Understanding the Central Configuration Server” in the
Interplay | Engine and Interplay | Archive Engine Administration Guide.
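If you change the CMS value from PowerShell instead of the registry editor, the same rules apply:
make the change on the node where the resource is online, and while the Interplay Engine is online.
The following sketch is illustrative only; NEW-CCS-HOSTNAME is a placeholder for your actual
CCS server name.

# Read the currently configured CCS (the CMS string value)
$key = 'HKLM:\SOFTWARE\Avid Technology\Workgroup\DatabaseServer'
Get-ItemProperty -Path $key -Name CMS

# Point the engine at a different CCS (NEW-CCS-HOSTNAME is a placeholder)
Set-ItemProperty -Path $key -Name CMS -Value 'NEW-CCS-HOSTNAME'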

A Configuring the HPE MSA 2050

Use these instructions to configure the HPE MSA 2050 storage array. To configure the array, use the
HPE Storage Management Utility (SMU).

n For complete information about the SMU, see the HPE document MSA 1050/2050 SMU Reference
Guide, located here:

https://support.hpe.com/hpsc/doc/public/display?docId=a00017707en_us

Before You Begin


The SMU is a web-based application. To access the SMU, the storage array must be connected to a
LAN through at least one of its Ethernet ports. The following are default settings for accessing the application:
• IP address: http://10.0.0.2
• User name: manage
• Password: !manage

Creating a Disk Group and Disk Volumes


To create a disk group:
1. Open the HPE SMU by typing the IP address in a browser.
The splash screen for the SMU opens.

2. Type the user name and password and click Sign In.

The Home screen opens.


3. In the navigation bar on the left side of the window, click Pools.
4. From the Action menu, select Add Disk Group.

The Add Disk Group dialog box is displayed.


5. For Type, select Virtual.
The dialog box displays the Virtual options.

6. Specify the following information:


a. For Name, accept the default name (dgA01).
b. For RAID Level, select RAID-10.
c. For Pool, accept the default label (A).
d. For Number of Subgroups, select 3.
Three selection sets are displayed. Use the first six SAS drives for these selection sets.
e. For RAID-1, select the first two SAS drives. For RAID-2, select the next two SAS drives.
For RAID-3, select the next two SAS drives.
The following illustration shows the drives configured as subgroups.


n Reserve the seventh drive as a spare. There is no procedure to configure it as a spare.

7. Click Add.
A progress bar is displayed. At the end of the process, a success message is displayed. Click OK.

To create the disk volumes:


1. In the navigation bar on the left side of the window, click Volumes.
2. From the Action menu, select Create Virtual Volumes.


The Create Virtual Volumes dialog box is displayed.


3. Specify the following information:
a. For Volume Name, enter “Databases”.
b. For Size, select the default 896GB.
c. For Number of Volumes, select 1.
d. For Preference, select Performance.
e. For Pool, accept A.

4. To add information for the second volume, click Add Row and specify the following:
a. For Volume Name, enter “Quorum”.
b. For Size, enter “10 GB”.


c. For Number of Volumes, select 1.


d. For Preference, select Performance.
e. For Pool, accept A.

5. Click OK.
A progress bar is displayed. At the end of the process, a success message is displayed. Click OK.

To map the disk volumes:


1. In the navigation bar on the left side of the window, click Mapping.
2. From the Action menu, select Map.

The Map dialog box is displayed.


3. To map the Quorum volume, specify the following information:


a. In the left column, select “All Other Initiators.”
b. In the right column, select “Quorum.”
4. Click the Map button.

The Map button changes to Reset and the mapped Quorum volume is listed.

5. Make sure the Mode is set to read-write, the LUN is set to 0, and all four Ports are selected. Then
click OK.
A confirmation message is displayed. Click Yes. A progress bar is displayed. At the end of the
process, a success message is displayed. Click OK.
6. To map the Databases volume, specify the following information:
a. In the left column, click “All Other Initiators.”
b. In the right column, click “Databases.”
7. Click the Map button.


The Map button changes to Reset and the mapped Databases volume is listed.

8. Make sure the Mode is set to read-write, the LUN is set to 1, and all four Ports are selected. Click
OK.
A confirmation message is displayed. Click Yes. A progress bar is displayed. At the end of the
process, a success message is displayed. Click OK.
Both volumes are displayed as mapped.

The configuration process is complete.

B Expanding the Database Volume for an
Interplay Engine Cluster

This document describes how to add drives to the HPE MSA 2040 storage array to expand the drive
space available for the Interplay Production database. The procedure is described in the following
topics:
• Before You Begin
• Task 1: Add Drives to the MSA Storage Array
• Task 2: Expand the Databases Volume Using the HPE SMU (Version 2)
• Task 2: Expand the Databases Volume Using the HPE SMU (Version 3)
• Task 3: Extend the Databases Volume in Windows Disk Management

n You can adapt these instructions for the HPE MSA 2050. Use Version 3 of the Storage Management
Utility.

Before You Begin


• Obtain additional drives. The MSA array expansion was qualified with five of the following
drives:
- HPE 300GB 3.5in Internal Hard Drive - SAS - 15K RPM
Certified HPE Vendor Item ID: J9V68A
Four will be configured to expand the Database volume and one will be configured as a spare.
• Schedule a convenient time to perform the expansion. You do not need to take the Interplay
Engine offline. However, performance might be affected, so consider performing the expansion
during a maintenance window. Adding the drives to the existing RAID 10 Vdisk takes several
hours. Allow approximately 3 to 4 hours for the entire expansion.
• Make sure you have a complete, recent backup of the Interplay database, created through the
Interplay Administrator.
• Make sure you can access the HPE Storage Management Utility (SMU).
The HPE SMU is a web-based application. The following are default settings for accessing the
application:
- IP address: http://10.0.0.2
- User name: manage
- Password: !manage
Check if these settings have been changed by an administrator.

The HPE MSA firmware includes two different versions of the SMU (version 2 and version 3).
This document includes instructions for using either version:
- “Task 2: Expand the Databases Volume Using the HPE SMU (Version 2)” on page 98
- “Task 2: Expand the Databases Volume Using the HPE SMU (Version 3)” on page 103

Task 1: Add Drives to the MSA Storage Array


You do not need to shut down the storage array to add new drives.

To add the new drives:


1. Remove the blank insert from an available slot.
2. Insert the hard drive and tray.
3. Repeat this for each drive.

Task 2: Expand the Databases Volume Using the HPE SMU (Version 2)

This task requires you to use the HPE Storage Management Utility (SMU) Version 2 to add the new
drives to the RAID 10 Vdisk and to expand the Databases volume. See “Before You Begin” on
page 97 for login information.

To expand the HPE MSA Vdisk:


1. Open the HPE SMU by typing the IP address in a browser.
The splash screen for SMU V3 opens.

2. Click “Click to launch previous version.”


The splash screen for SMU V2 opens.


3. Supply the user name and password and click Sign In.
4. In the Configuration View, select Physical > Enclosure 1.
The following illustration shows the five additional drives, labeled AVAIL.

5. In the Configuration View, right-click the Vdisk (named dg01 in the illustration) and select
Tools > Expand Vdisk.

The Expand Vdisk page is displayed.


6. In the “Additional number of sub-vdisks” field, select 2.
The SMU automatically creates a mirrored pair of two new sub-vdisks, named RAID1-4 and
RAID1-5.


7. In the table, assign the available disks:


t Select Disk-1.8 and Disk-1.9 for RAID1-4.
t Select Disk-1.10 and Disk-1.11 for RAID1-5.
Leave Disk-1.12 as a spare.
The following illustration shows these assignments.


8. Click Expand Vdisk.


A message box asks you to confirm the operation. Click Yes.

Another message box tells you that expansion of the Vdisk was started. Click OK.
The process of adding the new paired drives to the RAID 10 Vdisk begins. This process can take
approximately 2.5 hours. When the process is complete, the SMU displays the additional space
as unallocated (green in the following illustration).


9. In the Configuration View, right-click Volume Databases and select Tools > Expand Volume.

10. On the Expand Volume page, select the entire amount of available space, then click Expand
Volume.

At the end of the process, the expanded Vdisk and Databases Volume are displayed.


11. Close the SMU.

Task 2: Expand the Databases Volume Using the HPE SMU (Version 3)

This task requires you to use the HPE Storage Management Utility (SMU) Version 3 to add the new
drives to the RAID 10 disk group and to expand the Databases volume. See “Before You Begin” on
page 97 for login information.

To expand the HPE MSA disk group:


1. Open the HPE SMU by typing the IP address in a browser.
The splash screen for SMU V3 opens.

2. Sign in using the user name and password.


3. In the navigation bar on the left side of the screen, click System.


The following illustration shows the five additional drives, labeled SAS but without the gray
highlight.

4. In the navigation bar, click Pools.


5. From the Action menu, select Modify Disk Group.

6. In the Modify Disk Group dialog box, select Expand.

The dialog box enlarges to show the disk group and the available disks.


7. From the Additional sub-groups menu, select 2.

The SMU automatically creates a mirrored pair of two new sub-groups, named RAID1-4 and
RAID1-5.
8. For each new RAID group, assign two of the available disks:
t For RAID1-4, click the first two side-by-side disks.
t For RAID1-5, click the next two side-by-side disks.
Leave one disk as a spare.
The following illustration shows these assignments.


9. Click Modify.
A message box describes how the expansion can take a significant amount of time and asks you
to confirm the operation. Click Yes.
Another message box tells you that the disk group was successfully modified. Click OK.
The process of adding the new paired drives to the RAID 10 disk group begins. This process can take
approximately 2.5 hours. You can track the progress on the Pools page, in the Related Disk
Groups section, under Current Job.


When the process is complete, the SMU displays the additional space as available. Note the
amount of available space, which you will need to enter in the Modify Volume dialog box.

10. In the navigation bar, click Volumes.


11. From the Action menu, click Modify Volume.

12. In the Modify Volume dialog box, type the available space exactly as displayed on the Pools
page (in this example, 599.4GB) and click OK.


At the end of the process, the new size of the expanded Databases Volume is displayed on the
Volumes page.

The new size of the disk group is also displayed on the Pools page.

13. Close the SMU.


Task 3: Extend the Databases Volume in Windows Disk Management

This task requires you to open the Windows Disk Management page and extend the Databases
volume.

To extend the Databases volume:


1. On the online node of the cluster, open Computer Management > Disk Management.
2. Right-click Disk 2, Database (S:), and select Extend Volume.

The Extend Volume Wizard opens.


3. On the Welcome page, click Next.
The Select Disks page is displayed, with Disk 2 selected.


4. Click Next.
The Completing page is displayed.

5. Click Finish.
The Database volume is extended.


6. Close the Disk Management window and the Computer Management window.
7. Perform a cluster failover.
The expansion is complete and the Interplay Database has the new space available. You can
check the size of the Database disk (Avid Workgroup Disk) in the Failover Cluster Manager.
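Alternatively, you can extend the partition from PowerShell on the online node; this optional sketch
assumes the database volume uses drive letter S:, as in this guide.

# Determine how much the S: partition can grow, then extend it to the maximum size
$max = (Get-PartitionSupportedSize -DriveLetter S).SizeMax
Resize-Partition -DriveLetter S -Size $max

# Confirm the new size
Get-Volume -DriveLetter S | Format-Table DriveLetter, FileSystemLabel, Size, SizeRemaining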

C Adding Storage for File Assets for an
Interplay Engine Cluster

This document describes how to add drives to the HPE MSA 2040 storage array to expand the drive
space available for the Interplay Production database’s file assets. The procedure is described in the
following topics:
• Before You Begin
• Task 1: Add Drives to the MSA Storage Array
• Task 2: Create a Disk and Volume Using the HPE SMU V3
• Task 3: Initialize the Volume in Windows Disk Management
• Task 4: Add the Disk to the Failover Cluster Manager
• Task 5: Copy the File Assets to the New Drive
• Task 6: Mount the FileAssets Partition in the _Master Folder
• Task 7: Create Cluster Dependencies for the New Disk

n You can adapt these instructions for the HPE MSA 2050.

Before You Begin


• Obtain additional drives. The MSA array expansion was qualified with three of the following
drives:
- HPE MSA 4TB 12G SAS 7.2K LFF (3.5in) 512e Midline 1yr Warranty Hard Drive
Certified HPE Vendor Item ID: K2Q82A (Seagate ST4000NM0034 disk)
Other configurations are possible depending on customer requirements and the number of slots
available in the HPE MSA. Creating at least one separate volume for file assets is required.
• Determine the RAID Level for configuring the new volume. The MSA array expansion was
qualified with three drives configured as RAID Level 5. The RAID level you select depends on
the number of drives you are adding and the customer’s requirements. Use the following table for
guidance. If necessary, consult technical information about RAID levels.

Number of drives    Recommended RAID Level

2                   RAID Level 1

3                   One of the following:
                    • RAID Level 1 plus spare
                    • RAID Level 5

4                   One of the following:
                    • RAID Level 1 (two pairs)
                    • RAID Level 5
                    • RAID Level 6

• Schedule a convenient time to perform the expansion. You need to bring the Interplay Engine
offline during the process, so this procedure is best performed during a maintenance window.
The configuration itself will take approximately one hour, with the engine offline for
approximately 5 to 15 minutes. In addition, allow time for the copying of file assets, which
depends on the number of file assets in the database.
• Decide if you want to allocate the entire drive space to file assets, or reserve space for snapshots
or future expansion. See “Task 2: Create a Disk and Volume Using the HPE SMU V3” on
page 114.
• Make sure you have the following complete, recent backups:
- Interplay database, created through the Interplay Administrator
- _Master folder (file assets), created through a backup utility.
The Interplay Administrator does not have a backup mechanism for the _Master folder.
• Make sure you can access the HPE Storage Management Utility (SMU).
The HPE SMU is a web-based application. To access the SMU, the storage array must be connected
to a LAN through at least one of its Ethernet ports. The following are default settings for accessing
the application:
- IP address: http://10.0.0.2
- User name: manage
- Password: !manage
Check if these settings have been changed by an administrator.
The HPE MSA firmware includes two different versions of the SMU (version 2 and version 3).
This document includes instructions for using SMU version 3.

Task 1: Add Drives to the MSA Storage Array


You do not need to shut down the storage array to add new drives.

To add the new drives:


1. Remove the blank insert from an available slot.
2. Insert the hard drive and tray.
3. Repeat this for each drive.


Task 2: Create a Disk and Volume Using the HPE SMU V3

This topic provides instructions for creating a disk group for the added disks, and then creating a
volume in the new disk group, using the HPE Storage Management Utility (SMU) Version 3.

To create a disk group:


1. Open the HPE SMU by typing the IP address in a browser.
The splash screen for SMU V3 opens.

2. Sign in using the user name and password.


The Home screen opens.

3. In the navigation bar on the left side of the screen, click System and select View System.


The following illustration shows the three additional drives, labeled MDL, which is an HPE
name for a “midline” drive. Click the drive to display disk information.

4. In the navigation bar, click Pools.


5. From the Action menu, select Add Disk Group.

The Add Disk Group dialog box is displayed.


6. For Type, select Linear.


The dialog box changes to the Linear options.


7. Specify the following information:
a. Enter a name, for example, one that increments the name of the existing disk group.
b. Select the RAID Level. In this example, the three drives have been qualified with RAID
Level 5. For more information about RAID levels, see “Before You Begin” on page 112.
c. Select the check boxes for the new drives.
The following illustration shows this information.

d. Click Add.
A progress bar is displayed. At the end of the process, a success message is displayed. Click
OK. The new disk group is displayed. If you select the name, information is displayed in the
Related Disk Groups section.


To create a new volume:


1. In the navigation bar, click Volumes.
2. In the Action menu, click Create Linear Volumes.

The Create Linear Volumes dialog box opens.


3. Do the following:
a. For Pool, click the down arrow and select the new disk group, in this case, vd0002.
b. For Volume Name, enter a meaningful name, such as FileAssets.
c. For Volume Size, you can specify the entire volume (the default) or reserve some of the
volume for future use. For example, you could enable snapshots (see the HPE MSA
documentation). In this example, the entire volume is included.


d. Click OK.
A progress bar is displayed. At the end of the process, a success message is displayed. Click OK.
The new volume is added to the list of volumes.

To map the new volume:


1. Select the new volume.
2. From the Action menu, select Map Volumes.
The Map dialog box is displayed.


3. Select “All Other Initiators” and click the Map button.

The default mapping information is displayed. Accept these defaults.

4. Click Apply. A confirmation dialog is displayed. Click Yes. At the end of the process, a success
message is displayed. Click OK.
The Volumes page shows the new volume fully configured.

5. Sign out of the SMU.


Task 3: Initialize the Volume in Windows Disk Management

This topic provides instructions for naming, bringing online, and initializing the new FileAssets
volume you created in the HPE SMU, using the Windows Disk Management utility.

To initialize the FileAssets volume:


1. On Node 1, right-click This PC and select Manage. From the Tools menu, select Computer
Management. In the Computer Management list, select Storage > Disk Management.
The Disk Management window opens. The FileAssets volume is displayed as an unknown disk
that is offline.

2. Right-click the new disk and select Online.


3. Right-click the new disk, and select Initialize Disk.
The Initialize Disk dialog box opens.


4. Select the new disk, select GPT, and click OK.

n The MBR partition style has a limit of 2 TB.

5. Use the New Simple Volume wizard to configure the volume as a partition.
a. Right-click the new disk.
b. Select New Simple Volume.
The New Simple Volume wizard opens with the Specify Volume Size page.

6. Accept the volume size and click Next.


7. Assign the drive letter L and click Next.
You will remove this drive letter in a later step.
8. Name the volume label FileAssets, select “Perform a quick format,” and click Next.
The completion screen is displayed.

9. Click Finish.


At the end of the process the new disk is named and online.

Close Disk Management on Node 1.


10. On Node 2, open Disk Management and do the following:
a. Right-click the new disk and select Online.
b. Right-click the FileAssets partition and select Change Drive Letter and Paths.
The Change Drive Letter and Paths dialog box opens.
c. Click Change.

The Change Drive Letter or Path dialog box opens.


d. From the drive letter drop down menu, select L.


n The disk does not need to be initialized, because the initialization was done on Node 1.

e. Click OK.
A confirmation box asks if you want to continue. Click Yes.
The new disk is now named and online on Node 2.
11. Close Disk Management on Node 2.
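On Node 1, the same preparation can be scripted with the Windows Storage cmdlets. The following
is a sketch only; it assumes the new disk is the only RAW disk in the system and uses the L: letter and
FileAssets label from this guide.

# Identify the new, uninitialized disk (adjust the filter if more than one disk is RAW)
$disk = Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' }

# Bring the disk online, initialize it as GPT, create a single partition with drive letter L,
# and format it with the FileAssets label
Set-Disk -Number $disk.Number -IsOffline $false
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
New-Partition -DiskNumber $disk.Number -UseMaximumSize -DriveLetter L |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'FileAssets' -Confirm:$false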

Task 4: Add the Disk to the Failover Cluster Manager


This topic provides instructions for adding the FileAssets volume as a disk in the Windows Failover
Cluster Manager.

c This task and all remaining tasks must be performed on the online node.

To add the new disk to the cluster:


1. On the online node, right-click This PC and select Manage. From the Tools menu, select Failover
Cluster Manager.
The Failover Cluster Manager opens.
2. In the navigation panel, select Storage > Disks.
The Disks pane is displayed.
3. In the Actions panel, select Add Disk.

The Add Disks to a Cluster dialog box opens.


4. In the dialog box, select the new disk and click OK.


The new disk is displayed in the cluster list as Cluster Disk 1.

5. Right-click Cluster Disk 1 and select Properties.


The Cluster Disk 1 Properties dialog box opens.
6. On the General tab, type a name for the disk, for example, Avid Workgroup File Assets, and
click OK.

7. In the Failover Cluster Manager navigation pane, select Roles.


8. In the Avid Workgroup Server menu, select Add Storage.


The Add Storage dialog box opens.


9. Select the check box for the new disk and click OK.

The new disk, named Avid Workgroup File Assets, is listed as storage for the Avid Workgroup
Server.

10. Keep Failover Cluster Manager open for use in Tasks 6 and 7.
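The same steps can be performed with the FailoverClusters PowerShell module. This sketch assumes
the disk is added as “Cluster Disk 1” and uses the resource and role names from this guide; adjust
them to match what Failover Cluster Manager shows.

Import-Module FailoverClusters

# Add the newly initialized disk to the cluster's available storage
Get-ClusterAvailableDisk | Add-ClusterDisk

# Rename the disk resource and move it into the Avid Workgroup Server role
(Get-ClusterResource -Name "Cluster Disk 1").Name = "Avid Workgroup File Assets"
Move-ClusterResource -Name "Avid Workgroup File Assets" -Group "Avid Workgroup Server"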


Task 5: Copy the File Assets to the New Drive


This topic provides instructions for using the Robocopy program to copy the existing file assets
folder to the new drive.

c This task and all remaining tasks must be performed on the online node.

Copying the existing file assets folder is likely to be the most time-consuming part of the
configuration process, depending on the size of the folder. The Interplay Engine can continue
running during the copying process, but best practice is to perform this copy during a maintenance
window. The following illustration shows the contents of the _Master folder, which holds the file
assets.

Avid recommends using a copy program such as Robocopy, which preserves timestamps for the file
assets and, through the /E parameter, copies all subfolders, including empty ones. The following procedure uses Robocopy,
executed from a command line.

To copy the file assets to the new drive:


1. On the online node, open a Windows command prompt with administrative rights.
2. Type the source, target, and /E parameter, using the following syntax:
C:\Windows\system32>robocopy source_directory destination_partition /E
For example:
C:\Windows\system32>robocopy S:\Workgroup_Databases\AvidWG\_Master L: /E
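If you want the copy to also preserve folder timestamps, retry on transient errors, and write a log, you
can add optional Robocopy switches such as the following. The log path is only an example; choose
any location that is not on the cluster shared drive.

C:\Windows\system32>robocopy S:\Workgroup_Databases\AvidWG\_Master L: /E /COPY:DAT /DCOPY:T /R:3 /W:5 /LOG:C:\Temp\master_copy.log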


Task 6: Mount the FileAssets Partition in the _Master Folder

This topic provides instructions for mounting the FileAssets partition in the _Master folder, using the
Disk Management utility.

c You must take the Engine services offline for this task. The other resources, especially the disk
resources, must stay online.

c This task and all remaining tasks must be performed on the online node.

To mount the FileAssets partition in the _Master folder:


1. In the Failover Cluster Manager, select Roles.
2. In the Roles section of the Avid Workgroup Server, right-click Avid Workgroup Engine Monitor
and select Take Offline.

Wait until all roles are offline.


3. In Windows Explorer, rename the original S:\Workgroup_Databases\AvidWG\_Master folder to
_Master_Old. (_Master_Old will serve as a backup.)
4. Create a new folder named _Master.
The following illustration shows the new folder and the renamed folder.

5. Open the Disk Management utility.


6. Right-click the new FileAssets partition and select Change Drive Letter and Paths.

7. In the Change Drive Letter and Paths dialog box, select drive letter L: and click Remove.


A warning is displayed. Click Yes.


8. Right-click the FileAssets partition again and select Change Drive Letter and Paths.
9. In the Change Drive Letter and Paths dialog box, click Add.
The Add Drive Letter or Path dialog box opens.
10. Select “Mount in the following empty NTFS folder” and click Browse.

11. Navigate to the new _Master folder and click OK.


The path is displayed.

12. Click OK, then click OK again.


The disk now points to a path.
In Windows Explorer, the icon for the _Master folder has changed from folder to mount point. If
you double-click _Master, you see the file assets subfolders, which are located on the new drive.
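Steps 5 through 12 can also be performed with the Storage cmdlets on the online node. This is a
sketch only; it assumes the temporary L: letter, the FileAssets label, and the _Master path used in this
guide, and that the engine resources are offline as described above.

# Find the FileAssets partition by its volume label
$part = Get-Volume -FileSystemLabel 'FileAssets' | Get-Partition

# Remove the temporary L: drive letter, then mount the partition in the new _Master folder
Remove-PartitionAccessPath -DiskNumber $part.DiskNumber -PartitionNumber $part.PartitionNumber -AccessPath 'L:\'
Add-PartitionAccessPath -DiskNumber $part.DiskNumber -PartitionNumber $part.PartitionNumber -AccessPath 'S:\Workgroup_Databases\AvidWG\_Master'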


Task 7: Create Cluster Dependencies for the New Disk


This topic provides instructions for creating dependencies for the new disk resource, using the
Failover Cluster Manager. You bring the cluster online as part of this task.

c This task must be performed on the online node.

To create cluster dependencies for the new disk resource:


1. On the online node, open Failover Cluster Manager, if it is not already open.
2. In the navigation pane, click Roles.
3. In the Avid Workgroup Server section, right-click the Avid Workgroup File Assets disk and
select Properties.
The Properties dialog box opens.
4. Click the Dependencies tab, click in the Resource column, click the drop-down arrow, and select
the Avid Workgroup Disk.


5. Click Apply, then click OK.


The dependency is created.
6. In the Avid Workgroup Server section, right-click File Server and select Properties.
The Properties dialog box opens.
7. Click the Dependencies tab, click in the AND/OR column to add AND. Then click in the
Resource column, click the drop-down arrow, and select Avid Workgroup File Assets.

8. Click Apply, then click OK.


9. Now bring the cluster back online. Select Avid Workgroup Server, and in the Actions list, click
Start Role.

At the end of the process, all resources are online.

10. Close the Failover Cluster Manager.

The File Assets volume configuration is complete.
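For reference, the same dependencies can be created and reviewed with PowerShell;
Add-ClusterResourceDependency combines a new dependency with the existing ones using AND.
This sketch assumes the resource names used in this guide (they may be displayed slightly differently
in Failover Cluster Manager).

Import-Module FailoverClusters

# Make the new disk depend on the original Avid Workgroup Disk
Add-ClusterResourceDependency -Resource "Avid Workgroup File Assets" -Provider "Avid Workgroup Disk"

# Make the File Server depend on the new disk as well
Add-ClusterResourceDependency -Resource "File Server" -Provider "Avid Workgroup File Assets"

# Review the resulting dependency expressions, then start the role
Get-ClusterResourceDependency -Resource "File Server"
Start-ClusterGroup -Name "Avid Workgroup Server"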
