Veritas Cluster Server Installation Guide
Solaris
6.0
November 2011
Legal Notice
Copyright 2011 Symantec Corporation. All rights reserved. Symantec, the Symantec logo, Veritas, Veritas Storage Foundation, CommandCentral, NetBackup, Enterprise Vault, and LiveUpdate are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

The product described in this document is distributed under licenses restricting its use, copying, distribution, and decompilation/reverse engineering. No part of this document may be reproduced in any form by any means without prior written authorization of Symantec Corporation and its licensors, if any.

THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.

The Licensed Software and Documentation are deemed to be commercial computer software as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19 "Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in Commercial Computer Software or Commercial Computer Software Documentation", as applicable, and any successor regulations. Any use, modification, reproduction, release, performance, display or disclosure of the Licensed Software and Documentation by the U.S. Government shall be solely in accordance with the terms of this Agreement.
Technical Support
Symantec Technical Support maintains support centers globally. Technical Support's primary role is to respond to specific queries about product features and functionality. The Technical Support group also creates content for our online Knowledge Base. The Technical Support group works collaboratively with the other functional areas within Symantec to answer your questions in a timely fashion. For example, the Technical Support group works with Product Engineering and Symantec Security Response to provide alerting services and virus definition updates. Symantec's support offerings include the following:
A range of support options that give you the flexibility to select the right amount of service for any size organization
Telephone and/or Web-based support that provides rapid response and up-to-the-minute information
Upgrade assurance that delivers software upgrades
Global support purchased on a regional business hours or 24 hours a day, 7 days a week basis
Premium service offerings that include Account Management Services
For information about Symantec's support offerings, you can visit our Web site at the following URL: www.symantec.com/business/support/index.jsp All support services will be delivered in accordance with your support agreement and the then-current enterprise technical support policy.
Hardware information
Available memory, disk space, and NIC information
Operating system
Version and patch level
Network topology
Router, gateway, and IP address information
Problem description:
Error messages and log files
Troubleshooting that was performed before contacting Symantec
Recent software configuration changes and network changes
Customer service
Customer service information is available at the following URL: www.symantec.com/business/support/ Customer Service is available to assist with non-technical questions, such as the following types of issues:
Questions regarding product licensing or serialization
Product registration updates, such as address or name changes
General product information (features, language availability, local dealers)
Latest information about product updates and upgrades
Information about upgrade assurance and support contracts
Information about the Symantec Buying Programs
Advice about Symantec's technical support options
Nontechnical presales questions
Issues that are related to CD-ROMs or manuals
Documentation
Product guides are available on the media in PDF format. Make sure that you are using the current version of the documentation. The document version appears on page 2 of each guide. The latest product documentation is available on the Symantec Web site. https://sort.symantec.com/documents Your feedback on product documentation is important to us. Send suggestions for improvements and reports on errors or omissions. Include the title and document version (located on the second page), and chapter and section titles of the text on which you are reporting. Send feedback to: [email protected]
Section 1
Chapter 1. Introducing VCS
Chapter 2. System requirements
Chapter 3. Planning to install VCS
Chapter 4. Licensing VCS
Chapter 1. Introducing VCS
This chapter includes the following topics:
About Veritas Cluster Server
About VCS basics
About VCS features
About VCS optional components
Symantec Operations Readiness Tools
About configuring VCS clusters for data integrity
Figure 1-1 illustrates a typical VCS configuration of four nodes that are connected to shared storage.

Figure 1-1: Example of a four-node VCS cluster (client workstations on the public network, VCS nodes, and shared storage)
Client workstations receive service over the public network from applications running on VCS nodes. VCS monitors the nodes and their services. VCS nodes in the cluster communicate over a private network.
Figure 1-3: Seeding scenarios for a two-node cluster (galaxy and nebula): an unseeded node communicates with a seeded node; all nodes in the cluster are unseeded but can communicate with each other
When the last system starts and joins the cluster, the cluster seeds and starts VCS on all nodes. You can then bring down and restart nodes in any combination. Seeding remains in effect as long as at least one instance of VCS is running somewhere in the cluster. Perform a manual seed to run VCS from a cold start when one or more systems of the cluster are unavailable. VCS does not start service groups on a system until it has a seed. However, if you have I/O fencing enabled in your cluster, you can still configure GAB to automatically seed the cluster even when some cluster nodes are unavailable. See the Veritas Cluster Server Administrator's Guide.
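If some systems in the cluster are down and you need to seed the cluster manually, one approach is to run the gabconfig command on a node that is already up. The following is a hedged sketch only; verify the exact procedure for your configuration in the Veritas Cluster Server Administrator's Guide before you use it:

# /sbin/gabconfig -x

You can then confirm GAB port membership with:

# /sbin/gabconfig -a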
Configure SNMP trap notification of VCS events using the VCS Notifier component.
Configure SMTP email notification of VCS events using the VCS Notifier component.
The nodes that must retain access to the shared storage
The nodes that must be ejected from the cluster
This decision prevents possible data corruption. The installer installs the I/O fencing driver, VRTSvxfen, when you install VCS. To protect data on shared disks, you must configure I/O fencing after you install and configure VCS.
I/O fencing technology uses coordination points for arbitration in the event of a network partition. I/O fencing coordination points can be coordinator disks or coordination point servers (CP servers) or both. You can configure disk-based or server-based I/O fencing:
Disk-based I/O fencing: I/O fencing that uses coordinator disks is referred to as disk-based I/O fencing. Disk-based I/O fencing ensures data integrity in a single cluster.

Server-based I/O fencing: I/O fencing that uses at least one CP server system is referred to as server-based I/O fencing. Server-based fencing can include only CP servers, or a mix of CP servers and coordinator disks. Server-based I/O fencing ensures data integrity in multiple clusters. In virtualized environments that do not support SCSI-3 PR, VCS supports non-SCSI-3 server-based I/O fencing.
See About planning to configure I/O fencing on page 89. Note: Symantec recommends that you use I/O fencing to protect your cluster against split-brain situations. See the Veritas Cluster Server Administrator's Guide.
You can administer VCS Simulator from the Java Console or from the command line. To download VCS Simulator, go to http://go.symantec.com/vcsm_download.
Generate server-specific reports that describe how to prepare your servers for installation or upgrade of Symantec enterprise products.
Access a single site with the latest product information, including patches, agents, and documentation.
Create automatic email notifications for changes in patches, documentation, and array-specific modules.
Broken set of private networks: If a system in a two-node cluster fails, the system stops sending heartbeats over the private interconnects. The remaining node then takes corrective action. The failure of the private interconnects, instead of the actual nodes, presents identical symptoms and causes each node to determine its peer has departed. This situation typically results in data corruption because both nodes try to take control of data storage in an uncoordinated manner.

System that appears to have a system hang: If a system is so busy that it appears to stop responding, the other nodes could declare it as dead. This declaration may also occur for the nodes that use the hardware that supports a "break" and "resume" function. When a node drops to PROM level with a break and subsequently resumes operations, the other nodes may declare the system dead. They can declare it dead even if the system later returns and begins write operations.

I/O fencing is a feature that prevents data corruption in the event of a communication breakdown in a cluster. VCS uses I/O fencing to remove the risk that is associated with split-brain. I/O fencing allows write access for members of the active cluster. It blocks access to storage from non-members so that even a node that is alive is unable to cause damage. After you install and configure VCS, you must configure I/O fencing in VCS to ensure data integrity. See About planning to configure I/O fencing on page 89.
About I/O fencing for VCS in virtual machines that do not support SCSI-3 PR
In a traditional I/O fencing implementation, where the coordination points are coordination point servers (CP servers) or coordinator disks, Veritas Clustered Volume Manager and Veritas I/O fencing modules provide SCSI-3 persistent reservation (SCSI-3 PR) based protection on the data disks. This SCSI-3 PR protection ensures that the I/O operations from the losing node cannot reach a disk that the surviving sub-cluster has already taken over. See the Veritas Cluster Server Administrator's Guide for more information on how I/O fencing works. In virtualized environments that do not support SCSI-3 PR, VCS attempts to provide reasonable safety for the data disks. VCS requires you to configure non-SCSI-3 server-based I/O fencing in such environments. Non-SCSI-3 fencing uses CP servers as coordination points with some additional configuration changes to support I/O fencing in such environments. See Setting up non-SCSI-3 server-based I/O fencing in virtual environments using installvcs program on page 154. See Setting up non-SCSI-3 fencing in virtual environments manually on page 250.
Data disks: Store shared data. See About data disks on page 32.
Coordination points: Act as a global lock during membership changes.
Coordinator disks
Disks that act as coordination points are called coordinator disks. Coordinator disks are three standard disks or LUNs set aside for I/O fencing during cluster reconfiguration. Coordinator disks do not serve any other storage purpose in the VCS configuration. You can configure coordinator disks to use the Veritas Volume Manager Dynamic Multi-pathing (DMP) feature. Dynamic Multi-pathing (DMP) allows coordinator disks to take advantage of the path failover and the dynamic adding and removal capabilities of DMP. So, you can configure I/O fencing to use either DMP devices or the underlying raw character devices. I/O fencing uses a SCSI-3 disk policy that is either raw or dmp, based on the disk device that you use. The disk policy is dmp by default. See the Veritas Storage Foundation Administrator's Guide.
Coordination point servers
The coordination point server (CP server) is a software solution which runs on a remote system or cluster. CP server provides arbitration functionality by allowing the VCS cluster nodes to perform the following tasks:
Self-register to become a member of an active VCS cluster (registered with CP server) with access to the data drives
Check which other nodes are registered as members of this active VCS cluster
Self-unregister from this active VCS cluster
Forcefully unregister other nodes (preempt) as members of this active VCS cluster

In short, the CP server functions as another arbitration mechanism that integrates within the existing I/O fencing module.

Note: With the CP server, the fencing arbitration logic still remains on the VCS cluster.

Multiple VCS clusters running different operating systems can simultaneously access the CP server. TCP/IP based communication is used between the CP server and the VCS clusters.
Enable system-based preferred fencing policy to give preference to high capacity systems.
Enable group-based preferred fencing policy to give preference to service groups for high priority applications.
Disable preferred fencing policy to use the default node count-based race policy.
See the Veritas Cluster Server Administrator's Guide for more details. See Enabling or disabling the preferred fencing policy on page 155.
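For example, you might set a system-based preferred fencing policy from the command line with the haclus command. This is a sketch only; confirm the attribute name and values for your release in the Veritas Cluster Server Administrator's Guide:

# haconf -makerw
# haclus -modify PreferredFencingPolicy System
# haconf -dump -makero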
Chapter 2. System requirements
This chapter includes the following topics:
Important preinstallation information for VCS
Hardware requirements for VCS
Disk space requirements
Supported operating systems
Supported software for VCS
I/O fencing requirements
Number of nodes supported
Discovering product versions and various requirement information
The hardware compatibility list contains information about supported hardware and is updated regularly. For the latest information on supported hardware visit the following URL: http://www.symantec.com/docs/TECH170013 Before installing or upgrading VCS, review the current compatibility list to confirm the compatibility of your hardware and software. For important updates regarding this release, review the Late-Breaking News TechNote on the Symantec Technical Support website: http://www.symantec.com/docs/TECH164885 You can install VCS on clusters of up to 64 systems.
Every system where you want to install VCS must meet the hardware and the software requirements.
Disks

Disk space
Note: VCS may require more temporary disk space during installation than the specified disk space.

Ethernet controllers: In addition to the built-in public Ethernet controller, VCS requires at least one more Ethernet interface per system. Symantec recommends two additional network interfaces for private interconnects. You can also configure aggregated interfaces. Symantec recommends that you turn off the spanning tree algorithm on the switches used to connect private network interfaces.

Fibre Channel or SCSI host bus adapters: Typical VCS configuration requires at least one SCSI or Fibre Channel Host Bus Adapter per system for shared data disks.

RAM
Use the "Perform a Preinstallation Check" (P) menu for the Web-based installer or the -precheck option of the script-based installer to determine whether there is sufficient space.
# ./installer -precheck
If you have downloaded VCS, you must use the following command:
# ./installvcs -precheck
Veritas Storage Foundation (SF): Veritas Volume Manager (VxVM) with Veritas File System (VxFS). VCS 6.0 supports the following versions of SF:
SF 6.0
SF 5.1 SP1
Note: VCS supports the previous and the current versions of SF to facilitate product upgrades.
Coordinator disks: See Coordinator disk requirements for I/O fencing on page 38.
CP servers: See CP server requirements on page 38.
To configure disk-based fencing or to configure server-based fencing with at least one coordinator disk, make sure a version of Veritas Volume Manager (VxVM) that supports SCSI-3 persistent reservations (SCSI-3 PR) is installed on the VCS cluster. See the Veritas Storage Foundation and High Availability Installation Guide. If you have installed VCS in a virtual environment that is not SCSI-3 PR compliant, review the requirements to configure non-SCSI-3 server-based fencing. See Non-SCSI-3 I/O fencing requirements on page 41.
For disk-based I/O fencing, you must have three coordinator disks.
The coordinator disks can be raw devices, DMP devices, or iSCSI devices.
Each of the coordinator disks must use a physically separate disk or LUN. Symantec recommends using the smallest possible LUNs for coordinator disks.
Each of the coordinator disks should exist on a different disk array, if possible.
The coordinator disks must support SCSI-3 persistent reservations.
Symantec recommends using hardware-based mirroring for coordinator disks.
Coordinator disks must not be used to store data or must not be included in disk groups that store user data.
Coordinator disks cannot be the special devices that array vendors use. For example, you cannot use EMC gatekeeper devices as coordinator disks.
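To check whether a disk that you plan to use as a coordinator disk supports SCSI-3 persistent reservations, you can test it with the vxfentsthdw utility that ships with VCS. The path below is an assumption based on a typical installation; see the I/O fencing chapters of this guide for the exact procedure. Note that the test overwrites and destroys data on the disk, so run it only against disks that hold no data:

# /opt/VRTSvcs/vxfen/bin/vxfentsthdw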
CP server requirements
VCS 6.0 clusters (application clusters) support coordination point servers (CP servers) which are hosted on the following VCS and SFHA versions:
VCS 6.0, 5.1 SP1, or 5.1 single-node cluster: Single-node VCS clusters that run VCS 5.1 SP1 RP1 and later or VCS 6.0 and later and that host the CP server do not require LLT and GAB to be configured.

SFHA 6.0, 5.1 SP1, or 5.1 cluster
Warning: Before you upgrade 5.1 CP server nodes to use VCS or SFHA 6.0, you must upgrade all the application clusters that use this CP server to version 6.0. Application clusters at version 5.1 cannot communicate with CP server that runs VCS or SFHA 5.1 SP1 or later. Make sure that you meet the basic hardware requirements for the VCS/SFHA cluster to host the CP server. See the Veritas Storage Foundation High Availability Installation Guide. See Hardware requirements for VCS on page 36. Note: While Symantec recommends at least three coordination points for fencing, a single CP server as coordination point is a supported server-based fencing configuration. Such single CP server fencing configuration requires that the coordination point be a highly available CP server that is hosted on an SFHA cluster. Make sure you meet the following additional CP server requirements which are covered in this section before you install and configure CP server:
Hardware requirements
Operating system requirements
Networking requirements (and recommendations)
Security requirements
Table 2-2 lists additional requirements for hosting the CP server.

Table 2-2: CP server hardware requirements

Disk space: To host the CP server on a VCS cluster or SFHA cluster, each host requires the following file system space: 550 MB in the /opt directory (additionally, the language pack requires another 15 MB) and 300 MB in /usr.

Storage: When CP server is hosted on an SFHA cluster, there must be shared storage between the CP servers.

RAM: Each CP server requires at least 512 MB.

Network
Table 2-3 displays the CP server supported operating systems and versions. An application cluster can use a CP server that runs any of the following supported operating systems.

Table 2-3: CP server supported operating systems and versions

CP server hosted on a VCS single-node cluster or on an SFHA cluster supports any of the following operating systems:
AIX 6.1 and 7.1
HP-UX 11i v3
Linux: RHEL 5
Solaris 10
Review other details such as supported operating system levels and architecture for the supported operating systems. See the Veritas Cluster Server Release Notes or the Veritas Storage Foundation High Availability Release Notes for that platform.
Symantec recommends that network access from the application clusters to the CP servers should be made highly-available and redundant. The network connections require either a secure LAN or VPN. The CP server uses the TCP/IP protocol to connect to and communicate with the application clusters by these network paths. The CP server listens for messages from the application clusters using TCP port 14250. This is the default port that can be changed during a CP server configuration. Symantec recommends that you configure multiple network paths to access a CP server. If a network path fails, CP server does not require a restart and continues to listen on one of the other available virtual IP addresses.
The CP server supports either Internet Protocol version 4 or version 6 (IPv4 or IPv6 addresses) when communicating with the application clusters. If the CP server is configured to use an IPv6 virtual IP address, then the application clusters should also be on the IPv6 network where the CP server is being hosted. When placing the CP servers within a specific network configuration, you must take into consideration the number of hops from the different application cluster nodes to the CP servers. As a best practice, Symantec recommends that the number of hops and network latency from the different application cluster nodes to the CP servers should be equal. This ensures that if an event occurs that results in an I/O fencing scenario, there is no bias in the race due to the number of hops between the nodes.
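As a quick connectivity check from an application cluster node, you can verify that a CP server's virtual IP address and port are reachable over each network path. The host name below is an example only; 14250 is the default CP server port mentioned earlier:

# telnet cps1.example.com 14250

A successful TCP connection only shows that the path and port are open; it does not replace the CP server configuration checks described later in this guide.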
For secure communication between the VCS cluster (application cluster) and the CP server, review the following support matrix:
                                        CP server in secure mode    CP server in non-secure mode
VCS cluster in secure mode              Yes                         Yes
VCS cluster in non-secure mode          Yes                         Yes
CP server cluster in secure mode        Yes                         No
CP server cluster in non-secure mode    No                          Yes
For secure communications between the VCS cluster and CP server, consider the following requirements and suggestions:
In a secure communication environment, all CP servers that are used by the application cluster must be configured with security enabled. A configuration where the application cluster uses some CP servers running with security enabled and other CP servers running with security disabled is not supported.
For non-secure communication between CP server and application clusters, there is no need to configure Symantec Product Authentication Service. In non-secure mode, authorization is still provided by CP server for the application cluster users. The authorization that is performed only ensures that authorized users can perform appropriate actions as per their user privileges on the CP server.
For information about establishing secure communications between the application cluster and CP server, see the Veritas Cluster Server Administrator's Guide.
Solaris 10 Update 7 and later
Oracle VM Server for SPARC 2.0 and 2.1
Guest operating system: Solaris 10
Make sure that you also meet the following requirements to configure non-SCSI-3 fencing in the virtual environments that do not support SCSI-3 PR:
VCS must be configured with Cluster attribute UseFence set to SCSI3
All coordination points must be CP servers
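After you configure fencing, one way to confirm that the cluster-level UseFence attribute is set as required is the haclus command. A minimal sketch, assuming the VCS binaries are in your PATH:

# haclus -value UseFence

The command should print SCSI3 when fencing is configured as required for non-SCSI-3 server-based fencing.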
The installed version of all released Storage Foundation and High Availability Suite of products
The required packages or patches (if applicable) that are missing
The available updates (including patches or hotfixes) from Symantec Operations Readiness Tools (SORT) for the installed products
1. Mount the media.
2. Start the installer with the -version option.
# ./installer -version system1 system2
Chapter 3. Planning to install VCS
Interactive installation using the script-based installer: Use to install and configure just VCS. The script-based installer asks you a series of questions and installs and configures VCS based on the information you provide.

Interactive installation using the web-based installer: You can use a web interface to install and configure VCS.
Automated installation using the VCS response files: Use response files to perform unattended installations. You can generate a response file in one of the following ways:
Use the automatically generated response file after a successful installation.
Use the -makeresponsefile option to create a response file.
You can install VCS using the operating system pkgadd command and then manually configure VCS as described in the section on Manual installation. You can also install VCS using the JumpStart utility.
Licensing VCS
Installing VCS packages on multiple cluster systems
Configuring VCS, by creating several detailed configuration files on each system
Starting VCS processes
You can choose to configure different optional features, such as the following:
SNMP and SMTP notification
VCS configuration in secure mode
The wide area Global Cluster feature
Cluster Virtual IP address
Review the highlights of the information for which installvcs program prompts you as you proceed to configure. See About preparing to install VCS on page 59.
The uninstallvcs program, a companion to installvcs program, uninstalls VCS packages. See Preparing to uninstall VCS on page 359.
Check the systems for VCS installation requirements. See Performing automated preinstallation check on page 72.
Upgrade VCS if a previous version of VCS currently runs on a cluster. See Upgrading VCS using the script-based installer on page 265.
Start or stop VCS processes. See Starting and stopping processes for the Veritas products on page 500.
Enable or disable a cluster to run in secure mode. See the Veritas Cluster Server Administrator's Guide.
Configure I/O fencing for the clusters to prevent data corruption. See Setting up disk-based I/O fencing using installvcs program on page 137. See Setting up server-based I/O fencing using installvcs program on page 145. See Setting up non-SCSI-3 server-based I/O fencing in virtual environments using installvcs program on page 154.
Create a single-node cluster. See Creating a single-node cluster using the installer program on page 472.
Add a node to an existing cluster. See Adding nodes using the VCS installer on page 385.
Create a JumpStart finish script to install VCS using the JumpStart utility. See Installing using JumpStart on page 216.
Perform automated installations using the values that are stored in a configuration file. See Installing VCS using response files on page 179. See Configuring VCS using response files on page 185. See Upgrading VCS using response files on page 291.
The response within parentheses is the default, which you can select by pressing the Enter key. Enter the ? character to get help to answer the prompt. Enter q to quit the installation.

Installation of VCS packages takes place only after you have confirmed the information. However, you must remove the partially installed VCS files before you run the installvcs program again. See Preparing to uninstall VCS on page 359.

During the installation, the installer prompts you to type information. The installer expects your responses to be within a certain range or in a specific format. The installer provides examples. If you are prompted to enter an item from a list, enter your selection exactly as it is shown in the list.

The installer also prompts you to answer a series of questions that are related to a configuration activity. For such questions, you can enter the b character to return to the first prompt in the series.

When the installer displays a set of information items you have entered, you are prompted to confirm it. If you answer n, the program lets you reenter all of the information for the set.

You can install the VCS Java Console on a single system, which is not required to be part of the cluster. Note that the installvcs program does not install the VCS Java Console. See Installing the Java Console on page 339.
See Before using the Veritas Web-based installer on page 161. See Starting the Veritas Web-based installer on page 162.
installscript may be, for example: installer, webinstaller, installvcs program, or uninstallvcs program.
YYYYMMDDHHSS is the current date when the installscript is run, and xxx are three random letters that the script generates for an installation instance.
For example: /opt/VRTS/install/logs/installer-200910101010ldS/installer-200910101010ldS.response You can customize the response file as required to perform unattended installations using the -responsefile option of the installer. This method of automated installations is useful in the following cases:
To perform multiple installations to set up a large VCS cluster. See Installing VCS using response files on page 179.
To upgrade VCS on multiple systems in a large VCS cluster. See Upgrading VCS using response files on page 291.
To uninstall VCS from multiple systems in a large VCS cluster. See Uninstalling VCS using response files on page 365.
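When you have a response file, you can start an unattended installation by passing the file to the installer with the -responsefile option. The path below is only an example:

# ./installvcs -responsefile /tmp/vcs_response_file.response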
Basic VCS cluster with two nodes. See Typical configuration of two-node VCS cluster on page 48.
VCS clusters in secure mode. See Typical configuration of VCS clusters in secure mode on page 49.
VCS clusters centrally managed using Veritas Operations Manager (VOM). See Typical configuration of VOM-managed VCS clusters on page 50.
VCS clusters with I/O fencing for data protection. See Typical VCS cluster configuration with disk-based I/O fencing on page 92. See Typical VCS cluster configuration with server-based I/O fencing on page 93.
VCS clusters such as global clusters, replicated data clusters, or campus clusters for disaster recovery. See the Veritas Cluster Server Administrator's Guide for disaster recovery cluster configuration models.
Figure 3-1: Typical two-node VCS cluster configuration (node galaxy, public NIC hme0)
Figure 3-2 illustrates a simple VCS cluster setup with two Solaris x64 systems.

Figure 3-2: Two-node VCS cluster on Solaris x64 systems (node galaxy, private heartbeat NICs e1000g:0 and e1000g:1, public NIC bge0)
Figure 3-3: Configurations with multiple clusters (Cluster 1 and Cluster 2) and a single cluster (nodes node1, node2, and node3)
Figure 3-4: Configuration with two clusters (Cluster 1 and Cluster 2)
Chapter 4. Licensing VCS
This chapter includes the following topics:
About Veritas product licensing
Obtaining VCS license keys
Installing Veritas product license keys
Install a license key for the product and features that you want to install. When you purchase a Symantec product, you receive a License Key certificate. The certificate specifies the product keys and the number of product licenses purchased.
Continue to install without a license key. The installer prompts for the product modes and options that you want to install, and then sets the required product level.
Within 60 days of choosing this option, you must install a valid license key corresponding to the license level entitled or continue with keyless licensing by managing the server or cluster with a management server, such as Veritas Operations Manager (VOM). If you do not comply with the above terms, continuing to use the Symantec product is a violation of your end user license agreement, and results in warning messages. For more information about keyless licensing, see the following URL: http://go.symantec.com/sfhakeyless If you upgrade to this release from a prior release of the Veritas software, the product installer does not change the license keys that are already installed. The existing license keys may not activate new features in this release. If you upgrade with the product installer, or if you install or upgrade with a method other than the product installer, you must do one of the following to license the products:
Run the vxkeyless command to set the product level for the products you have purchased. This option also requires that you manage the server or cluster with a management server. See Setting or changing the product level for keyless licensing on page 213. See the vxkeyless(1m) manual page.
Use the vxlicinst command to install a valid product license key for the products you have purchased. See Installing Veritas product license keys on page 55. See the vxlicinst(1m) manual page.
You can also use the above options to change the product levels to another level that you are authorized to use. For example, you can add the replication option to the installed product. You must ensure that you have the appropriate license for the product level and options in use. Note: In order to change from one product group to another, you may need to perform additional steps.
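For example, with keyless licensing you might set and then verify the product level with the vxkeyless command. This is a sketch; the path is typical for the VRTSvlic package, and the exact level names for your entitlement are listed by the command itself and in the licensing documentation:

# /opt/VRTSvlic/bin/vxkeyless set VCS
# /opt/VRTSvlic/bin/vxkeyless display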
A license key entitles you to use the product on the number and type of systems for which you purchased the license. A key may enable the operation of more products than are specified on the certificate. However, you are legally limited to the number of product licenses purchased. The product installation procedure describes how to activate the key.

To register and receive a software license key, go to the Symantec Licensing Portal at the following location: https://licensing.symantec.com Make sure you have your Software Product License document. You need information in this document to retrieve and manage license keys for your Symantec product. After you receive the license key, you can install the product. Click the Help link at this site to access the License Portal User Guide and FAQ.

The VRTSvlic package enables product licensing. For information about the commands that you can use after installing VRTSvlic: See Installing Veritas product license keys on page 55.

You can only install the Symantec software products for which you have purchased a license. The enclosed software discs might include other products for which you have not purchased a license.
Even though other products are included on the enclosed software discs, you can only use the Symantec software products for which you have purchased a license. To install a new license
Run the following commands. In a cluster environment, run the commands on each node in the cluster:
# cd /opt/VRTS/bin # ./vxlicinst -k xxxx-xxxx-xxxx-xxxx-xxxx-xxx
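To confirm that the key is installed, you can list the licenses on each node with the vxlicrep utility. A minimal sketch, assuming vxlicrep is installed in the same directory as part of the licensing package:

# ./vxlicrep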
Section 2
Preinstallation tasks
Chapter 5. Preparing to install VCS
This chapter includes the following topics:
About preparing to install VCS
Performing preinstallation tasks
Getting your VCS installation and configuration information ready
Set up shared storage for I/O fencing (optional): See Setting up shared storage on page 64.
Set the PATH and the MANPATH variables: See Setting the PATH variable on page 68. See Setting the MANPATH variable on page 68.
Disable the abort sequence on SPARC systems: See Disabling the abort sequence on SPARC systems on page 69.
Review basic instructions to optimize LLT media speeds: See Optimizing LLT media speed settings on private NICs on page 70.
Review guidelines to help you set the LLT interconnects: See Guidelines for setting the media speed of the LLT interconnects on page 70.
Install the patches that are required for the Java Run Time environment from Oracle: For instructions, see the Oracle documentation.
Prepare zone environments: See Preparing zone environments on page 70.
Mount the product disc: See Mounting the product disc on page 71.
Verify the systems before installation: See Performing automated preinstallation check on page 72.
The duplicate MAC address on the two switch ports can cause the switch to incorrectly redirect IP traffic to the LLT interface and vice versa. To avoid this issue, configure the system to assign unique MAC addresses by setting the eeprom(1M) parameter local-mac-address to true. The following products make extensive use of the private cluster interconnects for distributed locking:
Veritas Storage Foundation Cluster File System (SFCFS)
Veritas Storage Foundation for Oracle RAC (SF Oracle RAC)
Symantec recommends network switches for the SFCFS and the SF Oracle RAC clusters due to their performance characteristics. Refer to the Veritas Cluster Server Administrator's Guide to review VCS performance considerations.

Figure 5-1 shows two private networks for use with VCS.

Figure 5-1: Two private networks for use with VCS
Symantec recommends configuring two independent networks between the cluster nodes with a network switch for each network. You can also interconnect multiple layer 2 switches for advanced failure protection. Such connections for LLT are called cross-links. Figure 5-2 shows a private network configuration with crossed links between the network switches.
Figure 5-2: Private network configuration with crossed links between the network switches
1. Install the required network interface cards (NICs). Create aggregated interfaces if you want to use these to set up the private network.
2. Connect the VCS private Ethernet controllers on each system.
3. Use crossover Ethernet cables, switches, or independent hubs for each VCS communication network. Note that the crossover Ethernet cables are supported only on two systems. Ensure that you meet the following requirements:
The power to the switches or hubs must come from separate sources.
On each system, you must use two independent network cards to provide redundancy.
If a network interface is part of an aggregated interface, you must not configure the network interface under LLT. However, you can configure the aggregated interface under LLT.
When you configure Ethernet switches for LLT private interconnect, disable the spanning tree algorithm on the ports used for the interconnect.
During the process of setting up heartbeat connections, consider a case where a failure removes all communications between the systems. Note that a chance for data corruption exists under the following conditions:
The systems still run, and
The systems can access the shared storage.
Configure the Ethernet devices that are used for the private network such that the autonegotiation protocol is not used. You can achieve a more stable configuration with crossover cables if the autonegotiation protocol is not used.
Edit the /etc/system file to disable autonegotiation on all Ethernet devices system-wide.
Create a qfe.conf or bge.conf file in the /kernel/drv directory to disable autonegotiation for the individual devices that are used for private network.
Refer to the Oracle Ethernet driver product documentation for information on these methods.
Test the network connections. Temporarily assign network addresses and use telnet or ping to verify communications. LLT uses its own protocol, and does not use TCP/IP. So, you must ensure that the private network connections are used only for LLT communication and not for TCP/IP traffic. To verify this requirement, unplumb and unconfigure any temporary IP addresses that are configured on the network interfaces. The installer configures the private network in the cluster during configuration. You can also manually configure LLT. See Configuring LLT manually on page 223.
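For example, to test one private link you might temporarily plumb an address on each end, verify connectivity, and then unplumb it. The interface name and addresses below are examples only:

# ifconfig qfe0 plumb
# ifconfig qfe0 192.168.10.1 netmask 255.255.255.0 up
# ping 192.168.10.2
# ifconfig qfe0 unplumb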
When you add new nodes to an existing cluster.
When the nodes are in a subcluster during a phased upgrade.
When you perform installer sessions using a response file.
Use the same procedure to set up shared storage. Make sure to meet the following requirements:
The storage devices have power before any of the systems.
Only one node runs at one time until each node's address is set to a unique value.
1. Install the required SCSI host adapters on each node that connects to the storage, and make cable connections to the storage. Refer to the documentation that is shipped with the host adapters, the storage, and the systems.
2. With both nodes powered off, power on the storage devices.
3. Power on one system, but do not allow it to boot. If necessary, halt the system so that you can use the ok prompt. Note that only one system must run at a time to avoid address conflicts.
The example output shows the path to one host adapter. You must include the path information without the "/sd" directory, in the nvramrc script. The path information varies from system to system.
Edit the nvramrc script to change the scsi-initiator-id to 5. (The Solaris OpenBoot 3.x Command Reference Manual contains a full list of nvedit commands and keystrokes.) For example:
{0} ok nvedit
Each line is numbered, 0:, 1:, 2:, and so on, as you enter the nvedit commands. On the line where the scsi-initiator-id is set, insert exactly one space after the first quotation mark and before scsi-initiator-id.
Store the changes you make to the nvramrc script. The changes you make are temporary until you store them.
{0} ok nvstore
If you are not sure of the changes you made, you can re-edit the script without risk before you store it. You can display the contents of the nvramrc script by entering:
{0} ok printenv nvramrc
Instruct the OpenBoot PROM Monitor to use the nvramrc script on the node.
{0} ok setenv use-nvramrc? true
Reboot the node. If necessary, halt the system so that you can use the ok prompt.
Verify that the scsi-initiator-id has changed. Go to the ok prompt. Use the output of the show-disks command to find the paths for the host adapters. Then, display the properties for the paths. For example:
{0} ok show-disks ...b) /sbus@6,0/QLGC,isp@2,10000/sd {0} ok cd /sbus@6,0/QLGC,isp@2,10000 {0} ok .properties scsi-initiator-id 00000005
10. Boot the second node. If necessary, halt the system to use the ok prompt.
11. Verify that the scsi-initiator-id is 7. Use the output of the show-disks command to find the paths for the host adapters. Then, display the properties for those paths. For example:
{0} ok show-disks ...b) /sbus@6,0/QLGC,isp@2,10000/sd {0} ok cd /sbus@6,0/QLGC,isp@2,10000 {0} ok .properties scsi-initiator-id 00000007
1. Install the required FC-AL controllers.
2. Connect the FC-AL controllers and the shared storage devices to the same hub or switch. All systems must see all the shared devices that are required to run the critical application. If you want to implement zoning for a fibre switch, make sure that no zoning prevents all systems from seeing all these shared devices.
3. After all systems have booted, use the format(1m) command to verify that each system can see all shared devices. If Volume Manager is used, the same number of external disk devices must appear, but device names (c#t#d#s#) may differ.
If Volume Manager is not used, then you must meet the following requirements:
The same number of external disk devices must appear.
The device names must be identical for all devices on all systems.
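To list the devices that each system sees without stepping through the format menus, one common approach is to pipe empty input to the format command; treat this as a sketch and review the device list on every node:

# echo | format

Compare the output across systems to confirm that each node sees the same set of shared devices.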
For the Bourne Shell (sh), Bourne-again Shell (bash), or Korn shell (ksh), type:
$ PATH=/opt/VRTS/bin:$PATH; export PATH
For the Bourne Shell (sh), Bourne-again Shell (bash), or Korn shell (ksh), type:
$ MANPATH=/opt/VRTS/man:$MANPATH; export MANPATH
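If you use the C shell (csh or tcsh), the equivalent commands might be:

% setenv PATH /opt/VRTS/bin:${PATH}
% setenv MANPATH /opt/VRTS/man:${MANPATH}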
The only action that you must perform following a system abort is to reset the system to achieve the following:
Preserve data integrity
Prevent the cluster from taking additional corrective actions
Do not resume the processor as cluster membership may have changed and failover actions may already be in progress. To remove this potential problem on SPARC systems, you should alias the go function in the OpenBoot eeprom to display a message.
2. Press Ctrl+L to display the current contents of the nvramrc buffer.
3. Press Ctrl+N until the editor displays the last line of the buffer.
4. Add the following lines exactly as shown. Press Enter after adding each line.
." Aliasing the OpenBoot 'go' command! " : go ." It is inadvisable to use the 'go' command in a clustered environment. " cr ." Please use the 'power-off' or 'reset-all' commands instead. " cr ." Thank you, from your friendly neighborhood sysadmin. " ;
5. Press Ctrl+C to exit the nvramrc editor.
6. To verify that no errors exist, type the nvrun command. You should see only the following text:
Aliasing the OpenBoot 'go' command!
7. Type the nvstore command to commit your changes to the non-volatile RAM (NVRAM) for use in subsequent reboots.
8. After you perform these commands, at reboot you see this output:
Aliasing the OpenBoot 'go' command! go isn't unique.
Symantec recommends that you manually set the same media speed setting on each Ethernet card on each node.
If you use different media speed for the private NICs, Symantec recommends that you configure the NICs with lesser speed as low-priority links to enhance LLT performance.
If you have hubs or switches for LLT interconnects, then set the hub or switch port to the same setting as used on the cards on each node.
If you use directly connected Ethernet links (using crossover cables), Symantec recommends that you set the media speed to the highest value common to both cards, typically 1000_Full_Duplex.
Details for setting the media speeds for specific devices are outside of the scope of this manual. Consult the devices documentation for more information.
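On Solaris 10, one way to review the link speed and duplex that a NIC has negotiated might be the dladm command; this is a sketch, and the available subcommands vary by Solaris release and driver:

# dladm show-dev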
When you install or upgrade VCS using the installer program, all zones are upgraded (both global and non-global) unless they are detached and unmounted.
Make sure that all non-global zones are booted and in the running state before you install or upgrade the VCS packages in the global zone. If the non-global zones are not mounted and running at the time of upgrade, you must upgrade each package in each non-global zone manually. If you install VCS on Solaris 10 systems that run non-global zones, you need to make sure that non-global zones do not inherit the /opt directory. Run the following command to make sure that the /opt directory is not in the inherit-pkg-dir clause:
# zonecfg -z zone_name info zonepath: /export/home/zone1 autoboot: false pool: yourpool inherit-pkg-dir: dir: /lib inherit-pkg-dir: dir: /platform inherit-pkg-dir: dir: /sbin inherit-pkg-dir: dir: /usr
If the /opt directory appears in the output, remove the /opt directory from the zone's configuration and reinstall the zone.
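For example, to remove the /opt inherit-pkg-dir entry from a zone's configuration before you reinstall the zone, you might run zonecfg interactively. The zone name below is an example only:

# zonecfg -z zone1
zonecfg:zone1> remove inherit-pkg-dir dir=/opt
zonecfg:zone1> commit
zonecfg:zone1> exit

After this change, reinstall the zone as described above.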
1. Log in as superuser on a system where you want to install VCS. The system from which you install VCS need not be part of the cluster. The systems must be in the same subnet.
2. Insert the product disc into a DVD drive that is connected to your system.
3. If Solaris volume management software is running on your system, the software disc automatically mounts as /cdrom/cdrom0.
4. If Solaris volume management software is not available to mount the DVD, you must mount it manually. After you insert the software disc, enter:
# mount -F hsfs -o ro /dev/dsk/c0t6d0s2 /cdrom
You can also run the installer -precheck command. See Symantec Operations Readiness Tools on page 30. You can use the Veritas Operations Services to assess your setup for VCS installation.

To check the systems
The program proceeds in a noninteractive mode to examine the systems for licenses, packages, disk space, and system-to-system communications.
Review the output as the program displays the results of the check and saves the results of the check in a log file. See Command options for installvcs program on page 419.
If you have manually edited any of the configuration files, you need to perform one of the following before you run the installation program:
On a running cluster, perform an haconf -dump command. This command saves the configuration files and ensures that they do not have formatting errors before you run the installer.
On a cluster that is not running, perform the hacf -cftocmd and then the hacf -cmdtocf commands to format the configuration files.
Note: Remember to make back up copies of the configuration files before you edit them. You also need to use this procedure if you have manually changed the configuration files before you perform the following actions using the installer:
For more information about the main.cf and types.cf files, refer to the Veritas Cluster Server Administrator's Guide. To display the configuration files in the correct format on a running cluster
Run the following commands to display the configuration files in the correct format:
# haconf -dump
Run the following commands to display the configuration files in the correct format:
# hacf -cftocmd config # hacf -cmdtocf config
Getting your VCS installation and configuration information ready
Table 5-2 lists the information you need to install the VCS packages.

Table 5-2: Information you need to install the VCS packages

System names

License keys: If you decide to use keyless licensing, you do not need to obtain license keys. However, you are required to set up a management server within 60 days to manage the cluster. See About Veritas product licensing on page 53. Depending on the type of installation, keys can include:
A valid site license key
A valid demo license key
A valid license key for VCS global clusters
See Obtaining VCS license keys on page 54.

Decide which packages to install:
Minimum packages: provides basic VCS functionality.
Recommended packages: provides full functionality of VCS without advanced features.
All packages: provides advanced feature functionality of VCS.
The default option is to install the recommended packages. See Viewing the list of VCS packages on page 210.
Table 5-3 lists the information you need to configure the VCS cluster name and ID.

Table 5-3: Information you need to configure VCS cluster name and ID

A name for the cluster

A unique ID number for the cluster: A number in the range of 0-65535. If multiple distinct and separate clusters share the same network, then each cluster must have a unique cluster ID. Example: 12133
Table 5-4 lists the information you need to configure VCS private heartbeat links.
Table 5-4: Information you need to configure VCS private heartbeat links

Decide how you want to configure LLT: You can configure LLT over Ethernet or LLT over UDP. Symantec recommends that you configure heartbeat links that use LLT over Ethernet, unless hardware requirements force you to use LLT over UDP. If you want to configure LLT over UDP, make sure you meet the prerequisites. See Using the UDP layer for LLT on page 475.

Decide which configuration mode you want to choose: The installer provides you with three options:
1. Configure heartbeat links using LLT over Ethernet
2. Configure heartbeat links using LLT over UDP
You must manually enter details for options 1 and 2, whereas the installer detects the details for option 3.

For option 1: LLT over Ethernet
The device names of the NICs that the private networks use among systems: a network interface card or an aggregated interface. Do not use the network interface card that is used for the public network, which is typically hme0 for SPARC and bge0 for x64. For example, on a SPARC system: qfe0, qfe1. For example, on an x64 system: e1000g1, e1000g2.
Choose whether to use the same NICs on all systems. If you want to use different NICs, enter the details for each system.

For option 2: LLT over UDP
For each system, you must have the following details:
The device names of the NICs that the private networks use among systems
IP address for each NIC
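For reference, when you choose LLT over Ethernet, the /etc/llttab file that the installer (or a manual configuration, as described later in this guide) creates on a SPARC node typically resembles the following sketch. The node name, cluster ID, and NIC names are examples only:

set-node galaxy
set-cluster 12133
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -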
Table 5-5 lists the information you need to configure virtual IP address of the cluster (optional).
Table 5-5: Information you need to configure the virtual IP address of the cluster (optional)

The name of the public NIC for each node in the cluster: The device name for the NIC that provides public network access. A network interface card or an aggregated interface. Example: hme0

A virtual IP address of the NIC: You can enter either an IPv4 or an IPv6 address. This virtual IP address becomes a resource for use by the ClusterService group. The "Cluster Virtual IP address" can fail over to another cluster system. Example IPv4 address: 192.168.1.16. Example IPv6 address: 2001:454e:205a:110:203:baff:feee:10

The netmask for the virtual IPv4 address: The subnet that you use with the virtual IPv4 address. Example: 255.255.240.0

The prefix for the virtual IPv6 address: The prefix length for the virtual IPv6 address. Example: 64
Table 5-6 lists the information you need to add VCS users.

Table 5-6    Information you need to add VCS users

User names

User passwords: VCS passwords are restricted to 255 characters. Enter the password at the prompt.
Table 5-7 lists the information you need to configure SMTP email notification (optional).
Table 5-7    Information you need to configure SMTP email notification (optional)

The name of the public NIC for each node in the cluster: The device name for the NIC that provides public network access. A network interface card or an aggregated interface. Example: hme0

The domain-based address of the SMTP server: The SMTP server sends notification emails about the events within the cluster. Example: smtp.symantecexample.com

The email address of each SMTP recipient to be notified: Example: [email protected]

The minimum severity of events for SMTP email notification: Events have four levels of severity, and the severity levels are cumulative:
Information: VCS sends notifications for important events that exhibit normal behavior.
Warning: VCS sends notifications for events that exhibit any deviation from normal behavior. Notifications include Warning and Information types of events.
Error: VCS sends notifications for faulty behavior. Notifications include Error, Warning, and Information types of events.
SevereError: VCS sends notifications for a critical error that can lead to data loss or corruption. Notifications include SevereError, Error, Warning, and Information types of events.
Example: Error
Table 5-8 lists the information you need to configure SNMP trap notification (optional).

Table 5-8    Information you need to configure SNMP trap notification (optional)

The name of the public NIC for each node in the cluster: The device name for the NIC that provides public network access. A network interface card or an aggregated interface. Example: hme0
The port number for the SNMP trap daemon: The default port number is 162.

The system name for each SNMP console: Example: saturn

The minimum severity of events for SNMP trap notification: Events have four levels of severity, and the severity levels are cumulative:
Information: VCS sends notifications for important events that exhibit normal behavior.
Warning: VCS sends notifications for events that exhibit any deviation from normal behavior. Notifications include Warning and Information types of events.
Error: VCS sends notifications for faulty behavior. Notifications include Error, Warning, and Information types of events.
SevereError: VCS sends notifications for a critical error that can lead to data loss or corruption. Notifications include SevereError, Error, Warning, and Information types of events.
Example: Error
Table 5-9 lists the information you need to configure global clusters (optional).

Table 5-9    Information you need to configure global clusters (optional)

The name of the public NIC: You can use the same NIC that you used to configure the virtual IP of the cluster. Otherwise, specify appropriate values for the NIC. A network interface card or an aggregated interface. For example for SPARC systems: hme0. For example for x64 systems: bge0.
The virtual IP address of the NIC

The netmask for the virtual IP address: You can use the same netmask that you used to configure the virtual IP of the cluster. Otherwise, specify appropriate values for the netmask. Example: 255.255.240.0
Review the information you need to configure I/O fencing. See About planning to configure I/O fencing on page 89.
Section
Chapter 6. Installing VCS
Chapter 7. Preparing to configure VCS
Chapter 8. Configuring VCS
Chapter 9. Configuring VCS clusters for data integrity
Chapter
Installing VCS
This chapter includes the following topics:
Installing VCS using the installer
Installing language packages using the installer
Confirm that you are logged in as the superuser and that you have mounted the product disc. See Mounting the product disc on page 71.
Start the installation program. If you obtained VCS from an electronic download site, which does not include the Veritas product installer, use the installvcs program.
Veritas product installer
2 From the opening Selection Menu, choose I for "Install a Product."
3 From the displayed list of products to install, choose: Veritas Cluster Server.
installvcs program: Perform the following steps to start the installvcs program:
Start the installvcs program. # ./installvcs The installer starts with a copyright message and specifies the directory where the logs are created.
Choose the VCS packages that you want to install. See Veritas Cluster Server installation packages on page 413. Based on what packages you want to install, enter one of the following:
1 Installs only the minimal required VCS packages that provide basic functionality of the product.
2 Installs the recommended VCS packages that provide complete functionality of the product. This option does not install the optional VCS packages. Note that this option is the default.
3 Installs all the VCS packages. You must choose this option to configure any optional VCS feature.
4 Displays the VCS packages for each option.
Enter the names of the systems where you want to install VCS.
Enter the system names separated by spaces: [q,?] (galaxy) galaxy nebula
For a single-node VCS installation, enter one name for the system.
See Creating a single-node cluster using the installer program on page 472. The installer does the following for the systems:
Checks that the local system that runs the installer can communicate with remote systems. If the installer finds ssh binaries, it confirms that ssh can operate without requests for passwords or passphrases. If the default communication method ssh fails, the installer attempts to use rsh. (A quick manual check is sketched after this list.)
Makes sure the systems use one of the supported operating systems.
Makes sure that the systems have the required operating system patches. If the installer reports that any of the patches are not available, install the patches on the system before proceeding with the VCS installation.
Makes sure the systems install from the global zone.
Checks for product licenses.
Checks whether a previous version of VCS is installed. If a previous version of VCS is installed, the installer provides an option to upgrade to VCS 6.0. See About upgrading to VCS 6.0 on page 259.
Checks for the required file system space and makes sure that any processes that are running do not conflict with the installation. If requirements for installation are not met, the installer stops and indicates the actions that you must perform to proceed with the process.
Checks whether any of the packages already exists on a system. If the current version of any package exists, the installer removes the package from the installation list for the system. If a previous version of any package exists, the installer replaces the package with the current version.
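Before you run the installer, you can confirm the password-free communication yourself. The check below is a sketch that assumes the example node names galaxy and nebula used elsewhere in this guide:

# ssh -x -l root nebula uname -a
(run from galaxy; the command should return the remote system information without prompting for a password or passphrase)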
Review the list of packages and patches that the installer would install on each node. The installer installs the VCS packages and patches on the systems galaxy and nebula.
Based on what license type you want to use, enter one of the following:
1 You must have a valid license key. Enter the license key at the prompt:

Enter a VCS license key: [b,q,?] XXXX-XXXX-XXXX-XXXX-XXXX

If you plan to configure global clusters, enter the corresponding license keys when the installer prompts for additional licenses.

Do you wish to enter additional licenses? [y,n,q,b] (n) y

2 The keyless license option enables you to install VCS without entering a key. However, to ensure compliance, keyless licensing requires that you manage the systems with a management server. For more information, go to the following website: http://go.symantec.com/sfhakeyless
Note that this option is the default.
The installer registers the license and completes the installation process.
8 To install the Global Cluster Option, enter y at the prompt.
9 To configure VCS, enter y at the prompt. You can also configure VCS later.
Would you like to configure VCS on galaxy nebula [y,n,q] (n) n
See Overview of tasks to configure VCS using the script-based installer on page 113.
10 The installer provides an option to collect data about the installation process each time you complete an installation, upgrade, configuration, or uninstall of the product. The installer transfers the contents of the install log files to an internal Symantec site. The information is used only to gather metrics about how you use the installer. No personal customer data is collected, and no information will be shared with any other parties. Information gathered may include the product and the version installed or upgraded, how many systems were installed, and the time spent in any section of the install process.
11 The installer checks for online updates and provides an installation summary.
12 After the installation, note the location of the installation log files, the summary file, and the response file for future reference. The files provide useful information that can assist you with the configuration and can also assist future configurations.
summary file: Lists the packages that are installed on each system.
log file: Details the entire installation.
response file: Contains the installation information that can be used to perform unattended or automated installations on other systems. See Installing VCS using response files on page 179.
Make sure that the install_lp command can use the ssh or rsh commands as root on all systems in the cluster. Make sure that permissions are granted for the system on which install_lp is run.
Insert the language disc into the drive. The Solaris volume-management software automatically mounts the disc as /cdrom/cdrom0.
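Once the disc is mounted, the language packages are typically installed by running the install_lp command from the disc. The exact location of the script on the language disc is not shown in this extract, so the path below is an assumption for illustration only:

# cd /cdrom/cdrom0
# ./install_lp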
Chapter

Preparing to configure VCS
Figure 7-1 illustrates a high-level flowchart to configure I/O fencing for the VCS cluster. Figure 7-1 Workflow to configure I/O fencing
(The flowchart shows two paths after you install and configure VCS.

For disk-based fencing, the preparatory tasks are: obtain three disks, initialize the disks as VxVM disks with the vxdiskadm or vxdisksetup utilities, and check the disks for I/O fencing compliance. The configuration tasks use one of the following methods: run installvcs -fencing, choose option 2, and follow the prompts; edit the values in the response file you created and use them with the installvcs -responsefile command; or configure disk-based fencing manually.

For server-based fencing, the preparatory tasks are: identify an existing CP server or install and configure VCS or SFHA on the CP server systems, establish a TCP/IP connection between the CP server and the VCS cluster, set up shared storage for the CP server if the CP server is clustered, and run the configure_cps utility and follow the prompts or manually configure the CP server. The configuration tasks use one of the following methods: run installvcs -fencing, choose option 1, and follow the prompts; edit the values in the response file you created and use them with the installvcs -responsefile command; or configure server-based fencing manually.)
Figure 7-2 illustrates a high-level flowchart to configure non-SCSI-3 server-based I/O fencing for the VCS cluster in virtual environments that do not support SCSI-3 PR. Figure 7-2 Workflow to configure non-SCSI-3 server-based I/O fencing
(The flowchart applies when VCS runs in a virtual environment that is not SCSI-3 compliant, and shows how to configure server-based fencing in customized mode with CP servers.

Preparatory tasks: identify existing CP servers, or install and configure VCS or SFHA on the CP server systems; establish a TCP/IP connection between the CP server and the VCS cluster; if the CP server is clustered, set up shared storage for the CP server; run the configure_cps utility and follow the prompts, or manually configure the CP server.

Configuration tasks use one of the following methods: run installvcs -fencing, choose option 1, enter n to confirm that storage is not SCSI3-compliant, and follow the prompts; edit the values in the response file you created and use them with the installvcs -responsefile command; or configure non-SCSI-3 fencing manually.)
After you perform the preparatory tasks, you can use any of the following methods to configure I/O fencing:
See Setting up disk-based I/O fencing using installvcs program on page 137.
See Setting up server-based I/O fencing using installvcs program on page 145.
See Setting up non-SCSI-3 server-based I/O fencing in virtual environments using installvcs program on page 154.
See Configuring VCS for data integrity using the Web-based installer on page 172.
See Configuring I/O fencing using response files on page 197.
See Response file variables to configure disk-based I/O fencing on page 198.
See Response file variables to configure server-based I/O fencing on page 200.
See Response file variables to configure non-SCSI-3 server-based I/O fencing on page 202.
See Setting up disk-based I/O fencing manually on page 233.
See Setting up server-based I/O fencing manually on page 238.
See Setting up non-SCSI-3 fencing in virtual environments manually on page 250.
You can also migrate from one I/O fencing configuration to another. See the Veritas Cluster Server Administrator's Guide for more details.
(The diagram shows a client cluster of two nodes (Node 1 and Node 2) connected by LLT links, with application and storage, attached over Fibre Channel to VxVM-managed, SCSI3 PR-compliant shared storage that contains the coordinator disks, and connected to the public network.)
Multiple application clusters use three CP servers as their coordination points. See Figure 7-5 on page 94.
Multiple application clusters use a single CP server and multiple pairs of coordinator disks (two) as their coordination points. See Figure 7-6 on page 95.
Multiple application clusters use a single CP server as their coordination point. This single coordination point fencing configuration must use a highly available CP server that is configured on an SFHA cluster as its coordination point. See Figure 7-7 on page 95.
Warning: In a single CP server fencing configuration, the arbitration facility is not available during a failover of the CP server in the SFHA cluster. So, if a network partition occurs on any application cluster during the CP server failover, the application cluster is brought down.

Although the recommended CP server configurations use three coordination points, you can use more than three coordination points for I/O fencing. Ensure that the total number of CP servers you use is an odd number.

In a configuration where multiple application clusters share a common set of CP server coordination points, the application cluster as well as the CP server use a Universally Unique Identifier (UUID) to uniquely identify an application cluster.

Figure 7-5 displays a configuration using three CP servers that are connected to multiple application clusters.

Figure 7-5    Three CP servers connecting to multiple application clusters
(The diagram shows three CP servers connected over TCP/IP through the public network to multiple application clusters, which run VCS, SFHA, SFCFS, SVS, or SF Oracle RAC to provide high availability for applications.)
Figure 7-6 displays a configuration using a single CP server that is connected to multiple application clusters with each application cluster also using two coordinator disks.
Figure 7-6    Single CP server with two coordinator disks for each application cluster

(The diagram shows a CP server hosted on a single-node VCS cluster (it can also be hosted on an SFHA cluster) connected over TCP/IP through the public network to multiple application clusters. Each application cluster also connects over Fibre Channel to its own pair of coordinator disks. The application clusters run VCS, SFHA, SFCFS, SVS, or SF Oracle RAC to provide high availability for applications.)
Figure 7-7 displays a configuration using a single CP server that is connected to multiple application clusters. Figure 7-7 Single CP server connecting to multiple application clusters
(The diagram shows a CP server hosted on an SFHA cluster, connected over TCP/IP through the public network to multiple application clusters that run VCS, SFHA, SFCFS, SVS, or SF Oracle RAC to provide high availability for applications.)
See Configuration diagrams for setting up server-based I/O fencing on page 505.
Configure the CP server cluster in secure mode
Set up shared storage for the CP server database
Configure the CP server
Decide whether you want to host the CP server on a single-node VCS cluster, or on an SFHA cluster. Symantec recommends hosting the CP server on an SFHA cluster to make the CP server highly available.
If you host the CP server on an SFHA cluster, review the following information. Make sure you make the decisions and meet these prerequisites when you set up the CP server:
You must configure disk-based fencing during the SFHA configuration.
You must set up shared storage for the CP server database during your CP server setup.
Decide whether you want to configure server-based fencing for the VCS cluster (application cluster) with a single CP server as coordination point or with at least three coordination points. Symantec recommends using at least three coordination points.
Decide whether you want to configure the CP server cluster in secure mode. Symantec recommends configuring the CP server cluster in secure mode to secure the communication between the CP server and its clients (VCS clusters). It also secures the HAD communication on the CP server cluster.
Set up the hardware and network for your CP server. See CP server requirements on page 38.
Name for the CP server: The CP server name should not contain any special characters. The CP server name can include alphanumeric characters, underscore, and hyphen.

Port number for the CP server: Allocate a TCP/IP port for use by the CP server. The valid port range is between 49152 and 65535. The default port number is 14250.

Virtual IP address, network interface, netmask, and network hosts for the CP server: You can configure multiple virtual IP addresses for the CP server.
Depending on whether your CP server uses a single system or multiple systems, perform the following tasks:
Install and configure VCS to create a single-node VCS cluster.
During installation, make sure to select all packages for installation. The VRTScps package is installed only if you select to install all packages.
Proceed to configure the CP server.
See Configuring the CP server using the configuration utility on page 99.
See Configuring the CP server manually on page 109.
Install and configure SFHA to create an SFHA cluster. This makes the CP server highly available. Meet the following requirements for the CP server:
During installation, make sure to select all packages for installation. The VRTScps package is installed only if you select to install all packages.
During configuration, configure disk-based fencing (scsi3 mode).
See the Veritas Storage Foundation and High Availability Installation Guide for instructions on installing and configuring SFHA.
Proceed to set up shared storage for the CP server database.
Run the installer as follows to configure the CP server cluster in secure mode. If you have VCS installed on the CP server, run the following command:
# installvcs -security
If you have SFHA installed on the CP server, run the following command:
# installsfha -security
Create a disk group containing the disks. You require two disks to create a mirrored volume. For example:
# vxdg init cps_dg disk1 disk2
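The next step assumes a mirrored volume exists in the disk group. The volume-creation command itself is not shown in this extract; a typical invocation with vxassist would look like the following, where the volume name cps_volume matches the name used later in this procedure and the 2g size is only an assumption:

# vxassist -g cps_dg make cps_volume 2g nmirror=2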
Create a file system over the volume. The CP server configuration utility only supports vxfs file system type. If you use an alternate file system, then you must configure CP server manually. Depending on the operating system that your CP server runs, enter the following command:
AIX:     # mkfs -V vxfs /dev/vx/rdsk/cps_dg/cps_volume
HP-UX:   # mkfs -F vxfs /dev/vx/rdsk/cps_dg/cps_volume
Linux:   # mkfs -t vxfs /dev/vx/rdsk/cps_dg/cps_volume
Solaris: # mkfs -F vxfs /dev/vx/rdsk/cps_dg/cps_volume
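The new file system must then be made available at the path that the configuration utility later uses as the CP server database location (/etc/VRTScps/db by default). A minimal sketch for Solaris follows; mounting the volume at this path is an assumption based on the default database path shown later in this chapter:

# mkdir -p /etc/VRTScps/db
# mount -F vxfs /dev/vx/dsk/cps_dg/cps_volume /etc/VRTScps/db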
1 Verify that the VRTScps package is installed on the node.
2 Run the CP server configuration script on the node where you want to configure the CP server:
# /opt/VRTScps/bin/configure_cps.pl
Enter 1 at the prompt to configure CP server on a single-node VCS cluster. The configuration utility then runs the following preconfiguration checks:
Checks to see if a single-node VCS cluster is running with the supported platform. The CP server requires VCS to be installed and configured before its configuration.
Checks to see if the CP server is already configured on the system. If the CP server is already configured, then the configuration utility informs the user and requests that the user unconfigure the CP server before trying to configure it.
Enter the valid virtual IP addresses on which the CP server process should depend:
Enter the CP server port number or press Enter to accept the default value (14250).
Enter a port number for virtual IP 10.209.83.85 in range [49152, 65535], or press enter for default port (14250) :
Using default port: 14250
Enter a port number for virtual IP 10.209.83.87 in range [49152, 65535], or press enter for default port (14250) :
Using default port: 14250
Choose whether the communication between the CP server and the VCS clusters has to be made secure. If you have not configured the CP server cluster in secure mode, enter n at the prompt. Warning: If the CP server cluster is not configured in secure mode, and if you enter y, then the script immediately exits. You must configure the CP server cluster in secure mode and rerun the CP server configuration script.
Veritas recommends secure communication between the CP server and application clusters. Enabling security requires Symantec Product Authentication Service to be installed and configured on the cluster. Do you want to enable Security for the communications? (y/n) (Default:y) :
Enter the absolute path of the CP server database or press Enter to accept the default value (/etc/VRTScps/db).
CP Server uses an internal database to store the client information. Note: As the CP Server is being configured on a single node VCS, the database can reside on local file system. Enter absolute path of the database (Default:/etc/VRTScps/db):
10 The configuration utility proceeds with the configuration process, and creates
a vxcps.conf configuration file.
Successfully generated the /etc/vxcps.conf configuration file.
Successfully created directory /etc/VRTScps/db.
Configuring CP Server Service Group (CPSSG) for this cluster
----------------------------------------------
11 Enter the number of NIC resources that you want to configure. You must use
a public NIC.
Enter how many NIC resources you want to configure [1 to 2]: 2
Answer the following questions for each NIC resource that you want to configure.
12 Enter a valid network interface for the virtual IP address for the CP server
process.
Enter a valid network interface for virtual IP 10.209.83.85 on mycps1.symantecexample.com: hme0
Enter a valid network interface for virtual IP 10.209.83.87 on mycps1.symantecexample.com: hme0
13 Enter the NIC resource you want to associate with the virtual IP addresses.
Enter the NIC resource you want to associate with the virtual IP 10.209.83.85 [1 to 2] : 1
Enter the NIC resource you want to associate with the virtual IP 10.209.83.87 [1 to 2] : 2
If you entered an IPv6 address, enter the prefix details at the prompt.
# hagrp -state CPSSG
#Group    Attribute    System                         Value
CPSSG     State        mycps1.symantecexample.com     |ONLINE|
It also generates the configuration file for CP server (/etc/vxcps.conf). The configuration utility adds the vxcpserv process and other resources to the VCS configuration in the CP server service group (CPSSG). For information about the CPSSG, refer to the Veritas Cluster Server Administrator's Guide. In addition, the main.cf samples contain details about the vxcpserv resource and its dependencies. See Sample configuration files for CP server on page 464. To configure the CP server on an SFHA cluster
1 Verify that the VRTScps package is installed on each node.
2 Make sure that you have configured passwordless ssh or rsh on the CP server cluster nodes.
3 Run the CP server configuration script on any node in the cluster:
# /opt/VRTScps/bin/configure_cps.pl [-n]
The CP server configuration utility uses ssh by default to communicate between systems. Use the -n option for rsh communication.
Enter 2 at the prompt to configure CP server on an SFHA cluster. The configuration utility then runs the following preconfiguration checks:
Checks to see if an SFHA cluster is running with the supported platform. The CP server requires SFHA to be installed and configured before its configuration.
Checks to see if the CP server is already configured on the system. If the CP server is already configured, then the configuration utility informs the user and requests that the user unconfigure the CP server before trying to configure it.
Enter the valid virtual IP addresses on which the CP server process should depend:
Enter the CP server port number or press Enter to accept the default value (14250).
Enter a port number for virtual IP 10.209.83.85 in range [49152, 65535], or press enter for default port (14250) :
Using default port: 14250
Enter a port number for virtual IP 10.209.83.87 in range [49152, 65535], or press enter for default port (14250) :
Using default port: 14250
Choose whether the communication between the CP server and the VCS clusters has to be made secure. If you have not configured the CP server cluster in secure mode, enter n at the prompt. Warning: If the CP server cluster is not configured in secure mode, and if you enter y, then the script immediately exits. You must configure the CP server cluster in secure mode and rerun the CP server configuration script.
Veritas recommends secure communication between the CP server and application clusters. Enabling security requires Symantec Product Authentication Service to be installed and configured on the cluster. Do you want to enable Security for the communications? (y/n) (Default:y) :
Enter the absolute path of the CP server database or press Enter to accept the default value (/etc/VRTScps/db).
CP Server uses an internal database to store the client information. Note: As the CP Server is being configured on SFHA cluster, the database should reside on shared storage with vxfs file system. Please refer to documentation for information on setting up of shared storage for CP server database. Enter absolute path of the database (Default:/etc/VRTScps/db):
11 The configuration utility proceeds with the configuration process, and creates
a vxcps.conf configuration file on each node. The following output is for one node:
Successfully generated the /etc/vxcps.conf configuration file.
Successfully created directory /etc/VRTScps/db.
Creating mount point /etc/VRTScps/db on mycps1.symantecexample.com.
Copying configuration file /etc/vxcps.conf to mycps1.symantecexample.com
Configuring CP Server Service Group (CPSSG) for this cluster
----------------------------------------------
12 Enter the number of NIC resources that you want to configure. You must use
a public NIC.
Enter how many NIC resources you want to configure [1 to 2]: 2
Answer the following questions for each NIC resource that you want to configure.
13 Confirm whether you use the same NIC name for the virtual IP on all the
systems in the cluster.
Is the name of network interfaces for NIC resource - 1 same on all the systems?[y/n] : y
14 Enter a valid network interface for the virtual IP address for the CP server
process.
Enter a valid interface for virtual IP 10.209.83.85 on all the systems : hme0
15 Enter the NIC resource you want to associate with the virtual IP addresses.
Enter the NIC resource you want to associate with the virtual IP 10.209.83.85 [1 to 2] : 1
Enter the NIC resource you want to associate with the virtual IP 10.209.83.87 [1 to 2] : 2
If you entered an IPv6 address, enter the prefix details at the prompt.
18 Enter the name of the disk group for the CP server database.
Enter the name of diskgroup for cps database : cps_dg
19 Enter the name of the volume that is created on the above disk group.
Enter the name of volume created on diskgroup cps_dg : cps_volume
# hagrp -state CPSSG
#Group    Attribute    System    Value
CPSSG     State        mycps1    |ONLINE|
CPSSG     State        mycps2    |OFFLINE|
It also generates the configuration file for CP server (/etc/vxcps.conf). The configuration utility adds the vxcpserv process and other resources to the VCS configuration in the CP server service group (CPSSG). For information about the CPSSG, refer to the Veritas Cluster Server Administrator's Guide. In addition, the main.cf samples contain details about the vxcpserv resource and its dependencies. See Sample configuration files for CP server on page 464.
Stop VCS on each node in the CP server cluster using the following command:
# hastop -local
Edit the main.cf file to add the CPSSG service group on any node. Use the CPSSG service group in the main.cf as an example: See Sample configuration files for CP server on page 464. Customize the resources under the CPSSG service group as per your configuration.
Create the /etc/vxcps.conf file using the sample configuration file provided at /etc/vxcps/vxcps.conf.sample. Based on whether you have configured the CP server cluster in secure mode or not, do the following:
For a CP server cluster which is configured in secure mode, edit the /etc/vxcps.conf file to set security=1. For a CP server cluster which is not configured in secure mode, edit the /etc/vxcps.conf file to set security=0.
Symantec recommends enabling security for communication between CP server and the application clusters.
Verify that the following configuration files are updated with the information you provided during the CP server configuration process:
/etc/vxcps.conf (CP server configuration file)
/etc/VRTSvcs/conf/config/main.cf (VCS configuration file)
/etc/VRTScps/db (default location for the CP server database)
Run the cpsadm command to check if the vxcpserv process is listening on the configured Virtual IP.
# cpsadm -s cp_server -a ping_cps
where cp_server is the virtual IP address or the virtual hostname of the CP server.
Chapter
Configuring VCS
This chapter includes the following topics:
Overview of tasks to configure VCS using the script-based installer
Starting the software configuration
Specifying systems for configuration
Configuring the cluster name
Configuring private heartbeat links
Configuring the virtual IP of the cluster
Configuring the cluster in secure mode
Configuring a secure cluster node by node
Adding VCS users
Configuring SMTP email notification
Configuring SNMP trap notification
Configuring global clusters
Completing the VCS configuration
Verifying and updating licenses on the system
Specify the systems where you want to configure VCS.
Configure the basic cluster.
Configure the virtual IP address of the cluster (optional). See Configuring the virtual IP of the cluster on page 119.
Configure the cluster in secure mode (optional). See Configuring the cluster in secure mode on page 121.
Add VCS users (required if you did not configure the cluster in secure mode). See Adding VCS users on page 126.
Configure SMTP email notification (optional). See Configuring SMTP email notification on page 127.
Configure SNMP trap notification (optional). See Configuring SNMP trap notification on page 129.
Configure global clusters (optional). See Configuring global clusters on page 131.
1 Confirm that you are logged in as the superuser and that you have mounted the product disc.
2 Start the installer.
# ./installer
The installer starts the product installation program with a copyright message and specifies the directory where the logs are created.
3 From the opening Selection Menu, choose: C for "Configure an Installed Product."
4 From the displayed list of products to configure, choose the corresponding number for your product: Veritas Cluster Server
1 Confirm that you are logged in as the superuser.
2 Start the installvcs program.
# /opt/VRTS/install/installvcs -configure
The installer begins with a copyright message and specifies the directory where the logs are created.
Enter the names of the systems where you want to configure VCS.
Enter the operating_system system names separated by spaces: [q,?] (galaxy) galaxy nebula
Review the output as the installer verifies the systems you specify. The installer does the following tasks:
Checks that the local node running the installer can communicate with remote nodes. If the installer finds ssh binaries, it confirms that ssh can operate without requests for passwords or passphrases.
Makes sure that the systems are running with the supported operating system.
Makes sure the installer started from the global zone.
Checks whether VCS is installed.
Exits if VCS 6.0 is not installed.
Review the installer output about the I/O fencing configuration and confirm whether you want to configure fencing in enabled mode.
Do you want to configure I/O Fencing in enabled mode? [y,n,q,?] (y)
1 Review the configuration instructions that the installer presents.
2 Enter a unique cluster name.
Enter the unique cluster name: [q,?] clus1
Choose one of the following options at the installer prompt based on whether you want to configure LLT over Ethernet or UDP.
Option 1: LLT over Ethernet (answer installer questions) Enter the heartbeat link details at the installer prompt to configure LLT over Ethernet. Skip to step 2.
Option 2: LLT over UDP (answer installer questions) Make sure that each NIC you want to use as heartbeat link has an IP address configured. Enter the heartbeat link details at the installer prompt to configure LLT over UDP. If you had not already configured IP addresses to the NICs, the installer provides you an option to detect the IP address for a given NIC. Skip to step 3.

Option 3: Automatically detect configuration for LLT over Ethernet Allow the installer to automatically detect the heartbeat link details to configure LLT over Ethernet. The installer tries to detect all connected links between all systems. Skip to step 5.
If you chose option 1, enter the network interface card details for the private heartbeat links. The installer discovers and lists the network interface cards. Answer the installer prompts. The following example shows different NICs based on architecture:
For Solaris SPARC: You must not enter the network interface card that is used for the public network (typically hme0.)
Enter the NIC for the first private heartbeat link on galaxy: [b,q,?] qfe0
Would you like to configure a second private heartbeat link? [y,n,q,b,?] (y)
Enter the NIC for the second private heartbeat link on galaxy: [b,q,?] qfe1
Would you like to configure a third private heartbeat link? [y,n,q,b,?](n)
Do you want to configure an additional low priority heartbeat link? [y,n,q,b,?] (n)
For Solaris x64: You must not enter the network interface card that is used for the public network (typically bge0.)
Enter the NIC for the first private heartbeat link on galaxy: [b,q,?] e1000g1
Would you like to configure a second private heartbeat link? [y,n,q,b,?] (y)
Enter the NIC for the second private heartbeat link on galaxy: [b,q,?] e1000g2
Would you like to configure a third private heartbeat link? [y,n,q,b,?](n)
If you chose option 2, enter the NIC details for the private heartbeat links. This step uses examples such as private_NIC1 or private_NIC2 to refer to the available names of the NICs.
Enter the NIC for the first private heartbeat link on galaxy: [b,q,?] private_NIC1
Do you want to use address 192.168.0.1 for the first private heartbeat link on galaxy: [y,n,q,b,?] (y)
Enter the UDP port for the first private heartbeat link on galaxy: [b,q,?] (50000) ?
Would you like to configure a second private heartbeat link? [y,n,q,b,?] (y)
Enter the NIC for the second private heartbeat link on galaxy: [b,q,?] private_NIC2
Do you want to use address 192.168.1.1 for the second private heartbeat link on galaxy: [y,n,q,b,?] (y)
Enter the UDP port for the second private heartbeat link on galaxy: [b,q,?] (50001) ?
Do you want to configure an additional low priority heartbeat link? [y,n,q,b,?] (n) y
Enter the NIC for the low priority heartbeat link on galaxy: [b,q,?] (private_NIC0)
Do you want to use address 192.168.3.1 for the low priority heartbeat link on galaxy: [y,n,q,b,?] (y)
Enter the UDP port for the low priority heartbeat link on galaxy: [b,q,?] (50004)
Choose whether to use the same NIC details to configure private heartbeat links on other systems.
Are you using the same NICs for private heartbeat links on all systems? [y,n,q,b,?] (y)
If you want to use the NIC details that you entered for galaxy, make sure the same NICs are available on each system. Then, enter y at the prompt. For LLT over UDP, if you want to use the same NICs on other systems, you still must enter unique IP addresses on each NIC for other systems. If the NIC device names are different on some of the systems, enter n. Provide the NIC details for each system as the program prompts.
If you chose option 3, the installer detects NICs on each system and network links, and sets link priority. If the installer fails to detect heartbeat links or fails to find any high-priority links, then choose option 1 or option 2 to manually configure the heartbeat links. See step 2 for option 1, or step 3 for option 2.
The cluster cannot be configured if the cluster ID 60842 is in use by another cluster. The installer performs a check to determine whether the cluster ID is a duplicate. The check takes less than a minute to complete.
Would you like to check if the cluster ID is in use by another cluster? [y,n,q] (y)
1 Review the required information to configure the virtual IP of the cluster.
2 When the system prompts whether you want to configure the virtual IP, enter y.
3 Confirm whether you want to use the discovered public NIC on the first system. Do one of the following:
If the discovered NIC is the one to use, press Enter. If you want to use a different NIC, type the name of a NIC to use and press Enter.
Active NIC devices discovered on galaxy: hme0
Enter the NIC for Virtual IP of the Cluster to use on galaxy: [b,q,?](hme0)
Confirm whether you want to use the same public NIC on all nodes. Do one of the following:
If all nodes use the same public NIC, enter y. If unique NICs are used, enter n and enter a NIC for each node.
Enter the virtual IP address for the cluster. You can enter either an IPv4 address or an IPv6 address.
For IPv4:
Enter the virtual IP address.
Enter the Virtual IP address for the Cluster: [b,q,?] 192.168.1.16

Confirm the default netmask or enter another one:
Enter the netmask for IP 192.168.1.16: [b,q,?] (255.255.240.0)

Verify and confirm the Cluster Virtual IP information.
Cluster Virtual IP verification:
NIC: hme0
IP: 192.168.1.16
Netmask: 255.255.240.0
Is this information correct? [y,n,q] (y)
For IPv6:

Enter the virtual IP address.
Enter the Virtual IP address for the Cluster: [b,q,?] 2001:454e:205a:110:203:baff:feee:10

Enter the prefix for the virtual IPv6 address you provided. For example:
Enter the Prefix for IP 2001:454e:205a:110:203:baff:feee:10: [b,q,?] 64

Verify and confirm the Cluster Virtual IP information.
Cluster Virtual IP verification:
NIC: hme0
IP: 2001:454e:205a:110:203:baff:feee:10
Prefix: 64
Is this information correct? [y,n,q] (y)
Would you like to configure the VCS cluster in secure mode? [y,n,q,?] (n)
1 Ensure that you are logged in as superuser on one of the nodes in the cluster.
2 Enter the following command:
# /opt/VRTS/install/installvcs -securitytrust
The installer specifies the location of the log files. It then lists the cluster information such as cluster name, cluster ID, node names, and service groups.
When the installer prompts you for the broker information, specify the IP address, port number, and the data directory for which you want to establish trust relationship with the broker.
Input the broker name or IP address: 15.193.97.204
Input the broker port: (14545)
Specify a valid data directory or press Enter to accept the default directory.
The installer sets up trust relationship with the broker for all nodes in the cluster and displays a confirmation.
Setup trust with broker 15.193.97.204 on cluster node1 ........Done Setup trust with broker 15.193.97.204 on cluster node2 ........Done
The installer specifies the location of the log files, summary file, and response file and exits.
If you entered incorrect details for broker IP address, port number, or directory name, the installer displays an error. It specifies the location of the log files, summary file, and response file and exits.
Configure security on one node. See Configuring the first node on page 123.
Configure security on the remaining nodes. See Configuring the remaining nodes on page 124.
Complete the manual configuration steps.
1 Ensure that you are logged in as superuser.
2 Enter the following command:
# /opt/VRTS/install/installvcs -securityonenode
The installer lists information about the cluster, nodes, and service groups. If VCS is not configured or if VCS is not running on all nodes of the cluster, the installer prompts whether you want to continue configuring security. It then prompts you for the node that you want to configure.
VCS is not running on all systems in this cluster. All VCS systems must be in RUNNING state. Do you want to continue? [y,n,q] (n) y

1) Perform security configuration on first node and export security configuration files.
2) Perform security configuration on remaining nodes with security configuration files.

Select the option you would like to perform [1-2,q.?] 1
Warning: All configurations about cluster users are deleted when you configure the first node. You can use the /opt/VRTSvcs/bin/hauser command to create cluster users manually.
The installer completes the secure configuration on the node. It specifies the location of the security configuration files and prompts you to copy these files to the other nodes in the cluster. The installer also specifies the location of log files, summary file, and response file. Copy the security configuration files from the /var/VRTSvcs/vcsauth/bkup directory to temporary directories on the other nodes in the cluster.
1 Ensure that you are logged in as superuser.
2 Enter the following command:
# /opt/VRTS/install/installvcs -securityonenode
The installer lists information about the cluster, nodes, and service groups. If VCS is not configured or if VCS is not running on all nodes of the cluster, the installer prompts whether you want to continue configuring security. It then prompts you for the node that you want to configure. Enter 2.
VCS is not running on all systems in this cluster. All VCS systems must be in RUNNING state. Do you want to continue? [y,n,q] (n) y

1) Perform security configuration on first node and export security configuration files.
2) Perform security configuration on remaining nodes with security configuration files.

Select the option you would like to perform [1-2,q.?] 2
The installer completes the secure configuration on the node. It specifies the location of log files, summary file, and response file.
On the first node, freeze all service groups except the ClusterService service group.
# /opt/VRTSvcs/bin/haconf -makerw
# /opt/VRTSvcs/bin/hagrp -list Frozen=0
# /opt/VRTSvcs/bin/hagrp -freeze groupname -persistent
# /opt/VRTSvcs/bin/haconf -dump -makero
On the first node, edit the /etc/VRTSvcs/conf/config/main.cf file to resemble the following:
cluster clus1 (
    SecureClus = 1
    )
On the first node, start VCS. Then start VCS on the remaining nodes.
# /opt/VRTSvcs/bin/hastart
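The groups that were frozen before this procedure normally need to be unfrozen once VCS is running again. That step is not shown in this extract; a sketch that mirrors the earlier freeze commands (groupname stands for each group you froze) is:

# /opt/VRTSvcs/bin/haconf -makerw
# /opt/VRTSvcs/bin/hagrp -unfreeze groupname -persistent
# /opt/VRTSvcs/bin/haconf -dump -makero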
1 Review the required information to add VCS users.
2 Reset the password for the Admin user, if necessary.
Do you wish to accept the default cluster credentials of 'admin/password'? [y,n,q] (y) n
Enter the user name: [b,q,?] (admin)
Enter the password:
Enter again:
Review the summary of the newly added users and confirm the information.
1 Review the required information to configure the SMTP email notification.
2 Specify whether you want to configure the SMTP notification.
Do you want to configure SMTP notification? [y,n,q,?] (n) y
If you do not want to configure the SMTP notification, you can skip to the next configuration option. See Configuring SNMP trap notification on page 129.
If you want to add another SMTP recipient, enter y and provide the required information at the prompt.
Would you like to add another SMTP recipient? [y,n,q,b] (n) y
Enter the full email address of the SMTP recipient
(example: [email protected]): [b,q,?] [email protected]
Enter the minimum severity of events for which mail should be sent to [email protected] [I=Information, W=Warning, E=Error, S=SevereError]: [b,q,?] E
1 Review the required information to configure the SNMP notification feature of VCS.
2 Specify whether you want to configure the SNMP notification.
Do you want to configure SNMP notification? [y,n,q,?] (n) y
If you skip this option and if you had installed a valid HA/DR license, the installer presents you with an option to configure this cluster as global cluster. If you did not install an HA/DR license, the installer proceeds to configure VCS based on the configuration details you provided. See Configuring global clusters on page 131.
Provide information to configure SNMP trap notification. Provide the following information:
If you want to add another SNMP console, enter y and provide the required information at the prompt.
Would you like to add another SNMP console? [y,n,q,b] (n) y
Enter the SNMP console system name: [b,q,?] jupiter
Enter the minimum severity of events for which SNMP traps should be sent to jupiter [I=Information, W=Warning, E=Error, S=SevereError]: [b,q,?] S
1 Review the required information to configure the global cluster option.
2 Specify whether you want to configure the global cluster option.
Do you want to configure the Global Cluster Option? [y,n,q] (n) y
If you skip this option, the installer proceeds to configure VCS based on the configuration details you provided.
Provide information to configure this cluster as global cluster. The installer prompts you for a NIC, a virtual IP address, and value for the netmask. If you had entered virtual IP address details, the installer discovers the values you entered. You can use the same virtual IP address for global cluster configuration or enter different values. You can also enter an IPv6 address as a virtual IP address.
Verify and confirm the configuration of the global cluster. For example:
For IPv4:
Global Cluster Option configuration verification:
NIC: hme0
IP: 192.168.1.16
Netmask: 255.255.240.0
Is this information correct? [y,n,q] (y)
On Solaris x64, an example for the NIC's port is bge0.

For IPv6:
Global Cluster Option configuration verification:
NIC: hme0
IP: 2001:454e:205a:110:203:baff:feee:10
Prefix: 64
Is this information correct? [y,n,q] (y)
On Solaris x64, an example for the NIC's port is bge0.
2 Review the output as the installer stops various processes and performs the configuration. The installer then restarts VCS and its related processes.
3 Enter y at the prompt to send the installation information to Symantec.
Would you like to send the information about this installation to Symantec to help improve installation in the future? [y,n,q,?] (y) y
After the installer configures VCS successfully, note the location of the summary, log, and response files that the installer creates. The files provide useful information that can assist you with the configuration and can also assist future configurations.
summary file: Describes the cluster and its configured resources.
log file: Details the entire configuration.
response file: Contains the configuration information that can be used to perform secure or unattended installations on other systems. See Configuring VCS using response files on page 185.
The license key
The type of license
The product for which it applies
Its expiration date, if any. Demo keys have expiration dates. Permanent keys and site keys do not have expiration dates.

License Key    = xxx-xxx-xxx-xxx-xxx
Product Name   = Veritas Cluster Server
Serial Number  = xxxxx
License Type   = PERMANENT
OEM ID         = xxxxx
Features :=
Platform       =
Version        =
Tier           =
Reserved       =
Mode           =
1 Make sure you have permissions to log in as root on each of the nodes in the cluster.
2 Shut down VCS on all nodes in the cluster:
# hastop -all -force
Enter the permanent license key using the following command on each node:
# vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX-XXX
Make sure demo licenses are replaced on all cluster nodes before starting VCS.
# vxlicrep
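After the permanent keys are in place on every node, VCS can be started again. Restarting with hastart on each node is a sketch of the usual follow-up and is an assumption here, since the restart step is not shown in this extract:

# hastart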
Chapter

Configuring VCS clusters for data integrity
Setting up disk-based I/O fencing using installvcs program
Setting up server-based I/O fencing using installvcs program
Setting up non-SCSI-3 server-based I/O fencing in virtual environments using installvcs program
Enabling or disabling the preferred fencing policy
List the new external disks or the LUNs as recognized by the operating system. On each node, enter:
# devfsadm
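Before initializing the disks, you may want to confirm that both the operating system and VxVM can see the new devices. A quick check, shown here as a sketch, is:

# vxdisk -o alldgs list
(the new LUNs should appear in the output, typically with a status of online invalid until they are initialized)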
To initialize the disks as VxVM disks, use one of the following methods:
Use the interactive vxdiskadm utility to initialize the disks as VxVM disks.
For more information, see the Veritas Storage Foundation Administrator's Guide.
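The alternative method referred to in the next sentence, the vxdisksetup command, is elided in this extract. A typical invocation is sketched below; the device name is illustrative only:

# vxdisksetup -i c2t13d0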
Repeat this command for each disk you intend to use as a coordinator disk.
The installvcs program starts with a copyright message and verifies the cluster information. Note the location of log files which you can access in the event of any problem with the configuration process.
Confirm that you want to proceed with the I/O fencing configuration at the prompt. The program checks that the local node running the script can communicate with remote nodes and checks whether VCS 6.0 is configured properly.
Review the I/O fencing configuration options that the program presents. Type 2 to configure disk-based I/O fencing.
Select the fencing mechanism to be configured in this Application Cluster [1-4,b,q] 2
Review the output as the configuration program checks whether VxVM is already started and is running.
If the check fails, configure and enable VxVM before you repeat this procedure. If the check passes, then the program prompts you for the coordinator disk group information.
Choose whether to use an existing disk group or create a new disk group to configure as the coordinator disk group. The program lists the available disk group names and provides an option to create a new disk group. Perform one of the following:
To use an existing disk group, enter the number corresponding to the disk group at the prompt. The program verifies whether the disk group you chose has an odd number of disks and that the disk group has a minimum of three disks. To create a new disk group, perform the following steps:
Enter the number corresponding to the Create a new disk group option. The program lists the available disks that are in the CDS disk format in the cluster and asks you to choose an odd number of disks, with at least three disks, to be used as coordinator disks. Symantec recommends that you use three disks as coordination points for disk-based I/O fencing.
If fewer VxVM CDS disks are available than required, the installer asks whether you want to initialize more disks as VxVM disks. Choose the disks you want to initialize as VxVM disks and then use them to create the new disk group.
Enter the numbers corresponding to the disks that you want to use as coordinator disks.
Enter the disk group name.
Verify that the coordinator disks you chose meet the I/O fencing requirements. You must verify that the disks are SCSI-3 PR compatible using the vxfentsthdw utility and then return to this configuration program. See Checking shared disks for I/O fencing on page 140.
7 After you confirm the requirements, the program creates the coordinator disk group with the information you provided.
8 Enter the I/O fencing disk policy that you chose to use. For example:

Enter disk policy for the disk(s) (raw/dmp): [b,q,?] raw
The installer then does the following:
Populates the /etc/vxfendg file with this disk group information
Populates the /etc/vxfenmode file on each cluster node with the I/O fencing mode information and with the SCSI-3 disk policy information
Verify and confirm the I/O fencing configuration information that the installer summarizes.
The installer then does the following:
Stops VCS and I/O fencing on each node.
Configures disk-based I/O fencing and starts the I/O fencing process.
Updates the VCS configuration file main.cf if necessary.
Copies the /etc/vxfenmode file to a date and time suffixed file /etc/vxfenmode-date-time. This backup file is useful if any future fencing configuration fails.
Starts VCS on each node to make sure that VCS is cleanly configured to use the I/O fencing feature.
11 Review the output as the configuration program displays the location of the
log files, the summary files, and the response files.
Verifying the Array Support Library (ASL). See Verifying Array Support Library (ASL) on page 141.
Verifying that the nodes have access to the same disk. See Verifying that the nodes have access to the same disk on page 142.
Testing the shared disks for SCSI-3. See Testing the disks using vxfentsthdw utility on page 143.
If the Array Support Library (ASL) for the array that you add is not installed, obtain and install it on each node before proceeding. The ASL for the supported storage device that you add is available from the disk array vendor or Symantec technical support.
Verify that the ASL for the disk array is installed on each of the nodes. Run the following command on each node and examine the output to verify the installation of ASL. The following output is a sample:
# vxddladm listsupport all
LIBNAME              VID        PID
===========================================================
libvx3par.so         3PARdata   VV
libvxCLARiiON.so     DGC        All
libvxFJTSYe6k.so     FUJITSU    E6000
libvxFJTSYe8k.so     FUJITSU    All
libvxap.so           SUN        All
libvxatf.so          VERITAS    ATFNODES
libvxcompellent.so   COMPELNT   Compellent Vol
libvxcopan.so        COPANSYS   8814, 8818
Scan all disk drives and their attributes, update the VxVM device list, and reconfigure DMP with the new devices. Type:
# vxdisk scandisks
See the Veritas Volume Manager documentation for details on how to add and configure disks.
1 Verify the connection of the shared storage for data to two of the nodes on which you installed VCS. Ensure that both nodes are connected to the same disk during the testing.
2 Use the vxfenadm command to verify the disk serial number.
# vxfenadm -i diskpath
Refer to the vxfenadm (1M) manual page. For example, an EMC disk is accessible by the /dev/rdsk/c1t1d0s2 path on node A and the /dev/rdsk/c2t1d0s2 path on node B. From node A, enter:
# vxfenadm -i /dev/rdsk/c1t1d0s2
Vendor id       : EMC
Product id      : SYMMETRIX
Revision        : 5567
Serial Number   : 42031000a
The same serial number information should appear when you enter the equivalent command on node B using the /dev/rdsk/c2t1d0s2 path. On a disk from another manufacturer, Hitachi Data Systems, the output is different and may resemble:
# vxfenadm -i /dev/rdsk/c3t1d2s2
Vendor id       : HITACHI
Product id      : OPEN-3          -SUN
Revision        : 0117
Serial Number   : 0401EB6F0002
For more information on how to replace coordinator disks, refer to the Veritas Cluster Server Administrator's Guide.
Make sure system-to-system communication functions properly. See Setting up inter-system communication on page 493.
From one node, start the utility. Run the utility with the -n option if you use rsh for communication.
# vxfentsthdw [-n]
The script warns that the tests overwrite data on the disks. After you review the overview and the warning, confirm to continue the process and enter the node names. Warning: The tests overwrite and destroy data on the disks unless you use the -r option.
******** WARNING!!!!!!!! ********
THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!
Do you still want to continue : [y/n] (default: n) y
Enter the first node of the cluster: galaxy
Enter the second node of the cluster: nebula
Enter the names of the disks that you want to check. Each node may know the same disk by a different name:
Enter the disk name to be checked for SCSI-3 PGR on node
IP_adrs_of_galaxy in the format:
for dmp: /dev/vx/rdmp/cxtxdxsx
for raw: /dev/rdsk/cxtxdxsx
Make sure it's the same disk as seen by nodes IP_adrs_of_galaxy and IP_adrs_of_nebula
/dev/rdsk/c2t13d0s2

Enter the disk name to be checked for SCSI-3 PGR on node
IP_adrs_of_nebula in the format:
for dmp: /dev/vx/rdmp/cxtxdxsx
for raw: /dev/rdsk/cxtxdxsx
Make sure it's the same disk as seen by nodes IP_adrs_of_galaxy and IP_adrs_of_nebula
/dev/rdsk/c2t13d0s2
If the serial numbers of the disks are not identical, then the test terminates.
Review the output as the utility performs the checks and reports its activities. If a disk is ready for I/O fencing on each node, the utility reports success for each node. For example, the utility displays the following message for the node galaxy.
ALL tests on the disk /dev/rdsk/c1t1d0s2 have PASSED
The disk is now ready to be configured for I/O Fencing on node galaxy
Run the vxfentsthdw utility for each disk you intend to verify.
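If the disks contain data that you cannot overwrite, the -r option mentioned in the warning above runs the checks in a non-destructive, read-only mode. This is a hedged example; add the -n option as well if you use rsh for communication:

# vxfentsthdw -r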
Setting up server-based I/O fencing using installvcs program
With server-based fencing, the coordination points can be either of the following:
A combination of CP servers and SCSI-3 compliant coordinator disks
CP servers only
Symantec also supports server-based fencing with a single highly available CP server that acts as a single coordination point.
See About planning to configure I/O fencing on page 89. See Recommended CP server configurations on page 93. This section covers the following example procedures:
Mix of CP servers and coordinator disks: see To configure server-based fencing for the VCS cluster (one CP server and two coordinator disks) on page 146.
Single CP server: see To configure server-based fencing for the VCS cluster (single CP server) on page 151.
To configure server-based fencing for the VCS cluster (one CP server and two coordinator disks)
Depending on the server-based configuration model in your setup, make sure of the following:
CP servers are configured and are reachable from the VCS cluster. The VCS cluster is also referred to as the application cluster or the client cluster. See Setting up the CP server on page 96. The coordination disks are verified for SCSI3-PR compliance. See Checking shared disks for I/O fencing on page 140.
The installvcs program starts with a copyright message and verifies the cluster information. Note the location of log files which you can access in the event of any problem with the configuration process.
Confirm that you want to proceed with the I/O fencing configuration at the prompt. The program checks that the local node running the script can communicate with remote nodes and checks whether VCS 6.0 is configured properly.
Review the I/O fencing configuration options that the program presents. Type 1 to configure server-based I/O fencing.
Select the fencing mechanism to be configured in this Application Cluster [1-4,b,q] 1
Make sure that the storage supports SCSI3-PR, and answer y at the following prompt.
Does your storage environment support SCSI3 PR? [y,n,q] (y)
Provide the following details about the coordination points at the installer prompt:
Enter the total number of coordination points including both servers and disks. This number should be at least 3.
Enter the total number of co-ordination points including both Coordination Point servers and disks: [b] (3)
Enter the total number of coordinator disks among the coordination points.
Enter the total number of disks among these: [b] (0) 2
Enter the total number of virtual IP addresses or the total number of fully qualified host names for each of the CP servers.
Enter the total number of Virtual IP addresses or fully qualified host name for the Coordination Point Server #1: [b,q,?] (1) 2
Enter the virtual IP addresses or the fully qualified host name for each of the CP servers. The installer assumes these values to be identical as viewed from all the application cluster nodes.
Enter the Virtual IP address or fully qualified host name #1 for the Coordination Point Server #1: [b] 10.209.80.197
The installer prompts for this information for the number of virtual IP addresses you want to configure for each CP server.
Enter the port in the range [49152, 65535] which the Coordination Point Server 10.209.80.197 would be listening on or simply accept the default port suggested: [b] (14250)
Enter the I/O fencing disk policy for the coordinator disks.
Enter disk policy for the disk(s) (raw/dmp): [b,q,?] raw
Choose the coordinator disks from the list of available disks that the installer displays. Ensure that the disk you choose is available from all the VCS (application cluster) nodes. The number of times that the installer asks you to choose the disks depends on the information that you provided in step 6. For example, if you had chosen to configure two coordinator disks, the installer asks you to choose the first disk and then the second disk:
Select disk number 1 for co-ordination point 1) c1t1d0s2 2) c2t1d0s2 3) c3t1d0s2 Please enter a valid disk which is available from all the cluster nodes for co-ordination point [1-3,q] 1
If you have not already checked the disks for SCSI-3 PR compliance in step 1, check the disks now. The installer displays a message that recommends you to verify the disks in another window and then return to this configuration procedure. Press Enter to continue, and confirm your disk selection at the installer prompt. Enter a disk group name for the coordinator disks or accept the default.
Enter the disk group name for coordinating disk(s): [b] (vxfencoorddg)
Verify and confirm the coordination points information for the fencing configuration. For example:
Total number of coordination points being used: 3
Coordination Point Server ([VIP or FQHN]:Port):
    1. 10.109.80.197 ([10.109.80.197]:14250)
SCSI-3 disks:
    1. c1t1d0s2
    2. c2t1d0s2
Disk Group name for the disks in customized fencing: vxfencoorddg
Disk policy used for customized fencing: raw
The installer initializes the disks and the disk group and deports the disk group on the VCS (application cluster) node.
12 Review the output as the installer updates the application cluster information on each of the CP servers to ensure connectivity between them. The installer then populates the /etc/vxfenmode file with the appropriate details in each of the application cluster nodes.
Updating client cluster information on Coordination Point Server 10.210.80.197
Adding the client cluster to the Coordination Point Server 10.210.80.197 .......... Done
Registering client node galaxy with Coordination Point Server 10.210.80.197 ...... Done
Adding CPClient user for communicating to Coordination Point Server 10.210.80.197 .... Done
Adding cluster clus1 to the CPClient user on Coordination Point Server 10.210.80.197 .. Done
Registering client node nebula with Coordination Point Server 10.210.80.197 ..... Done
Adding CPClient user for communicating to Coordination Point Server 10.210.80.197 .... Done
Adding cluster clus1 to the CPClient user on Coordination Point Server 10.210.80.197 .. Done
Updating /etc/vxfenmode file on galaxy .................................. Done
Updating /etc/vxfenmode file on nebula .................................. Done
13 Review the output as the installer stops and restarts the VCS and the fencing processes on each application cluster node, and completes the I/O fencing configuration.
15 Note the location of the configuration log files, summary files, and response files that the installer displays for later use.
To configure server-based fencing for the VCS cluster (single CP server)
Make sure that the CP server is configured and is reachable from the VCS cluster. The VCS cluster is also referred to as the application cluster or the client cluster. See Setting up the CP server on page 96.
The installvcs program starts with a copyright message and verifies the cluster information. Note the location of log files which you can access in the event of any problem with the configuration process.
Confirm that you want to proceed with the I/O fencing configuration at the prompt. The program checks that the local node running the script can communicate with remote nodes and checks whether VCS 6.0 is configured properly.
Review the I/O fencing configuration options that the program presents. Type 1 to configure server-based I/O fencing.
Select the fencing mechanism to be configured in this Application Cluster [1-4,b,q] 1
Make sure that the storage supports SCSI3-PR, and answer y at the following prompt.
Does your storage environment support SCSI3 PR? [y,n,q] (y)
Read the installer warning carefully before you proceed with the configuration.
Enter the total number of virtual IP addresses or the total number of fully qualified host names for each of the CP servers.
Enter the total number of Virtual IP addresses or fully qualified host name for the Coordination Point Server #1: [b,q,?] (1) 2
Enter the virtual IP address or the fully qualified host name for the CP server. The installer assumes these values to be identical as viewed from all the application cluster nodes.
Enter the Virtual IP address or fully qualified host name #1 for the Coordination Point Server #1: [b] 10.209.80.197
The installer prompts for this information for the number of virtual IP addresses you want to configure for each CP server.
Verify and confirm the coordination points information for the fencing configuration. For example:
Total number of coordination points being used: 1 Coordination Point Server ([VIP or FQHN]:Port): 1. 10.109.80.197 ([10.109.80.197]:14250)
If the CP server is configured for security, the installer sets up secure communication between the CP server and the VCS (application cluster). After the installer establishes trust between the authentication brokers of the CP servers and the application cluster nodes, press Enter to continue.
11 Review the output as the installer updates the application cluster information on each of the CP servers to ensure connectivity between them. The installer then populates the /etc/vxfenmode file with the appropriate details in each of the application cluster nodes. The installer also populates the /etc/vxfenmode file with the entry single_cp=1 for such a single CP server fencing configuration.
Updating client cluster information on Coordination Point Server 10.210.80.197
Adding the client cluster to the Coordination Point Server 10.210.80.197 .......... Done
Registering client node galaxy with Coordination Point Server 10.210.80.197 ...... Done
Adding CPClient user for communicating to Coordination Point Server 10.210.80.197 .... Done
Adding cluster clus1 to the CPClient user on Coordination Point Server 10.210.80.197 .. Done
Registering client node nebula with Coordination Point Server 10.210.80.197 ..... Done
Adding CPClient user for communicating to Coordination Point Server 10.210.80.197 .... Done
Adding cluster clus1 to the CPClient user on Coordination Point Server 10.210.80.197 .. Done
Updating /etc/vxfenmode file on galaxy .................................. Done
Updating /etc/vxfenmode file on nebula .................................. Done
12 Review the output as the installer stops and restarts the VCS and the fencing processes on each application cluster node, and completes the I/O fencing configuration.
14 Note the location of the configuration log files, summary files, and response files that the installer displays for later use.
Setting up non-SCSI-3 server-based I/O fencing in virtual environments using installvcs program
If you have installed VCS in virtual environments that do not support SCSI-3 PR-compliant storage, you can configure non-SCSI-3 fencing.
To configure I/O fencing using the installvcs program in a non-SCSI-3 PR-compliant setup
The installvcs program starts with a copyright message and verifies the cluster information.
Confirm that you want to proceed with the I/O fencing configuration at the prompt. The program checks that the local node running the script can communicate with remote nodes and checks whether VCS 6.0 is configured properly.
Review the I/O fencing configuration options that the program presents. Type 1 to configure server-based I/O fencing.
Select the fencing mechanism to be configured in this Application Cluster [1-4,b,q] 1
Enter n to confirm that your storage environment does not support SCSI-3 PR.
Does your storage environment support SCSI3 PR? [y,n,q] (y) n
5 Confirm that you want to proceed with the non-SCSI-3 I/O fencing configuration at the prompt.
6 Enter the number of CP server coordination points you want to use in your setup.
7 Enter the following details for each CP server:
Enter the virtual IP address or the fully qualified host name. Enter the port address on which the CP server listens for connections. The default value is 14250. You can enter a different port address. Valid values are between 49152 and 65535.
The installer assumes that these values are identical from the view of the VCS cluster nodes that host the applications for high availability.
8 Verify and confirm the CP server information that you provided.
9 Verify and confirm the VCS cluster configuration information. Review the output as the installer performs the following tasks:
Updates the CP server configuration files on each CP server with the following details:
Registers each node of the VCS cluster with the CP server.
Adds the CP server user to the CP server.
Adds the VCS cluster to the CP server user.
Updates the following configuration files on each node of the VCS cluster
10 Review the output as the installer stops VCS on each node, starts I/O fencing on each node, updates the VCS configuration file main.cf, and restarts VCS with non-SCSI-3 server-based fencing. Confirm to configure the CP agent on the VCS cluster.
11 Confirm whether you want to send the installation information to Symantec.
12 After the installer configures I/O fencing successfully, note the location of the summary, log, and response files that the installer creates. The files provide useful information that can assist you with this configuration, and can also assist future configurations.
Enabling or disabling the preferred fencing policy
Make sure that the cluster is running with I/O fencing set up.
# vxfenadm -d
Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
# haclus -value UseFence
Set the value of the system-level attribute FencingWeight for each node in the cluster. For example, in a two-node cluster, where you want to assign galaxy five times more weight compared to nebula, run the following commands:
# hasys -modify galaxy FencingWeight 50
# hasys -modify nebula FencingWeight 10
Set the value of the group-level attribute Priority for each service group. For example, run the following command:
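A hedged example using the hagrp command; the service group name service_group is a placeholder for your own group:

# hagrp -modify service_group Priority 1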
Make sure that you assign a parent service group an equal or lower priority than its child service group. In case the parent and the child service groups are hosted in different subclusters, then the subcluster that hosts the child service group gets higher preference.
To view the fencing node weights that are currently set in the fencing driver, run the following command:
# vxfenconfig -a
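The policy itself is selected through the cluster-level attribute PreferredFencingPolicy, the same attribute that the disable procedure below sets to Disabled. The following is a hedged sketch for enabling it, assuming that System and Group are the values for the system-based and group-based race policies respectively:

# haconf -makerw
# haclus -modify PreferredFencingPolicy Group
# haconf -dump -makero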
Make sure that the cluster is running with I/O fencing set up.
# vxfenadm -d
Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
# haclus -value UseFence
To disable preferred fencing and use the default race policy, set the value of the cluster-level attribute PreferredFencingPolicy as Disabled.
# haconf -makerw
# haclus -modify PreferredFencingPolicy Disabled
# haconf -dump -makero
Chapter 10. Installing VCS
This chapter includes the following topics:
Before using the Veritas Web-based installer
Starting the Veritas Web-based installer
Obtaining a security exception on Mozilla Firefox
Performing a pre-installation check with the Veritas Web-based installer
Installing VCS with the Web-based installer
Target systems (the systems where you plan to install the Veritas products): must be a supported platform for VCS 6.0.
Installation server (the server from which you start the installation; the installation media is accessible from the installation server): must use the same operating system as the target systems and must be at one of the supported operating system update levels.
Administrative system: must have a Web browser. Supported browsers: Internet Explorer 6, 7, and 8; Firefox 3.x and later.
Start the Veritas XPortal Server process xprtlwid on the installation server:
# ./webinstaller start
The webinstaller script displays a URL. Note this URL. Note: If you do not see the URL, run the command again. The default listening port is 14172. If you have a firewall that blocks port 14172, use the -port option to use a free port instead.
2 On the administrative server, start the Web browser.
3 Navigate to the URL that the script displayed.
4 Certain browsers may display the following message:

Secure Connection Failed

Obtain a security exception for your browser. When prompted, enter root and root's password of the installation server.
Log in as superuser.
The following instructions are general. They may change because of the rapid release cycle of Mozilla browsers. To obtain a security exception
1 Click the Or you can add an exception link.
2 Click the Add Exception button.
3 Click the Get Certificate button.
4 Uncheck the Permanently Store this exception checkbox (recommended).
5 Click the Confirm Security Exception button.
6 Enter root in the User Name field and the root password of the web server in the Password field.
Start the Web-based installer. See Starting the Veritas Web-based installer on page 162.
2 On the Select a task and a product page, select Perform a Pre-installation Check from the Task drop-down list.
3 Select Veritas Cluster Server from the Product drop-down list, and click Next.
4 Indicate the systems on which to perform the precheck. Enter one or more system names, separated by spaces. Click Next.
5 The installer performs the precheck and displays the results. If the validation completes successfully, click Next. The installer prompts you to begin the installation. Click Yes to install on the selected systems, or click No to install later.
6 Click Finish. The installer prompts you for another task.
Perform preliminary steps. See Performing a pre-installation check with the Veritas Web-based installer on page 163.
Start the Web-based installer. See Starting the Veritas Web-based installer on page 162.
3 Select Install a Product from the Task drop-down list.
4 Select Veritas Cluster Server (VCS) from the Product drop-down list, and click Next.
5 On the License agreement page, read the End User License Agreement (EULA). To continue, select Yes, I agree and click Next.
6 Choose minimal, recommended, or all packages. Click Next.
7 Indicate the systems where you want to install. Separate multiple system names with spaces. Click Next.
8 If you have not yet configured a communication mode among systems, you have the option to let the installer configure ssh or rsh. If you choose to allow this configuration, select the communication mode and provide the superuser passwords for the systems.
9 After the validation completes successfully, click Next to install VCS on the selected systems.
10 After the installation completes, you must choose your licensing method. On the license page, select one of the following tabs:
Keyless licensing: This option enables you to install without entering a key. However, to ensure compliance you must manage the systems with a management server. For more information, go to the following website: http://go.symantec.com/sfhakeyless. Complete the required information and click Register.
Enter license key: If you have a valid license key, select this tab. Enter the license key for each system and click Register.
11 The installer prompts you to configure the cluster. Select Yes to continue with configuring the product. If you select No, you can exit the installer. You must configure the product before you can use VCS. After the installation completes, the installer displays the location of the log and summary files. If required, view the files to confirm the installation status.
12 If prompted, select the checkbox to specify whether you want to send your installation information to Symantec:

Would you like to send the information about this installation to Symantec to help improve installation in the future?
Chapter 11. Configuring VCS
This chapter includes the following topics:
Start the Web-based installer. See Starting the Veritas Web-based installer on page 162.
On the Select a task and a product page, select the task and the product as follows:
Task: Configure a Product
Product: Veritas Cluster Server
Click Next.
On the Select Systems page, enter the system names where you want to configure VCS, and click Next. Example: galaxy nebula The installer performs the initial system verification. It checks for the system communication. It also checks for release compatibility, installed product version, platform version, and performs product prechecks. Click Next after the installer completes the system verification successfully.
In the Confirmation dialog box that appears, choose whether or not to configure I/O fencing. To configure I/O fencing, click Yes. To configure I/O fencing later, click No. You can configure I/O fencing later using the Web-based installer. See Configuring VCS for data integrity using the Web-based installer on page 172. You can also configure I/O fencing later using the installvcs -fencing command, using response files, or manually.
On the Set Cluster Name/ID page, specify the following information for the cluster.
Cluster Name: Enter a unique cluster name.
Cluster ID: Enter a unique cluster ID. Note that you can have the installer check whether the cluster ID is unique. Symantec recommends that you use the installer to check for duplicate cluster IDs in multi-cluster environments.
Check duplicate cluster ID: Select the check box if you want the installer to verify whether the given cluster ID is unique in your private network. The verification is performed after you specify the heartbeat details in the following pages. The verification takes some time to complete.
LLT Type: Select an LLT type from the list. You can choose to configure LLT over UDP or LLT over Ethernet. If you choose Auto detect over Ethernet, the installer auto-detects the LLT links over Ethernet. Verify the links and click Yes in the Confirmation dialog box, then skip to step 7. If you click No, you must manually enter the details to configure LLT over Ethernet.
Number of Heartbeat NICs: Choose the number of heartbeat links you want to configure.
Additional Low Priority Heartbeat NIC: Select the check box if you want to configure a low priority link. The installer configures one heartbeat link as a low priority link.
Unique Heartbeat NICs per system: For LLT over Ethernet, select the check box if you do not want to use the same NIC details to configure private heartbeat links on other systems. For LLT over UDP, this check box is selected by default.
Click Next.
On the Set Cluster Heartbeat page, select the heartbeat link details for the LLT type you chose on the Set Cluster Name/ID page.
For LLT over Ethernet, do the following: if you are using the same NICs on all the systems, select the NIC for each private heartbeat link. If you had selected Unique Heartbeat NICs per system on the Set Cluster Name/ID page, provide the NIC details for each system.
For LLT over UDP, select the NIC, Port, and IP address for each private heartbeat link. You must provide these details for each system.
Click Next.
On the Optional Configuration page, decide the optional VCS features that you want to configure. Click the corresponding tab to specify the details for each option:
Virtual IP: If each system uses a separate NIC, select the Configure NICs for every system separately check box. Select the interface on which you want to configure the virtual IP. Enter a virtual IP address and value for the netmask. You can use an IPv4 or an IPv6 address.
VCS Users: Reset the password for the Admin user, if necessary. Select the Configure VCS users option. Click Add to add a new user. Specify the user name, password, and user privileges for this user.
SMTP: Select the Configure SMTP check box. If each system uses a separate NIC, select the Configure NICs for every system separately check box. If all the systems use the same NIC, select the NIC for the VCS Notifier to be used on all systems; if not, select the NIC to be used by each system. In the SMTP Server box, enter the domain-based hostname of the SMTP server. Example: smtp.yourcompany.com. In the Recipient box, enter the full email address of the SMTP recipient. Example: [email protected]. In the Event list box, select the minimum severity level of messages to be sent to each recipient. Click Add to add more SMTP recipients, if necessary.
SNMP: Select the Configure SNMP check box. If each system uses a separate NIC, select the Configure NICs for every system separately check box. If all the systems use the same NIC, select the NIC for the VCS Notifier to be used on all systems; if not, select the NIC to be used by each system. In the SNMP Port box, enter the SNMP trap daemon port (default 162). In the Console System Name box, enter the SNMP console system name. In the Event list box, select the minimum severity level of messages to be sent to each console. Click Add to add more SNMP consoles, if necessary.
GCO: If you installed a valid HA/DR license, you can now enter the wide-area heartbeat link details for the global cluster that you would set up later. See the Veritas Cluster Server Administrator's Guide for instructions to set up VCS global clusters. If each system uses a separate NIC, select the Configure NICs for every system separately check box. Select a NIC. Enter a virtual IP address and value for the netmask. You can use an IPv4 or an IPv6 address.
Security: To configure a secure VCS cluster, select the Configure secure cluster check box. If you want to perform this task later, do not select the check box; you can instead use the -security option of the installvcs program.
Click Next.
8 On the Stop Processes page, click Next after the installer stops all the processes successfully.
9 On the Start Processes page, click Next after the installer performs the configuration based on the details you provided and starts all the processes successfully. If you did not choose to configure I/O fencing in step 4, skip to step 11. To configure fencing, go to step 10.
10 On the Select Fencing Type page, choose the type of fencing configuration:
Configure Coordination Point client based fencing: choose this option to configure server-based I/O fencing.
Configure disk based fencing: choose this option to configure disk-based I/O fencing.
Based on the fencing type you choose to configure, follow the installer prompts. See Configuring VCS for data integrity using the Web-based installer on page 172.
12 Select the checkbox to specify whether you want to send your installation information to Symantec. Click Finish. The installer prompts you for another task.
Start the Web-based installer. See Starting the Veritas Web-based installer on page 162.
On the Select a task and a product page, select the task and the product as follows:
Task: I/O fencing configuration
Product: Veritas Cluster Server
Click Next.
Verify the cluster information that the installer presents and confirm whether you want to configure I/O fencing on the cluster.
On the Select Cluster page, click Next if the installer completes the cluster verification successfully. The installer performs the initial system verification. It checks for the system communication. It also checks for release compatibility, installed product version, platform version, and performs product prechecks.
On the Select Fencing Type page, choose whether to configure disk-based fencing or server-based fencing. If you chose to configure disk-based fencing, go to step 7. If you chose to configure server-based fencing, go to step 10.
In the Confirmation dialog box that appears, confirm whether your storage environment supports SCSI-3 PR. You can configure non-SCSI-3 server-based fencing in a virtual environment that is not SCSI-3 PR compliant.
On the Configure Fencing page, the installer prompts for details based on the fencing type you chose to configure. Specify the coordination points details. Click Next.
Go to step 16.
10 On the Configure Fencing page, the installer prompts for details based on the fencing type you chose to configure. Specify the coordination points details. Click Next.
Enter the virtual IP addresses or fully qualified host names of the CP servers. The installer assumes these values to be identical as viewed from all the application cluster nodes. Enter the port that the CP server must listen on. Click Next.
If you have not already checked the disks for SCSI-3 PR compliance, check the disks now, and click OK in the dialog box. If you do not want to use the default coordinator disk group name, enter a name for the new coordinator disk group you want to create. Select the disks to create the coordinator disk group. Choose the fencing disk policy for the disk group.
13 In the Confirmation dialog box that appears, confirm whether the coordination points information you provided is correct, and click Yes.
15 Configure the CP agent on the VCS (application cluster), and click Next.
17 Select the checkbox to specify whether you want to send your installation information to Symantec. Click Finish. The installer prompts you for another task.
Chapter 12. Performing automated VCS installation
Chapter 13. Performing automated VCS configuration
Chapter 14. Performing automated I/O fencing configuration for VCS
Chapter 12. Performing automated VCS installation
This chapter includes the following topics:
Installing VCS using response files
Response file variables to install VCS
Sample response file for installing VCS
Make sure the systems where you want to install VCS meet the installation requirements. See Important preinstallation information for VCS on page 35.
Make sure the preinstallation tasks are completed. See Performing preinstallation tasks on page 59.
Copy the response file to one of the cluster systems where you want to install VCS. See Sample response file for installing VCS on page 182.
Edit the values of the response file variables as necessary. See Response file variables to install VCS on page 180.
5 Mount the product disc and navigate to the directory that contains the installation program.
6 Start the installation from the system to which you copied the response file. For example:

# ./installer -responsefile /tmp/response_file
# ./installvcs -responsefile /tmp/response_file
The response file variables to install VCS include the following:

CFG{accepteula} (Scalar)
CFG{systems} (List)
CFG{prod} (Scalar): Defines the product to be installed. The value is VCS60 for VCS. (Required)

A required scalar variable installs the VCS packages. The installer installs VCS packages based on which of the following variables has the value set to 1:
installallpkgs: Installs all packages
installrecpkgs: Installs recommended packages
installminpkgs: Installs minimum packages
An optional scalar variable defines a location, typically an NFS mount, from which all remote systems can install product packages. The location must be accessible from all target systems.
CFG{opt}{tmppath} (Scalar): Defines the location where a working directory is created to store temporary files and the packages that are needed during the install. The default location is /var/tmp. (Optional)
CFG{opt}{logpath} (Scalar): Mentions the location where the log files are to be copied. The default location is /opt/VRTS/install/logs. (Optional)
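As a point of reference for these variables, the following is a minimal sketch of an installation response file. The $CFG{opt}{install} and $CFG{opt}{installrecpkgs} keys and the node names are assumptions made for illustration:

our %CFG;
$CFG{opt}{install}=1;            # assumed key that triggers installation
$CFG{opt}{installrecpkgs}=1;     # assumed key that selects the recommended package set
$CFG{accepteula}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(galaxy nebula) ];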
Chapter 13. Performing automated VCS configuration
This chapter includes the following topics:
Configuring VCS using response files
Response file variables to configure Veritas Cluster Server
Sample response file for configuring VCS
1 Make sure the VCS packages are installed on the systems where you want to configure VCS.
2 Copy the response file to one of the cluster systems where you want to configure VCS. See Sample response file for configuring VCS on page 195.
Edit the values of the response file variables as necessary. To configure optional features, you must define appropriate values for all the response file variables that are related to the optional feature. See Response file variables to configure Veritas Cluster Server on page 186.
Start the configuration from the system to which you copied the response file. For example:
# /opt/VRTS/install/installvcs -responsefile /tmp/response_file
Table 13-1 lists the response file variables specific to configuring Veritas Cluster Server:

CFG{opt}{configure} (Scalar): Performs the configuration if the packages are already installed. Set the value to 1 to configure VCS. (Required)
CFG{accepteula} (Scalar)
CFG{systems} (List)
CFG{prod} (Scalar): Defines the product to be configured. The value is VCS60 for VCS. (Required)
CFG{opt}{keyfile} (Scalar): Defines the location of an ssh keyfile that is used to communicate with all remote systems. (Optional)
CFG{opt}{rsh} (Scalar): Defines that rsh must be used instead of ssh as the communication method between systems. (Optional)
CFG{opt}{logpath} (Scalar): Mentions the location where the log files are to be copied. The default location is /opt/VRTS/install/logs. (Optional)
Note that some optional variables make it necessary to define other optional variables. For example, all the variables that are related to the cluster service group (csgnic, csgvip, and csgnetmask) must be defined if any are defined. The same is true for the SMTP notification (smtpserver, smtprecp, and smtprsev), the SNMP trap notification (snmpport, snmpcons, and snmpcsev), and the Global Cluster Option (gconic, gcovip, and gconetmask). Table 13-2 lists the response file variables that specify the required information to configure a basic VCS cluster.
CFG{vcs_clusterid} (Scalar): An integer between 0 and 65535 that uniquely identifies the cluster. (Required)
CFG{vcs_clustername} (Scalar)
CFG{vcs_allowcomms} (Scalar): Indicates whether or not to start LLT and GAB when you set up a single-node cluster. The value can be 0 (do not start) or 1 (start). (Required)
CFG{fencingenabled} (Scalar)
Table 13-3 lists the response file variables that specify the required information to configure LLT over Ethernet.

CFG{vcs_lltlink#}{"system"} (Scalar): Defines the NIC to be used for a private heartbeat link on each system. Two LLT links are required per system (lltlink1 and lltlink2). You can configure up to four LLT links. You must enclose the system name within double quotes. (Required)
CFG{vcs_lltlinklowpri#}{"system"} (Scalar): Defines a low priority heartbeat link. Typically, lltlinklowpri is used on a public network link to provide an additional layer of communication. If you use different media speed for the private NICs, you can configure the NICs with lesser speed as low-priority links to enhance LLT performance. For example, lltlinklowpri1, lltlinklowpri2, and so on. You must enclose the system name within double quotes. (Optional)
Table 13-4 lists the response file variables that specify the required information to configure LLT over UDP.

CFG{lltoverudp}=1 (Scalar): Indicates whether to configure the heartbeat links using LLT over UDP. (Required)
CFG{vcs_udplink<n>_address}{<system1>} (Scalar): Stores the IP address (IPv4 or IPv6) that the heartbeat link uses on node1. You can have four heartbeat links, and <n> for this response file variable can take values 1 to 4 for the respective heartbeat links. (Required)
CFG{vcs_udplinklowpri<n>_address}{<system1>} (Scalar): Stores the IP address (IPv4 or IPv6) that the low priority heartbeat link uses on node1. You can have four low priority heartbeat links, and <n> for this response file variable can take values 1 to 4 for the respective low priority heartbeat links. (Required)
CFG{vcs_udplink<n>_port}{<system1>} (Scalar): Stores the UDP port (16-bit integer value) that the heartbeat link uses on node1. You can have four heartbeat links, and <n> for this response file variable can take values 1 to 4 for the respective heartbeat links. (Required)
CFG{vcs_udplinklowpri<n>_port}{<system1>} (Scalar): Stores the UDP port (16-bit integer value) that the low priority heartbeat link uses on node1. You can have four low priority heartbeat links, and <n> for this response file variable can take values 1 to 4 for the respective low priority heartbeat links. (Required)
CFG{vcs_udplink<n>_netmask}{<system1>} (Scalar): Stores the netmask (prefix for IPv6) that the heartbeat link uses on node1. You can have four heartbeat links, and <n> for this response file variable can take values 1 to 4 for the respective heartbeat links. (Required)
CFG{vcs_udplinklowpri<n>_netmask}{<system1>} (Scalar): Stores the netmask (prefix for IPv6) that the low priority heartbeat link uses on node1. You can have four low priority heartbeat links, and <n> for this response file variable can take values 1 to 4 for the respective low priority heartbeat links. (Required)
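To make the {<system1>} syntax concrete, the following is a minimal sketch of the LLT-over-UDP entries for one node; the node name, addresses, netmasks, and ports are illustrative assumptions:

$CFG{lltoverudp}=1;
$CFG{vcs_udplink1_address}{galaxy}="192.168.10.1";   # assumed address for heartbeat link 1
$CFG{vcs_udplink1_netmask}{galaxy}="255.255.255.0";
$CFG{vcs_udplink1_port}{galaxy}=50000;
$CFG{vcs_udplink2_address}{galaxy}="192.168.11.1";   # assumed address for heartbeat link 2
$CFG{vcs_udplink2_netmask}{galaxy}="255.255.255.0";
$CFG{vcs_udplink2_port}{galaxy}=50001;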
Table 13-5 lists the response file variables that specify the required information to configure a virtual IP for the VCS cluster.

CFG{vcs_csgnic}{system} (Scalar): Defines the NIC device to use on a system. You can enter all as a system value if the same NIC is used on all systems. (Optional)
CFG{vcs_csgvip} (Scalar)
CFG{vcs_csgnetmask} (Scalar): Defines the netmask of the virtual IP address for the cluster. (Optional)
Table 13-6 lists the response file variables that specify the required information to configure the VCS cluster in secure mode.
CFG{vcs_eat_security} (Scalar): Specifies if the cluster is in secure enabled mode or not.
CFG{opt}{securityonenode} (Scalar): Specifies that the securityonenode option is being used.
CFG{securityonenode_menu} (Scalar): Specifies the menu option to choose to configure the secure cluster one node at a time.
CFG{security_conf_dir} (Scalar): Specifies the directory where the configuration files are placed.
CFG{opt}{security} (Scalar): Specifies that the security option is being used.
Table 13-7 lists the response file variables that specify the required information to configure VCS users.

CFG{vcs_userenpw} (List): List of encoded passwords for VCS users.
A list variable specifies the privileges for VCS users. The value in the list can be Administrators, Operators, or Guests.
Table 13-8 lists the response file variables that specify the required information to configure VCS notifications using SMTP.

CFG{vcs_smtpserver} (Scalar): Defines the domain-based hostname (example: smtp.symantecexample.com) of the SMTP server to be used for Web notification. (Optional)
CFG{vcs_smtprecp} (List)
CFG{vcs_smtprsev} (List): Defines the minimum severity level of messages (Information, Warning, Error, SevereError) that listed SMTP recipients are to receive. Note that the ordering of severity levels must match that of the addresses of SMTP recipients. (Optional)
Table 13-9 lists the response file variables that specify the required information to configure VCS notifications using SNMP.
CFG{vcs_snmpport} (Scalar): Defines the SNMP trap daemon port (default=162). (Optional)
CFG{vcs_snmpcons} (List)
CFG{vcs_snmpcsev} (List): Defines the minimum severity level of messages (Information, Warning, Error, SevereError) that listed SNMP consoles are to receive. Note that the ordering of severity levels must match that of the SNMP console system names. (Optional)
Table 13-10 lists the response file variables that specify the required information to configure VCS global clusters.

CFG{vcs_gconic}{system} (Scalar): Defines the NIC for the virtual IP that the Global Cluster Option uses. You can enter all as a system value if the same NIC is used on all systems. (Optional)
CFG{vcs_gcovip} (Scalar): Defines the virtual IP address that the Global Cluster Option uses. (Optional)
CFG{vcs_gconetmask} (Scalar): Defines the netmask of the virtual IP address that the Global Cluster Option uses. (Optional)
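Since the sample response file for configuring VCS is referenced throughout this chapter, the following minimal sketch shows how the variables above fit together; the cluster ID, cluster name, and NIC names are illustrative assumptions:

our %CFG;
$CFG{opt}{configure}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(galaxy nebula) ];
$CFG{vcs_allowcomms}=1;
$CFG{vcs_clusterid}=13221;
$CFG{vcs_clustername}="clus1";
$CFG{vcs_lltlink1}{galaxy}="bge1";    # assumed private heartbeat NICs
$CFG{vcs_lltlink1}{nebula}="bge1";
$CFG{vcs_lltlink2}{galaxy}="bge2";
$CFG{vcs_lltlink2}{nebula}="bge2";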
Chapter 14. Performing automated I/O fencing configuration for VCS
This chapter includes the following topics:
Configuring I/O fencing using response files
Response file variables to configure disk-based I/O fencing
Sample response file for configuring disk-based I/O fencing
Response file variables to configure server-based I/O fencing
Sample response file for configuring server-based I/O fencing
Response file variables to configure non-SCSI-3 server-based I/O fencing
Sample response file for configuring non-SCSI-3 server-based I/O fencing
1 Make sure that VCS is configured.
2 Based on whether you want to configure disk-based or server-based I/O fencing, make sure you have completed the preparatory tasks. See About planning to configure I/O fencing on page 89.
Copy the response file to one of the cluster systems where you want to configure I/O fencing. See Sample response file for configuring disk-based I/O fencing on page 199. See Sample response file for configuring server-based I/O fencing on page 202.
Edit the values of the response file variables as necessary. See Response file variables to configure disk-based I/O fencing on page 198. See Response file variables to configure server-based I/O fencing on page 200.
Start the configuration from the system to which you copied the response file. For example:
# /opt/VRTS/install/installvcs -responsefile /tmp/response_file
Table 14-1 lists the response file variables specific to configuring disk-based I/O fencing:

CFG{opt}{fencing} (Scalar): Performs the I/O fencing configuration. (Required)
CFG{fencing_option} (Scalar): Specifies the I/O fencing configuration mode. (Required)
  1: Coordination Point Server-based I/O fencing
  2: Coordinator disk-based I/O fencing
  3: Disabled mode
CFG{fencing_scsi3_disk_policy} (Scalar): Specifies the I/O fencing mechanism. This variable is not required if you had configured fencing in disabled mode. For disk-based fencing, you must configure the fencing_scsi3_disk_policy variable and either the fencing_dgname variable or the fencing_newdg_disks variable. (Optional)
CFG{fencing_dgname} (Scalar)
# our %CFG;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(galaxy nebula) ];
$CFG{vcs_clusterid}=13221;
$CFG{vcs_clustername}="clus1";
$CFG{fencing_dgname}="fendg";
$CFG{fencing_scsi3_disk_policy}="dmp";
$CFG{fencing_newdg_disks}= [ qw(c1t1d0s2 c2t1d0s2 c3t1d0s2) ];
$CFG{fencing_option}=2;
Table 14-2 lists the coordination point server (CP server) based fencing response file definitions. The variables include the following:

CFG{fencing_cpagentgrp}: Name of the service group which will have the Coordination Point agent resource as part of it.
CFG{fencing_reusedg}
CFG{fencing_ports}
CFG{fencing_scsi3_disk_policy}
Table 14-3 lists the non-SCSI-3 server-based I/O fencing response file definitions. The variables include the following:

CFG{fencing_cpagentgrp}: Name of the service group which will have the Coordination Point agent resource as part of it.
CFG{fencing_cps_vips}
CFG{fencing_ncp}
CFG{fencing_ports}
$CFG{fencing_ports}{"10.198.89.252"}=14250;
$CFG{fencing_ports}{"10.198.89.253"}=14250;
$CFG{non_scsi3_fencing}=1;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(galaxy nebula) ];
$CFG{vcs_clusterid}=1256;
$CFG{vcs_clustername}="clus1";
$CFG{fencing_option}=1;
Section
Manual installation
Chapter 15. Performing preinstallation tasks
Chapter 16. Manually installing VCS
Chapter 17. Manually configuring VCS
Chapter 18. Manually configuring the clusters for data integrity
Chapter 15. Performing preinstallation tasks

1 Log in as the superuser.
2 Mount the appropriate disc. See Mounting the product disc on page 71.
Chapter 16. Manually installing VCS
This chapter includes the following topics:
About VCS manual installation
Installing VCS software manually
Installing using JumpStart
Table 16-1 lists the tasks that you must perform when you manually install and configure VCS 6.0:

Install VCS software manually on each node in the cluster.
Install the VCS language pack software manually on each node in the cluster. See Installing language packages in a manual installation on page 212.
Add a license key. See Adding a license key for a manual installation on page 213.
Copy the installation guide to each node. See Copying the installation guide to each node on page 215.
Configure LLT and GAB. See Configuring LLT manually on page 223 and Configuring GAB manually on page 227.
Configure VCS. See Configuring VCS manually on page 228.
Start LLT, GAB, and VCS services. See Starting LLT, GAB, and VCS after manual configuration on page 230.
Modify the VCS configuration. See Modifying the VCS configuration on page 232.
Replace the demo license with a permanent license. See Replacing a VCS demo license with a permanent license for manual installations on page 215.
Table 16-2 lists the installvcs command options to view the list of VCS packages:

installvcs -minpkgs: Installs only the minimal required VCS packages that provide basic functionality of the product.
installvcs -recpkgs: Installs the recommended VCS packages that provide complete functionality of the product. This option does not install the optional VCS packages.
installvcs -allpkgs: Installs all the VCS packages. You must choose this option to configure any optional VCS feature.
Navigate to the directory where you can start the installvcs program.
# cd cluster_server
Run the following command to view the list of packages. Based on what packages you want to install, enter the appropriate command option:
# ./installvcs -minpkgs
Or
# ./installvcs -recpkgs
Or
# ./installvcs -allpkgs
Note: To configure an Oracle VM Server logical domain for disaster recovery, install the following required package inside the logical domain:
# pkgadd -d VRTSvcsnr.pkg
Copy the package files from the software disc to the temporary directory.
# cp -r pkgs/* /tmp
Install the following required and optional VCS packages from the compressed files:
Install the following required packages in the order shown for Chinese language support:
# pkgadd -d VRTSmulic.pkg
# pkgadd -d VRTSzhvm.pkg
Install the following required packages in the order shown for Japanese language support:
# pkgadd -d VRTSmulic.pkg
# pkgadd -d VRTSjacav.pkg
# pkgadd -d VRTSjacse.pkg
# pkgadd -d VRTSjacs.pkg
# pkgadd -d VRTSjacsu.pkg
# pkgadd -d VRTSjadba.pkg
# pkgadd -d VRTSjadbe.pkg
# pkgadd -d VRTSjafs.pkg
# pkgadd -d VRTSjaodm.pkg
# pkgadd -d VRTSjavm.pkg
For more information and to download the management server, see the following URL: http://go.symantec.com/vom When you set the product license level for the first time, you enable keyless licensing for that system. If you install with the product installer and select the keyless option, you are prompted to select the product and feature level that you want to license. After you install, you can change product license levels at any time to reflect the products and functionality that you want to license. When you set a product level, you agree that you have the license for that functionality. To set or change the product level
Output resembles:
/opt/VRTSvlic/bin
where prod_levels is a comma-separated list of keywords. The keywords are the product levels as shown by the output of step 3. If you want to remove keyless licensing and enter a key, you must clear the keyless licenses. Use the NONE keyword to clear all keys from the system. Warning: Clearing the keys disables the Veritas products until you install a new key or set a new product level.
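The vxkeyless utility referenced below resides in /opt/VRTSvlic/bin. The following is a hedged sketch of the typical command sequence; verify the exact options against the vxkeyless(1m) manual page:

# cd /opt/VRTSvlic/bin
# ./vxkeyless displayall       # list the available product level keywords
# ./vxkeyless set prod_levels  # set the desired product levels (comma-separated keywords)
# ./vxkeyless set NONE         # clear all keyless licenses if you plan to install a license key instead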
For more details on using the vxkeyless utility, see the vxkeyless(1m) manual page.
The license key
The type of license
The product for which it applies
Its expiration date, if one exists. Demo keys have expiration dates, while permanent keys and site keys do not.
Replacing a VCS demo license with a permanent license for manual installations
When a VCS demo key license expires, you can replace it with a permanent license using the vxlicinst program. See Checking licensing information on the system on page 133.
In the directory path, version is the release version and platform is the name of the operating system.
1 Add a client (register to the JumpStart server). See the JumpStart documentation that came with your operating system for details.
2 Read the JumpStart installation instructions.
3 Generate the finish scripts. See Generating the finish scripts on page 217.
4 Prepare shared storage installation resources. See Preparing installation resources on page 217.
5 Modify the rules file for JumpStart. See the JumpStart documentation that came with your operating system for details.
6 Install the operating system using the JumpStart server.
7 When the system is up and running, run the installer command from the installation media to configure the Veritas software.
# /opt/VRTS/install/installer -configure
Run the product installer program to generate the scripts for all products.
./installer -jumpstart directory_to_generate_scripts
Or
./installprod -jumpstart directory_to_generate_script
Where prod is the product's installation command, and directory_to_generate_scripts is where you want to put the product's script. For example:
# ./installvcs -jumpstart /js_scripts
Modify the JumpStart script according to your requirements. You must modify the BUILDSRC and ENCAPSRC values. Keep the values aligned with the resource location values.
BUILDSRC="hostname_or_ip:/path_to_pkgs"
// If you don't want to encapsulate the root disk automatically,
// comment out the following line.
ENCAPSRC="hostname_or_ip:/path_to_encap_script"
Copy the pkgs directory of the installation media to the shared storage.
# cd /path_to_installation_media
# cp -r pkgs BUILDSRC
For the language pack, copy the language packages from the language pack installation disc to the shared storage.
# cd /cdrom/cdrom0/pkgs
# cp -r * BUILDSRC/pkgs
2 In the finish script, copy the product package information and replace the product packages with language packages.
3 The finish script resembles:

. . .
for PKG in product_packages
do
    ...
done
. . .
for PKG in language_packages
do
    ...
done
. . .
If you plan to start flar (flash archive) creation from bare metal, perform step 1 through step 10. If you plan to start flar creation from a system where you have installed but not configured the product, perform step 1 through step 4. Skip step 5 and finish step 6 through step 10.
If you plan to start flar creation from a system where you have installed and configured the product, perform step 5 through step 10.
Flash archive creation overview

1. Ensure that you have installed Solaris 10 on the master system.
2. Use JumpStart to create a clone of a system.
3. Reboot the cloned system.
4. Install the Veritas products on the master system. Perform one of the installation procedures from this guide.
5. If you have configured the product on the master system, create the vrts_deployment.sh file and the vrts_deployment.cf file and copy them to the master system. See Creating the Veritas post-deployment scripts on page 220.
6. Use the flarcreate command to create the Flash archive on the master system.
7. Copy the archive back to the JumpStart server.
8. Use JumpStart to install the Flash archive to the selected systems.
9. Configure the Veritas product on all nodes in the cluster. Start configuration with the command shown after this list.
10. Perform post-installation and configuration tasks.
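The configuration command itself is not reproduced in the list above; based on the installer invocation used in the JumpStart overview earlier in this chapter, a hedged example is:

# /opt/VRTS/install/installer -configure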
1 Mount the product disc.
2 From the prompt, run the -flash_archive option for the installer. Specify a directory where you want to create the files.
# ./installer -flash_archive /tmp
Copy the vrts_postdeployment.sh file and the vrts_postdeployment.cf file to the golden system.
Put the vrts_postdeployment.sh file in the /etc/flash/postdeployment directory. Put the vrts_postdeployment.cf file in the /etc/vx directory.
Make sure that the two files have the following ownership and permissions:
# chown root:root /etc/flash/postdeployment/vrts_postdeployment.sh
# chmod 755 /etc/flash/postdeployment/vrts_postdeployment.sh
# chown root:root /etc/vx/vrts_postdeployment.cf
# chmod 644 /etc/vx/vrts_postdeployment.cf
Note that you only need these files in a Flash archive where you have installed Veritas products.
Chapter 17
About configuring VCS manually
Configuring LLT manually
Configuring GAB manually
Configuring VCS manually
Configuring VCS in single node mode
Starting LLT, GAB, and VCS after manual configuration
Modifying the VCS configuration
To configure LLT over Ethernet, perform the following steps on each node in the cluster:
Set up the file /etc/llthosts. See Setting up /etc/llthosts for a manual installation on page 224.
Set up the file /etc/llttab. See Setting up /etc/llttab for a manual installation on page 224.
Edit the following file on each node in the cluster to change the values of the LLT_START and the LLT_STOP environment variables to 1 (see the example after this list): /etc/default/llt
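After the edit, the relevant lines in /etc/default/llt read as follows (a sketch showing only the two changed variables):

LLT_START=1
LLT_STOP=1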
You can also configure LLT over UDP. See Using the UDP layer for LLT on page 475.
For SPARC:
set-node galaxy
set-cluster 2
link qfe0 qfe:0 - ether - -
link qfe1 qfe:1 - ether - -
For x64:
set-node galaxy
set-cluster 2
link e1000g0 /dev/e1000g:0 - ether - -
link e1000g1 /dev/e1000g:1 - ether - -
The first line must identify the system where the file exists. In the example, the value for set-node can be: galaxy, 0, or the file name /etc/nodename. The file needs to contain the name of the system (galaxy in this example). The next line, beginning with the set-cluster command, identifies the cluster number, which must be a unique number when more than one cluster is configured on the same physical network connection. The next two lines, beginning with the link command, identify the two private network cards that the LLT protocol uses. The order of directives must be the same as in the sample llttab file in /opt/VRTSllt. If you use different media speed for the private NICs, Symantec recommends that you configure the NICs with lesser speed as low-priority links to enhance LLT performance. For example, use vi or another editor to create the file /etc/llttab with entries that resemble the following:
For SPARC:
set-node galaxy
set-cluster 2
link qfe0 qfe:0 - ether - -
link qfe1 qfe:1 - ether - -
link-lowpri qfe2 qfe:2 - ether - -
For x64:
set-node galaxy
set-cluster 2
link e1000g0 /dev/e1000g:0 - ether - -
link e1000g1 /dev/e1000g:1 - ether - -
link-lowpri e1000g2 /dev/e1000g:2 - ether - -
link
    Attaches LLT to a network interface. At least one link is required, and up to eight are supported. LLT distributes network traffic evenly across all available network connections unless you mark the link as low-priority using the link-lowpri directive or you configured LLT to use destination-based load balancing. The first argument to link is a user-defined tag shown in the lltstat(1M) output to identify the link. It may also be used in llttab to set optional static MAC addresses. The second argument to link is the device name of the network interface. Its format is device_name:device_instance_number. The remaining four arguments to link are defaults; these arguments should be modified only in advanced configurations. There should be one link directive for each network interface. LLT uses an unregistered Ethernet SAP of 0xCAFE. If the SAP is unacceptable, refer to the llttab(4) manual page for information on how to customize SAP. Note that IP addresses do not need to be assigned to the network device; LLT does not use IP addresses.

set-cluster
    Assigns a unique cluster number. Use this directive when more than one cluster is configured on the same physical network connection. LLT uses a default cluster number of zero.

link-lowpri
    Use this directive in place of link for public network interfaces. This directive prevents VCS communication on the public network until the network is the last link, and reduces the rate of heartbeat broadcasts. If you use private NICs with different speed, use the "link-lowpri" directive in place of "link" for all links with lower speed. Use the "link" directive only for the private NIC with higher speed to enhance LLT performance. LLT uses low-priority network links for VCS communication only when other links fail.
For more information about the LLT directives, refer to the llttab(4) manual page.
To configure GAB
Set up an /etc/gabtab configuration file on each node in the cluster using vi or another editor. The following example shows an /etc/gabtab file:
/sbin/gabconfig -c -nN
Where the -c option configures the driver for use. The -nN option specifies that the cluster is not formed until at least N systems are ready to form the cluster. Symantec recommends that you set N to be the total number of systems in the cluster. Warning: Symantec does not recommend the use of the -c -x option for /sbin/gabconfig. Using -c -x can lead to a split-brain condition.
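For example, for a two-node cluster the /etc/gabtab entry would be:

/sbin/gabconfig -c -n2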
Edit the following file on each node in the cluster to change the values of the GAB_START and the GAB_STOP environment variables to 1: /etc/default/gab
types.cf file
Note that the "include" statement in main.cf refers to the types.cf file. This text file describes the VCS bundled agent resources. During new installations, the types.cf file is automatically copied in to the /etc/VRTSvcs/conf/config directory.
When you manually install VCS, the file /etc/VRTSvcs/conf/config/main.cf contains only the line:
include "types.cf"
For a full description of the main.cf file, and how to edit and verify it, refer to the Veritas Cluster Server Administrator's Guide.
Log on as superuser, and move to the directory that contains the configuration file:
# cd /etc/VRTSvcs/conf/config
Use vi or another text editor to edit the main.cf file, defining your cluster name and system names. Refer to the following example. An example main.cf for a two-node cluster:
include "types.cf" cluster VCSCluster2 ( ) system galaxy ( ) system nebula ( )
Save and close the main.cf file. Edit the following file on each node in the cluster to change the values of the VCS_START and the VCS_STOP environment variables to 1: /etc/default/vcs
On one node in the cluster, perform the following command to populate the cluster UUID on each node in the cluster.
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -configure nodeA nodeB ... nodeN
Where nodeA, nodeB, through nodeN are the names of the cluster nodes.
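For example, for the two-node cluster that this chapter uses as an example (galaxy and nebula), the command would be:

# /opt/VRTSvcs/bin/uuidconfig.pl -clus -configure galaxy nebula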
Copy the SMF service manifest file vcs-onenode.xml from /etc/VRTSvcs/conf to /var/svc/manifest/system.
# cp /etc/VRTSvcs/conf/vcs-onenode.xml /var/svc/manifest/system
Edit the following file to change the value of the ONENODE environment variable to yes.
/etc/default/vcs
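After the edit, the relevant line in /etc/default/vcs reads as follows (showing only the changed variable):

ONENODE=yes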
To start LLT
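The start command itself is not visible in this copy of the procedure. On Solaris 10, LLT is typically started through its SMF service, consistent with the SMF service names used elsewhere in this guide (a sketch, not a verbatim step from this section):

# svcadm enable system/llt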
On each node, run the following command to verify that LLT is running:
# /sbin/lltconfig LLT is running
To start GAB
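As with LLT, the start command is not visible here; on Solaris 10, GAB is typically started through its SMF service (a sketch, consistent with the service names used elsewhere in this guide):

# svcadm enable system/gab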
On each node, run the following command to verify that GAB is running:
# /sbin/gabconfig -a GAB Port Memberships =================================== Port a gen a36e0003 membership 01
To start VCS
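The start step did not survive in this copy of the procedure. One way to start VCS on each node, consistent with the hastart command used elsewhere in this guide, is:

# /opt/VRTS/bin/hastart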
Chapter 18
Setting up disk-based I/O fencing manually
Setting up server-based I/O fencing manually
Setting up non-SCSI-3 fencing in virtual environments manually
See Identifying disks to use as coordinator disks on page 234. See Checking shared disks for I/O fencing on page 140.
Configuring CoordPoint agent to monitor coordination points
    See Configuring CoordPoint agent to monitor coordination points on page 248.
Verifying I/O fencing configuration
    See Verifying I/O fencing configuration on page 238.
List the disks on each node. For example, execute the following commands to list the disks:
# vxdisk -o alldgs list
Pick three SCSI-3 PR compliant shared disks as coordinator disks. See Checking shared disks for I/O fencing on page 140.
On any node, create the disk group by specifying the device names:
# vxdg init vxfencoorddg c1t1d0s2 c2t1d0s2 c3t1d0s2
Set the coordinator attribute value as "on" for the coordinator disk group.
# vxdg -g vxfencoorddg set coordinator=on
Import the disk group with the -t option to avoid automatically importing it when the nodes restart:
# vxdg -t import vxfencoorddg
Deport the disk group. Deporting the disk group prevents the coordinator disks from serving other purposes:
# vxdg deport vxfencoorddg
Create the I/O fencing configuration file /etc/vxfendg.
Update the I/O fencing configuration file /etc/vxfenmode.
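The command that populates /etc/vxfendg is not visible in this copy of the procedure. Based on the coordinator disk group name used throughout this chapter, it would typically be run on each node as follows (a sketch):

# echo "vxfencoorddg" > /etc/vxfendg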
Do not use spaces between the quotes in the "vxfencoorddg" text. This command creates the /etc/vxfendg file, which includes the name of the coordinator disk group.
On all cluster nodes, type one of the following selections, depending on the SCSI-3 mechanism:
# cp /etc/vxfen.d/vxfenmode_scsi3_dmp /etc/vxfenmode
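Only the DMP variant is shown here. If your configuration uses the raw disk policy instead, the equivalent template copy would presumably be (an assumption based on the naming of the dmp template file):

# cp /etc/vxfen.d/vxfenmode_scsi3_raw /etc/vxfenmode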
To check the updated /etc/vxfenmode configuration, enter the following command on one of the nodes. For example:
# more /etc/vxfenmode
Edit the following file on each node in the cluster to change the values of the VXFEN_START and the VXFEN_STOP environment variables to 1: /etc/default/vxfen
If the I/O fencing driver vxfen is already running, stop the I/O fencing driver.
# svcadm disable -t vxfen
On one node, use vi or another text editor to edit the main.cf file. To modify the list of cluster attributes, add the UseFence attribute and assign its value as SCSI3.
cluster clus1 (
    UserNames = { admin = "cDRpdxPmHpzS." }
    Administrators = { admin }
    HacliUserLevel = COMMANDROOT
    CounterInterval = 5
    UseFence = SCSI3
)
Regardless of whether the fencing configuration is disk-based or server-based, the value of the cluster-level attribute UseFence is set to SCSI3.
Save and close the file. Verify the syntax of the file /etc/VRTSvcs/conf/config/main.cf:
# hacf -verify /etc/VRTSvcs/conf/config
Using rcp or another utility, copy the VCS configuration file from a node (for example, galaxy) to the remaining cluster nodes. For example, on each remaining node, enter:
# rcp galaxy:/etc/VRTSvcs/conf/config/main.cf \
/etc/VRTSvcs/conf/config
Start the I/O fencing driver and VCS. Perform the following steps on each node:
Start the I/O fencing driver. The vxfen startup script also invokes the vxfenconfig command, which configures the vxfen driver to start and use the coordination points that are listed in /etc/vxfentab.
# svcadm enable vxfen
Start VCS.
# /opt/VRTS/bin/hastart
Output similar to the following appears if the fencing mode is SCSI3 and the SCSI3 disk policy is dmp:
I/O Fencing Cluster Information:
================================
Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing SCSI3 Disk Policy: dmp
Cluster Members:
* 0 (galaxy)
  1 (nebula)
RFSM State Information:
    node 0 in state 8 (running)
    node 1 in state 8 (running)
Verify that the disk-based I/O fencing is using the specified disks.
# vxfenconfig -l
Modifying I/O fencing configuration files to configure server-based I/O fencing
    See Configuring server-based fencing on the VCS cluster manually on page 242.
Modifying VCS configuration to use I/O fencing
    See Modifying VCS configuration to use I/O fencing on page 236.
Configuring Coordination Point agent to monitor coordination points
    See Configuring CoordPoint agent to monitor coordination points on page 248.
Verifying the server-based I/O fencing configuration
    See Verifying server-based I/O fencing configuration on page 249.
Determine the cluster name and uuid on the VCS cluster. For example, issue the following commands on one of the VCS cluster nodes (galaxy):
# grep cluster /etc/VRTSvcs/conf/config/main.cf
cluster clus1
# cat /etc/vx/.uuids/clusuuid
{f0735332-1dd1-11b2-bb31-00306eea460a}
Use the cpsadm command to check whether the VCS cluster and nodes are present in the CP server. For example:
# cpsadm -s mycps1.symantecexample.com -a list_nodes

ClusName  UUID                                     Hostname(Node ID)  Registered
clus1     {f0735332-1dd1-11b2-bb31-00306eea460a}   galaxy(0)          0
clus1     {f0735332-1dd1-11b2-bb31-00306eea460a}   nebula(1)          0
If the output does not show the cluster and nodes, then add them as described in the next step. For detailed information about the cpsadm command, see the Veritas Cluster Server Administrator's Guide.
Add the VCS cluster and nodes to each CP server. For example, issue the following command on the CP server (mycps1.symantecexample.com) to add the cluster:
# cpsadm -s mycps1.symantecexample.com -a add_clus \
    -c clus1 -u {f0735332-1dd1-11b2}
Issue the following command on the CP server (mycps1.symantecexample.com) to add the first node:
# cpsadm -s mycps1.symantecexample.com -a add_node \
    -c clus1 -u {f0735332-1dd1-11b2} -h galaxy -n0

Node 0 (galaxy) successfully added
Issue the following command on the CP server (mycps1.symantecexample.com) to add the second node:
# cpsadm -s mycps1.symantecexample.com -a add_node \
    -c clus1 -u {f0735332-1dd1-11b2} -h nebula -n1

Node 1 (nebula) successfully added
If security is to be enabled, check whether the CPSADM@VCS_SERVICES@cluster_uuid users are created in the CP server. If the output below does not show the users, then add them as described in the next step.
# cpsadm -s mycps1.symantecexample.com -a list_users

Username/Domain Type    Cluster Name / UUID    Role
If security is to be disabled, then add the user name "cpsclient@hostname" to the server instead of the CPSADM@VCS_SERVICES@cluster_uuid user (for example, cpsclient@galaxy). The CP server can run in either secure mode or non-secure mode; it does not accept both types of connections at the same time.
Add the users to the CP server. Issue the following commands on the CP server (mycps1.symantecexample.com):
# cpsadm -s mycps1.symantecexample.com -a add_user \
    -e CPSADM@VCS_SERVICES@cluster_uuid \
    -f cps_operator -g vx

User CPSADM@VCS_SERVICES@cluster_uuid successfully added
Authorize the CP server user to administer the VCS cluster. You must perform this task for the CP server users corresponding to each node in the VCS cluster. For example, issue the following command on the CP server (mycps1.symantecexample.com) for VCS cluster clus1 with two nodes galaxy and nebula:
# cpsadm -s mycps1.symantecexample.com -a add_clus_to_user \
    -c clus1 \
    -u {f0735332-1dd1-11b2} \
    -e CPSADM@VCS_SERVICES@cluster_uuid \
    -f cps_operator -g vx

Cluster successfully added to user CPSADM@VCS_SERVICES@cluster_uuid privileges.
Fencing mode
Fencing mechanism
Fencing disk policy (if applicable to your I/O fencing configuration)
Appropriate value for the security configuration
CP server or CP servers
Coordinator disk group (if applicable to your I/O fencing configuration)
Note: Whenever coordinator disks are used as coordination points in your I/O fencing configuration, you must create a disk group (vxfencoorddg). You must specify this disk group in the /etc/vxfenmode file. See Setting up coordinator disk groups on page 234. The customized fencing framework also generates the /etc/vxfentab file, which has the security setting and the coordination points (all the CP servers and disks from the disk group specified in the /etc/vxfenmode file).

To configure server-based fencing on the VCS cluster manually
Use a text editor to edit the following file on each node in the cluster:
/etc/default/vxfen
You must change the values of the VXFEN_START and the VXFEN_STOP environment variables to 1.
Use a text editor to edit the /etc/vxfenmode file values to meet your configuration specifications. If your server-based fencing configuration uses a single highly available CP server as its only coordination point, make sure to add the single_cp=1 entry in the /etc/vxfenmode file. The following sample file output displays what the /etc/vxfenmode file contains: See Sample vxfenmode file output for server-based fencing on page 243.
After editing the /etc/vxfenmode file, run the vxfen init script to start fencing. For example:
# svcadm enable vxfen
For CP servers in secure mode, make sure that the security is enabled on the cluster and the credentials for the CPSADM are present in the /var/VRTSvcs/vcsauth/data/CPSADM directory.
# scsi3      - use scsi3 persistent reservation disks
# customized - use script based customized fencing
# disabled   - run the driver but don't do any actual fencing
#
vxfen_mode=customized

# vxfen_mechanism determines the mechanism for customized I/O
# fencing that should be used.
#
# available options:
# cps - use a coordination point server with optional script
#       controlled scsi3 disks
#
vxfen_mechanism=cps

#
# scsi3_disk_policy determines the way in which I/O Fencing
# communicates with the coordination disks. This field is
# required only if customized coordinator disks are being used.
#
# available options:
# dmp - use dynamic multipathing
# raw - connect to disks using the native interface
#
scsi3_disk_policy=dmp

# security when enabled uses secure communication to the cp server
# using VxAT (Veritas Authentication Service)
# available options:
# 0 - don't use Veritas Authentication Service for cp server
#     communication
# 1 - use Veritas Authentication Service for cp server
#     communication
security=1

#
# Specify 3 or more odd number of coordination points in this file,
# one in its own line. They can be all-CP servers, all-SCSI-3
# compliant coordinator disks, or a combination of CP servers and
# SCSI-3 compliant coordinator disks.
# Please ensure that the CP server coordination points are numbered
# sequentially and in the same order on all the cluster nodes.
# Coordination Point Server(CPS) is specified as:
#
# cps<number>=[<vip/vhn>]:<port>
#
# If a CPS supports multiple virtual IPs or virtual hostnames over
# different subnets, all of the IPs/names can be specified in a
# comma separated list as follows:
#
# cps<number>=[<vip_1/vhn_1>]:<port_1>,[<vip_2/vhn_2>]:<port_2>,...,
# [<vip_n/vhn_n>]:<port_n>
#
# Where,
# <number> is the serial number of the CPS as a coordination point;
#          must start with 1.
# <vip>    is the virtual IP address of the CPS, must be specified
#          in square brackets ("[]").
# <vhn>    is the virtual hostname of the CPS, must be specified in
#          square brackets ("[]").
# <port>   is the port number bound to a particular <vip/vhn> of the
#          CPS. It is optional to specify a <port>. However, if
#          specified, it must follow a colon (":") after <vip/vhn>.
#          If not specified, the colon (":") must not exist after
#          <vip/vhn>.
#
# For all the <vip/vhn>s which do not have a specified <port>,
# a default port can be specified as follows:
#
# port=<default_port>
#
# Where <default_port> is applicable to all the <vip/vhn>s for which
# a <port> is not specified. In other words, specifying <port> with a
# <vip/vhn> overrides the <default_port> for that <vip/vhn>.
# If the <default_port> is not specified, and there are <vip/vhn>s for
# which <port> is not specified, then port number 14250 will be used
# for such <vip/vhn>s.
#
# Example of specifying CP Servers to be used as coordination points:
# port=57777
# cps1=[192.168.0.23],[192.168.0.24]:58888,[mycps1.company.com]
# cps2=[192.168.0.25]
# cps3=[mycps2.company.com]:59999
#
# In the above example,
# - port 58888 will be used for vip [192.168.0.24]
# - port 59999 will be used for vhn [mycps2.company.com], and
# - default port 57777 will be used for all remaining <vip/vhn>s:
#   [192.168.0.23]
#   [mycps1.company.com]
#   [192.168.0.25]
# - if default port 57777 were not specified, port 14250 would be used
#   for all remaining <vip/vhn>s:
#   [192.168.0.23]
#   [mycps1.company.com]
#   [192.168.0.25]
#
# SCSI-3 compliant coordinator disks are specified as:
#
# vxfendg=<coordinator disk group name>
# Example:
# vxfendg=vxfencoorddg
#
# Examples of different configurations:
# 1. All CP server coordination points
# cps1=
# cps2=
# cps3=
#
# 2. A combination of CP server and a disk group having two SCSI-3
#    coordinator disks
# cps1=
# vxfendg=
# Note: The disk group specified in this case should have two disks
#
# 3. All SCSI-3 coordinator disks
# vxfendg=
# Note: The disk group specified in this case should have three disks
vxfen_mechanism
scsi3_disk_policy
single_cp
A value of 1 for the single_cp parameter indicates that server-based fencing uses a single highly available CP server as its only coordination point. A value of 0 indicates that server-based fencing uses at least three coordination points.
Ensure that your VCS cluster has been properly installed and configured with fencing enabled. Create a parallel service group vxfen and add a coordpoint resource to the vxfen service group using the following commands:
# haconf -makerw
# hagrp -add vxfen
# hagrp -modify vxfen SystemList galaxy 0 nebula 1
# hagrp -modify vxfen AutoFailOver 0
# hagrp -modify vxfen Parallel 1
# hagrp -modify vxfen SourceFile "./main.cf"
# hares -add coordpoint CoordPoint vxfen
# hares -modify coordpoint FaultTolerance 1
# hares -modify coordpoint Enabled 1
# haconf -dump -makero
Verify the status of the agent on the VCS cluster using the hares commands. For example:
# hares -state coordpoint
Access the engine log to view the agent log. The agent log is written to the engine log. The agent log contains detailed CoordPoint agent monitoring information, including whether the CoordPoint agent can access all the coordination points and on which coordination points the CoordPoint agent reports missing keys. To view all such information in the engine log, change the dbg level for that node using the following commands:
# haconf -makerw # hatype -modify Coordpoint LogDbg 10 # haconf -dump -makero
The agent log can now be viewed at the following location: /var/VRTSvcs/log/engine_A.log
Verify that the I/O fencing configuration was successful by running the vxfenadm command. For example, run the following command:
# vxfenadm -d
Note: For troubleshooting any server-based I/O fencing configuration issues, refer to the Veritas Cluster Server Administrator's Guide.
Verify that I/O fencing is using the specified coordination points by running the vxfenconfig command. For example, run the following command:
# vxfenconfig -l
If the output displays single_cp=1, it indicates that the application cluster uses a CP server as the single coordination point for server-based fencing.
Configure I/O fencing in customized mode with only CP servers as coordination points. See Setting up server-based I/O fencing manually on page 238.
Make sure that the VCS cluster is online and check that the fencing mode is customized.
# vxfenadm -d
On each node, set the value of the LLT sendhbcap timer parameter as follows:
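The runtime command is not shown in this copy of the procedure. One way to set the timer immediately on a running node, assuming the standard lltconfig timer syntax, is:

# lltconfig -T sendhbcap:3000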
Add the following line to the /etc/llttab file so that the changes remain persistent after any reboot:
set-timer sendhbcap:3000
For each resource of the type DiskGroup, set the value of the MonitorReservation attribute to 0 and the value of the Reservation attribute to NONE.
# hares -modify <dg_resource> MonitorReservation 0 # hares -modify <dg_resource> Reservation "NONE"
Make sure that the UseFence attribute in the VCS configuration file main.cf is set to SCSI3.
To make these VxFEN changes take effect, stop and restart VxFEN and the dependent modules:
On each node, run the following command to stop VCS:
# svcadm disable -t vcs
After VCS takes all services offline, run the following command to stop VxFEN:
# svcadm disable -t vxfen
On each node, run the following commands to restart VxFEN and VCS:
# svcadm enable vxfen
#
# available options:
# dmp - use dynamic multipathing
# raw - connect to disks using the native interface
#
scsi3_disk_policy=dmp
#
# Seconds for which the winning sub cluster waits to allow for the
# losing subcluster to panic & drain I/Os. Useful in the absence of
# SCSI3 based data disk fencing
loser_exit_delay=55
#
# Seconds for which vxfend process wait for a customized fencing
# script to complete. Only used with vxfen_mode=customized
vxfen_script_timeout=25
#
# security when enabled uses secure communication to the cp server
# using VxAT (Veritas Authentication Service)
# available options:
# 0 - don't use Veritas Authentication Service for cp server
#     communication
# 1 - use Veritas Authentication Service for cp server
#     communication
security=1
#
# Specify 3 or more odd number of coordination points in this file,
# one in its own line. They can be all-CP servers, all-SCSI-3
# compliant coordinator disks, or a combination of CP servers and
# SCSI-3 compliant coordinator disks.
# Please ensure that the CP server coordination points are numbered
# sequentially and in the same order on all the cluster nodes.
#
# Coordination Point Server(CPS) is specified as:
#
# cps<number>=[<vip/vhn>]:<port>
#
# If a CPS supports multiple virtual IPs or virtual hostnames over
# different subnets, all of the IPs/names can be specified in a
# comma separated list as follows:
# cps<number>=[<vip_1/vhn_1>]:<port_1>,[<vip_2/vhn_2>]:<port_2>,...,
# [<vip_n/vhn_n>]:<port_n>
#
# Where,
# <number> is the serial number of the CPS as a coordination point;
#          must start with 1.
# <vip>    is the virtual IP address of the CPS, must be specified
#          in square brackets ("[]").
# <vhn>    is the virtual hostname of the CPS, must be specified in
#          square brackets ("[]").
# <port>   is the port number bound to a particular <vip/vhn> of the
#          CPS. It is optional to specify a <port>. However, if
#          specified, it must follow a colon (":") after <vip/vhn>.
#          If not specified, the colon (":") must not exist after
#          <vip/vhn>.
#
# For all the <vip/vhn>s which do not have a specified <port>,
# a default port can be specified as follows:
#
# port=<default_port>
#
# Where <default_port> is applicable to all the <vip/vhn>s for which
# a <port> is not specified. In other words, specifying <port> with a
# <vip/vhn> overrides the <default_port> for that <vip/vhn>.
# If the <default_port> is not specified, and there are <vip/vhn>s for
# which <port> is not specified, then port number 14250 will be used
# for such <vip/vhn>s.
#
# Example of specifying CP Servers to be used as coordination points:
# port=57777
# cps1=[192.168.0.23],[192.168.0.24]:58888,[mycps1.company.com]
# cps2=[192.168.0.25]
# cps3=[mycps2.company.com]:59999
#
# In the above example,
# - port 58888 will be used for vip [192.168.0.24]
# - port 59999 will be used for vhn [mycps2.company.com], and
# - default port 57777 will be used for all remaining <vip/vhn>s:
#   [192.168.0.23]
#   [mycps1.company.com]
#   [192.168.0.25]
# - if default port 57777 were not specified, port 14250 would be used
#   for all remaining <vip/vhn>s:
#   [192.168.0.23]
#   [mycps1.company.com]
#   [192.168.0.25]
#
# SCSI-3 compliant coordinator disks are specified as:
#
# vxfendg=<coordinator disk group name>
# Example:
# vxfendg=vxfencoorddg
#
# Examples of different configurations:
# 1. All CP server coordination points
# cps1=
# cps2=
# cps3=
#
# 2. A combination of CP server and a disk group having two SCSI-3
#    coordinator disks
# cps1=
# vxfendg=
# Note: The disk group specified in this case should have two disks
#
# 3. All SCSI-3 coordinator disks
# vxfendg=
# Note: The disk group specified in this case should have three disks
#
cps1=[mycps1.company.com]
cps2=[mycps2.company.com]
cps3=[mycps3.company.com]
port=14250
Section
Upgrading VCS
Chapter 19. Planning to upgrade VCS
Chapter 20. Performing a typical VCS upgrade using the installer
Chapter 21. Performing a phased upgrade
Chapter 22. Performing an automated VCS upgrade using response files
Chapter 23. Performing a rolling upgrade
Chapter 24. Upgrading using Live Upgrade
Chapter 25. Upgrading the Solaris operating system
Chapter 19
About upgrading to VCS 6.0
VCS supported upgrade paths
Upgrading VCS in secure enterprise environments
Typical upgrade using Veritas product installer or the installvcs program
    See VCS supported upgrade paths on page 260.
    See Upgrading VCS using the script-based installer on page 265.

Typical upgrade using Veritas Web installer
    See VCS supported upgrade paths on page 260.
    See Upgrading Veritas Cluster Server using the Veritas Web-based installer on page 267.

Phased upgrade to reduce downtime
    See Performing a phased upgrade on page 272.

Automated upgrade using response files
    See VCS supported upgrade paths on page 260.
    See Upgrading VCS using response files on page 291.

Upgrade using supported native operating system utility Live Upgrade
    See About Live Upgrade on page 301.

Rolling upgrade to minimize downtime
    See Performing a rolling upgrade of VCS using the Web-based installer on page 299.

You can upgrade VCS 6.0 to Storage Foundation High Availability 6.0 using Veritas product installer or response files. See the Veritas Storage Foundation and High Availability Installation Guide.
Table 19-1 lists the supported upgrade paths for the Solaris SPARC platform. The row and column structure of the table did not survive in this copy; the recoverable cell text indicates that for older VCS releases on Solaris versions earlier than Solaris 10 no upgrade path exists (uninstall VCS, upgrade the operating system to at least Solaris 10, and use the installer to perform a full installation of VCS 6.0), that some older releases on Solaris 10 also require you to uninstall VCS and perform a full installation of VCS 6.0, and that more recent releases on Solaris 10 can upgrade directly to VCS 6.0 using the installer script. Other combinations are marked not applicable.
Table 19-2 lists the supported upgrade paths for the Solaris x64 Platform Edition. The row structure of the table did not survive in this copy; the recoverable cell text indicates that for some earlier releases on Solaris 10 no upgrade path exists (uninstall VCS and use the installer to perform a full installation of VCS 6.0), and that other releases on Solaris 10 can use the installer to upgrade to VCS 6.0.
Run the installvcs program on each node to upgrade the cluster to VCS 6.0. On each node, the installvcs program updates the configuration, stops the cluster, and then upgrades VCS on the node. The program also generates a cluster UUID on the node. Each node may have a different cluster UUID at this point.
VCS generates the cluster UUID on this node. Run the following command to display the cluster UUID on the local node:
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -display systemname
Set the value of the VCS_HOST environment variable to the name of the first node.
263
Display the value of the CID attribute that stores the cluster UUID value:
# haclus -value CID
Copy the output of the CID attribute to the file /etc/vx/.uuids/clusuuid. Update the VCS_HOST environment variable to remove the set value. Start VCS. The node must successfully join the already running nodes in the cluster. See Verifying LLT, GAB, and cluster operation on page 346.
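A minimal sketch of these three actions on the node, assuming the CID output can be redirected directly into the cluster UUID file, is:

# haclus -value CID > /etc/vx/.uuids/clusuuid
# unset VCS_HOST
# hastart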
Chapter 20
Before upgrading using the script-based or Web-based installer
Upgrading VCS using the script-based installer
Upgrading Veritas Cluster Server using the Veritas Web-based installer
Make sure that all non-global zones are booted and in the running state before you install or upgrade the VCS packages in the global zone. If the non-global zones are not mounted and running at the time of upgrade, you must upgrade each package in each non-global zone manually.
Log in as superuser and mount the product disc. Start the installer.
# ./installer
The installer starts the product installation program with a copyright message. It then specifies where it creates the logs. Note the log's directory and name.
From the opening Selection Menu, choose: G for "Upgrade a Product." Choose 1 for Full Upgrade. Enter the names of the nodes that you want to upgrade. Use spaces to separate node names. Press the Enter key to proceed. The installer runs some verification checks on the nodes.
When the verification checks are complete, the installer asks if you agree with the terms of the End User License Agreement. Press y to agree and continue. The installer lists the packages to upgrade.
The installer asks if you want to stop VCS processes. Press the Enter key to continue. The installer stops VCS processes, uninstalls packages, installs or upgrades packages, and configures VCS. The installer lists the nodes that Symantec recommends you restart.
The installer asks if you would like to send the information about this installation to Symantec to help improve installation in the future. Enter your response. The installer displays the location of log files, summary file, and response file.
If you want to upgrade CP server systems that use VCS or SFHA to VCS 6.0, make sure that you first upgrade all application clusters to version VCS 6.0. Then, upgrade VCS or SFHA on the CP server systems. For instructions to upgrade VCS or SFHA, see the Veritas Cluster Server Installation Guide or the Storage Foundation and High Availability Installation Guide.
If you are upgrading from 4.x, you may need to create new VCS accounts if you used native OS accounts. See Creating new VCS accounts if you used native operating system accounts on page 451.
Perform the required steps to save any data that you wish to preserve. For example, make configuration file backups. If you are upgrading a high availability (HA) product, take all service groups offline. List all service groups:
# /opt/VRTSvcs/bin/hagrp -list
Start the Web-based installer. See Starting the Veritas Web-based installer on page 162.
On the Select a task and a product page, select Upgrade a Product from the Task drop-down menu. The installer detects the product that is installed on the specified system. Click Next.
Indicate the systems on which to upgrade. Enter one or more system names, separated by spaces. Click Next. On the License agreement page, select whether you accept the terms of the End User License Agreement (EULA). To continue, select Yes I agree and click Next. Click Next to complete the upgrade. After the upgrade completes, the installer displays the location of the log and summary files. If required, view the files to confirm the installation status.
After the upgrade, if the product is not configured, the Web-based installer asks: "Do you want to configure this product?" If the product is already configured, it will not ask any questions.
Click Finish. The installer prompts you for another task.
If you want to upgrade VCS or SFHA 5.1 on the CP server systems to version VCS 6.0, make sure that you upgraded all application clusters to version VCS 6.0. Then, upgrade VCS or SFHA on the CP server systems. For instructions to upgrade VCS or SFHA, see the VCS or SFHA Installation Guide.
If you are upgrading from 4.x, you may need to create new VCS accounts if you used native operating system accounts. See Creating new VCS accounts if you used native operating system accounts on page 451.
Chapter 21
Downtime for that service group equals the time that is taken to perform an upgrade and restart the node.
Split the cluster into two subclusters of equal or near equal size. Split the cluster so that your high priority service groups remain online during the upgrade of the first subcluster. Before you start the upgrade, back up the VCS configuration files main.cf and types.cf, which are in the directory /etc/VRTSvcs/conf/config/.
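For example, one way to back up the two files (the backup names are illustrative; any backup location works):

# cp /etc/VRTSvcs/conf/config/main.cf /etc/VRTSvcs/conf/config/main.cf.save
# cp /etc/VRTSvcs/conf/config/types.cf /etc/VRTSvcs/conf/config/types.cf.save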
While you perform the upgrades, do not start any modules.
When you start the installer, only select VCS.
While you perform the upgrades, do not add or remove service groups to any of the nodes.
After you upgrade the first half of your cluster (the first subcluster), you need to set up password-less SSH. Create the connection between an upgraded node in the first subcluster and a node from the other subcluster. The node from the other subcluster is where you plan to run the installer and also plan to upgrade.
Depending on your configuration, you may find that you cannot upgrade multiple nodes at the same time. You may only be able to upgrade one node at a time.
For very large clusters, you might have to repeat these steps multiple times to upgrade your cluster.
Figure 21-1 shows an example of a four-node cluster (node01 through node04) with service groups sg1, sg2, and sg3; node01 and node02 form the first subcluster.
sg1 and sg2 are parallel service groups and run on all the nodes. sg3 and sg4 are failover service groups. sg3 runs on node01 and sg4 runs on node02.
In your system list, you have each service group that fails over to other nodes as follows:
sg1 and sg2 are running on all the nodes. sg3 and sg4 can fail over to any of the nodes in the cluster.
Move all the failover service groups from the first subcluster to the second subcluster.
Take all the parallel service groups offline on the first subcluster.
Upgrade the operating system on the first subcluster's nodes, if required.
On the first subcluster, start the upgrade using the installation program.
Get the second subcluster ready.
Activate the first subcluster.
Upgrade the operating system on the second subcluster's nodes, if required.
On the second subcluster, start the upgrade using the installation program.
Activate the second subcluster.
On the first subcluster, determine where the service groups are online.
# hagrp -state
#Group  Attribute  System  Value
sg1     State      node01  |ONLINE|
sg1     State      node02  |ONLINE|
sg1     State      node03  |ONLINE|
sg1     State      node04  |ONLINE|
sg2     State      node01  |ONLINE|
sg2     State      node02  |ONLINE|
sg2     State      node03  |ONLINE|
sg2     State      node04  |ONLINE|
sg3     State      node01  |ONLINE|
sg3     State      node02  |OFFLINE|
sg3     State      node03  |OFFLINE|
sg3     State      node04  |OFFLINE|
sg4     State      node01  |OFFLINE|
sg4     State      node02  |ONLINE|
sg4     State      node03  |OFFLINE|
sg4     State      node04  |OFFLINE|
Offline the parallel service groups (sg1 and sg2) from the first subcluster. Switch the failover service groups (sg3 and sg4) from the first subcluster (node01 and node02) to the nodes on the second subcluster (node03 and node04).
# hagrp -offline sg1 -sys node01
# hagrp -offline sg2 -sys node01
# hagrp -offline sg1 -sys node02
# hagrp -offline sg2 -sys node02
# hagrp -switch sg3 -to node03
# hagrp -switch sg4 -to node04
On the nodes in the first subcluster, unmount all the VxFS file systems that VCS does not manage, for example:
# df -k
Filesystem            kbytes    used      avail     capacity  Mounted on
/dev/dsk/c1t0d0s0     66440242  10114415  55661425  16%       /
/devices              0         0         0         0%        /devices
ctfs                  0         0         0         0%        /system/contract
proc                  0         0         0         0%        /proc
mnttab                0         0         0         0%        /etc/mnttab
swap                  5287408   1400      5286008   1%        /etc/svc/volatile
objfs                 0         0         0         0%        /system/object
sharefs               0         0         0         0%        /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
                      66440242  10114415  55661425  16%       /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                      66440242  10114415  55661425  16%       /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                    0         0         0         0%        /dev/fd
swap                  5286064   56        5286008   1%        /tmp
swap                  5286056   48        5286008   1%        /var/run
swap                  5286008   0         5286008   0%        /dev/vx/dmp
swap                  5286008   0         5286008   0%        /dev/vx/rdmp
/dev/vx/dsk/dg2/dg2vol1
                      3.0G      18M       2.8G      1%        /mnt/dg2/dg2vol1
/dev/vx/dsk/dg2/dg2vol2
                      1.0G      18M       944M      2%        /mnt/dg2/dg2vol2
/dev/vx/dsk/dg2/dg2vol3
                      10G       20M       9.4G      1%        /mnt/dg2/dg2vol3

# umount /mnt/dg2/dg2vol1
# umount /mnt/dg2/dg2vol2
# umount /mnt/dg2/dg2vol3
On the nodes in the first subcluster, stop all VxVM volumes (for each disk group) that VCS does not manage. Make the configuration writable on the first subcluster.
# haconf -makerw
Verify that the service groups are offline on the first subcluster that you want to upgrade.
# hagrp -state
Output resembles:
#Group  Attribute  System  Value
sg1     State      node01  |OFFLINE|
sg1     State      node02  |OFFLINE|
sg1     State      node03  |ONLINE|
sg1     State      node04  |ONLINE|
sg2     State      node01  |OFFLINE|
sg2     State      node02  |OFFLINE|
sg2     State      node03  |ONLINE|
sg2     State      node04  |ONLINE|
sg3     State      node01  |OFFLINE|
sg3     State      node02  |OFFLINE|
sg3     State      node03  |ONLINE|
sg3     State      node04  |OFFLINE|
sg4     State      node01  |OFFLINE|
sg4     State      node02  |OFFLINE|
sg4     State      node03  |OFFLINE|
sg4     State      node04  |ONLINE|
Perform this step on the nodes (node01 and node02) in the first subcluster if the cluster uses I/O Fencing. Use an editor of your choice and change the following:
In the /etc/vxfenmode file, change the value of the vxfen_mode variable from scsi3 to disabled. Ensure that the line in the vxfenmode file resembles:
vxfen_mode=disabled
276
In the /etc/VRTSvcs/conf/config/main.cf file, change the value of the UseFence attribute from SCSI3 to NONE. Ensure that the line in the main.cf file resembles:
UseFence = NONE
Confirm that you are logged on as the superuser and you mounted the product disc. Make sure that you can ssh or rsh from the node where you launched the installer to the nodes in the second subcluster without requests for a password. Navigate to the folder that contains installvcs.
# cd cluster_server
Start the installvcs program and specify the nodes in the first subcluster (node01 and node02).

# ./installvcs node01 node02
The program starts with a copyright message and specifies the directory where it creates the logs.
Review the available installation options. See Veritas Cluster Server installation packages on page 413.
1. Installs only the minimal required VCS packages that provides basic functionality of the product.
2. Installs the recommended VCS packages that provide complete functionality of the product. This option does not install the optional VCS packages. Note that this option is the default.
3. Installs all the VCS packages. You must choose this option to configure any optional VCS feature.
4. Displays the VCS packages for each option.

For this example, select 3 for all packages.

Select the packages to be installed on all systems? [1-4,q,?] (2) 3
The installer performs a series of checks and tests to ensure communications, licensing, and compatibility. When you are prompted, reply y to continue with the upgrade.
Do you want to continue? [y,n,q] (y)
10 The installer ends for the first subcluster with the following output:
Configuring VCS: 100%
Estimated time remaining: 0:00
Performing VCS upgrade configuration .................... Done
Veritas Cluster Server Configure completed successfully

You are performing phased upgrade (Phase 1) on the systems.
Follow the steps in install guide to upgrade the remaining systems.

Would you like to send the information about this installation to
Symantec to help improve installation in the future? [y,n,q,?] (y)
The upgrade is finished on the first subcluster. Do not reboot the nodes in the first subcluster until you complete the Preparing the second subcluster procedure.
-- GROUP STATE
-- Group  System  Probed  AutoDisabled  State
B  SG1    node01  Y       N             OFFLINE
B  SG1    node02  Y       N             OFFLINE
B  SG1    node03  Y       N             ONLINE
B  SG1    node04  Y       N             ONLINE
B  SG2    node01  Y       N             OFFLINE
B  SG2    node02  Y       N             OFFLINE
B  SG2    node03  Y       N             ONLINE
B  SG2    node04  Y       N             ONLINE
B  SG3    node01  Y       N             OFFLINE
B  SG3    node02  Y       N             OFFLINE
B  SG3    node03  Y       N             ONLINE
B  SG3    node04  Y       N             OFFLINE
B  SG4    node01  Y       N             OFFLINE
B  SG4    node02  Y       N             OFFLINE
B  SG4    node03  Y       N             OFFLINE
B  SG4    node04  Y       N             ONLINE
Unmount all the VxFS file systems that VCS does not manage, for example:
# df -k
Filesystem            kbytes    used      avail     capacity  Mounted on
/dev/dsk/c1t0d0s0     66440242  10114415  55661425  16%       /
/devices              0         0         0         0%        /devices
ctfs                  0         0         0         0%        /system/contract
proc                  0         0         0         0%        /proc
mnttab                0         0         0         0%        /etc/mnttab
swap                  5287408   1400      5286008   1%        /etc/svc/volatile
objfs                 0         0         0         0%        /system/object
sharefs               0         0         0         0%        /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
                      66440242  10114415  55661425  16%       /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                      66440242  10114415  55661425  16%       /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                    0         0         0         0%        /dev/fd
swap                  5286064   56        5286008   1%        /tmp
swap                  5286056   48        5286008   1%        /var/run
swap                  5286008   0         5286008   0%        /dev/vx/dmp
swap                  5286008   0         5286008   0%        /dev/vx/rdmp
/dev/vx/dsk/dg2/dg2vol1
                      3.0G      18M       2.8G      1%        /mnt/dg2/dg2vol1
/dev/vx/dsk/dg2/dg2vol2
                      1.0G      18M       944M      2%        /mnt/dg2/dg2vol2
/dev/vx/dsk/dg2/dg2vol3
                      10G       20M       9.4G      1%        /mnt/dg2/dg2vol3

# umount /mnt/dg2/dg2vol1
# umount /mnt/dg2/dg2vol2
# umount /mnt/dg2/dg2vol3
Stop all VxVM volumes (for each disk group) that VCS does not manage. Make the configuration writable on the second subcluster.
# haconf -makerw
Perform this step on node03 and node04 if the cluster uses I/O Fencing. Use an editor of your choice and change the following:
In the /etc/vxfenmode file, change the value of the vxfen_mode variable from scsi3 to disabled. Ensure that the line in the vxfenmode file resembles:
vxfen_mode=disabled
282
In the /etc/VRTSvcs/conf/config/main.cf file, change the value of the UseFence attribute from SCSI3 to NONE. Ensure that the line in the main.cf file resembles:
UseFence = NONE
10 Stop VCS, I/O Fencing, GAB, and LLT on node03 and node04.
Solaris 9:
# /opt/VRTSvcs/bin/hastop -local
# /etc/init.d/vxfen stop
# /etc/init.d/gab stop
# /etc/init.d/llt stop
Solaris 10:
# /opt/VRTSvcs/bin/hastop -local
# svcadm disable -t /system/vxfen
# svcadm disable -t /system/gab
# svcadm disable -t /system/llt
11 Make sure that the VXFEN, GAB, and LLT modules on node03 and node04
are not loaded.
Solaris 9:
# /etc/init.d/vxfen status
VXFEN module is not loaded
# /etc/init.d/gab status
GAB module is not loaded
# /etc/init.d/llt status
LLT module is not loaded
Solaris 10:
# /lib/svc/method/vxfen status
VXFEN module is not loaded
# /lib/svc/method/gab status
GAB module is not loaded
Perform this step on node01 and node02 if the cluster uses I/O Fencing. Use an editor of your choice and revert the following to an enabled state before you reboot the first subcluster's nodes:
In the /etc/VRTSvcs/conf/config/main.cf file, change the value of the UseFence attribute from NONE to SCSI3. Ensure that the line in the main.cf file resembles:
UseFence = SCSI3
In the /etc/vxfenmode file, change the value of the vxfen_mode variable from disabled to scsi3. Ensure that the line in the vxfenmode file resembles:
vxfen_mode=scsi3
For nodes that use Solaris 10, start VCS in the first half of the cluster:
# svcadm enable system/vcs
On the second subcluster, disable VCS so that it does not start after reboot. Edit the vcs file in /etc/default. Open the vcs file in an editor, and change the line that reads VCS_START=1 to VCS_START=0. Save and close the file.

On the second subcluster, disable VXFEN so that it does not start after reboot. Edit the vxfen file in /etc/default. Open the vxfen file in an editor, and change the line that reads VXFEN_START=1 to VXFEN_START=0. Save and close the file.

On the second subcluster, disable GAB so that it does not start after reboot. Edit the gab file in /etc/default. Open the gab file in an editor, and change the line that reads GAB_START=1 to GAB_START=0. Save and close the file.
285
On the second subcluster, disable LLT so that it does not start after reboot. Edit the llt file in /etc/default. Open the llt file in an editor, and change the line that reads LLT_START=1 to LLT_START=0. Save and close the file.

For a cluster that uses secure mode, create a password-less SSH connection. The connection is from the node where you plan to run the installer to one of the nodes that you have already upgraded.
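For reference, after the preceding edits the startup variables in the /etc/default/vcs, /etc/default/vxfen, /etc/default/gab, and /etc/default/llt files on the second subcluster read as follows (a sketch listing only the changed lines):

VCS_START=0
VXFEN_START=0
GAB_START=0
LLT_START=0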
Confirm that you are logged on as the superuser and you mounted the product disc. Navigate to the folder that contains installvcs.
# cd cluster_server
Confirm that VCS is stopped on node03 and node04. Start the installvcs program and specify the nodes in the second subcluster (node03 and node04).

# ./installvcs node03 node04
The program starts with a copyright message and specifies the directory where it creates the logs.
Review the available installation options. See Veritas Cluster Server installation packages on page 413.
1. Installs only the minimal required VCS packages that provides basic functionality of the product.
2. Installs the recommended VCS packages that provide complete functionality of the product. This option does not install the optional VCS packages. Note that this option is the default.
3. Installs all the VCS packages. You must choose this option to configure any optional VCS feature.
4. Displays the VCS packages for each option.

For this example, select 3 for all packages.

Select the packages to be installed on all systems? [1-4,q,?] (2) 3
The installer performs a series of checks and tests to ensure communications, licensing, and compatibility. When you are prompted, reply y to continue with the upgrade.
Do you want to continue? [y,n,q] (y)
Monitor the installer program answering questions as appropriate until the upgrade completes.
Verify that the cluster UUID is the same on the nodes in the second subcluster and the first subcluster. Run the following command to display the cluster UUID:
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -display node1 [node2 ...]
If the cluster UUID differs, manually copy the cluster UUID from a node in the first subcluster to the nodes in the second subcluster. For example:
# /opt/VRTSvcs/bin/uuidconfig.pl [-rsh] -clus -copy -from_sys node01 -to_sys node03 node04
Perform this step on node03 and node04 if the cluster uses I/O Fencing. Use an editor of your choice and revert the following to an enabled state before you reboot the second subcluster's nodes:
In the /etc/vxfenmode file, change the value of the vxfen_mode variable from disabled to scsi3. Ensure that the line in the vxfenmode file resembles:
vxfen_mode=scsi3
For nodes that use Solaris 10, start VCS in the second half of the cluster:

# svcadm enable system/vcs

The nodes in the second subcluster join the nodes in the first subcluster.
Run an hastatus -sum command to determine the status of the nodes, service groups, and cluster.
# hastatus -sum

-- SYSTEM STATE
-- System     Frozen
A  node01     0
A  node02     0
A  node03     0
A  node04     0

-- GROUP STATE
-- Group  System  Probed  AutoDisabled  State
B  sg1    node01  Y       N             ONLINE
B  sg1    node02  Y       N             ONLINE
B  sg1    node03  Y       N             ONLINE
B  sg1    node04  Y       N             ONLINE
B  sg2    node01  Y       N             ONLINE
B  sg2    node02  Y       N             ONLINE
B  sg2    node03  Y       N             ONLINE
B  sg2    node04  Y       N             ONLINE
B  sg3    node01  Y       N             ONLINE
B  sg3    node02  Y       N             OFFLINE
B  sg3    node03  Y       N             OFFLINE
B  sg3    node04  Y       N             OFFLINE
B  sg4    node01  Y       N             OFFLINE
B  sg4    node02  Y       N             ONLINE
B  sg4    node03  Y       N             OFFLINE
B  sg4    node04  Y       N             OFFLINE
After the upgrade is complete, start the VxVM volumes (for each disk group) and mount the VxFS file systems.
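For example, using the dg2 disk group and the mount points from the earlier df output (the names are from this guide's running example and may differ in your configuration):

# vxvol -g dg2 startall
# mount -F vxfs /dev/vx/dsk/dg2/dg2vol1 /mnt/dg2/dg2vol1
# mount -F vxfs /dev/vx/dsk/dg2/dg2vol2 /mnt/dg2/dg2vol2
# mount -F vxfs /dev/vx/dsk/dg2/dg2vol3 /mnt/dg2/dg2vol3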
In this example, you have performed a phased upgrade of VCS. The service groups were down from the time you took them offline on node03 and node04 to the time VCS brought them online on node01 or node02.
Note: If you want to upgrade Coordination Point (CP) server systems that use Veritas Cluster Server (VCS) or Storage Foundation High Availability (SFHA) to 6.0, make sure that you upgraded all application clusters to version 6.0. Then, upgrade VCS or SFHA on the CP server systems. For instructions to upgrade VCS or SFHA, see the VCS or SFHA Installation Guide.
Chapter 22
Upgrading VCS using response files
Response file variables to upgrade VCS
Sample response file for upgrading VCS
Make sure the systems where you want to upgrade VCS meet the upgrade requirements. Make sure the pre-upgrade tasks are completed. Copy the response file to one of the systems where you want to upgrade VCS. See Sample response file for upgrading VCS on page 293.
Edit the values of the response file variables as necessary. See Response file variables to upgrade VCS on page 292.
Mount the product disc and navigate to the folder that contains the installation program. Start the upgrade from the system to which you copied the response file. For example:
# ./installer -responsefile /tmp/response_file

Or

# ./installvcs -responsefile /tmp/response_file
CFG{opt}{upgrade} (Scalar)
    Upgrades VCS packages. (Required)

CFG{accepteula} (Scalar)

CFG{systems} (List)

CFG{prod} (Scalar)
    Defines the product to be upgraded. The value is VCS60 for VCS. (Optional)

CFG{vcs_allowcomms} (Scalar)
    Indicates whether or not to start LLT and GAB when you set up a single-node cluster. The value can be 0 (do not start) or 1 (start). (Required)

CFG{opt}{keyfile} (Scalar)
    Defines the location of an ssh keyfile that is used to communicate with all remote systems. (Optional)

CFG{opt}{pkgpath} (Scalar)
    Defines a location, typically an NFS mount, from which all remote systems can install product packages. The location must be accessible from all target systems. (Optional)

CFG{opt}{tmppath} (Scalar)
    Defines the location where a working directory is created to store temporary files and the packages that are needed during the install. The default location is /var/tmp. (Optional)

CFG{opt}{logpath} (Scalar)
    Mentions the location where the log files are to be copied. The default location is /opt/VRTS/install/logs.
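The sample response file itself did not survive in this copy. A minimal sketch of what such a file typically looks like, using the variables described above and the example node names from this guide (the exact file that the installer generates may contain additional entries), follows:

our %CFG;

$CFG{accepteula}=1;
$CFG{vcs_allowcomms}=1;
$CFG{opt}{upgrade}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(galaxy nebula) ];

1;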
Chapter 23
About rolling upgrades
Performing a rolling upgrade using the installer
Performing a rolling upgrade of VCS using the Web-based installer
Figure 23-1 illustrates an example of the installer performing a rolling upgrade for three service groups on a two-node cluster. In the figure, SG1 and SG2 are failover service groups and SG3 is a parallel service group that runs on both Node A and Node B. The sequence shown is: the running cluster prior to the rolling upgrade; phase 1 starts on Node B (SG2 fails over and SG3 stops on Node B); the node is upgraded; phase 1 starts on Node A (SG1 and SG2 fail over and SG3 stops on Node A); and phase 2, in which all remaining packages are upgraded on all nodes simultaneously and HAD stops and starts. Phase 1 upgrades kernel packages; phase 2 upgrades VCS and VCS agent packages.
Rolling upgrades are not compatible with phased upgrades. Do not mix rolling upgrades and phased upgrades. You can perform a rolling upgrade from 5.1 and later versions.
1 Complete the preparatory steps on the first sub-cluster.
2 Log in as superuser and mount the VCS 6.0 installation media.
3 From root, start the installer.
# ./installer
4 From the menu select Rolling Upgrade.
5 The installer checks system communications, release compatibility, version information, and lists the cluster name, ID, and cluster nodes. Type y to continue.
6 The installer inventories the running service groups and determines the node or nodes to upgrade in phase 1 of the rolling upgrade. Type y to continue. If you choose to specify the nodes, type n and enter the names of the nodes.
7 The installer performs further prechecks on the nodes in the cluster and may present warnings. You can type y to continue or quit the installer and address the precheck's warnings.
8 Review the EULA, and type y if you agree to its terms.
9 The installer prompts you to stop the applicable processes. Type y to continue. The installer fails over failover service groups to the node or nodes that are not upgraded at this time. The downtime is the time that it normally takes for the service group's failover. The installer stops parallel service groups on the nodes that are to be upgraded.
10 The installer stops relevant processes, uninstalls old kernel packages, and installs the new packages. It performs the configuration for the upgrade and re-starts processes. In case of failure in the startup of some of the processes, you may need to reboot the nodes and manually check the cluster's status.
11 Complete the preparatory steps on the nodes that you have not yet upgraded.
12 The installer begins phase 1 of the upgrade on the remaining node or nodes. Type y to continue the rolling upgrade. If the installer reboots nodes, restart the installer. The installer repeats step 6 through step 10. For clusters with a larger number of nodes, this process may repeat several times. Service groups come down and are brought up to accommodate the upgrade.
13 When phase 1 of the rolling upgrade completes, begin phase 2 of the upgrade. Phase 2 of the upgrade includes downtime for the VCS engine (HAD), which does not include application downtime. Type y to continue.
14 If the boot disk is encapsulated, you need to reboot nodes when phase 1 completes.
15 The installer stops Veritas Cluster Server (VCS) processes. Type y to continue. The installer performs prechecks, uninstalls old packages, and installs the new packages. It performs post-installation tasks, and the configuration for the upgrade.
16 Type y or n to help Symantec improve the automated installation.
17 If you have a network connection to the Internet, the installer checks for updates. If updates are discovered, you can apply them now.
18 Upgrade the application.
19 To upgrade VCS or Storage Foundation High Availability (SFHA) on the Coordination Point (CP) server systems to version 6.0, upgrade all the application clusters to 6.0. You then upgrade VCS or SFHA on the CP server systems. For instructions to upgrade VCS or SFHA on the CP server systems, refer to the appropriate installation guide.
1 Complete the preparatory steps on the first sub-cluster.
2 Perform the required steps to save any data that you wish to preserve. For example, take back-ups of configuration files.
3 Start the Web-based installer. See Starting the Veritas Web-based installer on page 162.
4 In the Task pull-down menu, select Rolling Upgrade. Click the Next button to proceed.
5 Review the systems that the installer has chosen to start the rolling upgrade. These systems are chosen to minimize downtime during the upgrade. Click Yes to proceed. The installer validates systems. If it throws an error, address the error and return to the installer.
6 Review the End User License Agreement (EULA). To continue, select Yes I agree and click Next.
7 The installer stops all processes. Click Next to proceed.
8 The installer removes old software and upgrades the software on the systems that you selected. Review the output and click the Next button when prompted. The installer starts all the relevant processes and brings all the service groups online. If the installer reboots nodes, restart the installer.
9 Complete the preparatory steps on the nodes that you have not yet upgraded.
10 When prompted, perform step 4 through step 8 on the nodes that you have not yet upgraded.
11 When prompted, start phase 2. Click Yes to continue with the rolling upgrade.
You may need to restart the Web-based installer to perform phase 2. See Starting the Veritas Web-based installer on page 162.
To upgrade the non-kernel components (phase 2)
1 In the Task pull-down menu, make sure that Rolling Upgrade is selected. Click the Next button to proceed.
2 The installer detects the cluster information and the state of the rolling upgrade. The installer validates systems and stops processes. If it throws an error, address the error and return to the installer.
3 Review the End User License Agreement (EULA). To continue, select Yes I agree and click Next.
4 The installer validates systems. If it throws an error, address the error and return to the installer. Click Next to proceed.
5 The installer stops all processes. Click Next to proceed.
6 The installer removes old software and upgrades the software on the systems that you selected. Review the output and click the Next button when prompted. The installer starts all the relevant processes and brings all the service groups online. If you have a network connection to the Internet, the installer checks for updates. If updates are discovered, you can apply them now.
7 Upgrade the application.
Chapter 24
Upgrading using Live Upgrade

About Live Upgrade
Supported upgrade paths for Live Upgrade
Before you upgrade VCS using Solaris Live Upgrade
Upgrading VCS and Solaris using Live Upgrade
Upgrading Solaris using Live Upgrade
Upgrading VCS using Live Upgrade
Administering boot environments
Upgrade the operating system and VCS. See Upgrading VCS and Solaris using Live Upgrade on page 306.
Upgrade the operating system. See Upgrading Solaris using Live Upgrade on page 313.
Upgrade VCS. See Upgrading VCS using Live Upgrade on page 315.
Figure 24-1 illustrates an example of an upgrade of Veritas products from 5.1 SP1 to 6.0, and the operating system from Solaris 9 to Solaris 10.
Figure 24-1 [diagram]: Create the alternate boot environment from the primary boot environment while the server runs. The Veritas packages and other packages are upgraded on the alternate boot environment.
Some service groups (failover and parallel) may be online in this cluster and they are not affected by the Live Upgrade process. The only downtime experienced is when the server is rebooted to boot into the alternate boot disk.
Storage Foundation product. After you reboot the alternate root, you can install VRTSodm. The VCS version must be at least 5.0 MP3. Symantec requires that both global and non-global zones run the same version of Veritas products.
Note: If you use Live Upgrade on a system where non-global zones are configured, make sure that all the zones are in the installed state before you start Live Upgrade.
You can use Live Upgrade in the following virtualized environments:
Table 24-1
Solaris native zones: Perform Live Upgrade to upgrade the global zone. See Upgrading VCS and Solaris using Live Upgrade on page 306. VCS 6.0 does not support branded zones. You must migrate applications that run on Solaris 8 or Solaris 9 branded zones to Solaris 10 non-global zones if the applications need to be managed by VCS.
Logical domains (control domain or guest domain): Perform Live Upgrade on the control domain only. Perform Live Upgrade on the guest domain only. Use the standard Live Upgrade procedure for both types of logical domains. See Upgrading VCS and Solaris using Live Upgrade on page 306.
1 Make sure that the VCS installation media and the operating system installation images are available and on hand.
2 On the nodes to be upgraded, select an alternate boot disk that is at least the same size as the root partition of the primary boot disk. If the primary boot disk is mirrored, you need to break off the mirror for the alternate boot disk.
3 Before you perform the Live Upgrade, take offline any services that involve non-root file systems. This prevents file systems from being copied to the alternate boot environment that could potentially cause a root file system to run out of space.
4 On the primary boot disk, patch the operating system for Live Upgrade. Patch 137477-01 is required. Verify that this patch is installed.
5 The version of the Live Upgrade packages must match the version of the operating system to which you want to upgrade on the alternate boot disk. If you are upgrading the Solaris operating system, do the following steps:
Remove the installed Live Upgrade packages for the current operating system version: All Solaris versions: SUNWluu, SUNWlur packages. Solaris 10 update 7 or later also requires: SUNWlucfg package. Solaris 10 zones or Branded zones also requires: SUNWluzone package.
From the new Solaris installation image, install the new versions of the following Live Upgrade packages: All Solaris versions: SUNWluu, SUNWlur, and SUNWlucfg packages. Solaris 10 zones or Branded zones also requires: SUNWluzone package.
Note: While you can perform Live Upgrade in the presence of branded zones, they must be halted, and the branded zones themselves are not upgraded.
Solaris installation media comes with a script for this purpose named liveupgrade20. Find the script at /cdrom/solaris_release/Tools/Installers/liveupgrade20. If scripting, you can use:
# /cdrom/solaris_release/Tools/Installers/liveupgrade20 \ -nodisplay -noconsole
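As an alternative to the liveupgrade20 script, the package replacement that step 5 describes can also be done by hand; a minimal sketch, assuming Solaris 10 and an installation image mounted under /cdrom/cdrom0 (path and package list are illustrative and depend on your Solaris version), resembles:

# pkgrm SUNWlucfg SUNWluu SUNWlur
# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWluu SUNWlur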
Symantec provides the vxlustart script that runs a series of commands to create the alternate boot disk for the upgrade. To preview the commands, specify the vxlustart script with the -V option. Symantec recommends that you preview the commands to ensure there are no problems before beginning the Live Upgrade process. The vxlustart script is located on the distribution media, in the scripts directory.
# cd /cdrom/scripts
# ./vxlustart -V -u targetos_version -s osimage_path -d diskname

-V  Lists the commands to be executed during the upgrade process. The -V option is a preview option without execution. The -v option displays the commands as the script executes them.
-u  Specifies the operating system version for the upgrade on the alternate boot disk. For example, use 5.9 for Solaris 9 and 5.10 for Solaris 10.
-s  Indicates the path to the operating system image to be installed on the alternate boot disk. If you are upgrading the operating system, specify the path to the new operating system version. If you are not upgrading the operating system, and you specify the -s option, the vxlustart -V command can compare the patches that are installed on the specified image with the patches installed on the primary boot disk. If you are not upgrading the operating system, you can omit the -s option; the operating system is cloned from the primary boot disk.
-d  Indicates the name of the alternate boot disk on which you intend to upgrade. If you do not specify this option with the script, you are prompted for the disk information.
-v  Indicates verbose; the executing commands display before they run.
-Y  Indicates a default yes with no questions asked.
-D  Prints with debug option on, and is for debugging.
-F  Specifies the root disk's file system, where the default is ufs.
-t  Specifies the number of CDs involved in upgrade.
-r  Remounts the alternate disk if the machine crashes or reboots before the upgrade completes.
For example, to preview the commands to upgrade the Veritas products only:
# ./vxlustart -V -u 5.10 -d disk_name
If the specified image is missing patches that are installed on the primary boot disk, note the patch numbers. To ensure that the alternate boot disk is the same as the primary boot disk, you need to install any missing patches on the alternate boot disk.
In the procedure examples, the primary or current boot environment resides on Disk0 (c0t0d0) and the alternate or inactive boot environment resides on Disk1 (c0t1d0).
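For example, to preview the commands for an upgrade to Solaris 10 on the alternate boot disk from this example (the operating system image path shown is illustrative), the command resembles:

# ./vxlustart -V -u 5.10 -s /mnt/Solaris_10u6 -d c0t1d0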
Prepare to upgrade using Solaris Live Upgrade. See Before you upgrade VCS using Solaris Live Upgrade on page 303.
Create a new boot environment on the alternate boot disk. See Creating a new boot environment on the alternate boot disk on page 307.
Upgrade to VCS 6.0 on the alternate boot environment manually or using the installer. To upgrade VCS using the installer, see Upgrading VCS using the installer for a Live Upgrade on page 308. To upgrade VCS manually, see Upgrading VCS manually on page 309.
Switch the alternate boot environment to be the new primary. See Completing the Live Upgrade on page 311.
Verify Live Upgrade of VCS. See Verifying Live Upgrade of VCS on page 312.
The Solaris operating system on the alternate boot disk is upgraded, if you have chosen to upgrade the operating system. A new boot environment is created on the alternate boot disk by cloning the primary boot environment.
To create a new boot environment on the alternate boot disk
Perform the steps in this procedure on each node in the cluster.
View the list of VxVM disks on which you want to create the new boot environment.
# vxdisk list
On each node, run one of the following commands: To upgrade the operating system, by itself or together with upgrading the Veritas products:
# ./vxlustart -v -u targetos_version \ -s osimage_path -d disk_name
See Before you upgrade VCS using Solaris Live Upgrade on page 303. Refer to the step on command options. For example, to upgrade to Solaris 10 update 6:
# ./vxlustart -v -u 5.10 -s /mnt/Solaris_10u6
Review the output and note the new mount points. If the system is rebooted before completion of the upgrade or if the mounts become unmounted, you may need to remount the disks. If you need to remount, run the command:
# vxlustart -r -u targetos_version -d disk_name
After the alternate boot disk is created and mounted on /altroot.5.10, install any operating system patches or packages on the alternate boot disk that are required for the Veritas product installation:
# pkgadd -R /altroot.5.10 -d pkg_dir
1 Insert the product disc with VCS 6.0 or access your copy of the software on the network.
2 Run the installer script specifying the root path as the alternate boot disk:
# ./installer -upgrade -rootpath /altroot.5.10
See Removing and reinstalling VCS using the installer on page 314.
Enter the names of the nodes that you want to upgrade to VCS 6.0. The installer displays the list of packages to be installed or upgraded on the nodes.
Press Return to continue with the installation. During Live Upgrade, if the OS of the alternate boot disk is upgraded, the installer will not update the VCS configurations for Oracle, Netlsnr, and Sybase resources. If cluster configurations include these resources, you will be prompted to run a list of commands to manually update the configurations after the cluster restarts from the alternate boot disks.
Verify that the version of the Veritas packages on the alternate boot disk is 6.0.
# pkginfo -R /altroot.5.10 -l VRTSpkgname
For example:
# pkginfo -R /altroot.5.10 -l VRTSvcs
Confirm that the vxlustart script has mounted the secondary (alternate) disk to /altroot.5.10.
# mount
Or
# df -k
Remove VCS packages on the alternate boot disk in the following order:
# pkgrm -R /altroot.5.10 \
VRTScmcc VRTScmcs VRTScssim VRTScscm \ VRTSvcsmn VRTSacclib VRTSweb VRTScscw \ VRTSjre15 VRTSvcsag VRTSvcsmg VRTSvcs \ VRTSvxfen VRTSgab VRTSllt VRTSspt VRTSat \ VRTSpbx VRTSicsco VRTSvlic VRTSperl
The -R option removes the packages from the root path /altroot.5.10. Package lists vary from release to release.
Install the VCS packages in the following order one at a time to the alternate boot disk using the pkgadd command. Note that this package list is an example. Full package lists vary from release to release and by product option.
VRTSvlic.pkg VRTSperl.pkg VRTSspt.pkg VRTSat.pkg VRTSllt.pkg VRTSgab.pkg VRTSvxfen.pkg VRTSamf.pkg VRTSvcs.pkg VRTScps.pkg VRTSvcsag.pkg VRTSvcsea.pkg
For example:
# pkgadd -R /altroot.5.10 -d package_name.pkg
where you replace package_name.pkg with a package's name, for example VRTSvcs.pkg.
# pkgadd -R /altroot.5.10 -d VRTSvcs.pkg
In the /media directory, list the patches for each platform. Enter the following:
# ./installer -listpatches
Install the patches on the alternative boot disk using the patchadd command.
# patchadd -R /altroot.5.10 patch_name
For example:
# patchadd -R /altroot.5.10 143282-01
Verify that the version of the packages on the alternate boot disk is 6.0.
# pkginfo -R /altroot.5.10 -l VRTSvcs
Run the following command to export the root path installation environment variable.
# export INSTALL_ROOT_PATH=/altroot.5.10
Run the following command on the alternate root path of any one node in the cluster to configure a VCS cluster UUID:
# /altroot.5.10/opt/VRTSvcs/bin/uuidconfig.pl -clus -configure \ -use_llthost
The -use_llthost option indicates that the /etc/llthosts file is used to determine the names of the nodes in the cluster. Alternatively, you can specify the node names instead of the file name.
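For example, with the node names specified directly (galaxy and nebula are illustrative names), the command might resemble:

# /altroot.5.10/opt/VRTSvcs/bin/uuidconfig.pl -clus -configure \
galaxy nebula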
The alternate boot environment is activated. The system is booted from the alternate boot disk.
Complete the Live upgrade process. Enter the following command on all nodes in the cluster.
# ./vcslufinish -u target_os_version
Live Upgrade finish on the Solaris release <5.10>
If the system crashes or reboots before Live Upgrade completes successfully, you can remount the alternate disk using the following command:
# ./vxlustart -r -u target_os_version
Note: Do not use the reboot, halt, or uadmin commands to reboot the system. Use either the init or the shutdown commands to enable the system to boot using the alternate boot environment. You can ignore the following error if it appears: ERROR: boot environment <dest.13445> already mounted on </altroot.5.10>.
# shutdown -g0 -y -i6
If you want to upgrade CP server systems that use VCS or SFHA to this version, make sure that you upgraded all application clusters to this version. Then, upgrade VCS or SFHA on the CP server systems. For instructions to upgrade VCS or SFHA on the CP server systems, see the VCS or SFHA Installation Guide.
If the alternate boot environment is not active, you can revert to the primary boot environment. See Reverting to the primary boot environment on page 316.
Perform other verification as required to ensure that the new boot environment is configured correctly. For example, verify the version in the /etc/release file and verify the VRTSdbac version. In a zone environment, verify the zone configuration.
Preparing to upgrade using Solaris Live Upgrade. See Before you upgrade VCS using Solaris Live Upgrade on page 303.
Creating a new boot environment on the alternate boot disk. See Creating a new boot environment on the alternate boot disk on page 307.
Removing and reinstalling VCS 6.0 on the alternate boot environment, in one of the following ways: Using manual steps: See Upgrading VCS manually on page 309. Using the installer: See Removing and reinstalling VCS using the installer on page 314. Note: Do NOT configure the VCS 6.0
Switching the alternate boot environment to be the new primary. See Completing the Live Upgrade on page 311.
Verifying Live Upgrade of VCS. See Verifying Live Upgrade of VCS on page 312.
VCS 6.0 is installed on the alternate boot disk, with the correct binaries for the new operating system version
Uninstall using the installer script, specifying the alternate boot disk as the root path:
# /opt/VRTS/install/uninstallvcs -rootpath altrootpath
For example:
# /opt/VRTS/install/uninstallvcs -rootpath /altroot.5.10
Enter the names of the nodes that you want to uninstall. The installer displays the list of packages that will be uninstalled.
3 Press Return to continue.
4 Insert the product disc and run the following commands:
# ./installvcs -install -rootpath altrootpath
For example:
# cd /cdrom/cluster_server # ./installvcs -install -rootpath /altroot.5.10
5 Press Return to continue.
6 Verify that the version of the Veritas packages on the alternate boot disk is 6.0.
# pkginfo -R /altroot.5.10 -l VRTSpkgname
For example:
# pkginfo -R /altroot.5.10 -l VRTSvcs
Prepare to upgrade using Solaris Live Upgrade. See Before you upgrade VCS using Solaris Live Upgrade on page 303.
Create a new boot environment on the alternate boot disk. See Creating a new boot environment on the alternate boot disk on page 307.
Upgrade to VCS 6.0 on the alternate boot environment manually or using the installer. To upgrade VCS manually, see Upgrading VCS manually on page 309. To upgrade VCS using the installer, see Upgrading VCS using the installer for a Live Upgrade on page 308.
Switch the alternate boot environment to be the new primary. See Completing the Live Upgrade on page 311.
Verify Live Upgrade of VCS. See Verifying Live Upgrade of VCS on page 312.
Failure to perform this step can result in the operating system booting from the alternate boot environment after the reboot. The vcslufinish script displays the way to revert to the primary boot environment. Here is a sample output.
Notes: ****************************************************************** In case of a failure while booting to the target BE, the following process needs to be followed to fallback to the currently working boot environment: 1. Enter the PROM monitor (ok prompt). 2. Change the boot device back to the original boot environment by typing: setenv boot-device /pci@1c,600000/scsi@2/disk@0,0:a 3. Boot to the original boot environment by typing: boot *******************************************************************
In this example, the primary boot disk is currently (source.2657). You want to activate the alternate boot disk (dest.2657)
Unmount any file systems that are mounted on the alternate root disk (dest.2657).
# lufslist dest.2657
boot environment name: dest.2657

Filesystem          fstype  device size   Mounted on  Mount Options
------------------  ------  ------------  ----------  -------------
/dev/dsk/c0t0d0s1   swap    4298342400    -           -
/dev/dsk/c0t0d0s0   ufs     15729328128   /           -
/dev/dsk/c0t0d0s5   ufs     8591474688    /var        -
/dev/dsk/c0t0d0s3   ufs     5371625472    /vxfs       -
# luumount dest.2657
The system automatically selects the boot environment entry that was activated.
Live Upgrade. You must perform the following procedures when you perform a manual Live Upgrade.
To switch the boot environment
In this example, the primary boot disk is currently (source.2657). You want to activate the alternate boot disk (dest.2657)
Unmount any file systems that are mounted on the alternate root disk (dest.2657).
# lufslist dest.2657
boot environment name: dest.2657

Filesystem          fstype  device size   Mounted on  Mount Options
------------------  ------  ------------  ----------  -------------
/dev/dsk/c0t0d0s1   swap    4298342400    -           -
/dev/dsk/c0t0d0s0   ufs     15729328128   /           -
/dev/dsk/c0t0d0s5   ufs     8591474688    /var        -
/dev/dsk/c0t0d0s3   ufs     5371625472    /vxfs       -
# luumount dest.2657
When the system boots up, the GRUB menu displays the following entries for the Live Upgrade boot environments:
source.2657 dest.2657
The system automatically selects the boot environment entry that was activated.
Chapter 25

Stopping VCS
Stopping fencing, GAB, LLT, and unloading the kernel modules
Removing packages and patches
Upgrading Solaris operating system
Reinstalling fencing, GAB, and LLT from the software disc
Restarting VCS
To stop VCS
Move all service groups from the node you plan to upgrade to another system. Keep services from failing over to this server. On the system where you plan to upgrade, type:
# hasys -freeze -persistent -evacuate upgrade_server
Check if all service groups and resources are offline on the system and online on the other system. Type:
# hastatus -summary
Close the configuration and unload the VCS services on the system that you plan to upgrade. On the system that you plan to upgrade, type:
# haconf -dump -makero # hastop -local
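To check which GAB ports are still in use at this point, you can run the gabconfig command; for example:

# /sbin/gabconfig -a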
Output resembles:
GAB Port Memberships
=======================================
Port a gen   23dc0001 membership 01
Unconfigure fencing.
# vxfenconfig -U
Unload the FENCING module from the kernel. Perform the following:
Unconfigure GAB.
# gabconfig -U
6 Type Y on each system in response to the message.
7 Unload the LLT module from the kernel:
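On Solaris, you typically unload a kernel module by looking up its module ID with modinfo and passing that ID to modunload; a sketch (the module ID reported by modinfo varies by system) resembles:

# modinfo | grep llt
# modunload -i module_id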
On each node, use the pkgrm command to remove the fencing, GAB, and LLT packages.
# pkgrm VRTSvxfen VRTSgab VRTSllt
1 Follow the Oracle installation guide to upgrade the operating system kernel to the new version of Solaris.
2 As the system comes up, enter single-user mode.
To reinstall fencing, GAB, and LLT from the software disc and restart
1 In single-user mode, log on as superuser on the system that you have upgraded.
2 Check whether the /tmp directory is mounted.
# mount
Insert the software disc with the VCS software into a system drive where you have upgraded. The Solaris volume-management software automatically mounts the disc as /cdrom/cdrom0. Type the command:
# cd /cdrom/cdrom0
Copy the package files from the software disc to the temporary directory:
# cp -r pkgs/VRTSllt.pkg /tmp/install # cp -r pkgs/VRTSgab.pkg /tmp/install # cp -r pkgs/VRTSvxfen.pkg /tmp/install
Install the LLT, GAB, and fencing packages and patches. As you enter the command, be sure to install the packages and patches in the order shown:
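A minimal sketch of such an installation, assuming the packages were copied to /tmp/install as in the previous step and installing LLT, then GAB, then fencing, resembles:

# pkgadd -d /tmp/install/VRTSllt.pkg
# pkgadd -d /tmp/install/VRTSgab.pkg
# pkgadd -d /tmp/install/VRTSvxfen.pkg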
To restart VCS
Verify that VCS services are running on the upgraded server. On the upgraded server, type:
# ps -ef | grep ha
root  576  1  0 16:54:12 ?  0:02 /opt/VRTSvcs/bin/had
root  578  1  0 16:54:13 ?  0:00 /opt/VRTSvcs/bin/hashadow
If the VCS services are not running, reload the VCS services. Type:
# hastart
Unfreeze the upgraded server and save the configuration. On the upgraded server, type:
# hasys -unfreeze -persistent upgraded_server # haconf -dump -makero
Section
Post-installation tasks
Chapter 26. Performing post-installation tasks Chapter 27. Installing or upgrading VCS components Chapter 28. Verifying the VCS installation
Chapter 26
Performing post-installation tasks

About enabling LDAP authentication for clusters that run in secure mode
Accessing the VCS documentation
Removing permissions for communication
About enabling LDAP authentication for clusters that run in secure mode
Symantec Product Authentication Service (AT) supports LDAP (Lightweight Directory Access Protocol) user authentication through a plug-in for the authentication broker. AT supports all common LDAP distributions such as Oracle Directory Server, Netscape, OpenLDAP, and Windows Active Directory. For a cluster that runs in secure mode, you must enable the LDAP authentication plug-in if the VCS users belong to an LDAP domain. See Enabling LDAP authentication for clusters that run in secure mode on page 331. If you have not already added VCS users during installation, you can add the users later. See the Veritas Cluster Server Administrator's Guide for instructions to add VCS users. Figure 26-1 depicts the VCS cluster communication with the LDAP servers when clusters run in secure mode.
Figure 26-1 [diagram]: VCS cluster communication with the LDAP servers when the cluster runs in secure mode. The figure shows the user, the VCS node (authentication broker), and the LDAP server (such as OpenLDAP or Windows Active Directory), with the following flow:
1. When a user runs HA commands, AT initiates user authentication with the authentication broker.
2. The authentication broker on the VCS node performs an LDAP bind operation with the LDAP directory.
3. Upon a successful LDAP bind, AT retrieves group information from the LDAP directory.
4. AT issues the credentials to the user to proceed with the command.
The LDAP schema and syntax for LDAP commands (such as, ldapadd, ldapmodify, and ldapsearch) vary based on your LDAP implementation. Before adding the LDAP domain in Symantec Product Authentication Service, note the following information about your LDAP environment:
UserObjectClass (the default is posixAccount)
UserObject Attribute (the default is uid)
User Group Attribute (the default is gidNumber)
Group Object Class (the default is posixGroup)
GroupObject Attribute (the default is cn)
Group GID Attribute (the default is gidNumber)
Group Membership Attribute (the default is memberUid)
URL to the LDAP Directory
Distinguished name for the user container (for example, UserBaseDN=ou=people,dc=comp,dc=com)
Add the LDAP domain to the AT configuration using the vssat command. The following example adds the LDAP domain, MYENTERPRISE:
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat addldapdomain \
--domainname "MYENTERPRISE.symantecdomain.com" \
--server_url "ldap://my_openldap_host.symantecexample.com" \
--user_base_dn "ou=people,dc=symantecdomain,dc=myenterprise,dc=com" \
--user_attribute "cn" --user_object_class "account" \
--user_gid_attribute "gidNumber" \
--group_base_dn "ou=group,dc=symantecdomain,dc=myenterprise,dc=com" \
--group_attribute "cn" --group_object_class "posixGroup" \
--group_gid_attribute "member" \
--admin_user "cn=manager,dc=symantecdomain,dc=myenterprise,dc=com" \
--admin_user_password "password" --auth_type "FLAT"
Verify that you can successfully authenticate an LDAP user on the VCS nodes. You must have a valid LDAP user ID and password to run the command. In the following example, authentication is verified for the MYENTERPRISE domain for the LDAP user, vcsadmin1.
galaxy# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat authenticate \
--domain ldap:MYENTERPRISE.symantecdomain.com \
--prplname vcsadmin1 --broker galaxy:14149

Enter password for vcsadmin1: ##########

authenticate
-------------------------------------------
Authenticated User vcsadmin1
----------------------
If you want to enable group-level authentication, you must run the following command:
# hauser -addpriv \ ldap_group@ldap_domain AdministratorGroup
VCS_DOMAIN=myenterprise.symantecdomain.com
VCS_DOMAINTYPE=ldap
For example, for the Bourne Shell (sh) or the Korn shell (ksh), run the following commands:
# export VCS_DOMAIN=myenterprise.symantecdomain.com # export VCS_DOMAINTYPE=ldap
Similarly, you can use the same LDAP user credentials to log on to the VCS node using the VCS Cluster Manager (Java Console).
To enable LDAP authentication on other nodes in the cluster, perform the procedure on each of the nodes in the cluster.
To enable Windows Active Directory authentication for clusters that run in secure mode
Run the LDAP configuration tool atldapconf using the -d option. The -d option discovers and retrieves an LDAP properties file which is a prioritized attribute list.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf -d \ -s domain_controller_name_or_ipaddress \ -u domain_user -g domain_group
For example:
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf \
-d -s 192.168.20.32 -u Administrator -g "Domain Admins"

Search User provided is invalid or Authentication is required to
proceed further. Please provide authentication information for LDAP server.

Username/Common Name: symantecdomain\administrator
Password:

Attribute file created.
Run the LDAP configuration tool atldapconf using the -c option. The -c option creates a CLI file to add the LDAP domain.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf \ -c -d windows_domain_name
For example:
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf \
-c -d symantecdomain.com

Attribute list file not provided, using default AttributeList.txt.
CLI file name not provided, using default CLI.txt.
CLI for addldapdomain generated.
Run the LDAP configuration tool atldapconf using the -x option. The -x option reads the CLI file and executes the commands to add a domain to the AT.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atldapconf -x
List the LDAP domains to verify that the Windows Active Directory server integration is complete.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat listldapdomains

Domain Name :         symantecdomain.com
Server URL :          ldap://192.168.20.32:389
SSL Enabled :         No
User Base DN :        CN=people,DC=symantecdomain,DC=com
User Object Class :   account
User Attribute :      cn
User GID Attribute :  gidNumber
Group Base DN :       CN=group,DC=symantecdomain,DC=com
Group Object Class :  group
Group Attribute :     cn
Group GID Attribute : cn
Auth Type :           FLAT
Admin User :
Admin User Password :
Search Scope :        SUB
VCS_DOMAIN=symantecdomain.com
VCS_DOMAINTYPE=ldap
For example, for the Bourne Shell (sh) or the Korn shell (ksh), run the following commands:
# export VCS_DOMAIN=symantecdomain.com # export VCS_DOMAINTYPE=ldap
Similarly, you can use the same LDAP user credentials to log on to the VCS node using the VCS Cluster Manager (Java Console).
To enable LDAP authentication on other nodes in the cluster, perform the procedure on each of the nodes in the cluster.
Copy the PDF from the software disc (cluster_server/docs/) to the directory /opt/VRTS/docs.
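For example, with the software disc mounted at /cdrom/cdrom0 (the mount point is illustrative), a copy command resembles:

# cp /cdrom/cdrom0/cluster_server/docs/*.pdf /opt/VRTS/docs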
Chapter 27
Installing or upgrading VCS components

Installing the Java Console
Upgrading the Java Console
Installing VCS Simulator
Upgrading VCS Simulator
Note: Make sure that you are using an operating system version that supports JRE 1.6.
Pentium II 300 megahertz
256 megabytes of RAM
800x600 display resolution
8-bit color depth of the monitor
A graphics card that is capable of 2D images
Note: Symantec recommends using Pentium III 400MHz or higher, 256MB RAM or higher, and 800x600 display resolution or higher. The version of the Java 2 Runtime Environment (JRE) requires 32 megabytes of RAM. Symantec recommends using the following hardware:
48 megabytes of RAM
16-bit color mode
The KDE and the KWM window managers that are used with displays set to local hosts
2 Download the Java GUI utility from http://go.symantec.com/vcsm_download to a temporary directory.
3 Go to the temporary directory and unzip the compressed package file using the gunzip utility:
# cd /tmp/install
# gunzip VRTScscm.tar.gz
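The next actions are typically to extract the tar file and add the package; a minimal sketch, run from the same temporary directory, resembles:

# tar -xvf VRTScscm.tar
# pkgadd -d . VRTScscm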
1 Download the Java GUI utility from http://go.symantec.com/vcsm_download to a temporary directory.
2 Extract the zipped file to a temporary folder.
3 From this extracted folder, double-click setup.exe.
4 The Veritas Cluster Manager Install Wizard guides you through the installation process.
1 Log in as superuser on the node where you intend to install the package.
2 Remove the GUI from the previous installation.
# pkgrm VRTScscm
Install the VCS Java console. See Installing the Java Console on Solaris on page 340.
1 Stop Cluster Manager (Java Console) if it is running.
2 Remove Cluster Manager from the system.
From the Control Panel, double-click Add/Remove Programs.
Select Veritas Cluster Manager.
Click Add/Remove.
Follow the uninstall wizard instructions.
Install the new Cluster Manager. See Installing the Java Console on a Windows system on page 341.
Note: Make sure that you are using an operating system version that supports JRE 1.6 or later.
Download VCS Simulator from the following location to a temporary directory. http://www.symantec.com/business/cluster-server and click Utilities.
2 Extract the compressed files to another directory.
3 Navigate to the path of the Simulator installer file: \cluster_server\windows\VCSWindowsInstallers\Simulator
4 Double-click the installer file.
5 Read the information in the Welcome screen and click Next.
6 In the Destination Folders dialog box, click Next to accept the suggested installation path or click Change to choose a different location.
7 In the Ready to Install the Program dialog box, click Back to make changes to your selections or click Install to proceed with the installation.
8 In the Installshield Wizard Completed dialog box, click Finish.
Content:
Information about attributes associated with VCS objects
VCS Simulator binaries
Files for the default cluster configuration
A sample cluster configuration, which serves as a template for each new cluster configuration
Various templates that are used by the Java Console
The types.cf files for all supported platforms
Contains another directory called types. This directory contains assorted resource type definitions that are useful for the Simulator. The type definition files are present in platform-specific sub directories.
VCS Simulator creates a directory for every new simulated cluster and copies the contents of the sample_clus directory. Simulator also creates a log directory within each cluster directory for logs that are associated with the cluster.
1 Stop all instances of VCS Simulator.
2 Stop VCS Simulator, if it is running.
3 Remove VCS Simulator from the system.
From the Control Panel, double-click Add/Remove Programs.
Select VCS Simulator.
Click Add/Remove.
Follow the uninstall wizard instructions.
Install the new Simulator. See Installing VCS Simulator on Windows systems on page 342.
Chapter 28
Verifying the VCS installation

About verifying the VCS installation
About the cluster UUID
Verifying the LLT, GAB, and VCS configuration files
Verifying LLT, GAB, and cluster operation
Performing a postcheck on a node
Verify the content of the configuration files. See About the LLT and GAB configuration files on page 453. See About the VCS configuration files on page 457.
1 Log in to any node in the cluster as superuser.
2 Make sure that the PATH environment variable is set to run the VCS commands. See Setting the PATH variable on page 68.
Verify the cluster operation. See Verifying the cluster on page 351.
Verifying LLT
Use the lltstat command to verify that links are active for LLT. If LLT is configured correctly, this command shows all the nodes in the cluster. The command also returns information about the links for LLT for the node on which you typed the command. Refer to the lltstat(1M) manual page for more information. To verify LLT
1 Log in as superuser on the node galaxy.
2 Run the lltstat command on the node galaxy to view the status of LLT.
lltstat -n

The output resembles:

LLT node information:
    Node           State   Links
  * 0 galaxy       OPEN    2
    1 nebula       OPEN    2

Each node has two links and each node is in the OPEN state. The asterisk (*) denotes the node on which you typed the command. If LLT does not operate, the command does not return any LLT links information. If only one network is connected, the command returns the following LLT statistics information:

LLT node information:
    Node           State   Links
  * 0 galaxy       OPEN    2
    1 nebula       OPEN    2
    2 saturn       OPEN    1
3 Log in as superuser on the node nebula.
4 Run the lltstat command on the node nebula to view the status of LLT.

lltstat -n

The output resembles:

LLT node information:
    Node           State   Links
    0 galaxy       OPEN    2
  * 1 nebula       OPEN    2
To view additional information about LLT, run the lltstat -nvv command on each node. For example, run the following command on the node galaxy in a two-node cluster:
lltstat -nvv active
The command reports the status on the two active nodes in the cluster, galaxy and nebula. For each correctly configured node, the information must show the following:
A state of OPEN
A status for each link of UP
An address for each link
However, the output in the example shows different details for the node nebula. The private network connection is possibly broken or the information in the /etc/llttab file may be incorrect.
To obtain information about the ports open for LLT, type lltstat -p on any node. For example, type lltstat -p on the node galaxy in a two-node cluster:
lltstat -p
Verifying GAB
Verify the GAB operation using the gabconfig -a command. This command returns the GAB port membership information. The ports indicate the following:
Port a: Nodes have GAB communication. gen a36e0003 is a randomly generated number. membership 01 indicates that nodes 0 and 1 are connected.
Port b: Indicates that the I/O fencing driver is connected to GAB port b. Note: Port b appears in the gabconfig command output only if you had configured I/O fencing after you configured VCS. gen a23da40d is a randomly generated number. membership 01 indicates that nodes 0 and 1 are connected.
Port h: VCS is started. gen fd570002 is a randomly generated number. membership 01 indicates that nodes 0 and 1 are both running VCS.
For more information on GAB, refer to the Veritas Cluster Server Administrator's Guide.
To verify GAB
To verify that GAB operates, type the following command on each node:
/sbin/gabconfig -a
If GAB operates, the following GAB port membership information is returned: For a cluster where I/O fencing is not configured:
GAB Port Memberships
===================================
Port a gen   a36e0003 membership 01
Port h gen   fd570002 membership 01
Note that port b appears in the gabconfig command output only if you had configured I/O fencing. You can also use the vxfenadm -d command to verify the I/O fencing configuration.
If GAB does not operate, the command does not return any GAB port membership information:
GAB Port Memberships
===================================
If only one network is connected, the command returns the following GAB port membership information:
GAB Port Memberships
===================================
Port a gen   a36e0003 membership 01
Port a gen   a36e0003 jeopardy    ;1
Port h gen   fd570002 membership 01
Port h gen   fd570002 jeopardy    ;1
To verify the cluster, check the status of the ClusterService group. The hastatus -summary output on this two-node cluster resembles:

-- SYSTEM STATE
-- System          State          Frozen
A  galaxy          RUNNING        0
A  nebula          RUNNING        0

-- GROUP STATE
-- Group           System    Probed    AutoDisabled    State
B  ClusterService  galaxy    Y         N               ONLINE
B  ClusterService  nebula    Y         N               OFFLINE

Note the following information in the output:
The system state: If the value of the system state is RUNNING, the cluster is successfully started.
The ClusterService group state: In the sample output, the group state lists the ClusterService group, which is ONLINE on galaxy and OFFLINE on nebula.
Note: The example in the following procedure is for SPARC. x64 clusters have different command output.
To verify the cluster nodes
The example shows the output when the command is run on the node galaxy. The list continues with similar information for nebula (not shown) and any other nodes in the cluster.
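The listing that follows is the kind of per-system attribute output that the hasys -display command produces; for example:

# hasys -display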
#System    Attribute              Value
galaxy     AgentsStopped          0
galaxy     AvailableCapacity      100
galaxy     CPUBinding             BindTo None CPUNumber 0
galaxy     CPUThresholdLevel      Critical 90 Warning 80 Note 70 Info 60
galaxy     CPUUsage               0
galaxy     CPUUsageMonitoring     Enabled 0 ActionThreshold 0 ActionTimeLimit 0 Action NONE NotifyThreshold 0 NotifyTimeLimit 0
galaxy     Capacity               100
galaxy     ConfigBlockCount       130
galaxy     ConfigCheckSum         46688
galaxy     ConfigDiskState        CURRENT
galaxy     ConfigFile             /etc/VRTSvcs/conf/config
galaxy     ConfigInfoCnt          0
galaxy     ConfigModDate          Thu Sep 22 07:14:23 CDT 2011
galaxy     ConnectorState         Down
galaxy     CurrentLimits
galaxy     DiskHbStatus
galaxy     DynamicLoad            0
galaxy     EngineRestarted        0
galaxy     EngineVersion          6.0.00.0
galaxy     FencingWeight          0
galaxy     Frozen                 0
galaxy     GUIIPAddr
galaxy     HostUtilization
galaxy     LLTNodeId
galaxy     LicenseType
galaxy     Limits
galaxy     LinkHbStatus
galaxy     LoadTimeCounter
galaxy     LoadTimeThreshold
galaxy     LoadWarningLevel
galaxy     NoAutoDisable
galaxy     NodeId
galaxy     OnGrpCnt
galaxy     PhysicalServer
galaxy     ShutdownTimeout        600
galaxy     SourceFile             ./main.cf
galaxy     SwapThresholdLevel     Critical 90 Warning 80 Note 70 Info 60
galaxy     SysInfo                Solaris:galaxy,Generic_118558-11,5.9,sun4u
galaxy     SysName                galaxy
galaxy     SysState               RUNNING
galaxy     SystemOwner
galaxy     SystemRecipients
galaxy     TFrozen                0
galaxy     TRSE                   0
galaxy     UpDownState            Up
galaxy     UserInt                0
galaxy     UserStr
galaxy     VCSFeatures            NONE
galaxy     VCSMode                VCS
The heartbeat link does not exist.
The heartbeat link cannot communicate.
The heartbeat link is a part of a bonded or aggregated NIC.
A duplicated cluster ID exists.
The VRTSllt pkg version is not consistent on the nodes.
The llt-linkinstall value is incorrect.
The llthosts(4) or llttab(4) configuration is incorrect.
The /etc/gabtab file is incorrect.
The incorrect GAB linkinstall value exists.
The VRTSgab pkg version is not consistent on the nodes.
The main.cf file or the types.cf file is invalid.
The /etc/VRTSvcs/conf/sysname file is not consistent with the hostname.
The cluster UUID does not exist.
The uuidconfig.pl file is missing.
The VRTSvcs pkg version is not consistent on the nodes.
The /etc/vxfenmode file is missing or incorrect.
The /etc/vxfendg file is invalid.
The vxfen link-install value is incorrect.
The VRTSvxfen pkg version is not consistent.
The postcheck option can help you troubleshoot the following SFHA or SFCFSHA issues:
Volume Manager cannot start because the Volboot file is not loaded.
Volume Manager cannot start because no license exists.
Cluster Volume Manager cannot start because the CVM configuration is incorrect in the main.cf file. For example, the Autostartlist value is missing on the nodes.
Cluster Volume Manager cannot come online because the node ID in the /etc/llthosts file is not consistent.
Cluster Volume Manager cannot come online because Vxfen is not started.
Cluster Volume Manager cannot start because gab is not configured.
Cluster Volume Manager cannot come online because of a CVM protocol mismatch.
Cluster Volume Manager group name has changed from "cvm", which causes CVM to go offline.
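The postcheck is run from the installer. For example, to run it against a node named galaxy (the node name and installer location are illustrative), the command resembles:

# /opt/VRTS/install/installvcs -postcheck galaxy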
Section
Uninstalling VCS
Chapter 29. Uninstalling VCS using the installer Chapter 30. Uninstalling VCS using response files Chapter 31. Manually uninstalling VCS
Chapter 29
Uninstalling VCS using the installer

Preparing to uninstall VCS
Uninstalling VCS using the script-based installer
Uninstalling VCS with the Veritas Web-based installer
Removing language packages using the uninstaller program
Removing the CP server configuration using the removal script
Before you remove VCS from any node in the cluster, shut down the applications that depend on VCS. For example, applications such as Java Console or any high availability agents for VCS.
Before you remove VCS from fewer than all nodes in a cluster, stop the service groups on the nodes from which you uninstall VCS. You must also reconfigure VCS on the remaining nodes. See About adding and removing nodes on page 385.
If you have manually edited any of the VCS configuration files, you need to reformat them. See Reformatting VCS configuration files on a stopped cluster on page 72.
Make sure that the communication exists between systems. By default, the uninstaller uses ssh.
Make sure you can execute ssh or rsh commands as superuser on all nodes in the cluster.
Make sure that the ssh or rsh is configured to operate without requests for passwords or passphrases.
If you cannot meet the prerequisites, then you must run the uninstallvcs program on each node in the cluster. The uninstallvcs program removes all VCS packages and VCS language packages. The example demonstrates how to uninstall VCS using the uninstallvcs program. The uninstallvcs program uninstalls VCS on two nodes: galaxy and nebula. The example procedure uninstalls VCS from all nodes in the cluster.
1 Log in as superuser from the node where you want to uninstall VCS.
2 Start the uninstallvcs program.
# cd /opt/VRTS/install
# ./uninstallvcs
The program specifies the directory where the logs are created. The program displays a copyright notice and a description of the cluster:
Enter the names of the systems from which you want to uninstall VCS. The program performs system verification checks and asks to stop all running VCS processes.
Enter y to stop all the VCS processes. The program stops the VCS processes and proceeds with uninstalling the software.
Verifies the communication between systems
Checks the installations on each system to determine the packages to be uninstalled
6 Review the output as the uninstaller stops processes, unloads kernel modules, and removes the packages.
7 Note the location of summary, response, and log files that the uninstaller creates after removing all the packages.
You need to uninstall VCS after an incomplete installation. The uninstallvcs program is not available in /opt/VRTS/install.
If you mounted the installation media to /mnt, access the uninstallvcs program by changing directory to:
# cd /mnt/cluster_server
# ./uninstallvcs
1 Perform the required steps to save any data that you wish to preserve. For example, take back-ups of configuration files.
2 Start the Web-based installer. See Starting the Veritas Web-based installer on page 162.
3 On the Select a task and a product page, select Uninstall a Product from the Task drop-down list.
4 Select Veritas Cluster Server from the Product drop-down list, and click Next.
5 Indicate the systems on which to uninstall. Enter one or more system names, separated by spaces. Click Next.
6 After the validation completes successfully, click Next to uninstall VCS on the selected system.
7 If there are any processes running on the target system, the installer stops the processes. Click Next.
8 After the installer stops the processes, the installer removes the products from the specified system. Click Next.
9 After the uninstall completes, the installer displays the location of the summary, response, and log files. If required, view the files to confirm the status of the removal.
10 Click Finish.
You see a prompt recommending that you reboot the system, and then return to the Web page to complete additional tasks.
Removes all CP server configuration files
Removes the VCS configuration for CP server
After you run this utility, you can uninstall VCS from the node or the cluster.
Note: You must run the configuration utility only once per CP server (which can be on a single-node VCS cluster or an SFHA cluster), when you want to remove the CP server configuration.
To remove the CP server configuration
To run the configuration removal script, enter the following command on the node where you want to remove the CP server configuration:
# /opt/VRTScps/bin/configure_cps.pl
Review the warning message and confirm that you want to unconfigure the CP server.
WARNING: Unconfiguring Coordination Point Server stops the vxcpserv process. VCS clusters using this server for coordination purpose will have one less coordination point. Are you sure you want to bring down the cp server? (y/n) (Default:n) :y
Review the screen output as the script performs the following steps to remove the CP server configuration:
Stops the CP server
Removes the CP server from VCS configuration
Removes resource dependencies
Takes the CP server service group (CPSSG) offline, if it is online
Removes the CPSSG service group from the VCS configuration
Answer y to delete the CP server configuration file and log files.
Do you want to delete the CP Server configuration file (/etc/vxcps.conf) and log files (in /var/VRTScps)? (y/n) (Default:n) : y
Run the hagrp -state command to ensure that the CPSSG service group has been removed from the node. For example:
# hagrp -state CPSSG

VCS WARNING V-16-1-40131 Group CPSSG does not exist in the local cluster
Chapter 30
Uninstalling VCS using response files

Uninstalling VCS using response files
Response file variables to uninstall VCS
Sample response file for uninstalling VCS
1 Make sure that you meet the prerequisites to uninstall VCS.
2 Copy the response file to the system where you want to uninstall VCS. See Sample response file for uninstalling VCS on page 367.
3 Edit the values of the response file variables as necessary. See Response file variables to uninstall VCS on page 366.
4 Start the uninstallation from the system to which you copied the response file. For example:
# /opt/VRTS/install/uninstallvcs -responsefile /tmp/response_file
CFG{opt}{uninstall} (Scalar): Uninstalls VCS packages. (Required)
CFG{systems} (List): List of systems on which the product is to be uninstalled. (Required)
CFG{prod} (Scalar): Defines the product to be uninstalled. The value is VCS60 for VCS. (Required)
CFG{opt}{keyfile} (Scalar): Defines the location of an ssh keyfile that is used to communicate with all remote systems. (Optional)
CFG{opt}{rsh} (Scalar): Defines that rsh must be used instead of ssh as the communication method between systems. (Optional)
CFG{opt}{logpath} (Scalar): Mentions the location where the log files are to be copied. The default location is /opt/VRTS/install/logs.
Chapter 31
Manually uninstalling VCS

Removing VCS packages manually
Manually remove the CP server fencing configuration
Manually deleting cluster details from a CP server
Shut down VCS on the local system using the hastop command.
# hastop -local
Note: The VRTScps package should be removed after manually removing the CP server fencing configuration. See Manually remove the CP server fencing configuration on page 372.
Unregister all VCS cluster nodes from all CP servers using the following command:
# cpsadm -s cp_server -a unreg_node -u uuid -n nodeid
Remove the VCS cluster from all CP servers using the following command:
# cpsadm -s cp_server -a rm_clus -u uuid
Remove all the VCS cluster users communicating to CP servers from all the CP servers using the following command:
# cpsadm -s cp_server -a rm_user -e user_name -g domain_type
# cpsadm -s mycps1 -a list_nodes

ClusterName  UUID                                     Hostname(Node ID)  Registered
===========  =======================================  =================  ==========
cluster1     {3719a60a-1dd2-11b2-b8dc-197f8305ffc0}   node0(0)           1

# cpsadm -s mycps1 -a list_users

Username/Domain Type    Cluster Name/UUID                                 Role
====================    ==============================================    ========
cpsclient@hostname/vx   cluster1/{3719a60a-1dd2-11b2-b8dc-197f8305ffc0}   Operator
Remove the privileges for each user of the cluster that is listed in step 2 from the CP server cluster. For example:
# cpsadm -s mycps1 -a rm_clus_from_user -c cluster1 \
-e cpsclient@hostname -g vx -f cps_operator

Cluster successfully deleted from user cpsclient@hostname privileges.
Remove each user of the cluster that is listed in step 2. For example:
# cpsadm -s mycps1 -a rm_user -e cpsclient@hostname -g vx

User cpsclient@hostname successfully deleted
Unregister each node that is registered to the CP server cluster. See the output of step 1 for registered nodes. For example:
# cpsadm -s mycps1 -a unreg_node -c cluster1 -n 0

Node 0 (node0) successfully unregistered
# cpsadm -s mycps1 -a list_users

Username/Domain Type    Cluster Name/UUID    Role
====================    ==================   =======
Section
10
Chapter 32. Adding a node to a single-node cluster Chapter 33. Adding and removing cluster nodes
Chapter 32
Adding a node to a single-node cluster
Add Ethernet cards for private heartbeat network for Node B. If necessary, add Ethernet cards for private heartbeat network for Node A. Make the Ethernet cable connections between the two nodes.
See Installing and configuring Ethernet cards for private network on page 379.
Connect both nodes to shared storage.
See Configuring the shared storage on page 380.
If necessary, install VCS on Node B and add a license key. Make sure Node B is running the same version of VCS as the version on Node A.
See Installing the VCS software manually when adding a node to a single node cluster on page 381.
Edit the configuration files on Node B.
See About the VCS configuration files on page 457.
Start LLT and GAB on Node B.
Start LLT and GAB on Node A. Restart VCS on Node A. Modify service groups for two nodes. Start VCS on Node B. Verify the two-node cluster.
Copy the UUID from Node A to Node B.
See Reconfiguring VCS on the existing node on page 381.
If VCS is not currently running on Node B, proceed to step 2. If the node you plan to add as Node B is currently part of an existing cluster, remove the node from the cluster. After you remove the node from the cluster, remove the VCS packages and configuration files. See Removing a node from a cluster on page 401. If the node you plan to add as Node B is also currently a single VCS node, uninstall VCS. If you renamed the LLT and GAB startup files, remove them.
If necessary, install VxVM and VxFS. See Installing VxVM or VxFS if necessary on page 379.
Install the Ethernet card on Node A. If you want to use an aggregated interface to set up the private network, configure the aggregated interface.
Install the Ethernet card on Node B. If you want to use an aggregated interface to set up the private network, configure the aggregated interface.
5  Configure the Ethernet card on both nodes.
6  Make the two Ethernet cable connections from Node A to Node B for the private networks.
7  Restart the nodes.
Start the operating system. On a SPARC node (Node A) enter the command:
ok boot -r
If you have configured I/O Fencing, GAB, and LLT on the node, stop them.
# /usr/sbin/svcadm disable -t gab # /usr/sbin/svcadm disable -t llt
Installing the VCS software manually when adding a node to a single node cluster
Install the VCS 6.0 packages manually and install the license key. Refer to the following sections:
See Preparing for a manual installation on page 207. See Adding a license key for a manual installation on page 213.
Create the file /etc/llttab for a two-node cluster. See Setting up /etc/llttab for a manual installation on page 224.
Create the file /etc/llthosts that lists both nodes. See Setting up /etc/llthosts for a manual installation on page 224.
Create the file /etc/gabtab. See Configuring GAB manually on page 227.
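For illustration only, on a hypothetical two-node cluster with cluster ID 2, nodes named galaxy (Node A) and nebula (Node B), and qfe interfaces, the three files on Node B might resemble the following sketch; adapt the node names, cluster ID, and network devices to your environment:

/etc/llttab:
set-node nebula
set-cluster 2
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -

/etc/llthosts:
0 galaxy
1 nebula

/etc/gabtab:
/sbin/gabconfig -c -n2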
On Solaris 10:
# /usr/sbin/svcadm enable llt
On Solaris 10:
# /usr/sbin/svcadm enable gab
On Node A, create the files /etc/llttab, /etc/llthosts, and /etc/gabtab. Use the files that are created on Node B as a guide, customizing the /etc/llttab for Node A. Start LLT on Node A.
Solaris 10:
# /usr/sbin/svcadm enable llt
Solaris 10:
# /usr/sbin/svcadm enable gab
Copy the cluster UUID from the existing node to the new node:
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -copy -from_sys \ node_name_in_running_cluster -to_sys new_sys1 ... new_sysn
Where you are copying the cluster UUID from a node in the cluster (node_name_in_running_cluster) to systems from new_sys1 through new_sysn that you want to join the cluster.
Chapter 33
Adding and removing cluster nodes
This chapter includes the following topics:
About adding and removing nodes
Adding nodes using the VCS installer
Adding a node using the Web-based installer
Manually adding a node to a cluster
Removing a node from a cluster
Verifies that the node and the existing cluster meet communication requirements.
Verifies the products and packages installed on the new node.
Discovers the network interfaces on the new node and checks the interface settings.
Creates the following files on the new node:
/etc/llttab /etc/VRTSvcs/conf/sysname
Updates the following configuration files and copies them on the new node:
/etc/llthosts /etc/gabtab /etc/VRTSvcs/conf/config/main.cf
Copies the following files from the existing cluster to the new node:
/etc/vxfenmode
/etc/vxfendg
/etc/vx/.uuids/clusuuid
/etc/default/llt
/etc/default/gab
/etc/default/vxfen
Configures disk-based or server-based fencing depending on the fencing mode in use on the existing cluster.
At the end of the process, the new node joins the VCS cluster.
Note: If you have configured server-based fencing on the existing cluster, make sure that the CP server does not contain entries for the new node. If the CP server already contains entries for the new node, remove these entries before adding the node to the cluster; otherwise the process may fail with an error.
To add the node to an existing VCS cluster using the VCS installer
1  Log in as the root user on one of the nodes of the existing cluster.
2  Run the VCS installer with the -addnode option.
# cd /opt/VRTS/install # ./installvcs -addnode
The installer displays the copyright message and the location where it stores the temporary installation logs.
Enter the name of a node in the existing VCS cluster. The installer uses the node information to identify the existing cluster.
Enter a node name in the VCS cluster to which you want to add a node: galaxy
Enter the name of the systems that you want to add as new nodes to the cluster.
Enter the system names separated by spaces to add to the cluster: saturn
The installer checks the installed products and packages on the nodes and discovers the network interfaces.
Enter the name of the network interface that you want to configure as the first private heartbeat link. Note: The LLT configuration for the new node must be the same as that of the existing cluster. If your existing cluster uses LLT over UDP, the installer asks questions related to LLT over UDP for the new node. See Configuring private heartbeat links on page 116.
Enter the NIC for the first private heartbeat link on saturn: [b,q,?] qfe:0
Enter y to configure a second private heartbeat link. Note: At least two private heartbeat links must be configured for high availability of the cluster.
Would you like to configure a second private heartbeat link? [y,n,q,b,?] (y)
Enter the name of the network interface that you want to configure as the second private heartbeat link.
Enter the NIC for the second private heartbeat link on saturn: [b,q,?] qfe:1
Depending on the number of LLT links configured in the existing cluster, configure additional private heartbeat links for the new node. The installer verifies the network interface settings and displays the information.
10  Review and confirm the information.
11  If you have configured SMTP, SNMP, or the global cluster option in the existing cluster, you are prompted for the NIC information for the new node.
Enter the NIC for VCS to use on saturn: qfe:2
From the Task pull-down menu, select Add a Cluster node. From the product pull-down menu, select the product. Click the Next button.
2  Click OK to confirm the prerequisites to add a node.
3  In the System Names field, enter a name of a node in the cluster where you plan to add the node and click OK. The installer program checks inter-system communications and compatibility. If the node fails any of the checks, review the error and fix the issue. If prompted, review the cluster's name, ID, and its systems. Click the Yes button to proceed.
In the System Names field, enter the names of the systems that you want to add to the cluster as nodes. Separate system names with spaces. Click the Next button. The installer program checks inter-system communications and compatibility. If the system fails any of the checks, review the error and fix the issue. Click the Next button. If prompted, click the Yes button to add the system and to proceed.
5  From the heartbeat NIC pull-down menus, select the heartbeat NICs for the cluster. Click the Next button.
6  Once the addition is complete, review the log files. Optionally send installation information to Symantec. Click the Finish button to complete the node's addition to the cluster.
If the existing cluster is configured for I/O fencing, configure I/O fencing on the new node.
See Configuring I/O fencing on the new node on page 397.
Add the node to the existing cluster. Start VCS and verify the cluster.
See Adding the node to the existing cluster on page 400.
Figure 33-1  Adding a node to an existing two-node cluster using two independent hubs (the figure shows the public network and the private network connections)
Connect the VCS private Ethernet controllers. Perform the following tasks as necessary:
When you add nodes to a two-node cluster, use independent switches or hubs for the private network connections. You can only use crossover cables for a two-node cluster, so you might have to swap out the cable for a switch or hub. If you already use independent hubs, connect the two Ethernet controllers on the new node to the independent hubs.
Figure 33-1 illustrates a new node being added to an existing two-node cluster using two independent hubs.
See Installing VCS software manually on page 209. See Adding a license key for a manual installation on page 213.
Extract the embedded authentication files and copy them to a temporary directory:
# mkdir -p /var/VRTSvcs/vcsauth/bkup # cd /tmp; gunzip -c /opt/VRTSvcs/bin/VxAT.tar.gz | tar xvf -
The output is a string denoting the UUID. This UUID (without { and }) is used as the ClusterName for the setup file.
{UUID}

# cat /tmp/eat_setup 2>&1
Copy the broker credentials from one node in the cluster to saturn by copying the entire bkup directory. The bkup directory content resembles the following example:
# cd /var/VRTSvcs/vcsauth/bkup/ # ls CMDSERVER CPSADM CPSERVER HAD VCS_SERVICES WAC
Import the credentials for HAD, CMDSERVER, CPSADM, CPSERVER, and WAC.
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atutil import -z \ /var/VRTSvcs/vcsauth/data/VCS_SERVICES -f /var/VRTSvcs/vcsauth/bkup \ /HAD -p password
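The example above imports only the HAD credential; repeat the same command for each of the remaining components. The shell sketch below assumes that every credential imports into the same VCS_SERVICES data store with the same password; if a component (for example, WAC) uses its own data store in your environment, adjust the -z path for that component:

# for cred in HAD CMDSERVER CPSADM CPSERVER WAC; do
>   /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/atutil import -z \
>     /var/VRTSvcs/vcsauth/data/VCS_SERVICES \
>     -f /var/VRTSvcs/vcsauth/bkup/$cred -p password
> done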
Create the file /etc/llthosts on the new node. You must also update it on each of the current nodes in the cluster. For example, suppose you add saturn to a cluster consisting of galaxy and nebula:
Update the file for all nodes, including the new one, resembling:
0 galaxy 1 nebula 2 saturn
Create the file /etc/llttab on the new node, making sure that the line beginning with "set-node" specifies the new node. The file /etc/llttab on an existing node can serve as a guide. The following example describes a system where node saturn is the new node on cluster ID number 2:
set-node saturn
set-cluster 2
link qfe0 qfe:0 - ether - -
link qfe1 qfe:1 - ether - -
Copy the following file from one of the nodes in the existing cluster to the new node: /etc/default/llt
In a setup that uses LLT over UDP, new nodes automatically join the existing cluster if the new nodes and all the existing nodes in the cluster are not separated by a router. However, if you use an LLT over UDP6 link with an IPv6 address and the new node and the existing nodes are separated by a router, then do the following:
Edit the /etc/llttab file on each node to reflect the link information about the new node. Specify the IPv6 address for UDP link of the new node to all existing nodes. Run the following command on each existing node for each UDP link:
# /sbin/lltconfig -a set systemid device_tag address
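For example, a hypothetical invocation for a new node with system ID 2, a UDP link tagged link1, and an illustrative IPv6 address might look like the following; substitute the system ID, device tag, and address from your own /etc/llttab:

# /sbin/lltconfig -a set 2 link1 2001:db8::10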
The /etc/gabtab file on the new node should be the same as on the existing nodes. Symantec recommends that you use the -c -nN option, where N is the total number of cluster nodes.
/sbin/gabconfig -c -n2
The file on all nodes, including the new node, should change to reflect the change in the number of cluster nodes. For example, the new file on each node should resemble:
/sbin/gabconfig -c -n3
The -n flag indicates to VCS the number of nodes that must be ready to form a cluster before VCS starts.
Copy the following file from one of the nodes in the existing cluster to the new node: /etc/default/gab
To verify GAB
The output should indicate that port a membership shows all nodes including the new node. The output should resemble:
GAB Port Memberships
====================================
Port a gen a3640003 membership 012
Run the same command on the other nodes (galaxy and nebula) to verify that the port a membership includes the new node:
# /sbin/gabconfig -a
GAB Port Memberships
====================================
Port a gen a3640003 membership 012
Port h gen fd570002 membership 01
Port h gen fd570002 visible ; 2
Prepare to configure I/O fencing on the new node.
See Preparing to configure I/O fencing on the new node on page 397.
If the existing cluster runs server-based fencing, configure server-based fencing on the new node.
See Configuring server-based fencing on the new node on page 398.
If the existing cluster runs disk-based fencing, you need not perform any additional step. Skip to the next task. After you copy the I/O fencing files and start I/O fencing, disk-based fencing automatically comes up.
Copy the I/O fencing files from an existing node to the new node and start I/O fencing on the new node.
See Starting I/O fencing on the new node on page 399.
If the existing cluster is not configured for I/O fencing, perform the procedure to add the new node to the existing cluster. See Adding the node to the existing cluster on page 400.
Determine whether the existing cluster runs disk-based or server-based fencing mechanism. On one of the nodes in the existing cluster, run the following command:
# vxfenadm -d
If the fencing mode in the output is SCSI3, then the cluster uses disk-based fencing. If the fencing mode in the output is CUSTOMIZED, then the cluster uses server-based fencing.
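For illustration, the relevant portion of the vxfenadm -d output on a cluster that uses disk-based fencing might resemble the following sketch; field names and values are indicative only and vary with your configuration:

# vxfenadm -d
I/O Fencing Cluster Information:
================================
Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing SCSI3 Disk Policy: dmp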
In the following cases, install and configure Veritas Volume Manager (VxVM) on the new node.
The existing cluster uses disk-based fencing. The existing cluster uses server-based fencing with at least one coordinator disk.
You need not perform this step if the existing cluster uses server-based fencing with all coordination points as CP servers. See the Veritas Storage Foundation and High Availability Installation Guide for installation instructions.
Server-based fencing in non-secure mode: See "To configure server-based fencing in non-secure mode on the new node."
Server-based fencing in secure mode: See "To configure server-based fencing with security on the new node."
1  Log in to each CP server as the root user.
2  Update each CP server configuration with the new node information:
# cpsadm -s mycps1.symantecexample.com \
-a add_node -c clus1 -h saturn -n2
Node 2 (saturn) successfully added
1  Log in to each CP server as the root user.
2  Update each CP server configuration with the new node information:
# cpsadm -s mycps1.symantecexample.com \
-a add_node -c clus1 -h saturn -n2
Node 2 (saturn) successfully added
On one of the nodes in the existing VCS cluster, set the cluster configuration to read-write mode:
# haconf -makerw
Save the configuration by running the following command from any node in the VCS cluster:
# haconf -dump -makero
Copy the following I/O fencing configuration files from one of the nodes in the existing cluster to the new node:
Run the GAB configuration command on the new node to verify that the port b membership is formed.
# gabconfig -a
Copy the cluster UUID from one of the nodes in the existing cluster to the new node:
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -copy -from_sys \ node_name_in_running_cluster -to_sys new_sys1 ... new_sysn
Where you are copying the cluster UUID from a node in the cluster (node_name_in_running_cluster) to systems from new_sys1 through new_sysn that you want to join the cluster.
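For example, an invocation that copies the UUID from the existing node galaxy to the new node saturn (the node names used elsewhere in this chapter) would look like:

# /opt/VRTSvcs/bin/uuidconfig.pl -clus -copy -from_sys galaxy -to_sys saturn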
Copy the following file from one of the nodes in the existing cluster to the new node: /etc/default/vcs
Copy the main.cf file from an existing node to your new node:
# rcp /etc/VRTSvcs/conf/config/main.cf \ saturn:/etc/VRTSvcs/conf/config/
Check the VCS configuration file. No error message and a return value of zero indicates that the syntax is legal.
# hacf -verify /etc/VRTSvcs/conf/config/
Run the GAB configuration command on each node to verify that port a and port h include the new node in the membership:
# /sbin/gabconfig -a
GAB Port Memberships
===================================
Port a gen a3640003 membership 012
Port h gen fd570002 membership 012
If the cluster uses I/O fencing, then the GAB output also shows port b membership.
Check the status of the nodes and the service groups.
See Verifying the status of nodes and service groups on page 402.
Switch or remove any VCS service groups on the node departing the cluster. Delete the node from the VCS configuration.
See Deleting the departing node from the VCS configuration on page 403.
Modify the llthosts(4) and gabtab(4) files to reflect the change.
See Modifying configuration files on each remaining node on page 406.
If the existing cluster is configured to use server-based I/O fencing, remove the node configuration from the CP server.
See Removing the node configuration from the CP server on page 407.
For a cluster that is running in a secure mode, remove the security credentials from the leaving node.
See Removing security credentials from the leaving node on page 408.
On the node departing the cluster: Modify startup scripts for LLT, GAB, and VCS to allow reboot of the node without affecting the cluster. Unconfigure and unload the LLT and GAB utilities.
See Unloading LLT and GAB and removing VCS on the departing node on page 408.
# hastatus -summary

-- SYSTEM STATE
-- System        State      Frozen
A  galaxy        RUNNING    0
A  nebula        RUNNING    0
A  saturn        RUNNING    0

-- GROUP STATE
-- Group   System   Probed   AutoDisabled
B  grp1    galaxy   Y        N
B  grp1    nebula   Y        N
B  grp2    galaxy   Y        N
B  grp3    nebula   Y        N
B  grp3    saturn   Y        N
B  grp4    saturn   Y        N
The example output from the hastatus command shows that nodes galaxy, nebula, and saturn are the nodes in the cluster. Also, service group grp3 is configured to run on node nebula and node saturn, the departing node. Service group grp4 runs only on node saturn. Service groups grp1 and grp2 do not run on node saturn.
Remove the service groups that other service groups depend on, or Switch the service groups to another node that other service groups depend on.
Switch failover service groups from the departing node. You can switch grp3 from node saturn to node nebula.
# hagrp -switch grp3 -to nebula
Check for any dependencies involving any service groups that run on the departing node; for example, grp4 runs only on the departing node.
# hagrp -dep
If the service group on the departing node requires other service groups (that is, if it is a parent to service groups on other nodes), unlink the service groups.
# haconf -makerw # hagrp -unlink grp4 grp1
These commands enable you to edit the configuration and to remove the requirement grp4 has for grp1.
Check the status again. The state of the departing node should be EXITED. Make sure that any service group that you want to fail over is online on other nodes.
# hastatus -summary

-- SYSTEM STATE
-- System        State      Frozen
A  galaxy        RUNNING    0
A  nebula        RUNNING    0
A  saturn        EXITED     0

-- GROUP STATE
-- Group   System   Probed   AutoDisabled
B  grp1    galaxy   Y        N
B  grp1    nebula   Y        N
B  grp2    galaxy   Y        N
B  grp3    nebula   Y        N
B  grp3    saturn   Y        Y
B  grp4    saturn   Y        N
Delete the departing node from the SystemList of service groups grp3 and grp4.
# hagrp -modify grp3 SystemList -delete saturn # hagrp -modify grp4 SystemList -delete saturn
For the service groups that run only on the departing node, delete the resources from the group before you delete the group.
# hagrp -resources grp4 processx_grp4 processy_grp4 # hares -delete processx_grp4 # hares -delete processy_grp4
Delete the service group that is configured to run on the departing node.
# hagrp -delete grp4
-- GROUP STATE
-- Group   System   Probed   AutoDisabled
B  grp1    galaxy   Y        N
B  grp1    nebula   Y        N
B  grp2    galaxy   Y        N
B  grp3    nebula   Y        N
If necessary, modify the /etc/gabtab file. No change is required to this file if the /sbin/gabconfig command has only the argument -c. Symantec recommends using the -nN option, where N is the number of cluster systems. If the command has the form /sbin/gabconfig -c -nN, where N is the number of cluster systems, make sure that N is not greater than the actual number of nodes in the cluster. When N is greater than the number of nodes, GAB does not automatically seed. Symantec does not recommend the use of the -c -x option for /sbin/gabconfig.
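For example, when the three-node cluster described in this chapter loses the node saturn, the /etc/gabtab entry on each remaining node would change as follows:

Before:
/sbin/gabconfig -c -n3

After:
/sbin/gabconfig -c -n2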
Modify the /etc/llthosts file on each remaining node to remove the entry of the departing node. For example, change:
0 galaxy 1 nebula 2 saturn
To:
0 galaxy 1 nebula
1  Log into the CP server as the root user.
2  View the list of VCS users on the CP server, using the following command:
# cpsadm -s cp_server -a list_users
Remove the VCS user associated with the node you previously removed from the cluster. For CP server in non-secure mode:
# cpsadm -s cp_server -a rm_user \
-e cpsclient@saturn -f cps_operator -g vx
View the list of nodes on the CP server to ensure that the node entry was removed:
# cpsadm -s cp_server -a list_nodes
Unloading LLT and GAB and removing VCS on the departing node
Perform the tasks on the node that is departing the cluster. If you have configured VCS as part of the Storage Foundation and High Availability products, you may have to delete other dependent packages before you can delete all of the following ones.
If you had configured I/O fencing in enabled mode, then stop I/O fencing.
# svcadm disable -t vxfen
Disable the startup files to prevent LLT, GAB, or VCS from starting up:
# /usr/sbin/svcadm disable -t llt # /usr/sbin/svcadm disable -t gab # /usr/sbin/svcadm disable -t vcs
To permanently remove the VCS packages from the system, use the pkgrm command. Start by removing the following packages, which may have been optionally installed, in the order shown:
# pkgrm VRTSvcsea
# pkgrm VRTSat
# pkgrm VRTSvcsag
# pkgrm VRTScps
# pkgrm VRTSvcs
# pkgrm VRTSamf
# pkgrm VRTSvxfen
# pkgrm VRTSgab
# pkgrm VRTSllt
# pkgrm VRTSspt
# pkgrm VRTSsfcpi60
# pkgrm VRTSperl
# pkgrm VRTSvlic
Remove the language packages and patches. See Removing VCS packages manually on page 369.
Section 11
Installation reference
Appendix A. VCS installation packages
Appendix B. Installation command options
Appendix C. Changes to bundled agents in VCS 6.0
Appendix D. Configuration files
Appendix E. Installing VCS on a single node
Appendix F. Configuring LLT over UDP
Appendix G. Configuring the secure shell or the remote shell for communications
Appendix H. Troubleshooting VCS installation
Appendix I. Sample VCS cluster setup diagrams for CP server-based I/O fencing
Appendix J. Reconciling major/minor numbers for NFS shared disks
Appendix K. Compatibility issues when installing Veritas Cluster Server with other products
Appendix A
VCS installation packages
VRTSamf
Contains the binaries for the Veritas Asynchronous Monitoring Framework kernel driver functionality for the Process and Mount based agents. Required.

VRTScps
Contains the binaries for the Veritas Coordination Point Server (CPS). Optional; required to use the Coordination Point Server.

VRTSgab
Contains the binaries for Veritas Cluster Server group membership and atomic broadcast services. Required. Depends on VRTSllt.

VRTSllt
Contains the binaries for Veritas Cluster Server low-latency transport. Required.

VRTSperl
Required.

VRTSsfcpi60
You can use this script to simplify the native operating system installations, configurations, and upgrades. Required.

VRTSspt
Contains the binaries for Veritas Software Support Tools.

VRTSvcs
VRTSvcs contains the following components:
Contains the binaries for Veritas Cluster Server.
Contains the binaries for Veritas Cluster Server manual pages.
Contains the binaries for Veritas Cluster Server English message catalogs.
Contains the binaries for Veritas Cluster Server utilities. These utilities include security services.
Required. Depends on VRTSperl and VRTSvlic.

VRTSvcsag
Contains the binaries for Veritas Cluster Server bundled agents. Required. Depends on VRTSvcs.
VRTSvcsea
Contains the binaries for Veritas high availability agents for DB2, Sybase, and Oracle. Optional for VCS; required to use VCS with the high availability agents for DB2, Sybase, or Oracle.

VRTSvlic
Contains the binaries for Symantec License Utilities. Required.

VRTSvxfen
Contains the binaries for Veritas I/O Fencing. Required.

VRTSsfmh
Veritas Storage Foundation Managed Host. Discovers configuration information on a Storage Foundation managed host. This information is stored on a central database, which is not part of this release. You must download the database separately at:
http://www.symantec.com/business/storage-foundation-manager
Recommended.
VRTSvbs
Enables the VBS command line interface on a Veritas Operations Manager managed host in a Virtual Business Service configuration. For more information, see the Virtual Business Service-Availability User's Guide.
Recommended. Depends on VRTSsfmh. The VRTSsfmh version must be 4.1 or later for VRTSvbs to be installed.
VRTSvcsnr
You must install VRTSvcsnr manually inside an Oracle VM Server logical domain if the domain is to be configured for disaster recovery. Optional.
Table A-2 shows the package name, contents, and type for each Veritas Cluster Server language package.
Package type
Common L10N package
VRTSatJA
Japanese language package.

VRTSjacav
Contains the binaries for Japanese VERITAS Cluster Server Agent Extensions for Storage Cluster File System - Manual Pages and Message Catalogs. Japanese language package.

VRTSjacse
Contains Japanese Veritas High Availability Enterprise Agents by Symantec. Japanese language package.

VRTSjacs
Contains the binaries for Veritas Cluster Server Japanese Message Catalogs by Symantec. Japanese language package.

VRTSjacsu
Contains the binaries for Japanese Veritas Cluster Utility Language Pack by Symantec. Japanese language package.

VRTSjadba
Contains the binaries for Japanese RAC support package by Symantec. Japanese language package.

VRTSjadbe
Contains the Japanese Storage Management Software for Databases - Message Catalog. Japanese language package.

VRTSjafs
Contains the binaries for Japanese Language Message Catalog and Manual Pages for VERITAS File System. Japanese language package.

VRTSjaodm
Contains the binaries for Japanese Message Catalog and Man Pages for ODM. Japanese language package.

VRTSjavm
Contains the binaries for Japanese Virtual Disk Subsystem Message Catalogs and Manual Pages. Japanese language package.
Contains the binaries for Chinese Virtual Disk Subsystem Message Catalogs and Manual Pages. Chinese language package.
Appendix B
Installation command options
This appendix includes the following topics:
Command options for installvcs program
Command options for uninstallvcs program
-ignorepatchreqs | -settunables | -security | -securityonenode -securitytrust | -addnode | -fencing | -upgrade_kernelpkgs -upgrade_nonkernelpkgs | -rolling_upgrade -rollingupgrade_phase1 | -rollingupgrade_phase2 ]
Table B-1 provides a consolidated list of the options used with the installvcs command and uninstallvcs command. Table B-1 Option and Syntax
-addnode
-allpkgs
-comcleanup
Remove the ssh or rsh configuration added by the installer on the systems. The option is only required when installation routines that performed auto-configuration of ssh or rsh are abruptly terminated.
-configure
Configure VCS after using the -install option to install VCS.
-copyinstallscripts Use this option when you manually install products and want to use the installation scripts that are stored on the system to perform product configuration, uninstallation, and licensing tasks without the product media. Use this option to copy the installation scripts to an alternate rootpath when you use it with the -rootpath option. The following examples demonstrate the usage for this option:
./installer -copyinstallscripts
Copies the installation and uninstallation scripts for all products in the release to /opt/VRTS/install. It also copies the installation Perl libraries to /opt/VRTSperl/lib/site_perl/release_name.

./installproduct_name -copyinstallscripts
Copies the installation and uninstallation scripts for the specified product and any subset products for the product to /opt/VRTS/install. It also copies the installation Perl libraries to /opt/VRTSperl/lib/site_perl/release_name.

./installer -rootpath alt_root_path -copyinstallscripts
The path alt_root_path can be a directory like /rdisk2. In that case, this command copies installation and uninstallation scripts for all the products in the release to /rdisk2/opt/VRTS/install. CPI perl libraries are copied to /rdisk2/opt/VRTSperl/lib/site_perl/release_name. For example, for the 6.0 release, the release_name is UXRT60.

-fencing
Configure I/O fencing after you configure VCS. The script provides an option to configure disk-based I/O fencing or server-based I/O fencing.
-flash_archive Generate Flash archive scripts which can be used by Solaris Jumpstart Server for automated Flash archive installation of all packages and patches for every product, an available location to store the post deployment scripts should be specified as a complete path. -hostfile Specify the location of a file that contains the system names for the installer. Install product packages on systems without configuring VCS.
-install
-installallpkgs Select all the packages for installation. See the -allpkgs option.
-installminpkgs
Select the minimum packages for installation. See the -minpkgs option.

-installrecpkgs
Select the recommended packages for installation. See the -recpkgs option.

-jumpstart dir_path
Use this option to generate the finish scripts that the Solaris JumpStart Server can use for Veritas products. The dir_path indicates the path to an existing directory where the installer must store the finish scripts.

-keyfile ssh_key_file
Specify a key file for SSH. The option passes -i ssh_key_file with each SSH invocation.

-license
Register or update product licenses on the specified systems. This option is useful to replace a demo license.

-logpath log_path
Specify that log_path, not /opt/VRTS/install/logs, is the location where installvcs log files, summary file, and response file are saved.
-makeresponsefile
Generate a response file. No actual software installation occurs when you use this option. You can use this option to create a response file or to verify that your system configuration is ready for uninstalling VCS.

-minpkgs
View a list of the minimal packages for VCS. The installvcs program lists the packages in the correct installation order. The list does not include the optional packages. You can use the output to create scripts for command-line installation, or for installations over a network. See the -allpkgs and the -recpkgs options.

-nolic
Install product packages on systems without licensing or configuration. License-based features or variants are not installed when you use this option.
If you do not specify an option, all three lists of packages are displayed.

-pkgpath pkg_path
Specify that pkg_path contains all packages that the installvcs program is about to install on all systems. The pkg_path is the complete path of a directory, usually NFS mounted.

-pkgset
Discovers and lists the 6.0 packages installed on the systems that you specify.

-pkgtable
Displays the VCS 6.0 packages in the correct installation order.

-postcheck
Check for different HA and file system-related processes, the availability of different ports, and the availability of cluster-related service groups.

-precheck
Verify that systems meet the installation requirements before proceeding with VCS installation. Symantec recommends doing a precheck before you install VCS. See Performing automated preinstallation check on page 72.

-recpkgs
View a list of the recommended packages for VCS. The installvcs program lists the packages in the correct installation order. The list does not include the optional packages. You can use the output to create scripts for command-line installation, or for installations over a network. See the -allpkgs and the -minpkgs options.

-requirements
View a list of required operating system version, required patches, file system space, and other system requirements to install VCS.
-rootpath root_path
Specify that root_path is the root location for the installation of all packages. On Solaris, -rootpath passes -R root_path to pkgadd command.
-redirect
Specify that the installer need not display the progress bar details during the installation.

-rsh
Specify that rsh and rcp are to be used for communication between systems instead of ssh and scp. This option requires that systems be preconfigured such that rsh commands between systems execute without prompting for passwords or confirmations.

-security
Enable or disable secure mode in a VCS cluster. See the Veritas Cluster Server Administrators Guide for instructions.

-securityonenode
Form a secure cluster node by node in environments that do not support passwordless ssh or passwordless rsh. See Configuring a secure cluster node by node on page 123.

-securitytrust
Set up a trust relationship between your VCS cluster and a broker. See Setting up trust relationships for your VCS cluster on page 122.

-serial
Perform the installation, uninstallation, start, and stop operations on the systems in a serial fashion. By default, the installer performs these operations simultaneously on all the systems.
-stop
Stop the daemons and processes for VCS. If the installvcs program failed to start up all the VCS processes, you can use the -stop option to stop all the processes and then use the -start option to start the processes. See the -start option. See Starting and stopping processes for the Veritas products on page 500.
-timeout
Specifies the timeout value (in seconds) for each command that the installer issues during the installation. The default timeout value is set to 600 seconds.

-tmppath tmp_path
Specify that tmp_path is the working directory for the installvcs program. This path is different from the /var/tmp path. This destination is where the installvcs program performs the initial logging and where the installvcs program copies the packages on remote systems before installation.

-uninstall
Uninstall VCS from the systems that you specify.

-upgrade
Upgrade the installed packages on the systems that you specify.
-rollingupgrade_phase1
Upgrade the product kernel packages to the latest version during rolling upgrade Phase 1.

-rollingupgrade_phase2
Upgrade the VCS and other agent packages to the latest version during rolling upgrade Phase 2. Product kernel drivers are rolling-upgraded to the latest protocol version.
-settunables
-tunablesfile
-upgrade_kernelpkgs
Has been renamed to -rollingupgrade_phase1.

-upgrade_nonkernelpkgs
Has been renamed to -rollingupgrade_phase2.
| -rollingupgrade_phase1 | -rollingupgrade_phase2 ]
For description of the uninstallvcs command options: See Table B-1 on page 420.
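For illustration, a few hypothetical invocations that combine some of the options listed in Table B-1 might look like the following; the system names and paths are placeholders:

# ./installvcs -precheck galaxy nebula
# ./installvcs -install -pkgpath /mnt/vcs/pkgs galaxy nebula
# ./uninstallvcs -responsefile /tmp/response_file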
Appendix C
Changes to bundled agents in VCS 6.0
This appendix includes the following topics:
Deprecated agents
New agents
New and modified attributes for VCS 6.0 agents
Manually removing deprecated resource types and modifying attributes
Creating new VCS accounts if you used native operating system accounts
Deprecated agents
The following agents are no longer supported:
CampusCluster
CFSQlogckd
ClusterMonitorConfig
DiskReservation
NFSLock
Service group heartbeat (ServiceGroupHB)
SANVolume
VRTSWebApp
New agents
The following new agent is added in the 6.0 release:
AlternateIO: Monitors VCS storage and network service groups that in turn monitor redundant I/O services exported from the control domain and alternate I/O domain to a guest logical domain.
VolumeSet: Brings Veritas Volume Manager (VxVM) volume sets online and offline, and monitors them.
Disk: Monitors a physical disk or a partition.
Project: Adds, deletes, and monitors Solaris projects.
DiskGroupSnap: Verifies the configuration and the data integrity in a campus cluster environment.
LDom: Monitors and manages logical domains on Solaris SPARC.
Zpool: Monitors ZFS storage pools.
SambaServer: Monitors the smbd process.
SambaShare: Use to make a Samba Share highly available or to monitor it.
NetBios: Use to make the nmbd process highly available or to monitor it.
Apache (now bundled on all platforms): Provides high availability to an Apache Web server.
NFSRestart: Provides high availability for NFS record locks.
ProcessOnOnly: Starts and monitors a user-specified process.
RemoteGroup: Monitors and manages a service group on another system.
Refer to the Veritas Cluster Server Bundled Agents Reference Guide for more information on these new agents.
Table C-1 lists the attributes that VCS adds or modifies when you upgrade from VCS 5.1 SP1 to VCS 6.0.
Table C-2 lists the attributes that VCS adds or modifies when you upgrade from VCS 5.1 to VCS 5.1 SP1. Table C-3 lists the attributes that VCS adds or modifies when you upgrade from VCS 5.0 MP3 to VCS 5.1. Table C-4 lists the attributes that VCS adds or modifies when you upgrade from VCS 5.0 to VCS 5.0 MP3. Table C-5 lists the attributes that VCS adds or modifies when you upgrade from VCS 4.1 to VCS 5.0. Changes to attributes from VCS 5.1 SP1 to VCS 6.0 New and modified attributes Default value
New attributes AgentFile ArgList StorageSG NetworkSG Application Modified attributes ArgList { State, IState, User, StartProgram, StopProgram, CleanProgram, (new attribute added to list) MonitorProgram, PidFiles, MonitorProcesses, EnvFile, UseSUDash } IMF { Mode = 3, MonitorFreq = 1, RegisterRetryLimit = 3 } { "program.vfd", "user.vfd", "cksum.vfd", getcksum, propcv } bin/Script51Agent { StorageSG, NetworkSG } {} {}
Changes to attributes from VCS 5.1 SP1 to VCS 6.0 (continued) New and modified attributes
RefreshInterval CleanRRKeys Modified attribute ArgList { Domain, TTL, TSIGKeyFile, StealthMasters, ResRecord, CreatePTR, (new attribute added to list) OffDelRR, UseGSSAPI, RefreshInterval, CleanRRKeys }
Default value
0 0
DiskGroup Modified attributes PanicSystemOnDGLoss (attribute data type change) ArgList { DiskGroup, StartVolumes, StopVolumes, MonitorOnly, MonitorReservation, (new attribute added to list) tempUseFence, PanicSystemOnDGLoss, DiskGroupType, UmountVolumes, Reservation, ConfidenceLevel } DiskGroupSnap New attribute FDType Modified attribute ArgList (new attribute added to list) IP Modified attribute RegList IPMultiNIC Modified attribute ToleranceLimit 1 { NetMask } { TargetResName, FDSiteName, FDType } "" int PanicSystemOnDGLoss = 0
Changes to attributes from VCS 5.1 SP1 to VCS 6.0 (continued) New and modified attributes Default value
New attributes DNS Netmask CEInfo "" "" { Enabled=0, CESystem=NONE, FaultOnHBLoss=1 } "" 0 "" "" 1
{ State, IState, LDomName, CfgFile, MonitorCPU, NumCPU, (new attribute added to list) ConfigureNetwork, IPAddress, Netmask, Gateway, DNS, Memory, Memory, RemoveLDomConfigForMigration } AgentFile RegList (new attribute added to list) Mount Modified attributes AEPTimeout IMF 1 { Mode = 3, MonitorFreq = 1, RegisterRetryLimit = 3 } bin/Script51Agent { NumCPU, Memory }
Changes to attributes from VCS 5.1 SP1 to VCS 6.0 (continued) New and modified attributes
SecondLevelMonitor (deprecated attribute) SecondLevelTimeout (deprecated attribute) ArgList { MountPoint, BlockDevice, FSType, MountOpt, FsckOpt, SnapUmount, (list updated for deprecated CkptUmount, OptCheck, CreateMntPt, attributes) MntPtPermission, MntPtOwner, MntPtGroup, AccessPermissionChk, RecursiveMnt, VxFSMountLock }
Default value
RVGPrimary Modified attribute ArgList { RvgResourceName, "RvgResourceName:RVG", (new attribute added to list) "RvgResourceName:DiskGroup", AutoTakeover, AutoResync, BunkerSyncTimeOut, BunkerSyncElapsedTime } Zone New attributes DROpts DeleteVCSZoneUser Modified attributes IMF (attribute data type change) static int IMF{} = { Mode = 3, MonitorFreq = 5, RegisterRetryLimit = 3 } {} 0
Changes to attributes from VCS 5.1 SP1 to VCS 6.0 (continued) New and modified attributes
AEPTimeout IMFRegList ArgList
Default value
1 { BootState }
{ Pool, BootState, ShutdownGracePeriod, RunFsck, DetachZonePath, ForceAttach, (new attributes added to list) DeleteVCSZoneUser, DROpts } Zpool New attributes ForceOpt DeviceDir FailMode ForceRecoverOpt Modified attribute ArgList { PoolName, AltRootPath, ChkZFSMounts, ZoneResName, "ZoneResName:State", (new attribute added to list) DeviceDir, FailMode, ForceOpt, ForceRecoverOpt } 0 [] continue 0
Changes to attributes from VCS 5.1 to VCS 5.1 SP1 New and modified attributes Default value
New attributes EnvFile UseSUDash RegList IMFRegList "" 0 { MonitorProcesses, User } { MonitorProcesses, User, PidFiles, MonitorProgram }
Modified attributes
Changes to attributes from VCS 5.1 to VCS 5.1 SP1 (continued) New and modified attributes
User (change in default value) ArgList (new attribute added to list) { User, StartProgram, StopProgram, CleanProgram, MonitorProgram, PidFiles, MonitorProcesses, EnvFile, UseSUDash }
Default value
"root"
Disk (new agent) New attributes Partition ArgList OfflineMonitorInterval Operations DiskGroup New attribute Reservation Modified attribute ArgList (new attribute added to list) { DiskGroup, StartVolumes, StopVolumes, MonitorOnly, MonitorReservation, tempUseFence, PanicSystemOnDGLoss, UmountVolumes, Reservation } "ClusterDefault" "" { Partition } 60 "None"
Changes to attributes from VCS 5.1 to VCS 5.1 SP1 (continued) New and modified attributes
ArgList (new attribute added to list)
Default value
{ Device, Address, NetMask, Options, ArpDelay, IfconfigTwice, RouteOptions , PrefixLen, ExclusiveIPZone }
IPMultiNICB New attribute Options Modified attribute ArgList (new attribute added to list) { BaseResName, Address, NetMask, DeviceChoice, RouteOptions, PrefixLen, IgnoreMultiNICBFailure, "BaseResName:Protocol", Options } ""
Mount New attribute IMFRegList MultiNICA Modified attribute Protocol (change in default value) MultiNICB Modified attribute Protocol (change in default value) NetBios New attribute PidFile Modified attribute "" "IPv4" "IPv4" { MountPoint, BlockDevice, FSType }
Changes to attributes from VCS 5.1 to VCS 5.1 SP1 (continued) New and modified attributes
ArgList (new attribute added to list)
Default value
{ "SambaServerRes:ConfFile", "SambaServerRes:SambaTopDir", "SambaServerRes:LockDir", NetBiosName, NetBiosAliases, Interfaces, WinsSupport, DomainMaster, "SambaServerRes:PidFile", SambaServerRes, PidFile }
NFS New attribute CleanRmtab Modified attribute ArgList (new attribute added to list) NFSRestart New attribute Lower Modified attribute ArgList (new attribute added to list) { LocksPathName, NFSLockFailover, LockServers, NFSRes, "NFSRes:Nservers", "NFSRes:LockFileTimeout", "NFSRes:UseSMF", Lower, State } 0 { UseSMF, Nservers, LockFileTimeout, CleanRmtab } 0
NIC New attributes ExclusiveIPZone ContainerOpts Modified attributes Protocol "IPv4" 0 { RunInContainer=0, PassCInfo=1 }
Changes to attributes from VCS 5.1 to VCS 5.1 SP1 (continued) New and modified attributes
ArgList (new attribute added to list)
Default value
{ Device, PingOptimize, NetworkHosts, Protocol, NetworkType, ExclusiveIPZone }
NotifierMngr New attribute NotifierSourceIP Modified attribute ArgList (new attribute added to list) { EngineListeningPort, MessagesQueue, NotifierListeningPort, NotifierSourceIP, SnmpdTrapPort, SnmpCommunity, SnmpConsoles, SmtpServer, SmtpServerVrfyOff, SmtpServerTimeout, SmtpReturnPath, SmtpFromPath, SmtpRecipients } ""
RemoteGroup New attributes ReturnIntOffline OfflineMonitoringNode Modified attributes IntentionalOffline (change in default value,RemoteGroup agent now supports intentional offline feature.) ArgList (new attribute added to list) { IpAddress, Port, Username, Password, GroupName, VCSSysName, ControlMode, OfflineWaitTime, DomainType, BrokerIp, ReturnIntOffline } 1 {} ""
Changes to attributes from VCS 5.1 to VCS 5.1 SP1 (continued) New and modified attributes
BunkerSyncTimeOut BunkerSyncElapsedTime Modified attributes ArgList (new attribute added to list) SupportedActions (new action added to keylist) { RvgResourceName, AutoTakeover, AutoResync, BunkerSyncTimeOut, BunkerSyncElapsedTime } { fbsync, ElectPrimary }
Default value
"" 0
RVGSnapshot New attribute VCSResLock SambaServer New attributes PidFile SocketAddress Modified attribute ArgList (new attribute added to list) { ConfFile, SambaTopDir, LockDir, Ports, IndepthMonitorCyclePeriod, ResponseTimeout, PidFile, SocketAddress } "" "" ""
SambaShare Modified attribute ArgList (dependent attributes added to list) { "SambaServerRes:ConfFile", "SambaServerRes:SambaTopDir", "SambaServerRes:LockDir", ShareName, ShareOptions, "SambaServerRes:Ports", SambaServerRes, "SambaServerRes:PidFile", "SambaServerRes:SocketAddress" }
Changes to attributes from VCS 5.1 to VCS 5.1 SP1 (continued) New and modified attributes Default value
New attributes DiskGroup VolumeSet ArgList Zone New attributes RunFsck DetachZonePath ForceAttach Modified attribute BootState (change in default value) ArgList (dependent attributes added to list) { Pool, BootState, ShutdownGracePeriod, RunFsck, DetachZonePath, ForceAttach } "multi-user" 0 1 1 "" "" { DiskGroup, VolumeSet }
Changes to attributes from VCS 5.0 MP3 to VCS 5.1 New and modified attributes Default value
Changes to attributes from VCS 5.0 MP3 to VCS 5.1 (continued) New and modified attributes
New attribute ContainerOpts RunInContainer=1, PassCInfo=0
Default value
Application Modified attribute ContainerName (deleted attribute) ContainerType (deleted attribute) New attribute ContainerOpts DNS Modified attributes Alias (deleted attribute) Hostname (deleted attribute) DiskGroup Modified attributes StartVolumes StopVolumes PanicSystemOnDGLoss IP New attributes RouteOptions PrefixLen ContainerOpts RunInContainer=0, PassCInfo=1 1 1 0 RunInContainer=1, PassCInfo=0
Changes to attributes from VCS 5.0 MP3 to VCS 5.1 (continued) New and modified attributes
Modified attribute ContainerName (deleted attribute)
Default value
IPMultiNIC New attributes PrefixLen ContainerOpts Modified attribute ContainerName (deleted attribute) IPMultiNICB New attributes OnlineRetryLimit ContainerOpts IgnoreMultiNICBFailure RouteOptions PrefixLen Modified attributes ContainerName (deleted attribute) LDOM New attributes MonitorCPU Mount New attributes 1 1 RunInContainer=0, PassCInfo=1 0 RunInContainer=0, PassCInfo=1
Changes to attributes from VCS 5.0 MP3 to VCS 5.1 (continued) New and modified attributes
OptCheck CreateMountPt ReuseMntPt MntPtPermission MntPtOwner MntPtGroup AccessPermissionChk RecursiveMnt ContainerOpts Modified attributes ContainerName (deleted attribute) ContainerType (deleted attribute) Zone 0 0 RunInContainer=0, PassCInfo=0
Default value
0 0 0
MultiNICA New attributes Protocol MultiNICB Modified attribute MpathdCommand New attribute Protocol NFS New attribute UseSMF 0 /usr/lib/inet/in.mpathd
Changes to attributes from VCS 5.0 MP3 to VCS 5.1 (continued) New and modified attributes Default value
New attribute LockServers NIC New attribute Protocol Phantom Modified attribute Dummy (deleted attribute) Process New attribute ContainerOpts Modified attributes ContainerName (deleted attribute) ContainerType (deleted attribute) ProcessOnOnly New attribute ContainerOpts Modified attributes ContainerName (deleted attribute) ContainerType (deleted attribute) Zone RunInContainer=0, PassCInfo=1 Zone RunInContainer=1, PassCInfo=0 20
Changes to attributes from VCS 5.0 MP3 to VCS 5.1 (continued) New and modified attributes Default value
New attribute NFSRes Zone New attributes ContainerOpts BootState Pool Modified attribute ZoneName (deleted attribute) RunInContainer=1, PassCInfo=0
Changes to attributes from VCS 5.0 to VCS 5.0 MP3 New and modified attributes Default value
New attribute SupportedActions ContainerType PidFile ContainerName IntentionalOffline DNS New attributes SupportedActions "dig.vfd", "keyfile.vfd", "master.vfd" 0 "checkconffile.vfd" Zone
ResRecord
Changes to attributes from VCS 5.0 to VCS 5.0 MP3 (continued) New and modified attributes
CreatePTR OffDelRR
Default value
0 0
DiskGroup New attributes SupportedActions "license.vfd", "disk.vfd", "udid.vfd", "verifyplex.vfd", "checkudid", "campusplex", "numdisks", "joindg", "splitdg", "getvxvminfo", "volinuse" 0
NFSRestart New attributes SupportedActions Share New attributes SupportedActions "direxists.vfd" "lockdir.vfd", "nfsconf.vfd"
Changes to attributes from VCS 4.1 to VCS 5.0 New and modified attributes Default value
New attributes
Changes to attributes from VCS 4.1 to VCS 5.0 (continued) New and modified attributes
SupportedActions
Default value
program.vfd, user.vfd, cksum.vfd, getcksum
PanicSystemOnDGLoss DiskGroupType Modified attributes tempUseFence IP New attributes SupportedActions ContainerName Modified attribute IfconfigTwice IPMultiNIC New attributes ContainerName Modified attribute IfconfigTwice IPMultiNICB New attributes ToleranceLimit MonitorInterval ContainerName
Invalid
"device.vfd", "route.vfd"
1 30
Changes to attributes from VCS 4.1 to VCS 5.0 (continued) New and modified attributes
Modified attribute DeviceChoice 0
Default value
VxFSMountLock ContainerName ContainerType Modified attribute SnapUmount MultiNICA Modified attribute IfconfigTwice MultiNICB New attributes GroupName Modified attributes NoBroadcast Failback NFS New attributes LockFileTimeout NIC
Zone
180
Changes to attributes from VCS 4.1 to VCS 5.0 (continued) New and modified attributes
New attributes SupportedActions "device.vfd"
Default value
2  Back up the configuration file, main.cf, to a location on the cluster node.
3  Edit the main.cf file located under /etc/VRTSvcs/conf/config. Perform the following instructions:
Remove the resource of the deprecated resource types. You must modify the resource dependencies to ensure that the configuration works properly.
Modify attribute values that might have changed. See Table C-3 on page 441. See Table C-4 on page 446. See Table C-5 on page 447. Save the main.cf. Reformat the main.cf file.
# hacf -cftocmd config
# hacf -cmdtocf config
Creating new VCS accounts if you used native operating system accounts
VCS has deprecated the AllowNativeCliUsers attribute. To use native OS accounts with VCS, use the halogin command. After you run the halogin command, VCS encrypts and stores your VCS credentials in your home directory for a specific time period. After you run the halogin command, you need not authenticate yourself every time you run a VCS command. In secure clusters, the command also sets up a trust relationship and retrieves a certificate from an authentication broker. See the Veritas Cluster Server Administrators Guide for information on assigning user privileges to OS user groups for clusters running in secure mode and clusters not running in secure mode. Perform the following procedure if you used the AllowNativeCliUsers attribute. Ensure that each native user running VCS commands has a home directory on the system from which the user runs VCS commands.
2  Assign proper privileges to the OS users or user groups. Each operating system user must perform steps 3 and 4.
3  If the user executes VCS commands from a remote host, set the following environment variables:
VCS_HOST: Name of the VCS node on which you run commands. You may specify the virtual IP address associated with the cluster. VCS_DOMAIN: Name of the VxSS domain to which the user belongs. VCS_DOMAINTYPE: Type of VxSS domain: unixpwd, ldap, nt, nis, nisplus, or vx.
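For example, a user on a remote host might set these variables as follows before running VCS commands; the host name, domain name, and domain type shown here are placeholders and depend on your VxSS configuration:

$ export VCS_HOST=galaxy
$ export VCS_DOMAIN=galaxy
$ export VCS_DOMAINTYPE=unixpwd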
2  Create VCS user accounts for all users and assign privileges to these users.
3  Each VCS user must run the halogin command:
$ halogin vcsusername password
Appendix D
Configuration files
This appendix includes the following topics:
About the LLT and GAB configuration files
About the AMF configuration files
About the VCS configuration files
About I/O fencing configuration files
Sample configuration files for CP server
/etc/default/llt
This file stores the start and stop environment variables for LLT:
LLT_START: Defines the startup behavior for the LLT module after a system reboot. Valid values include:
1: Indicates that LLT is enabled to start up.
0: Indicates that LLT is disabled to start up.
LLT_STOP: Defines the shutdown behavior for the LLT module during a system shutdown. Valid values include:
1: Indicates that LLT is enabled to shut down.
0: Indicates that LLT is disabled to shut down.
The installer sets the value of these variables to 1 at the end of VCS configuration. If you manually configured VCS, make sure you set the values of these environment variables to 1.

/etc/llthosts
The file llthosts is a database that contains one entry per system. This file links the LLT system ID (in the first column) with the LLT host name. This file must be identical on each node in the cluster. A mismatch of the contents of the file can cause indeterminate behavior in the cluster. For example, the file /etc/llthosts contains the entries that resemble:
0 galaxy
1 nebula
/etc/llttab
The file llttab contains the information that is derived during installation and used by the utility lltconfig(1M). After installation, this file lists the private network links that correspond to the specific system. For example, the file /etc/llttab contains the entries that resemble the following:
For Solaris SPARC:
set-node galaxy
set-cluster 2
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -

For Solaris x64:
set-node galaxy
set-cluster 2
link e1000g1 /dev/e1000g:1 - ether - -
link e1000g2 /dev/e1000g:2 - ether - -
The first line identifies the system. The second line identifies the cluster (that is, the cluster ID you entered during installation). The next two lines begin with the link command. These lines identify the two network cards that the LLT protocol uses. If you configured a low priority link under LLT, the file also includes a "link-lowpri" line. Refer to the llttab(4) manual page for details about how the LLT configuration may be modified. The manual page describes the ordering of the directives in the llttab file.
Table D-2 lists the GAB configuration files and the information that these files contain.
The installer sets the value of these variables to 1 at the end of VCS configuration. If you manually configured VCS, make sure you set the values of these environment variables to 1.

/etc/gabtab
After you install VCS, the file /etc/gabtab contains a gabconfig(1) command that configures the GAB driver for use. The file /etc/gabtab contains a line that resembles:
/sbin/gabconfig -c -nN
The -c option configures the driver for use. The -nN specifies that the cluster is not formed until at least N nodes are ready to form the cluster. Symantec recommends that you set N to be the total number of nodes in the cluster.
Note: Symantec does not recommend the use of the -c -x option for
/sbin/gabconfig. Using -c -x can lead to a split-brain condition.
/etc/amftab
After you install VCS, the file /etc/amftab contains an amfconfig(1) command that configures the AMF driver for use. The AMF init script uses this /etc/amftab file to configure the AMF driver. The /etc/amftab file contains the following line by default:
/opt/VRTSamf/bin/amfconfig -c
main.cf
The installer creates the VCS configuration file in the /etc/VRTSvcs/conf/config folder by default during the VCS configuration. The main.cf file contains the minimum information that defines the cluster and its nodes.
See Sample main.cf file for VCS clusters on page 459.
See Sample main.cf file for global clusters on page 460.

types.cf
The file types.cf, which is listed in the include statement in the main.cf file, defines the VCS bundled types for VCS resources. The file types.cf is also located in the folder /etc/VRTSvcs/conf/config. Additional files similar to types.cf may be present if agents have been added, such as OracleTypes.cf.

/etc/default/vcs
This file stores the start and stop environment variables for the VCS engine:
VCS_START: Defines the startup behavior for the VCS engine after a system reboot. Valid values include:
1 - Indicates that the VCS engine is enabled to start up.
0 - Indicates that the VCS engine is disabled to start up.
VCS_STOP: Defines the shutdown behavior for the VCS engine during a system shutdown. Valid values include:
1 - Indicates that the VCS engine is enabled to shut down.
0 - Indicates that the VCS engine is disabled to shut down.
The installer sets the value of these variables to 1 at the end of VCS configuration. If you manually configured VCS, make sure you set the values of these environment variables to 1.
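As a sketch, after the installer completes, /etc/default/vcs typically contains entries of the following form (the values shown assume an installer-configured system):

VCS_START=1
VCS_STOP=1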
Note the following information about the VCS configuration file after installing and configuring VCS:
The cluster definition includes the cluster information that you provided during the configuration. This definition includes the cluster name, cluster address, and the names of users and administrators of the cluster. Notice that the cluster has an attribute UserNames. The installvcs program creates a user "admin" whose password is encrypted; the word "password" is the default password.
If you set up the optional I/O fencing feature for VCS, then the UseFence = SCSI3 attribute is present.
If you configured the cluster in secure mode, the main.cf includes the "SecureClus = 1" cluster attribute.
The installvcs program creates the ClusterService service group if you configured the virtual IP, SMTP, SNMP, or global cluster options. The service group also has the following characteristics:
The group includes the IP and NIC resources.
The service group also includes the notifier resource configuration, which is based on your input to installvcs program prompts about notification.
The installvcs program also creates a resource dependency tree.
If you set up global clusters, the ClusterService service group contains an Application resource, wac (wide-area connector). This resource's attributes contain definitions for controlling the cluster in a global cluster environment. Refer to the Veritas Cluster Server Administrator's Guide for information about managing VCS global clusters.
Refer to the Veritas Cluster Server Administrator's Guide to review the configuration concepts, and descriptions of main.cf and types.cf files for Solaris systems.
cluster vcs02 (
    SecureClus = 1
    )
system sysA (
    )
system sysB (
    )
system sysC (
    )
group ClusterService (
    SystemList = { sysA = 0, sysB = 1, sysC = 2 }
    AutoStartList = { sysA, sysB, sysC }
    OnlineRetryLimit = 3
    OnlineRetryInterval = 120
    )
NIC csgnic (
    Device = hme0
    NetworkHosts = { "10.182.13.1" }
    )
NotifierMngr ntfr (
    SnmpConsoles = { "jupiter" = SevereError }
    SmtpServer = "smtp.example.com"
    SmtpRecipients = { "[email protected]" = SevereError }
    )
    OnlineRetryLimit = 3
    OnlineRetryInterval = 120
    )
Application wac (
    StartProgram = "/opt/VRTSvcs/bin/wacstart"
    StopProgram = "/opt/VRTSvcs/bin/wacstop"
    MonitorProcesses = { "/opt/VRTSvcs/bin/wac" }
    RestartLimit = 3
    )
IP gcoip (
    Device = hme0
    Address = "10.182.13.50"
    NetMask = "255.255.240.0"
    )
NIC csgnic (
    Device = hme0
    NetworkHosts = { "10.182.13.1" }
    )
NotifierMngr ntfr (
    SnmpConsoles = { "jupiter" = SevereError }
    SmtpServer = "smtp.example.com"
    SmtpRecipients = { "[email protected]" = SevereError }
    )

gcoip requires csgnic
ntfr requires csgnic
wac requires gcoip

// resource dependency tree
//
//  group ClusterService
//  {
//  NotifierMngr ntfr
//      {
//      NIC csgnic
//      }
//  Application wac
//      {
//      IP gcoip
//          {
//          NIC csgnic
//          }
//      }
//  }
/etc/default/vxfen
This file stores the start and stop environment variables for I/O fencing:
VXFEN_START: Defines the startup behavior for the I/O fencing module after a system reboot. Valid values include:
1 - Indicates that I/O fencing is enabled to start up.
0 - Indicates that I/O fencing is disabled to start up.
VXFEN_STOP: Defines the shutdown behavior for the I/O fencing module during a system shutdown. Valid values include:
1 - Indicates that I/O fencing is enabled to shut down.
0 - Indicates that I/O fencing is disabled to shut down.
The installer sets the value of these variables to 1 at the end of VCS configuration. If you manually configured VCS, you must make sure to set the values of these environment variables to 1. /etc/vxfendg This file includes the coordinator disk group information. This file is not applicable for server-based fencing.
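For illustration, /etc/vxfendg contains only the name of the coordinator disk group, so its contents resemble the following sketch; the disk group name matches the vxfencoorddg examples used elsewhere in this guide:

# cat /etc/vxfendg
vxfencoorddg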
/etc/vxfenmode
This file contains the following parameters:
vxfen_mode
customized - For server-based fencing.
disabled - To run the I/O fencing driver but not do any fencing operations.
vxfen_mechanism
This parameter is applicable only for server-based fencing. Set the value as cps.

scsi3_disk_policy
dmp - Configure the vxfen module to use DMP devices. The disk policy is dmp by default. If you use iSCSI devices, you must set the disk policy as dmp.
raw - Configure the vxfen module to use the underlying raw character devices.
Note: You must use the same SCSI-3 disk policy on all the nodes.
security
This parameter is applicable only for server-based fencing.
1 - Indicates that communication with the CP server is in secure mode. This setting is the default.
0 - Indicates that communication with the CP server is in non-secure mode.

List of coordination points
This list is required only for server-based fencing configuration. Coordination points in server-based fencing can include coordinator disks, CP servers, or a mix of both. If you use coordinator disks, you must create a coordinator disk group with the coordinator disk names. Refer to the sample file /etc/vxfen.d/vxfenmode_cps for more information on how to specify the coordination points and multiple IP addresses for each CP server.

single_cp
This parameter is applicable for server-based fencing that uses a single highly available CP server as its coordination point. It is also applicable when you use a coordinator disk group with a single disk.

autoseed_gab_timeout
This parameter enables GAB automatic seeding of the cluster even when some cluster nodes are unavailable. This feature requires that I/O fencing is enabled.
0 - Turns the GAB auto-seed feature on. Any value greater than 0 indicates the number of seconds that GAB must delay before it automatically seeds the cluster.
-1 - Turns the GAB auto-seed feature off. This setting is the default.
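A minimal sketch of a server-based /etc/vxfenmode that uses one CP server and a coordinator disk group follows; the CP server address, port, and disk group name are assumptions for illustration:

vxfen_mode=customized
vxfen_mechanism=cps
security=1
cps1=[10.209.80.197]:14250
vxfendg=vxfencoorddg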
When I/O fencing starts, the vxfen startup script creates this /etc/vxfentab file on each node. The startup script uses the contents of the /etc/vxfendg and /etc/vxfenmode files. Any time a system is rebooted, the fencing driver reinitializes the vxfentab file with the current list of all the coordinator points.
Note: The /etc/vxfentab file is a generated file; do not modify this file.
For disk-based I/O fencing, the /etc/vxfentab file on each node contains a list of all paths to each coordinator disk. An example of the /etc/vxfentab file in a disk-based fencing configuration on one node resembles the following:
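For illustration only (the device paths below are hypothetical and depend on your coordinator disks and disk policy):

/dev/rdsk/c1t1d0s2
/dev/rdsk/c2t1d0s2
/dev/rdsk/c3t1d0s2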
For server-based fencing, the /etc/vxfentab file also includes the security settings information. For server-based fencing with single CP server, the /etc/vxfentab file also includes the single_cp settings information.
The main.cf file for a CP server that is hosted on a single node: See Sample main.cf file for CP server hosted on a single node that runs VCS on page 465. The main.cf file for a CP server that is hosted on an SFHA cluster: See Sample main.cf file for CP server hosted on a two-node SFHA cluster on page 467.
Note: The CP server supports Internet Protocol version 4 or version 6 (IPv4 or IPv6 addresses) when communicating with VCS clusters (application clusters). The example main.cf files use IPv4 addresses.
Sample main.cf file for CP server hosted on a single node that runs VCS
The following is an example of a single CP server node main.cf. For this CP server single node main.cf, note the following values:
include "types.cf"
include "/opt/VRTScps/bin/Quorum/QuorumTypes.cf"

// cluster name: cps1
// CP server: mycps1

cluster cps1 (
    UserNames = { admin = bMNfMHmJNiNNlVNhMK, haris = fopKojNvpHouNn,
        "mycps1.symantecexample.com@root@vx" = aj,
        "[email protected]" = hq }
    Administrators = { admin, haris,
        "mycps1.symantecexample.com@root@vx",
        "[email protected]" }
    SecureClus = 1
    HacliUserLevel = COMMANDROOT
    )
system mycps1 (
    )
group CPSSG (
    SystemList = { mycps1 = 0 }
    AutoStartList = { mycps1 }
    )
IP cpsvip1 (
    Critical = 0
    Device @mycps1 = hme0
    Address = "10.209.3.1"
    NetMask = "255.255.252.0"
    )
IP cpsvip2 (
    Critical = 0
    Device @mycps1 = qfe:0
    Address = "10.209.3.2"
    NetMask = "255.255.252.0"
    )
NIC cpsnic1 (
    Critical = 0
    Device @mycps1 = hme0
    PingOptimize = 0
    NetworkHosts @mycps1 = { "10.209.3.10" }
    )
NIC cpsnic2 (
    Critical = 0
    Device @mycps1 = qfe:0
    PingOptimize = 0
    )
Process vxcpserv (
    PathName = "/opt/VRTScps/bin/vxcpserv"
    ConfInterval = 30
    RestartLimit = 3
    )
Quorum quorum (
    QuorumResources = { cpsvip1, cpsvip2 }
    )

cpsvip1 requires cpsnic1
cpsvip2 requires cpsnic2
vxcpserv requires quorum
Sample main.cf file for CP server hosted on a two-node SFHA cluster

The following is an example of a main.cf file for a CP server that is hosted on a two-node SFHA cluster:
cluster cps1 (
    UserNames = { admin = ajkCjeJgkFkkIskEjh,
        "mycps1.symantecexample.com@root@vx" = JK,
        "mycps2.symantecexample.com@root@vx" = dl }
    Administrators = { admin,
        "mycps1.symantecexample.com@root@vx",
        "mycps2.symantecexample.com@root@vx" }
    SecureClus = 1
    )
system mycps1 (
    )
system mycps2 (
    )
group CPSSG (
    SystemList = { mycps1 = 0, mycps2 = 1 }
    AutoStartList = { mycps1, mycps2 }
    )
DiskGroup cpsdg (
    DiskGroup = cps_dg
    )
IP cpsvip1 (
    Critical = 0
    Device @mycps1 = hme0
    Device @mycps2 = hme0
    Address = "10.209.81.88"
    NetMask = "255.255.252.0"
    )
IP cpsvip2 (
    Critical = 0
    Device @mycps1 = qfe:0
    Device @mycps2 = qfe:0
    Address = "10.209.81.89"
    NetMask = "255.255.252.0"
    )
Mount cpsmount (
    MountPoint = "/etc/VRTScps/db"
    BlockDevice = "/dev/vx/dsk/cps_dg/cps_volume"
    FSType = vxfs
    FsckOpt = "-y"
    )
NIC cpsnic1 (
    Critical = 0
    Device @mycps1 = hme0
    Device @mycps2 = hme0
    PingOptimize = 0
    NetworkHosts @mycps1 = { "10.209.81.10" }
    )
NIC cpsnic2 (
    Critical = 0
    Device @mycps1 = qfe:0
    Device @mycps2 = qfe:0
    PingOptimize = 0
    )
Process vxcpserv (
    PathName = "/opt/VRTScps/bin/vxcpserv"
    )
Quorum quorum (
    QuorumResources = { cpsvip1, cpsvip2 }
    )
Volume cpsvol (
    Volume = cps_volume
    DiskGroup = cps_dg
    )

cpsmount requires cpsvol
cpsvip1 requires cpsnic1
cpsvip2 requires cpsnic2
cpsvol requires cpsdg
vxcpserv requires cpsmount
vxcpserv requires quorum
// resource dependency tree
//
//  group CPSSG
//  {
//  IP cpsvip1
//      {
//      NIC cpsnic1
//      }
//  IP cpsvip2
//      {
//      NIC cpsnic2
//      }
//  Process vxcpserv
//      {
//      Quorum quorum
//      Mount cpsmount
//          {
//          Volume cpsvol
//              {
//              DiskGroup cpsdg
//              }
//          }
//      }
//  }
Appendix
Installing VCS on a single node
This appendix includes the following topics:
About installing VCS on a single node
Creating a single-node cluster using the installer program
Creating a single-node cluster manually
Setting the path variable for a manual single node installation
Installing VCS software manually on a single node
Configuring VCS
Verifying single-node operation
Creating a single-node cluster using the installer program
Install the VCS software on the system using the installer.
See Starting the installer for the single node cluster on page 472.
To prepare the single node cluster to join a larger cluster To prepare the single node cluster to be a stand-alone single node cluster
When you prepare it to join a larger cluster, enable it with LLT and GAB. For a stand-alone cluster, you do not need to enable LLT and GAB. For more information about LLT and GAB: See About LLT and GAB on page 25.
Enter a single system name. While you configure, the installer asks if you want to enable LLT and GAB:
If you plan to run VCS on a single node without any need for
adding cluster node online, you have an option to proceed
without starting GAB and LLT.
Starting GAB and LLT is recommended.
Do you want to start GAB and LLT? [y,n,q,?] (y)
Answer n if you want to use the single node cluster as a stand-alone cluster. Answer y if you plan to incorporate the single node cluster into a multi-node cluster in the future. Continue with the installation.
Install the VCS software manually and add a license key.
Remove any LLT or GAB configuration files and rename LLT and GAB startup files. A single-node cluster does not require the node-to-node communication service, LLT, or the membership communication service, GAB. One way to set the configuration files aside is shown in the sketch after this list.
Create and modify the VCS configuration files.
Start VCS and verify single-node operation.
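As a sketch, you could set the LLT and GAB configuration files aside as follows; the .bak suffix is arbitrary, and any method of removing or renaming the files works:

# mv /etc/llttab /etc/llttab.bak
# mv /etc/llthosts /etc/llthosts.bak
# mv /etc/gabtab /etc/gabtab.bak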
See Configuring VCS on page 474. See Verifying single-node operation on page 474.
Installing VCS software manually on a single node
See Installing VCS software manually on page 209. See Adding a license key for a manual installation on page 213.
Configuring VCS
You now need to configure VCS. See Configuring VCS manually on page 228.
Bring up VCS manually as a single-node cluster using hastart with the -onenode option:
# hastart -onenode
Note: You can also use the SMF command to start VCS as a single-node cluster.
# svcadm enable system/vcs-onenode
Verify that the had and hashadow daemons are running in single-node mode:
# ps -ef | grep had
root 285   1  0 14:49:31 ?  0:02 /opt/VRTSvcs/bin/had -onenode
root 288   1  0 14:49:33 ?  0:00 /opt/VRTSvcs/bin/hashadow
Appendix
Configuring LLT over UDP
This appendix includes the following topics:
Using the UDP layer for LLT
Manually configuring LLT over UDP using IPv4
Manually configuring LLT over UDP using IPv6
LLT over UDP sample /etc/llttab
Use LLT over UDP in the following cases:
LLT must be used over WANs.
Hardware, such as blade servers, does not support LLT over Ethernet.
LLT over UDP is slower than LLT over Ethernet. Use LLT over UDP only when the hardware configuration makes it necessary.
Make sure that the LLT private links are on separate subnets. Set the broadcast address in /etc/llttab explicitly depending on the subnet for each link.
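For illustration, link lines that place the two LLT links on separate subnets and set each subnet broadcast address explicitly might resemble the following sketch; the addresses are assumptions:

link link1 /dev/udp - udp 50000 - 192.168.9.1 192.168.9.255
link link2 /dev/udp - udp 50001 - 192.168.10.1 192.168.10.255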
Manually configuring LLT over UDP using IPv4
Make sure that each NIC has an IP address that is configured before configuring LLT.
Make sure the IP addresses in the /etc/llttab files are consistent with the IP addresses of the network interfaces.
Make sure that each link has a unique not well-known UDP port. See Selecting UDP ports on page 478.
Set the broadcast address correctly for direct-attached (non-routed) links. See Sample configuration: direct-attached links on page 480.
For the links that cross an IP router, disable broadcast features and specify the IP address of each link manually in the /etc/llttab file. See Sample configuration: links crossing IP routers on page 482.
Display the content of the /etc/llttab file on the first node galaxy:
galaxy # cat /etc/llttab
set-node galaxy
set-cluster 1
link link1 /dev/udp - udp 50000
link link2 /dev/udp - udp 50001
Verify the subnet mask using the ifconfig command to ensure that the two links are on separate subnets.
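For illustration, such a check might resemble the following sketch; the interface name and addresses are assumptions and the output is abbreviated:

# ifconfig qfe0
qfe0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500
        inet 192.168.9.1 netmask ffffff00 broadcast 192.168.9.255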
Display the content of the /etc/llttab file on the second node nebula:
nebula # cat /etc/llttab
set-node nebula
set-cluster 1
link link1 /dev/udp - udp 50000
link link2 /dev/udp - udp 50001
Verify the subnet mask using the ifconfig command to ensure that the two links are on separate subnets.
See Sample configuration: direct-attached links on page 480. See Sample configuration: links crossing IP routers on page 482.
Table F-1 describes the fields of the link command that are shown in the /etc/llttab file examples. Note that some of the fields differ from the command for standard LLT links.

Table F-1   Field description for link command in /etc/llttab

tag-name        A unique string that is used as a tag by LLT; for example link1, link2, and so on.
device          The device path of the UDP protocol; for example /dev/udp.
node-range      Nodes using the link. "-" indicates all cluster nodes are to be configured for this link.
link-type       Type of link; must be "udp" for LLT over UDP.
udp-port        Unique UDP port in the range of 49152-65535 for the link.
MTU             "-" is the default, which has a value of 8192. The value may be increased or decreased depending on the configuration. Use the lltstat -l command to display the current value.
IP address      IP address of the link on the local node.
bcast-address   For clusters with enabled broadcasts, specify the value of the subnet broadcast address. "-" is the default for clusters spanning routers.
Selecting UDP ports
Use available ports in the private range 49152 to 65535. Do not use the following ports:
Ports from the range of well-known ports, 0 to 1023
Ports from the range of registered ports, 1024 to 49151
To check which ports are defined as defaults for a node, examine the file /etc/services. You should also use the netstat command to list the UDP ports currently in use. For example:
# netstat -a | more

UDP
   Local Address        Remote Address      State
-------------------- -------------------- -------
      *.sunrpc                              Idle
      *.*                                   Unbound
      *.32771                               Idle
      *.32776                               Idle
      *.32777                               Idle
      *.name                                Idle
      *.biff                                Idle
      *.talk                                Idle
      *.32779                               Idle
      .
      .
      .
      *.55098                               Idle
      *.syslog                              Idle
      *.58702                               Idle
      *.*                                   Unbound
Look in the UDP section of the output; the UDP ports that are listed under Local Address are already in use. If a port is listed in the /etc/services file, its associated name is displayed rather than the port number in the output.
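For illustration, one quick way to check whether a port that you plan to use is already defined in /etc/services (the port number is an example):

# grep 50000 /etc/services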
For example, an /etc/llttab file that sets the IP address and subnet broadcast address for each link resembles the following:
# cat /etc/llttab
set-node nodexyz
set-cluster 100
link link1 /dev/udp - udp 50000 - 192.168.30.1 192.168.30.255
link link2 /dev/udp - udp 50001 - 192.168.31.1 192.168.31.255
Sample configuration: direct-attached links
Figure F-1  A typical configuration of direct-attached links that use LLT over UDP
(The figure shows Node0 and Node1 connected by two links: link1 on UDP port 50000, interfaces qfe0 and e1000g0, with 192.1.2.x addresses, and link2 on UDP port 50001, interface e1000g1, with 192.1.3.x addresses.)
The configuration that the /etc/llttab file for Node 0 represents has directly attached crossover links. It might also have the links that are connected through a hub or switch. These links do not cross routers. LLT uses broadcast requests to discover the addresses of peer nodes, so the addresses of peer nodes do not need to be specified in the /etc/llttab file using the set-addr command. For direct-attached links, you do need to set the broadcast address of the links in the /etc/llttab file. Verify that the IP addresses and broadcast addresses are set correctly by using the ifconfig -a command.
set-node Node0
set-cluster 1

#configure Links
#link tag-name device node-range link-type udp port MTU \
#     IP-address bcast-address
link link1 /dev/udp - udp 50000 - 192.1.2.1 192.1.2.255
link link2 /dev/udp - udp 50001 - 192.1.3.1 192.1.3.255
Sample configuration: links crossing IP routers
Figure F-2  A typical configuration of links crossing an IP router
(The figure shows Node0 on site A and Node1 on site B connected across IP routers: link1 on UDP port 50000, interfaces qfe0 and e1000g0, with 192.1.1.x addresses, and link2 on UDP port 50001, interface e1000g1, with 192.1.2.x addresses.)
The configuration that the following /etc/llttab file represents for Node 1 has links crossing IP routers. Notice that IP addresses are shown for each link on each peer node. In this configuration broadcasts are disabled. Hence, the broadcast address does not need to be set in the link command of the /etc/llttab file.
set-node Node1
set-cluster 1

link link1 /dev/udp - udp 50000 - 192.1.3.1 -
link link2 /dev/udp - udp 50001 - 192.1.4.1 -

#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 0 link1 192.1.1.1
set-addr 0 link2 192.1.2.1
set-addr 2 link1 192.1.5.2
set-addr 2 link2 192.1.6.2
set-addr 3 link1 192.1.7.3
set-addr 3 link2 192.1.8.3

#disable LLT broadcasts
set-bcasthb 0
set-arp 0
Manually configuring LLT over UDP using IPv6
Make sure that each NIC has an IPv6 address that is configured before configuring LLT.
Make sure the IPv6 addresses in the /etc/llttab files are consistent with the IPv6 addresses of the network interfaces.
Make sure that each link has a unique not well-known UDP port. See Selecting UDP ports on page 486.
For the links that cross an IP router, disable multicast features and specify the IPv6 address of each link manually in the /etc/llttab file. See Sample configuration: links crossing IP routers on page 489.
See Sample configuration: direct-attached links on page 487. See Sample configuration: links crossing IP routers on page 489.
Table F-3 describes the fields of the link command that are shown in the /etc/llttab file examples. Note that some of the fields differ from the command for standard LLT links.

Table F-3   Field description for link command in /etc/llttab

tag-name        A unique string that is used as a tag by LLT; for example link1, link2, and so on.
device          The device path of the UDP protocol; for example /dev/udp6.
node-range      Nodes using the link. "-" indicates all cluster nodes are to be configured for this link.
link-type       Type of link; must be "udp6" for LLT over UDP on IPv6.
udp-port        Unique UDP port in the range of 49152-65535 for the link.
MTU             "-" is the default, which has a value of 8192. The value may be increased or decreased depending on the configuration. Use the lltstat -l command to display the current value.
IPv6 address    IPv6 address of the link on the local node.
Selecting UDP ports
Use available ports in the private range 49152 to 65535. Do not use the following ports:
Ports from the range of well-known ports, 0 to 1023
Ports from the range of registered ports, 1024 to 49151
To check which ports are defined as defaults for a node, examine the file /etc/services. You should also use the netstat command to list the UDP ports currently in use. For example:
# netstat -a | more
UDP: IPv4
   Local Address        Remote Address      State
-------------------- -------------------- ----------
      *.sunrpc                              Idle
      *.*                                   Unbound
      *.32772                               Idle
      *.*                                   Unbound
      *.32773                               Idle
      *.lockd                               Idle
      *.32777                               Idle
      *.32778                               Idle
      *.32779                               Idle
      *.32780                               Idle
      *.servicetag                          Idle
      *.syslog                              Idle
      *.16161                               Idle
      *.32789                               Idle
      *.177                                 Idle
      *.32792                               Idle
      *.32798                               Idle
      *.snmpd                               Idle
      *.32802                               Idle
      *.*                                   Unbound
      *.*                                   Unbound
      *.*                                   Unbound

UDP: IPv6
   Local Address              Remote Address            State      If
------------------------- ------------------------- ---------- -----
      *.servicetag                                    Idle
      *.177                                           Idle
Look in the UDP section of the output; the UDP ports that are listed under Local Address are already in use. If a port is listed in the /etc/services file, its associated name is displayed rather than the port number in the output.
Sample configuration: direct-attached links
Figure F-3  A typical configuration of direct-attached links that use LLT over UDP on IPv6
(The figure shows Node0 and Node1 connected through switches: link1 on UDP port 50000 with link-local IPv6 addresses such as fe80::21a:64ff:fe92:1b46 and fe80::21a:64ff:fe92:1a92, and link2 on UDP port 50001 with addresses such as fe80::21a:64ff:fe92:1b47.)
The configuration that the /etc/llttab file for Node 0 represents has directly attached crossover links. It might also have the links that are connected through a hub or switch. These links do not cross routers. LLT uses IPv6 multicast requests for peer node address discovery. So the addresses of peer nodes do not need to be specified in the /etc/llttab file using the set-addr command. Use the ifconfig -a command to verify that the IPv6 address is set correctly.
set-node Node0
set-cluster 1

#configure Links
#link tag-name device node-range link-type udp port MTU \
#     IP-address mcast-address
link link1 /dev/udp6 - udp6 50000 - fe80::21a:64ff:fe92:1b46 -
link link2 /dev/udp6 - udp6 50001 - fe80::21a:64ff:fe92:1b47 -
Sample configuration: links crossing IP routers
Figure F-4  A typical configuration of links crossing an IP router that use LLT over UDP on IPv6
(The figure shows Node0 on site A and Node1 on site B connected across IP routers: link1 on UDP port 50000 and link2 on UDP port 50001, with link-local IPv6 addresses such as fe80::21a:64ff:fe92:1a93.)
The configuration that the following /etc/llttab file represents for Node 1 has links crossing IP routers. Notice that IPv6 addresses are shown for each link on each peer node. In this configuration multicasts are disabled.
set-node Node1
set-cluster 1

link link1 /dev/udp6 - udp6 50000 - fe80::21a:64ff:fe92:1a92 -
link link2 /dev/udp6 - udp6 50001 - fe80::21a:64ff:fe92:1a93 -

#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 0 link1 fe80::21a:64ff:fe92:1b46
set-addr 0 link2 fe80::21a:64ff:fe92:1b47
set-addr 2 link1 fe80::21a:64ff:fe92:1d70
set-addr 2 link2 fe80::21a:64ff:fe92:1d71
set-addr 3 link1 fe80::209:6bff:fe1b:1c94
set-addr 3 link2 fe80::209:6bff:fe1b:1c95
Appendix
Configuring the secure shell or the remote shell for communications
The installation requires that the systems can communicate with each other to perform the following tasks:
Log on to another system over a network
Execute commands on a remote system
Copy files from one system to another
Setting up inter-system communication
The ssh shell provides strong authentication and secure communications over insecure channels. It is intended to replace rlogin, rsh, and rcp.
Configuring ssh
The procedure to configure ssh uses OpenSSH example file names and commands. Note: You can configure ssh in other ways. Regardless of how ssh is configured, complete the last step in the example to verify the configuration. To configure ssh
1   Log in as root on the source system from which you want to install the Veritas product.
2   To generate a DSA key pair on the source system, type the following:
# ssh-keygen -t dsa
Press Enter to accept the default location of /.ssh/id_dsa. System output similar to the following is displayed:
Enter passphrase (empty for no passphrase):
4   Do not enter a passphrase. Press Enter. Enter same passphrase again. Press Enter again.
5   Make sure the /.ssh directory is on all the target installation systems. If that directory is absent, create it on the target system and set the write permission to root only:
# mkdir /.ssh
# chmod go-w /
# chmod 700 /.ssh
# chmod go-rwx /.ssh
Make sure the secure file transfer program (SFTP) is enabled on all the target installation systems. To enable SFTP, the /etc/ssh/sshd_config file must contain the following two lines:
PermitRootLogin yes
Subsystem sftp /usr/lib/ssh/sftp-server
If the lines are not there, add them and restart SSH. To restart SSH on Solaris 10, type the following command:
# svcadm restart ssh
To copy the public DSA key, /.ssh/id_dsa.pub to each target system, type the following commands:
# sftp target_sys
If you run this step for the first time on a system, output similar to the following appears:
Connecting to target_sys...
The authenticity of host 'target_sys (10.182.00.00)'
can't be established. DSA key fingerprint is
fb:6f:9e:61:91:9e:44:6b:87:86:ef:68:a6:fd:87:7d.
Are you sure you want to continue connecting (yes/no)?
10  Enter the root password.
11  At the sftp prompt, type the following command:
sftp> put /.ssh/id_dsa.pub
13 To begin the ssh session on the target system, type the following command:
# ssh target_sys
15  After you log in, enter the following command to append the public key in id_dsa.pub to the authorized_keys file:
# cat /id_dsa.pub >> /.ssh/authorized_keys
16 Delete the id_dsa.pub public key file. Before you delete this public key file,
make sure to complete the following tasks:
The file is copied to the target (host) system
The file is added to the authorized keys file
To delete the id_dsa.pub public key file, type the following command:
# rm /id_dsa.pub
18  When you install from a source system that is also an installation target, add the local system id_dsa.pub key to the local /.ssh/authorized_keys file. The installation can fail if the installation source system is not authenticated.
This step is shell-specific and is valid only while the shell is active. You must execute the procedure again if you close the shell during the session.
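A sketch of the kind of shell-specific commands this note refers to, assuming the standard OpenSSH agent workflow on the source installation system:

# exec /usr/bin/ssh-agent $SHELL
# ssh-add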
20 To verify that you can connect to the target system, type the following
command:
# ssh -l root target_sys uname -a
The commands should execute on the remote system without any requests for a passphrase or password from the system.
Appendix
Troubleshooting VCS installation
This appendix includes the following topics:
What to do if you see a licensing reminder
Restarting the installer after a failed connection
Starting and stopping processes for the Veritas products
Installer cannot create UUID for the cluster
LLT startup script displays errors
The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails
Issues during fencing startup on VCS cluster nodes set up for server-based fencing
What to do if you see a licensing reminder

http://go.symantec.com/sfhakeyless for details and free download), or
- add a valid license key matching the functionality in use on this host
  using the command 'vxlicinst'
To comply with the terms of the EULA, and remove these messages, you must do one of the following within 60 days:
Install a valid license key corresponding to the functionality in use on the host. After you install the license key, you must validate the license key using the following command:
# /opt/VRTS/bin/vxkeyless
Continue with keyless licensing by managing the server or cluster with a management server. For more information about keyless licensing, see the following URL: http://go.symantec.com/sfhakeyless
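For reference, you can check which keyless licenses are currently in effect on a host with the vxkeyless utility (a sketch; use the full path if /opt/VRTSvlic/bin is not in your PATH):

# vxkeyless display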
Starting and stopping processes for the Veritas products

Use the -stop option to stop the product installation script. For example, to stop the product's processes, enter the following command:
# ./installer -stop
Use the -start option to start the product installation script. For example, to start the product's processes, enter the following command:
# ./installer -start
Installer cannot create UUID for the cluster

You may see the error message during VCS configuration, upgrade, or when you add a node to the cluster using the installer. Workaround: To start VCS, you must run the uuidconfig.pl script manually to configure the UUID on each cluster node.

To configure the cluster UUID when you create a cluster manually
On one node in the cluster, perform the following command to populate the cluster UUID on each node in the cluster.
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -configure nodeA nodeB ... nodeN
Where nodeA, nodeB, through nodeN are the names of the cluster nodes.
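If you want to confirm the result, the same script can display the UUID that is now configured; this sketch assumes the -display option is available in your release:

# /opt/VRTSvcs/bin/uuidconfig.pl -clus -display nodeA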
LLT startup script displays errors
address - 00:04:23:AC:24:2D
LLT lltconfig ERROR V-14-2-15664 LLT could not configure any link
Check the log files that get generated in the /var/svc/log directory for any errors. Recommended action: Ensure that all systems on the network have unique clusterid-nodeid pair. You can use the lltdump -f device -D command to get the list of unique clusterid-nodeid pairs connected to the network. This utility is available only for LLT-over-ethernet.
The vxfentsthdw utility fails when SCSI TEST UNIT READY command fails
While running the vxfentsthdw utility, you may see a message that resembles the following:
Issuing SCSI TEST UNIT READY to disk reserved by other node FAILED. Contact the storage provider to have the hardware configuration fixed.
The disk array does not support returning success for a SCSI TEST UNIT READY command when another host has the disk reserved using SCSI-3 persistent reservations. This happens with the Hitachi Data Systems 99XX arrays if bit 186 of the system mode option is not enabled.
Issues during fencing startup on VCS cluster nodes set up for server-based fencing
Table H-1   Fencing startup issues on VCS cluster (client cluster) nodes

Issue: cpsadm command on the VCS cluster gives connection error
Recommended action:
Ensure that the CP server is reachable from all the VCS cluster nodes.
Check that the VCS cluster nodes use the correct CP server virtual IP or virtual hostname and the correct port number. Check the /etc/vxfenmode file.
Ensure that the running CP server is using the same virtual IP/virtual hostname and port number.
Table H-1   Fencing startup issues on VCS cluster (client cluster) nodes (continued)

Issue: Authorization failure

Issue: Authentication failure
Recommended action: If you had configured secure communication between the CP server and the VCS cluster (client cluster) nodes, authentication failure can occur due to the following causes:
Symantec Product Authentication Services (AT) is not properly configured on the CP server and/or the VCS cluster.
The CP server and the VCS cluster nodes use different root brokers, and trust is not established between the authentication brokers.
Appendix
Sample VCS cluster setup diagrams for CP server-based I/O fencing
This appendix includes sample diagrams for the following configurations:
Two unique client clusters that are served by 3 CP servers: See Figure I-1 on page 506.
Client cluster that is served by highly available CP server and 2 SCSI-3 disks.
Two node campus cluster that is served by remote CP server and 2 SCSI-3 disks.
Multiple client clusters that are served by highly available CP server and 2 SCSI-3 disks.
Configuration diagrams for setting up server-based I/O fencing
In the vxfenmode file on the client nodes, vxfenmode is set to customized with vxfen mechanism set to cps.

Figure I-1  Two unique client clusters served by 3 CP servers
(The figure shows two two-node client clusters, Cluster-1 and Cluster-2, connected over a VLAN private network through Ethernet switches and GigE links to three single-node CP servers, such as mycps3.company.com. Each CP server runs vxcpserv behind its own virtual IP (VIP 1, VIP 2, VIP 3) and stores its database in /etc/VRTScps/db. The client nodes connect through NICs and HBAs and use vxfenmode=customized.)
In the vxfenmode file on the client nodes, vxfenmode is set to customized with vxfen mechanism set to cps. The two SCSI-3 disks are part of the disk group vxfencoorddg. The third coordination point is a CP server hosted on an SFHA cluster, with its own shared database and coordinator disks.

Figure I-2  Client cluster served by highly available CP server and 2 SCSI-3 disks
(The figure shows a two-node client cluster connected over a VLAN private network through Ethernet switches, and over a SAN through FC switches to two SCSI-3 LUNs, disk1 and disk2, used as two coordination points; the coordinator disk group specified in /etc/vxfenmode should have these two disks. The third coordination point is a CP server hosted on a two-node SFHA cluster, with a CPS-Primary node and a CPS-standby node such as mycps1.company.com and mycps2.company.com, each running vxcpserv behind a virtual IP, with its own data LUNs and coordinator LUNs.)
Two node campus cluster served by remote CP server and 2 SCSI-3 disks
Figure I-3 displays a configuration where a two node campus cluster is being served by one remote CP server and 2 local SCSI-3 LUNs (disks). In the vxfenmode file on the client nodes, vxfenmode is set to customized with vxfen mechanism set to cps. The two SCSI-3 disks (one from each site) are part of disk group vxfencoorddg. The third coordination point is a CP server on a single node VCS cluster.
Figure I-3  Two node campus cluster served by remote CP server and 2 SCSI-3 disks
(The figure shows cluster nodes split across SITE 1 and SITE 2, with client applications reaching the cluster over a LAN through Ethernet switches, and storage arrays reached over a SAN through FC switches connected between sites by DWDM over dark fibre. Each site contributes one coordinator LUN; the coordinator disk group specified in /etc/vxfenmode should have one SCSI-3 disk from site 1 and another from site 2. The third coordination point is a CP server hosted on a single node VCS cluster at SITE 3, for example mycps.company.com, reached at [VIP]:14250 (port no. 14250). On the client cluster: vxfenmode=customized, vxfen_mechanism=cps, cps1=[VIP]:14250, vxfendg=vxfencoorddg. Legend: private interconnects (GigE), public links (GigE), dark fiber connections, SAN 1 connections, SAN 2 connections.)
Multiple client clusters served by highly available CP server and 2 SCSI-3 disks
Figure I-4 displays a configuration where multiple client clusters are being served by one highly available CP server and 2 local SCSI-3 LUNs (disks). In the vxfenmode file on the client nodes, vxfenmode is set to customized with vxfen mechanism set to cps. The two SCSI-3 disks are part of the disk group vxfencoorddg. The third coordination point is a CP server, hosted on an SFHA cluster, with its own shared database and coordinator disks.
Figure I-4  Multiple client clusters served by highly available CP server and 2 SCSI-3 disks
(The figure shows two two-node client clusters, Cluster-1 and Cluster-2, connected over a VLAN private network through Ethernet switches and GigE links. Each client cluster also connects over a SAN through FC switches to two SCSI-3 LUNs, disk1 and disk2, that act as two coordinator disks; the coordinator disk group specified in /etc/vxfenmode should have these two disks. The third coordination point is a CP server hosted on an SFHA cluster, running vxcpserv behind a virtual IP with its own coordinator LUNs.)
Appendix
Reconciling major/minor numbers for NFS shared disks
Use the following command on all nodes exporting an NFS file system. This command displays the major and minor numbers for the block device.
# ls -lL block_device
The variable block_device refers to a partition where a file system is mounted for export by NFS. Use this command on each NFS file system. For example, type:
# ls -lL /dev/dsk/c1t1d0s2
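The expected output resembles the following sketch, which is consistent with the Node A listing shown later in this section (the date and file mode are illustrative):

crw-r-----   1 root   sys   32,1 Dec 3 11:50 /dev/dsk/c1t1d0s2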
Note that the major numbers (32) and the minor numbers (1) match, satisfactorily meeting the requirement for NFS file systems.

To reconcile the major numbers that do not match on disk partitions
Reconcile the major and minor numbers, if required. For example, if the output in the previous section resembles the following, perform the instructions beginning step 2: Output on Node A:
crw-r-----   1 root   sys   32,1 Dec 3 11:50 /dev/dsk/c1t1d0s2
Output on Node B:
crw-r-----   1 root   sys   36,1 Dec 3 11:55 /dev/dsk/c1t1d0s2
Attempt to change the major number on System B (now 36) to match that of System A (32). Use the command:
# haremajor -sd major_number
4   If the command succeeds, go to step 8.
5   If the command fails, you may see a message resembling:
Error: Preexisting major number 32 These are available numbers on this system: 128... Check /etc/name_to_major on all systems for available numbers.
Notice that the number 32 (the major number on Node A) is not available on Node B. Run the haremajor command on Node B and change it to 128,
# haremajor -sd 128
7   Run the same command on Node A. If the command fails on Node A, the output lists the available numbers. Rerun the command on both nodes, setting the major number to one available to both.
8   Reboot each system on which the command succeeds.
9   Proceed to reconcile the major numbers for your next partition.
1   In the example, the minor numbers are 1 and 3 and are reconciled by setting to 30 on each node.
2   Type the following command on both nodes using the name of the block device:
# ls -l /dev/dsk/c1t1d0s2
devices/sbus@1f,0/QLGC,isp@0,10000/sd@1,0:d,raw
The device name (in bold) includes the slash following the word devices, and continues to, but does not include, the colon.
Type the following command on both nodes to determine the instance numbers that the SCSI driver uses:
# grep sd /etc/path_to_inst | sort -n -k 2,2
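The output has the form of the following sketch, where the instance number appears in the second field; the entries shown are hypothetical but use the device path from the previous step:

"/sbus@1f,0/QLGC,isp@0,10000/sd@1,0" 1 "sd"
"/sbus@1f,0/QLGC,isp@0,10000/sd@2,0" 2 "sd"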
In the output, the instance numbers are in the second field. The instance number that is associated with the device name that matches the name for Node A displayed in step 2, is "1."
Compare instance numbers for the device in the output on each node. After you review the instance numbers, perform one of the following tasks:
If the instance number from one node is unused on the other (that is, it does not appear in the output of step 3), edit /etc/path_to_inst. You edit this file to make the second node's instance number similar to the number of the first node.
If the instance numbers are in use on both nodes, edit /etc/path_to_inst on both nodes. Change the instance number that is associated with the device name to an unused number. The number needs to be greater than the highest number that other devices use. For example, the output of step 3 shows the instance numbers that all devices use (from 0 to 29). You edit the file /etc/path_to_inst on each node and reset the instance numbers to 30.
Type the following command to reboot each node on which /etc/path_to_inst was modified:
# reboot -- -rv
To list the devices, use the ls -lL block_device command on each node:
# ls -lL /dev/vx/dsk/shareddg/vol3
Use the following command on each node exporting an NFS file system. The command displays the major numbers for vxio and vxspec that Veritas Volume Manager uses. Note that other major numbers are also displayed, but only vxio and vxspec are of concern for reconciliation:
# grep vx /etc/name_to_major
Output on Node A:
vxdmp 30
vxio 32
vxspec 33
vxfen 87
vxglm 91
Output on Node B:
vxdmp 30
vxio 36
vxspec 37
vxfen 87
vxglm 91
To change Node Bs major numbers for vxio and vxspec to match those of Node A, use the command:
# haremajor -vx major_number_vxio major_number_vxspec
If the command succeeds, proceed to step 8. If this command fails, you receive a report similar to the following:
Error: Preexisting major number 32 These are available numbers on this system: 128... Check /etc/name_to_major on all systems for available numbers.
If you receive this report, use the haremajor command on Node A to change the major number (32/33) to match that of Node B (36/37). For example, enter:
# haremajor -vx 36 37
If the command fails again, you receive a report similar to the following:
Error: Preexisting major number 36 These are available numbers on this node: 126... Check /etc/name_to_major on all systems for available numbers.
If you receive the second report, choose the larger of the two available numbers (in this example, 128). Use this number in the haremajor command to reconcile the major numbers. Type the following command on both nodes:
# haremajor -vx 128 129
8   Reboot each node on which haremajor was successful.
9   If the minor numbers match, proceed to reconcile the major and minor numbers of your next NFS block device.
10  If the block device on which the minor number does not match is a volume, consult the vxdg(1M) manual page. The manual page provides instructions on reconciling the Veritas Volume Manager minor numbers, and gives specific reference to the reminor option. Nodes where the vxio driver number has been changed require rebooting.
Appendix
Compatibility issues when installing Veritas Cluster Server with other products
This appendix includes the following topics:
Installing, uninstalling, or upgrading Storage Foundation products when other Veritas products are present
Installing, uninstalling, or upgrading Storage Foundation products when VOM is already present
Installing, uninstalling, or upgrading Storage Foundation products when NetBackup is already present
Installing, uninstalling, or upgrading Storage Foundation products when other Veritas products are present
Installing Storage Foundation when other Veritas products are installed can create compatibility issues. For example, compatibility issues can arise when you install Storage Foundation products on systems where VOM, ApplicationHA, or NetBackup is already present.
Installing, uninstalling, or upgrading Storage Foundation products when VOM is already present
If you plan to install or upgrade Storage Foundation products on systems where VOM has already been installed, be aware of the following compatibility issues:
When you install or upgrade Storage Foundation products where SFM or VOM Central Server is present, the installer skips the VRTSsfmh upgrade and leaves the SFM Central Server and Managed Host packages as is.
When uninstalling Storage Foundation products where SFM or VOM Central Server is present, the installer does not uninstall VRTSsfmh.
When you install or upgrade Storage Foundation products where SFM or VOM Managed Host is present, the installer gives warning messages that it will upgrade VRTSsfmh.
Installing, uninstalling, or upgrading Storage Foundation products when NetBackup is already present
If you plan to install or upgrade Storage Foundation on systems where NetBackup has already been installed, be aware of the following compatibility issues:
When you install or upgrade Storage Foundation products where NetBackup is present, the installer does not uninstall VRTSpbx and VRTSicsco. It does not upgrade VRTSat.
When you uninstall Storage Foundation products where NetBackup is present, the installer does not uninstall VRTSpbx, VRTSicsco, and VRTSat.
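For illustration, one way to check whether these shared packages are present on a system before you install or uninstall is the Solaris packaging command (a sketch; package names that are not installed simply produce an error for that name):

# pkginfo VRTSpbx VRTSicsco VRTSat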
Index
Symbols
/etc/llttab LLT directives 225
A
abort sequence 69 about global clusters 27 adding ClusterService group 232 users 126 adding node to a one-node cluster 377 attributes UseFence 236
B
block device partitions example file name 513 volumes example file name 513 bundled agents types.cf file 228
C
cables cross-over Ethernet 390 cluster creating a single-node cluster installer 472 manual 473 four-node configuration 24 removing a node from 401 verifying operation 351 Cluster Manager 29 installing Java Console 339 ClusterService group adding manually 232
cold start running VCS 26 commands format 67 gabconfig 227, 349 hastart 401 hastatus 351 hastop 369 hasys 351 lltconfig 453 lltstat 347 vxdisksetup (initializing disks) 137 vxlicinst 134135, 213 vxlicrep 133, 215 communication channels 25 communication disk 25 configuration files types.cf 228 configuring GAB 227 hardware 36 LLT manual 223 private network 60 rsh 63 ssh 63, 493 switches 60 configuring VCS adding users 126 event notification 127, 129 global clusters 131 required information 73 script-based installer 113 secure mode 121 starting 114 controllers private Ethernet 60 SCSI 64 coordinator disks DMP devices 32 for I/O fencing 32 setting up 234
D
data disks for I/O fencing 32 demo key 215 directives LLT 225 disk space directories 36 language pack 36 required 36 disks adding and initializing 137 coordinator 234 testing with vxfentsthdw 140 verifying node access 142 documentation accessing 337
H
hardware configuration 24 configuring network and storage 36 hastart 401 hastatus -summary command 351 hastop command 369 hasys -display command 351 hubs 60 independent 390
I
I/O fencing checking disks 140 setting up 233 shared storage 140 I/O fencing requirements non-SCSI-3 41 installer program uninstalling language packages 362 Installing VCS with the Web-based installer 163 installing JumpStart 216 language packages 87 manually 213 manual 209 post 132 required disk space 36 simulator 342 installing manually Chinese language packages 212 Japanese language packages 212 installing VCS required information 73 installvcs options 45 installvcs prompts b 45 n 45 y 45
E
eeprom parameters 60 Ethernet controllers 60, 390
F
FC-AL controllers 67 fibre channel 36 flarcreate 219 Flash archive 219 post-deployment scripts 220 functions go 69
G
GAB description 25 manual configuration 227 port membership information 349 starting 231 verifying 349 gabconfig command 227, 349 -a (verifying GAB) 349 gabtab file creating 227 verifying after installation 453
J
Japanese language packages 212
Java Console 29 installing 339 installing on UNIX 339 JumpStart installing 216
L
language packages 362 Chinese 212 disk space 36 Japanese 212 license keys adding with vxlicinst 134, 213 obtaining 54 replacing demo key 135, 215 licenses information about 133 showing information 215 links private network 453 Live Upgrade preparing 303 upgrade paths 301 upgrading Solaris on alternate boot disk 307 LLT description 25 directives 225 interconnects 70 manual configuration 223 starting 231 verifying 347 LLT directives link 225 link-lowpri 225 set-cluster 225 set-node 225 lltconfig command 453 llthosts file verifying after installation 453 lltstat command 347 llttab file verifying after installation 453
major and minor numbers checking 514, 517 shared devices 513 MANPATH variable setting 68 manual installation preparing 207 media speed 70 optimizing 70 membership information 349 mounting software disc 71
N
network partition preexisting 26 protecting against 24 Network partitions protecting against 25 network switches 60 NFS 23 NFS services shared storage 513 non-SCSI-3 fencing manual configuration 250 setting up 250 non-SCSI-3 I/O fencing requirements 41 non-SCSI3 fencing setting up 154 using installvcs program 154
O
optimizing media speed 70 overview VCS 23
P
parameters eeprom 60 PATH variable setting 68 VCS commands 346 persistent reservations SCSI-3 64 phased 269
M
MAC addresses 60 main.cf file contents after installation 459 main.cf files 464
phased upgrade 269, 271 example 270 port a membership 349 port h membership 349 port membership information 349 post-deployment scripts 220 preparing Live Upgrade 303 manual installation 207 prerequisites uninstalling 359 private network configuring 60
R
RAM installation requirement 36 removing a system from a cluster 401 requirements Ethernet controllers 36 fibre channel 36 hardware 36 RAM Ethernet controllers 36 SCSI host bus adapter 36 response files 47 rolling upgrade 297 versions 295 rsh 115, 493 configuration 63
shared storage Fibre Channel setting up 67 NFS services 513 simulator installing 342 single-node cluster adding a node to 377 single-system cluster creating 472473 SMTP email notification 127 SNMP trap notification 129 ssh 115, 493 configuration 63 configuring 493 starting configuration installvcs program 115 Veritas product installer 115 starting VCS after manual upgrade 230 storage fully shared vs. distributed 24 setting up shared fibre 67 shared 24 switches 60 Symantec Product Authentication Service 121 system communication using rsh ssh 493 system state attribute value 351
T
types.cf 228 bundled agents 228 types.cf file 228
S
script-based installer VCS configuration overview 113 SCSI driver determining instance numbers 515 SCSI host bus adapter 36 SCSI-3 persistent reservations 64 SCSI-3 persistent reservations verifying 233 seeding 26 automatic 26 manual 26 setting MANPATH variable 68 PATH variable 68
U
uninstalling prerequisites 359 uninstalling language packages 362 upgrade phased 269, 271 upgrade paths Live Upgrade 301 SPARC 260 x64 Platform Edition 260 upgrading phased 269 rolling 297 using Live Upgrade 301
V
variables MANPATH 68 PATH 68 VCS basics 23 command directory path variable 346 configuration files main.cf 457 configuring 113 coordinator disks 234 documentation 337 manually installing 209 notifications 27 replicated states on each system 24 starting 230231 VCS features 27 VCS installation verifying cluster operations 346 GAB operations 346 LLT operations 346 VCS notifications SMTP notification 27 SNMP notification 27 Veritas Operations Manager 29 Volume Manager Fibre Channel 67 vxdisksetup command 137 vxlicinst command 134, 213 vxlicrep command 133, 215
W
Web-based installer 163