Symantec™ Cluster Server 6.1 Administrator's Guide - Linux
January 2014
The software described in this book is furnished under a license agreement and may be used
only in accordance with the terms of the agreement.
Legal Notice
Copyright © 2014 Symantec Corporation. All rights reserved.
Symantec, the Symantec Logo, the Checkmark Logo, Veritas, Veritas Storage Foundation,
CommandCentral, NetBackup, Enterprise Vault, and LiveUpdate are trademarks or registered
trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other
names may be trademarks of their respective owners.
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Symantec
Corporation and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in
Commercial Computer Software or Commercial Computer Software Documentation", as
applicable, and any successor regulations, whether delivered by Symantec as on premises
or hosted services. Any use, modification, reproduction release, performance, display or
disclosure of the Licensed Software and Documentation by the U.S. Government shall be
solely in accordance with the terms of this Agreement.
Symantec Corporation
350 Ellis Street
Mountain View, CA 94043
http://www.symantec.com
Technical Support
Symantec Technical Support maintains support centers globally. Technical Support’s
primary role is to respond to specific queries about product features and functionality.
The Technical Support group also creates content for our online Knowledge Base.
The Technical Support group works collaboratively with the other functional areas
within Symantec to answer your questions in a timely fashion. For example, the
Technical Support group works with Product Engineering and Symantec Security
Response to provide alerting services and virus definition updates.
Symantec’s support offerings include the following:
■ A range of support options that give you the flexibility to select the right amount
of service for any size organization
■ Telephone and/or Web-based support that provides rapid response and
up-to-the-minute information
■ Upgrade assurance that delivers software upgrades
■ Global support purchased on a regional business hours or 24 hours a day, 7
days a week basis
■ Premium service offerings that include Account Management Services
For information about Symantec’s support offerings, you can visit our website at
the following URL:
www.symantec.com/business/support/index.jsp
All support services will be delivered in accordance with your support agreement
and the then-current enterprise technical support policy.
Customer service
Customer service information is available at the following URL:
www.symantec.com/business/support/
Customer Service is available to assist with non-technical questions, such as the
following types of issues:
■ Questions regarding product licensing or serialization
■ Product registration updates, such as address or name changes
■ General product information (features, language availability, local dealers)
■ Latest information about product updates and upgrades
■ Information about upgrade assurance and support contracts
■ Information about the Symantec Buying Programs
■ Advice about Symantec's technical support options
■ Nontechnical presales questions
■ Issues that are related to CD-ROMs or manuals
Support agreement resources
If you want to contact Symantec regarding an existing support agreement, please
contact the support agreement administration team for your region as follows:
Documentation
Product guides are available on the media in PDF format. Make sure that you are
using the current version of the documentation. The document version appears on
page 2 of each guide. The latest product documentation is available on the Symantec
website.
https://sort.symantec.com/documents
Your feedback on product documentation is important to us. Send suggestions for
improvements and reports on errors or omissions. Include the title and document
version (located on the second page), and chapter and section titles of the text on
which you are reporting. Send feedback to:
[email protected]
For information regarding the latest HOWTO articles, documentation updates, or
to ask a question regarding product documentation, visit the Storage and Clustering
Documentation forum on Symantec Connect.
https://www-secure.symantec.com/connect/storage-management/
forums/storage-and-clustering-documentation
Enabling and disabling secure mode for the cluster ... 245
Migrating from secure mode to secure mode with FIPS ... 247
Using the -wait option in scripts that use VCS commands ... 247
Running HA fire drills ... 248
About administering simulated clusters from the command line ... 249
(Figure: an application service made up of an IP address, the application, and its storage.)
Start procedure The application must have a command to start it and all resources it
may require. VCS brings up the required resources in a specific order,
then brings up the application by using the defined start procedure.
For example, to start an Oracle database, VCS must know which Oracle
utility to call, such as sqlplus. VCS must also know the Oracle user,
instance ID, Oracle home directory, and the pfile.
For example, you cannot kill all httpd processes on a Web server, because doing
so also stops other Web servers.
If VCS cannot stop an application cleanly, it may call for a more forceful
method, like a kill signal. After a forced stop, a clean-up procedure may
be required for various process-specific and application-specific items
that may be left behind. These items include shared memory segments
or semaphores.
Monitor procedure The application must have a monitor procedure that determines if the
specified application instance is healthy. The application must allow
individual monitoring of unique instances.
For example, the monitor procedure for a Web server connects to the
specified server and verifies that it serves Web pages. In a database
environment, the monitoring application can connect to the database
server and perform SQL commands to verify read and write access to
the database.
following options: linking /usr/local to a file system that is mounted from the shared
storage device, or mounting a file system from the shared device on /usr/local.
The application must also store data to disk instead of maintaining it in memory.
The takeover system must be capable of accessing all required information. This
requirement precludes the use of anything inside a single system inaccessible by
the peer. NVRAM accelerator boards and other disk caching mechanisms for
performance are acceptable, but must be done on the external array and not on
the local host.
VCS supports clusters with up to 64 nodes. You can configure applications to run
on specific nodes within the cluster.
About networking
Networking in the cluster is used for the following purposes:
■ Communications between the cluster nodes and the customer systems.
■ Communications between the cluster nodes.
See “About cluster control, communications, and membership” on page 45.
(Figure: resource dependency graph showing database, IP address, file, network, and disk group resources.)
Resource dependencies determine the order in which resources are brought online
or taken offline. For example, you must import a disk group before volumes in the
disk group start, and volumes must start before you mount file systems. Conversely,
you must unmount file systems before volumes stop, and volumes must stop before
you deport disk groups.
A parent is brought online after each child is brought online, and this continues up
the tree, until finally the application starts. Conversely, to take a managed application
offline, VCS stops resources by beginning at the top of the hierarchy. In this example,
the application stops first, followed by the database application. Next the IP address
and file systems stop concurrently. These resources do not have any resource
dependency between them, and this continues down the tree.
Child resources must be brought online before parent resources are brought online.
Parent resources must be taken offline before child resources are taken offline. If
resources do not have parent-child interdependencies, they can be brought online
or taken offline concurrently.
Categories of resources
Different types of resources require different levels of control.
Table 1-1 describes the three categories of VCS resources.
On-Only VCS starts On-Only resources, but does not stop them.
A single node can host any number of service groups, each providing a discrete
service to networked clients. If the server crashes, all service groups on that node
must be failed over elsewhere.
Service groups can be dependent on each other. For example, a managed
application might be a finance application that is dependent on a database
application. Because the managed application consists of all components that are
required to provide the service, service group dependencies create more complex
managed applications. When you use service group dependencies, the managed
application is the entire dependency tree.
See “About service group dependencies” on page 481.
If you do not use the product installer to install VCS, you must run the uuidconfig.pl
utility to configure the UUID for the cluster.
See “Configuring and unconfiguring the cluster UUID value” on page 239.
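A sketch of what this might look like, assuming the utility is installed under /opt/VRTSvcs/bin and using placeholder node names (see the referenced section for the exact options for your release):
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -configure sys1 sys2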
■ During initial node startup, to probe and determine the status of all
resources on the system.
■ After every online and offline operation.
■ Periodically, to verify that the resource remains in its correct state.
Under normal circumstances, the monitor entry point is run every
60 seconds when a resource is online. The entry point is run every
300 seconds when a resource is expected to be offline.
■ When you probe a resource using the following command:
# hares -probe res_name -sys system_name.
imf_init Initializes the agent to interface with the IMF notification module. This
function runs when the agent starts up.
imf_getnotification Gets notification about resource state changes. This function runs after
the agent initializes with the IMF notification module. This function
continuously waits for notification and takes action on the resource
upon notification.
Action Performs actions that can be completed in a short time and which are
outside the scope of traditional activities such as online and offline.
Some agents have predefined action scripts that you can run by invoking
the action function.
To see the updated information, you can invoke the info agent function
explicitly from the command line interface by running the following
command:
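One possible form, assuming the hares -refreshinfo option (the resource and system names are placeholders):
# hares -refreshinfo res_name -sys system_name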
agent framework. You can enable or disable the intelligent monitoring functionality
of the VCS agents that are IMF-aware. For a list of IMF-aware agents, see the
Symantec Cluster Server Bundled Agents Reference Guide.
See “How intelligent resource monitoring works” on page 43.
See “Enabling and disabling intelligent resource monitoring for agents manually”
on page 225.
See “Enabling and disabling IMF for agents by using script” on page 227.
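As an illustration of the manual method referenced above, intelligent monitoring is controlled through the type-level IMF attribute. The following is a sketch only, assuming the Mount type and the IMF keys Mode and MonitorFreq; follow the procedure in the referenced sections for your release:
# haconf -makerw
# hatype -modify Mount IMF -update Mode 3
# hatype -modify Mount IMF -update MonitorFreq 5
# haconf -dump -makero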
Poll-based monitoring can consume a fairly large percentage of system resources
such as CPU and memory on systems with a huge number of resources. This not
only affects the performance of running applications, but also places a limit on how
many resources an agent can monitor efficiently.
However, with IMF-based monitoring you can either eliminate poll-based monitoring
completely or reduce its frequency. For example, for process offline and online
monitoring, you can completely avoid the need for poll-based monitoring with
IMF-based monitoring enabled for processes. Similarly for vxfs mounts, you can
eliminate the poll-based monitoring with IMF monitoring enabled. Such reduction
in monitor footprint will make more system resources available for other applications
to consume.
Note: Intelligent Monitoring Framework for mounts is supported only for the VxFS,
CFS, and NFS mount types.
With IMF-enabled agents, VCS can effectively monitor a larger number of
resources.
Thus, intelligent monitoring has the following benefits over poll-based monitoring:
■ Provides faster notification of resource state changes
■ Reduces VCS system utilization due to reduced monitor function footprint
■ Enables VCS to effectively monitor a large number of resources
Consider enabling IMF for an agent in the following cases:
■ You have a large number of process resources or mount resources under VCS
control.
■ You have any of the agents that are IMF-aware.
For information about IMF-aware agents, see the following documentation:
■ See the Symantec Cluster Server Bundled Agents Reference Guide for details
on whether your bundled agent is IMF-aware.
■ See the Symantec Storage Foundation Cluster File System High Availability
Installation Guide for IMF-aware agents in CFS environments.
The architecture uses an IMF daemon (IMFD) that collects notifications from the
user space notification providers (USNPs) and passes the notifications to the AMF
driver, which in turn passes these on to the appropriate agent. IMFD starts on the
first registration with IMF by an agent that requires Open IMF.
The Open IMF architecture provides the following benefits:
■ IMF can group events of different types under the same VCS resource and is
the central notification provider for kernel space events and user space events.
■ More agents can become IMF-aware by leveraging the notifications that are
available only from user space.
■ Agents can get notifications from IMF without having to interact with USNPs.
For example, Open IMF enables the AMF driver to get notifications from vxnotify,
the notification provider for Veritas Volume Manager. The AMF driver passes these
notifications on to the DiskGroup agent. For more information on the DiskGroup
agent, see the Symantec Cluster Server Bundled Agents Reference Guide.
Agent classifications
The different kinds of agents that work with VCS include bundled agents, enterprise
agents, and custom agents.
The engine uses agents to monitor and manage resources. It collects information
about resource states from the agents on the local system and forwards it to all
cluster members.
The local engine also receives information from the other cluster members to update
its view of the cluster. HAD operates as a replicated state machine (RSM). The
engine that runs on each node has a completely synchronized view of the resource
status on each node. Each instance of HAD follows the same code path for corrective
action, as required.
The RSM is maintained through the use of a purpose-built communications package.
The communications package consists of the protocols Low Latency Transport
(LLT) and Group Membership Services and Atomic Broadcast (GAB).
See “About inter-system cluster communications” on page 312.
The hashadow process monitors HAD and restarts it when required.
■ Cluster Communications
GAB’s second function is reliable cluster communications. GAB provides
guaranteed delivery of point-to-point and broadcast messages to all nodes. The
VCS engine uses a private IOCTL (provided by GAB) to tell GAB that it is alive.
■ Heartbeat
LLT is responsible for sending and receiving heartbeat traffic over network links.
The Group Membership Services function of GAB uses this heartbeat to
determine cluster membership.
■ All VCS users are system and domain users and are configured using
fully-qualified user names. For example, administrator@vcsdomain. VCS
provides a single sign-on mechanism, so authenticated users do not need to
sign on each time to connect to a cluster.
For secure communication, VCS components acquire credentials from the
authentication broker that is configured on the local system. In VCS 6.0 and later,
a root and authentication broker is automatically deployed on each node when a
secure cluster is configured. The acquired certificate is used during authentication
and is presented to clients for the SSL handshake.
VCS and its components specify the account name and the domain in the following
format:
■ HAD Account
name = HAD
domain = VCS_SERVICES@Cluster UUID
■ CmdServer
name = CMDSERVER
domain = VCS_SERVICES@Cluster UUID
For instructions on how to set up Security Services while setting up the cluster, see
the Symantec Cluster Server installation documentation.
See “Enabling and disabling secure mode for the cluster” on page 245.
Veritas Operations Manager: A Web-based graphical user interface for monitoring and administering
the cluster.
VCS command-line interface (CLI): The VCS command-line interface provides a comprehensive set of
commands for managing and administering the cluster.
See “About administering VCS from the command line” on page 174.
Symantec High Availability Configuration wizard: In a physical or virtual environment, you can use the
Symantec High Availability Configuration wizard to configure monitoring for generic applications.
See the Symantec Cluster Server Generic Application Agent
Configuration Guide for more information.
■ Oracle
■ SAP WebAS
■ WebSphere MQ
(Figure 1-4: resource dependency graph for the NFS service group, showing the upper and lower NFSRestart, IP, Share, and DiskGroup resources.)
VCS starts the agents for DiskGroup, Mount, Share, NFS, NIC, IP, and NFSRestart
on all systems that are configured to run NFS_Group.
The resource dependencies are configured as follows:
■ The /home file system (configured as a Mount resource), requires that the disk
group (configured as a DiskGroup resource) is online before you mount.
■ The lower NFSRestart resource requires that the file system is mounted and
that the NFS daemons (NFS) are running.
■ The NFS export of the home file system (Share) requires that the lower
NFSRestart resource is up.
■ The high availability IP address, nfs_IP, requires that the file system (Share) is
shared and that the network interface (NIC) is up.
■ The upper NFSRestart resource requires that the IP address is up.
■ The NFS daemons and the disk group have no child dependencies, so they can
start in parallel.
■ The NIC resource is a persistent resource and does not require starting.
You can configure the service group to start automatically on either node in the
preceding example. It then can move or fail over to the second node on command
or automatically if the first node fails. On failover or relocation, to make the resources
offline on the first node, VCS begins at the top of the graph. When it starts them on
the second node, it begins at the bottom.
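A sketch of how the dependencies described above might appear in main.cf (the resource names are illustrative, not taken from this guide):
home_mount requires shared_dg
nfsrestart_lower requires home_mount
nfsrestart_lower requires nfs_daemons
home_share requires nfsrestart_lower
nfs_ip requires home_share
nfs_ip requires nfs_nic
nfsrestart_upper requires nfs_ip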
Chapter 2
About cluster topologies
This chapter includes the following topics:
This configuration is the simplest and most reliable. The redundant server is on
stand-by with full performance capability. If other applications are running, they
present no compatibility issues.
Most shortcomings of early N-to-1 cluster configurations are caused by the limitations
of storage architecture. Typically, it is impossible to connect more than two hosts
to a storage array without complex cabling schemes and their inherent reliability
problems, or expensive arrays with multiple controller ports.
(Figure: service groups distributed across the cluster nodes; SG = Service Group.)
If any node fails, each instance is started on a different node. This action ensures
that no single node becomes overloaded. This configuration is a logical evolution
of N + 1; it provides cluster standby capacity instead of a standby server.
N-to-N configurations require careful testing to ensure that all applications are
compatible. You must specify a list of systems on which a service group is allowed
to run in the event of a failure.
(Figure: service groups distributed across cluster nodes at Site A and Site B; SG = Service Group.)
A campus cluster requires two independent network links for heartbeat, two storage
arrays, each providing highly available disks, and public network connectivity between
buildings on the same IP subnet. If the campus cluster setup resides on different subnets
with one for each site, then use the VCS DNS agent to handle the network changes
or issue the DNS changes manually.
See “ How VCS campus clusters work” on page 621.
You can also configure replicated data clusters without the ability to fail over locally,
but this configuration is not recommended.
See “ How VCS replicated data clusters work” on page 612.
(Figure: clients on the public network are redirected from Cluster A to Cluster B when the Oracle application group fails over; each cluster has separate storage, with data replicated between them.)
In a VCS cluster, the first system to be brought online reads the configuration file
and creates an internal (in-memory) representation of the configuration. Systems
that are brought online after the first system derive their information from systems
that are in the cluster.
You must stop the cluster if you need to modify the files manually. Changes made
by editing the configuration files take effect when the cluster is restarted. The node
where you made the changes should be the first node to be brought back online.
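A sketch of such a manual edit cycle, assuming the default configuration directory; the commands shown are standard VCS commands, but follow the procedure documented for your release:
# hastop -all -force
# vi /etc/VRTSvcs/conf/config/main.cf
# hacf -verify /etc/VRTSvcs/conf/config
# hastart
The -force option stops HAD while leaving applications running, hacf -verify checks the syntax of the edited files, and hastart is then run first on the node where you made the changes.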
include "types.cf"
Cluster definition Defines the attributes of the cluster, the cluster name and the
names of the cluster users.
cluster demo (
UserNames = { admin = cDRpdxPmHzpS }
)
System definition Lists the systems designated as part of the cluster. The
system names must match the name returned by the
command uname -a.
system Server1
system Server2
Service group definition Service group definitions in main.cf comprise the attributes
of a particular service group.
group NFS_group1 (
SystemList = { Server1=0, Server2=1 }
AutoStartList = { Server1 }
)
DiskGroup DG_shared1 (
DiskGroup = shared1
)
Service group dependency clause: To configure a service group dependency, place the keyword
requires in the service group declaration of the main.cf file.
Position the dependency clause before the resource
dependency specifications and after the resource declarations.
requires group_x
<dependency category>
<dependency location>
<dependency rigidity>
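For instance, a complete clause might look like the following sketch (the group name and the online local firm qualifiers are illustrative):
requires group group_x online local firm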
Note: Sample configurations for components of global clusters are listed separately.
See “ VCS global clusters: The building blocks” on page 540.
You can assign system priority explicitly in the SystemList attribute by assigning
numeric values to each system name. For example:
If you do not assign numeric priority values, VCS assigns a priority to the system
without a number by adding 1 to the priority of the preceding system. For example,
if the SystemList is defined as SystemList = { SystemA, SystemB = 2, SystemC },
VCS assigns the values SystemA = 0, SystemB = 2, SystemC = 3.
Note that a duplicate numeric priority value may be assigned in some situations:
Initial configuration
When VCS is installed, a basic main.cf configuration file is created with the cluster
name, systems in the cluster, and a Cluster Manager user named admin with the
password password.
The following is an example of the main.cf for cluster demo and systems SystemA
and SystemB.
include "types.cf"
cluster demo (
UserNames = { admin = cDRpdxPmHzpS }
)
system SystemA (
)
system SystemB (
)
include "applicationtypes.cf"
include "listofsystems.cf"
include "applicationgroup.cf"
If you include other .cf files in main.cf, the following considerations apply:
■ Resource type definitions must appear before the definitions of any groups that
use the resource types.
In the following example, the applicationgroup.cf file includes the service group
definition for an application. The service group includes resources whose
resource types are defined in the file applicationtypes.cf. In this situation, the
applicationtypes.cf file must appear first in the main.cf file.
For example:
include "applicationtypes.cf"
include "applicationgroup.cf"
■ If you define heartbeats outside of the main.cf file and include the heartbeat
definition file, saving the main.cf file results in the heartbeat definitions getting
added directly to the main.cf file.
type DiskGroup (
static keylist SupportedActions = {
"license.vfd", "disk.vfd", "udid.vfd",
"verifyplex.vfd", campusplex, volinuse,
checkudid, numdisks, joindg, splitdg,
getvxvminfo }
static int NumThreads = 1
static int OnlineRetryLimit = 1
static str ArgList[] = { DiskGroup,
For another example, review the following main.cf and types.cf files that represent
an IP resource:
■ The high-availability address is configured on the interface, which is defined by
the Device attribute.
■ The IP address is enclosed in double quotes because the string contains periods.
See “About attribute data types” on page 70.
■ The VCS engine passes the identical arguments to the IP agent for online,
offline, clean, and monitor. It is up to the agent to use the arguments that it
requires. All resource names must be unique in a VCS cluster.
main.cf for Linux:
IP nfs_ip1 (
Device = eth0
Address = "192.168.1.201"
NetMask = "255.255.252.0"
)
type IP (
static keylist RegList = { NetMask }
static keylist SupportedActions = { "device.vfd", "route.vfd" }
static str ArgList[] = { Device, Address, NetMask, PrefixLen,
Options, IPOptions, IPRouteOptions }
str Device
str Address
str NetMask
int PrefixLen = 1000
str Options
str IPOptions
str IPRouteOptions
)
Boolean A boolean is an integer, the possible values of which are 0 (false) and
1 (true).
Scalar A scalar has only one value. This is the default dimension.
Keylist A keylist is an unordered list of strings, and each string is unique within
the list. Use a comma (,) or a semi-colon (;) to separate values.
A set of braces ({}) after the attribute name denotes that an attribute is
an association.
For example, to associate the average time and timestamp values with
an attribute:
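One way this might look (the attribute and key names are illustrative assumptions):
str MonitorTimeStats{} = { Avg = "0", TS = "" }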
■ Type-independent
Attributes that all agents (or resource types) understand. Examples:
RestartLimit and MonitorInterval; these can be set for any resource
type.
Typically, these attributes are set for all resources of a specific type.
For example, setting MonitorInterval for the IP resource type affects
all IP resources.
■ Type-dependent
Attributes that apply to a particular resource type. These attributes
appear in the type definition file (types.cf) for the agent.
Example: The Address attribute applies only to the IP resource type.
Attributes defined in the file types.cf apply to all resources of a
particular resource type. Defining these attributes in the main.cf file
overrides the values in the types.cf file for a specific resource.
For example, if you set StartVolumes = 1 in the DiskGroup types.cf file,
it sets StartVolumes to True for all DiskGroup resources by default.
If you set the value in main.cf, it overrides the value on a
per-resource basis (see the sketch after this list).
■ Static
These attributes apply for every resource of a particular type. These
attributes are prefixed with the term static and are not included in
the resource’s argument list. You can override some static attributes
and assign them resource-specific values.
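Returning to the StartVolumes example, a sketch of a per-resource override in main.cf (the resource and disk group names are placeholders):
DiskGroup shared_dg1 (
    DiskGroup = shared_dg1
    StartVolumes = 0
)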
An example of local attributes can be found in the following resource type where
IP addresses and routing options are assigned per machine.
MultiNICA mnic (
Device@sys1 = { eth0 = "166.98.16.103", eth1 = "166.98.16.103"
}
Device@sys2 = { eth0 = "166.98.16.104", eth2 = "166.98.16.104"
}
NetMask = "255.255.255.0"
RouteOptions@sys1 = "-net 192.100.201.0 192.100.13.7"
RouteOptions@sys2 = "-net 192.100.201.1 192.100.13.8"
)
PERL5LIB Root directory for Perl executables. (applicable only for Windows)
Default: /etc/VRTSvcs
Note: If this variable is added or modified, you must reboot the system
to apply the changes.
VCS_DEBUG_LOG_TAGS Enables debug logs for the VCS engine, VCS agents, and HA commands.
You must set VCS_DEBUG_LOG_TAGS before you start HAD or before
you execute HA commands.
See “Enabling debug logs for the VCS engine” on page 674.
Default: Fully qualified host name of the remote host as defined in the
VCS_HOST environment variable or in the .vcshost file.
VCS_DOMAINTYPE The type of Security domain such as unixpwd, nt, nis, nisplus, ldap, or vx.
Default: unixpwd
VCS_DIAG Directory where VCS dumps HAD cores and FFDC data.
VCS_ENABLE_LDF Designates whether or not log data files (LDFs) are generated. If set to
1, LDFs are generated. If set to 0, they are not.
Default: /opt/VRTSvcs
Default: h
VCS_GAB_TIMEOUT_SECS Timeout in seconds for HAD to send heartbeats to GAB under normal
system load conditions.
Default: 30 seconds
VCS_GAB_PEAKLOAD_TIMEOUT_SECS Timeout in seconds for HAD to send heartbeats to GAB under peak system
load conditions.
Default: 30 seconds
Default: SYSLOG
VCS_HAD_RESTART_TIMEOUT Set this variable to designate the amount of time the hashadow process
waits (sleep time) before restarting HAD.
Default: 0
Default: /var/VRTSvcs
Note: If this variable is added or modified, you must reboot the system
to apply the changes.
Default: vcs-app
Note: Before you start the VCS engine (HAD), configure the specified
service. If a service is not specified, the VCS engine starts with port 14141.
The cluster-level attribute OpenExternalCommunicationPort determines
whether the port is open or not.
Default: /var/VRTSvcs
This directory is created in /tmp under the following conditions:
Note: The startup and shutdown of AMF, LLT, GAB, VxFEN, and VCS engine are
inter-dependent. For a clean startup or shutdown of VCS, you must either enable
or disable the startup and shutdown modes for all these modules.
In a single-node cluster, you can disable the start and stop environment variables
for LLT, GAB, and VxFEN if you have not configured these kernel modules.
Table 3-3 describes the start and stop variables for VCS.
AMF_START Startup mode for the AMF driver. By default, the AMF driver is
enabled to start up after a system reboot.
/etc/sysconfig/amf
Default: 1
AMF_STOP Shutdown mode for the AMF driver. By default, the AMF driver is
enabled to stop during a system shutdown.
/etc/sysconfig/amf
Default: 1
LLT_START Startup mode for LLT. By default, LLT is enabled to start up after a
system reboot.
/etc/sysconfig/llt
Default: 1
LLT_STOP Shutdown mode for LLT. By default, LLT is enabled to stop during
a system shutdown.
/etc/sysconfig/llt
Default: 1
GAB_START Startup mode for GAB. By default, GAB is enabled to start up after
a system reboot.
/etc/sysconfig/gab
Default: 1
GAB_STOP Shutdown mode for GAB. By default, GAB is enabled to stop during
a system shutdown.
/etc/sysconfig/gab
Default: 1
VXFEN_START Startup mode for VxFEN (I/O fencing). By default, VxFEN is enabled
to start up after a system reboot.
/etc/sysconfig/vxfen
Default: 1
VXFEN_STOP Shutdown mode for VxFEN (I/O fencing). By default, VxFEN is enabled
to stop during a system shutdown.
/etc/sysconfig/vxfen
Default: 1
VCS_START Startup mode for VCS engine. By default, VCS engine is enabled to
start up after a system reboot.
/etc/sysconfig/vcs
Default: 1
VCS_STOP Shutdown mode for VCS engine. By default, VCS engine is enabled
to stop during a system shutdown.
/etc/sysconfig/vcs
Default: 1
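For example, in a single-node cluster where LLT is not configured, its startup and shutdown could be disabled by setting both variables to 0 in the corresponding file (a sketch of the file contents):
# /etc/sysconfig/llt
LLT_START=0
LLT_STOP=0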
Section 2
Administration - Putting VCS
to work
■ User privileges for OS user groups for clusters running in secure mode
Cluster administrator: Cluster administrators are assigned full privileges. They can make
the configuration read-write, create and delete groups, set group
dependencies, add and delete systems, and add, modify, and delete
users. All group and resource operations are allowed. Users with Cluster
administrator privileges can also change other users’ privileges and
passwords.
Cluster operator Cluster operators can perform all cluster-level, group-level, and
resource-level operations, and can modify the user’s own password
and bring service groups online.
Note: Cluster operators can change their own passwords only if the
configuration is in read-write mode. Cluster administrators can change
the configuration to read-write mode.
Users with this role can be assigned group administrator privileges for
specific service groups.
Group operator Group operators can bring service groups and resources online and
take them offline. Users can also temporarily freeze or unfreeze service
groups.
Cluster guest Cluster guests have read-only access to the cluster, which means that
they can view the configuration, but cannot change it. They can modify
their own passwords only if the configuration is in read-write mode.
They cannot add or update users. Additionally, users with this privilege
can be assigned group administrator or group operator privileges for
specific service groups.
Note: By default, newly created users are assigned cluster guest
permissions.
Group guest Group guests have read-only access to the service group, which means
that they can view the configuration, but cannot change it. The group
guest role is available for clusters running in secure mode.
(Figure: VCS privilege hierarchy, showing how the Cluster Administrator, Cluster Operator, and Cluster Guest roles include the privileges of the Group Administrator, Group Operator, and Group Guest roles below them.)
For example, cluster administrator includes privileges for group administrator, which
includes privileges for group operator.
If you do not have root privileges and the external communication port for VCS is
not open, you cannot run CLI commands. If the port is open, VCS prompts for your
VCS user name and password when you run haxxx commands.
You can use the halogin command to save the authentication information so that
you do not have to enter your credentials every time you run a VCS command.
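A sketch of how this might be used (the user name and password are placeholders):
# halogin vcs_username vcs_password
After this, subsequent ha commands in the same session do not prompt for credentials.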
See “Logging on to VCS” on page 193.
See “Cluster attributes” on page 801.
Table 4-4 VCS privileges for users with multiple roles
Column headings: Situation and rule; Roles assigned in the VCS configuration; Privileges that VCS grants Tom
■ Administering resources
■ Administering systems
■ Administering clusters
■ Running commands
■ Editing attributes
■ Administering logs
■ Support of third-party accessibility tools. Note that Symantec has not tested
screen readers for languages other than English.
■ Text-only display of frequently viewed windows.
■ Log on to a cluster.
See “Logging on to a cluster and logging off” on page 121.
■ Make sure you have adequate privileges to perform cluster operations.
See “About VCS user privileges and roles” on page 82.
# xhost +
2 Configure the shell environment variable DISPLAY on the system where Cluster
Manager will be launched. For example, if you use Korn shell, type the following
command to direct the display to the system myws:
# export DISPLAY=myws:0
ForwardX11 yes
2 Log on to the remote system and start an X clock program that you can use
to test the forward connection.
xclock &
Do not set the DISPLAY variable on the client. X connections forwarded through
a secure shell use a special local display setting.
GatewayPorts yes
2 From the client system, forward a port (client_port) to port 14141 on the VCS
server.
You may not be able to set GatewayPorts in the configuration file if you use
openSSH. In this case, use the -g option in the command.
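A sketch of such a forwarding command, assuming OpenSSH and placeholder host names and local port:
$ ssh -g -L 5555:vcs_server:14141 user@vcs_server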
3 Open another window on the client system and start the Java Console.
$ /opt/VRTSvcs/bin/hagui
4 Add a cluster panel in the Cluster Monitor. When prompted, enter the name of
client system as the host and the client_port as the port. Do not enter
localhost.
/opt/VRTSvcs/bin/hagui
The command hagui will not work across firewalls unless all outgoing server
ports are open.
■ VCS Engine: 14141
■ Command server: 14150
■ Secure cluster : 14149
Icon Description
Cluster
System
Service Group
Resource Type
Resource
OFFLINE
Icon Description
PARTIAL
UP AND IN JEOPARDY
FROZEN
AUTODISABLED
UNKNOWN
ADMIN_WAIT
Table 5-2 lists the buttons from left to right as they appear on the Cluster Monitor
toolbar.
Button Description
menu on a collapsed, scrolling view of Cluster Monitor, the scrolling stops while it
accesses the menu.
■ Click Native (Windows or Motif) look & feel or Java (Metal) look & feel.
■ Click Apply.
The window is divided into three panes. The top pane includes a toolbar that enables
you to quickly perform frequently used operations. The left pane contains a
configuration tree with three tabs: Service Groups, Systems, and Resource Types.
The right pane contains a panel that displays various views relevant to the object
selected in the configuration tree.
To access Cluster Explorer
1 Log on to the cluster.
2 Click anywhere in the active Cluster Monitor panel.
or
Right-click the selected Cluster Monitor panel and click Explorer View from the
menu.
Note: Some buttons may be disabled depending on the type of cluster (local or
global) and the privileges with which you logged on to the cluster.
Button Description
Add Service Group. Displays the Add Service Group dialog box.
Online Service Group. Displays the Online Service Group dialog box.
Offline Service Group. Displays the Offline Service Group dialog box.
Show the Logs. Displays alerts and messages that the VCS engine and
VCS agents generate, as well as commands issued from the console.
Button Description
Virtual Fire Drill. Checks whether a resource can fail over to another
node in the cluster. Requires agents that support the running of virtual
fire drills.
a particular view. The console enables you to "tear off" each view to appear in a
separate window.
■ Click any object in the configuration tree to access the Status View and Properties
View.
■ Click a cluster in the configuration tree to access the Service Group view, the
System Connectivity view, and the Remote Cluster Status View (for global
clusters only).
■ Click a service group in the configuration tree to access the Resource view.
To create a tear-off view
On the View menu, click Tear Off, and click the appropriate view from the menu.
or
Right-click the object in the configuration tree, click View, and click the appropriate
view from the menu.
Status view
The Status View summarizes the state of the object selected in the configuration
tree. Use this view to monitor the overall status of a cluster, system, service group,
resource type, and resource.
For example, if a service group is selected in the configuration tree, the Status View
displays the state of the service group and its resources on member systems. It
also displays the last five critical or error logs. Point to an icon in the status table
to open a ScreenTip about the relevant VCS object.
Figure 5-5 shows the status view.
For global clusters, this view displays the state of the remote clusters. For global
groups, this view shows the status of the groups on both local and remote clusters.
To access the Status view
1 From Cluster Explorer, click an object in the configuration tree.
2 In the view panel, click the Status tab.
Properties view
The Properties View displays the attributes of VCS objects. These attributes describe
the scope and parameters of a cluster and its components.
Figure 5-6 shows the Properties view.
To view information on an attribute, click the attribute name or the icon in the Help
column of the table.
See “About VCS attributes” on page 70.
By default, this view displays key attributes of the object selected in the configuration
tree. The Properties View for a resource displays key attributes of the resource and
attributes specific to the resource types. It also displays attributes whose values
have been overridden.
See “Overriding resource type static attributes” on page 151.
To view all attributes associated with the selected VCS object, click Show all
attributes.
To access the properties view
1 From Cluster Explorer, click a VCS object in the configuration tree.
2 In the view panel, click the Properties tab.
Resource view
The Resource view displays the resources in a service group. Use the graph and
ScreenTips in this view to monitor the dependencies between resources and the
status of the service group on all or individual systems in a cluster.
Figure 5-8 shows the Resource view.
group on the system. Click the service group icon to view the resource graph on
all systems in the cluster.
To access the Resource view
1 From Cluster Explorer, click the service groups tab in the configuration tree.
2 Click a service group in the configuration tree.
3 In the view panel, click the Resources tab.
Click Link to set or disable the link mode for the Service Group and Resource views.
Note: There are alternative ways to set up dependency links without using the Link
button.
The link mode enables you to create a dependency link by clicking on the parent
icon, dragging the yellow line to the icon that will serve as the child, and then clicking
the child icon. Use the Esc key to delete the yellow dependency line connecting
the parent and child during the process of linking the two icons.
If the Link mode is not activated, click and drag an icon along a horizontal plane to
move the icon. Click Auto Arrange to reset the appearance of the graph. The view
resets the arrangement of icons after the addition or deletion of a resource, service
group, or dependency link. Changes in the Resource and Service Group views will
be maintained after the user logs off and logs on to the Java Console at a later
time.
■ To move the view to the left or right, click a distance (in pixels) from the
drop-down list box between the hand icons. Click the <- or -> hand icon to move
the view in the desired direction.
■ To shrink or enlarge the view, click a size factor from the drop-down list box
between the magnifying glass icons. Click the - or + magnifying glass icon to
modify the size of the view.
■ To view a segment of the graph, point to the box to the right of the + magnifying
glass icon. Use the red outline in this box to encompass the appropriate segment
of the graph. Click the newly outlined area to view the segment.
■ To return to the original view, click the magnifying glass icon labeled 1.
VCS monitors systems and their services over a private network. The systems
communicate via heartbeats over an additional private network, which enables them
to recognize which systems are active members of the cluster, which are joining or
leaving the cluster, and which have failed.
VCS protects against network failure by requiring that all systems be connected by
two or more communication channels. When a system is down to a single heartbeat
connection, VCS can no longer differentiate between the loss of a system and the
loss of a network connection. This situation is referred to as jeopardy.
Point to a system icon to display a ScreenTip on the links and heartbeats. If a
system in the cluster is experiencing a problem connecting to other systems, the
system icon changes its appearance to indicate the link is down. In this situation,
a jeopardy warning may appear in the ScreenTip for this system.
To access the System Connectivity view
1 From Cluster Explorer, click a cluster in the configuration tree.
2 In the view panel, click the System Connectivity tab.
This view enables you to declare a remote cluster fault as a disaster, disconnect,
or outage. Point to a table cell to view information about the VCS object.
To access the Remote Cluster Status view
1 From Cluster Explorer, click a cluster in the configuration tree.
2 In the view panel, click the Remote Cluster Status tab.
Template view
The Template View displays the service group templates available in VCS.
Templates are predefined service groups that define the resources, resource
attributes, and dependencies within the service group. Use this view to add service
groups to the cluster configuration, and copy the resources within a service group
template to existing service groups.
In this window, the left pane displays the templates available on the system to which
Cluster Manager is connected. The right pane displays the selected template’s
resource dependency graph.
Template files conform to the VCS configuration language and contain the extension
.tf. These files reside in the VCS configuration directory.
Figure 5-13 shows the Template view.
System Manager
Use System Manager to add and remove systems in a service group’s system list.
A priority number (starting with 0) is assigned to indicate the order of systems on
which the service group will start in case of a failover. If necessary, double-click
the entry in the Priority column to enter a new value. Select the Startup check box
to add the systems to the service group’s AutoStartList attribute. This enables the
service group to automatically come online on a system every time HAD is started.
User Manager
User Manager enables you to add and delete user profiles and to change user
privileges. If VCS is not running in secure mode, User Manager enables you to
change user passwords. You must be logged in as Cluster Administrator to access
User Manager.
Command Center
Command Center enables you to build and execute VCS commands; most
commands that are executed from the command line can also be executed through
this window. The left pane of the window displays a Commands tree of all VCS
operations. The right pane displays a view panel that describes the selected
command. The bottom pane displays the commands being executed.
The commands tree is organized into Configuration and Operations folders. Click
the icon to the left of the Configuration or Operations folder to view its subfolders
and command information in the right pane. Point to an entry in the commands tree
to display information about the selected command.
Figure 5-14 shows the Command Center window.
Configuration wizard
Use Configuration Wizard to create and assign service groups to systems in a
cluster.
See “Creating service groups with the configuration wizard” on page 141.
Cluster query
Use Cluster Query to run SQL-like queries from Cluster Explorer. VCS objects that
can be queried include service groups, systems, resources, and resource types.
Some queries can be customized, including searching for the system’s online group
count and specific resource attributes.
Logs
The Logs dialog box displays the log messages generated by the VCS engine, VCS
agents, and commands issued from Cluster Manager to the cluster. Use this dialog
box to monitor and take actions on alerts on faulted global clusters and failed service
group failover attempts.
Note: To ensure the time stamps for engine log messages are accurate, make sure
to set the time zone of the system running the Java Console to the same time zone
as the system running the VCS engine.
■ Click the VCS Logs tab to view the log type, time, and details of an event. Each
message presents an icon in the first column of the table to indicate the message
type. Use this window to customize the display of messages by setting filter
criteria.
■ Click the Agent Logs tab to display logs according to system, resource type,
and resource filter criteria. Use this tab to view the log type, time, and details of
an agent event.
■ Click the Command Logs tab to view the status (success or failure), time,
command ID, and details of a command. The Command Log only displays
commands issued in the current session.
■ Click the Alerts tab to view situations that may require administrative action.
Alerts are generated when a local group cannot fail over to any system in the
local cluster, a global group cannot fail over, or a cluster fault takes place. A
current alert will also appear as a pop-up window when you log on to a cluster
through the console.
To access the Logs dialog box
From Cluster Explorer, click Logs on the View menu.
or
On the Cluster Explorer toolbar, click Show the Logs.
Logging on to a cluster
This topic describes how to log on to a cluster.
You can use nis or nis+ accounts or accounts set up on the local system.
If you do not enter the name of the domain, VCS assumes the domain is
the local system.
If the user does not have root privileges on the system, VCS assigns guest
privileges to the user. To override these privileges, add the domain user to
the VCS administrators’ list.
See “Administering user profiles” on page 123.
■ The Java Console connects to the cluster using the authentication broker
and the domain type provided by the engine. To change the authentication
broker or the domain type, click Advanced.
See “About security services” on page 48.
Select a new broker and domain type, as required.
■ Click OK.
■ The Server Credentials dialog box displays the credentials of the cluster
service to which the console is connected.
To disable this dialog box from being displayed every time you connect to
the cluster, select the Do not show during startup check box.
■ Click OK to connect to the cluster.
The animated display shows various objects, such as service groups and
resources, being transferred from the server to the console.
Cluster Explorer is launched automatically upon initial logon, and the icons in
the cluster panel change color to indicate an active panel.
Adding a user
To add a user, follow these steps:
1 From Cluster Explorer, click User Manager on the File menu.
2 In the User Manager dialog box, click New User.
3 In the Add User dialog box:
4 Click Close.
Deleting a user
To delete a user, follow these steps:
Note: This module is not available if the cluster is running in secure mode.
5 Click Close.
To change a password as an operator or guest
1 From Cluster Explorer, click Change Password on the File menu.
2 In the Change Password dialog box:
■ Enter the new password.
■ Reenter the password in the Confirm Password field.
■ Click OK.
3 Click Close.
■ Select the appropriate check boxes to grant privileges to the user. To grant
Group Administrator or Group Operator privileges, proceed to the next step.
Otherwise, proceed to the last step.
■ Click Select Groups.
■ Click the groups for which you want to grant privileges to the user, then
click the right arrow to move the groups to the Selected Groups box.
■ Click OK in the Change Privileges dialog box, then click Close in the User
Manager dialog box.
3 In the Available Systems box, click the systems on which the service group
will be added.
4 Click the right arrow to move the selected systems to the Systems for Service
Group box. The priority number (starting with 0) is automatically assigned to
indicate the order of systems on which the service group will start in case of a
failover. If necessary, double-click the entry in the Priority column to enter a
new value.
Select the Startup check box to add the systems to the service group’s
AutoStartList attribute. This enables the service group to automatically come
online on a system every time HAD is started.
5 Click the appropriate service group type. A failover service group runs on only
one system at a time; a parallel service group runs concurrently on multiple
systems.
6 To add a new service group based on a template, click Templates... Otherwise,
proceed to step 9.
7 Click the appropriate template name.
8 Click OK.
9 Click Apply.
3 Use System Manager to add the service group to systems in the cluster.
See “System Manager” on page 115.
Note: You cannot delete service groups with dependencies. To delete a linked
service group, you must first delete the link.
■ Select the No Preonline check box to bring the service group online without
invoking the preonline trigger.
■ Click Show Command in the bottom left corner to view the command
associated with the service group. Click Hide Command to close the view
of the command.
■ Click OK.
3 Select the persistent check box if necessary. The persistent option maintains
the frozen state after a reboot if you save this change to the configuration.
4 Click Apply.
Note: The flush operation does not halt the resource operations (such as online,
offline, migrate, and clean) that are running. If a running operation succeeds after
a flush command was fired, the resource state might change depending on the
operation.
5 Click Yes.
To delete a service group dependency from Command Center
1 In the Command Center configuration tree, expand Commands >
Configuration > Dependencies > Unlink Service Groups.
2 Click the parent service group in the Service Groups box. After selecting
the parent group, the corresponding child groups are displayed in the Child
Service Groups box.
3 Click the child service group.
4 Click Apply.
2 Click the right arrow to move the available system to the Systems for Service
Group table.
3 Select the Startup check box to add the systems to the service group’s
AutoStartList attribute. This enables the service group to automatically come
online on a system every time HAD is started.
4 The priority number (starting with 0) is assigned to indicate the order of systems
on which the service group will start in case of a failover. If necessary,
double-click the entry in the Priority column to enter a new value.
5 Click OK.
To remove a system from the service group’s system list
1 In the System Manager dialog box, click the system in the Systems for Service
Group table.
2 Click the left arrow to move the system to the Available Systems box.
3 Click OK.
Note: VCS also provides wizards to create service groups for applications and NFS
shares. See the chapter "Configuring applications and resources in VCS" for more
information about these wizards.
4 Click Next again to configure the service group with a template and proceed
to 7. Click Finish to add an empty service group to the selected cluster systems
and configure it at a later time.
5 Click the template on which to base the new service group. The Templates
box lists the templates available on the system to which Cluster Manager is
connected. The resource dependency graph of the templates, the number of
resources, and the resource types are also displayed. Click Next.
6 If a window notifies you that the name of the service group or resource within
the service group is already in use, proceed to 9.
7 Click Next to apply all of the new names listed in the table to resolve the name
clash.
or
Modify the clashing names by entering text in the field next to the Apply button,
clicking the location of the text for each name from the Correction drop-down
list box, clicking Apply, and clicking Next.
8 Click Next to create the service group. A progress indicator displays the status.
9 After the service group is successfully created, click Next to edit attributes
using the wizard. Click Finish to edit attributes at a later time using Cluster
Explorer.
10 Review the attributes associated with the resources of the service group. If
necessary, proceed to 11 to modify the default values of the attributes.
Otherwise, proceed to 12 to accept the default values and complete the
configuration.
11 Modify the values of the attributes (if necessary).
12 Click Finish.
Administering resources
Use the Java Console to administer resources in the cluster. Use the console to
add and delete, bring online and take offline, probe, enable and disable, clear, and
link and unlink resources. You can also import resource types to the configuration.
Adding a resource
The Java Console provides several ways to add a resource to a service group. Use
Cluster Explorer or Command Center to perform this task.
To add a resource from Cluster Explorer
1 In the Service Groups tab of the Cluster Explorer configuration tree, click a
service group to which the resource will be added.
2 On the Edit menu, click Add, and click Resource.
or
Click Add Resource in the Cluster Explorer toolbar.
3 Enter the details of the resource:
■ Enter the name of the resource.
■ Click the resource type.
■ Edit resource attributes according to your configuration. The Java Console
also enables you to edit attributes after adding the resource.
■ Select the Critical and Enabled check boxes, if applicable. The Critical
option is selected by default.
A critical resource indicates the service group is faulted when the resource,
or any resource it depends on, faults. An enabled resource indicates agents
monitor the resource; you must specify the values of mandatory attributes
before enabling a resource. If a resource is created dynamically while VCS
is running, you must enable the resource before VCS monitors it. VCS will
not bring a disabled resource or its children online, even if the children
are enabled.
■ Click Show Command in the bottom left corner to view the command
associated with the resource. Click Hide Command to close the view of
the command.
■ Click OK.
4 Click Copy, and click Self from the menu to copy the resource. Click Copy,
and click Self and Child Nodes from the menu to copy the resource with its
dependent resources.
5 In the Service Groups tab of the Cluster Explorer configuration tree, click the
service group to which to add the resources.
6 In the Cluster Explorer view panel, click the Resources tab.
7 Right-click the Resource view panel and click Paste from the menu. After the
resources are added to the service group, edit the attributes to configure the
resources.
Note: The RemoteGroup agent represents the state of a failover service group;
the agent is not supported with parallel service groups.
■ Click Next.
■ Choose the OnOff option to monitor the remote service group, bring
the remote group online, and take it offline from the local cluster.
■ Click Next.
6 Review the text in the dialog box and click Finish to add the RemoteGroup
resource to the specified service group in the local cluster.
7 Create dependencies between the RemoteGroup resource and the existing
resources of the service group.
See “Linking resources” on page 154.
Deleting a resource
This topic describes how to delete a resource.
To delete a resource from Cluster Explorer
1 In the Service Groups tab of the configuration tree, right-click the resource.
or
Click a service group in the configuration tree, click the Resources tab, and
right-click the resource icon in the view panel.
2 Click Delete from the menu.
3 Click Yes.
To delete a resource from Command Center
1 In the Command Center configuration tree, expand Commands >
Configuration > Cluster Objects > Delete Resource.
2 Click the resource.
3 Click Apply.
To take child resources offline from Command Center while ignoring the state of
the parent resource
1 In the Command Center configuration tree, expand Commands > Operations
> Controls > OffProp Resource.
2 Click the resource.
3 Click the system on which to take the resource, and the child resources, offline.
4 Select the ignoreparent check box.
5 Click Apply.
Probing a resource
This topic describes how to probe a resource to check that it is configured. For
example, you might probe a resource to check if it is ready to be brought online.
To probe a resource from Cluster Explorer
1 In the Service Groups tab of the configuration tree, right-click the resource.
2 Click Probe, and click the appropriate system from the menu.
To probe a resource from Command Center
1 In the Command Center configuration tree, expand Commands > Operations
> Controls > Probe Resource.
2 Click the resource.
3 Click the system on which to probe the resource.
4 Click Apply.
4 Click OK.
The selected attributes appear in the Overridden Attributes table in the
Properties view for the resource.
5 To modify the default value of an overridden attribute, click the icon in the Edit
column of the attribute.
To restore default settings to a type’s static attribute
1 Right-click the resource in the Service Groups tab of the configuration tree or
in the Resources tab of the view panel.
2 Click Remove Attribute Overrides.
3 Select the overridden attributes to be restored to their default settings.
4 Click OK.
Clearing a resource
Clear a resource to remove a fault and make the resource available to go online.
A resource fault can occur in a variety of situations, such as a power failure or a
faulty configuration.
To clear a resource from Cluster Explorer
1 In the Service Groups tab of the configuration tree, right-click the resource.
2 Click Clear Fault, and click the system from the menu. Click Auto instead of
a specific system to clear the fault on all systems where the fault occurred.
To clear a resource from Command Center
1 In the Command Center configuration tree, expand Commands > Operations
> Availability > Clear Resource.
2 Click the resource. To clear the fault on all systems listed in the Systems box,
proceed to step 5. To clear the fault on a specific system, proceed to step 3.
3 Select the Per System check box.
Linking resources
Use Cluster Explorer or Command Center to link resources in a service group.
To link resources from Cluster Explorer
1 In the configuration tree, click the Service Groups tab.
2 Click the service group to which the resources belong.
3 In the view panel, click the Resources tab. This opens the resource
dependency graph.
To link a parent resource with a child resource, do the following:
■ Click Link...
■ Click the parent resource.
■ Move the mouse towards the child resource. The yellow line "snaps" to the
child resource. If necessary, press Esc to delete the line between the parent
and the pointer before it snaps to the child.
■ Click the child resource.
■ In the Confirmation dialog box, click Yes.
or
Right-click the parent resource, and click Link from the menu. In the Link
Resources dialog box, click the resource that will serve as the child. Click
OK.
■ Click OK.
3 Click the parent resource in the Service Group Resources box. After selecting
the parent resource, the potential resources that can serve as child resources
are displayed in the Child Resources box.
Unlinking resources
Use Cluster Explorer or Command Center to unlink resources in a service group.
To unlink resources from Cluster Explorer
1 From the configuration tree, click the Service Groups tab.
2 Click the service group to which the resources belong.
3 In the view panel, click the Resources tab.
4 In the Resources View, right-click the link between the resources.
3 Click Close.
Administering systems
Use the Java Console to administer systems in the cluster. Use the console to add,
delete, freeze, and unfreeze systems.
Adding a system
Cluster Explorer and Command Center enable you to add a system to the cluster.
A system must have an entry in the llttab configuration file before it can be added
to the cluster.
To add a system from Cluster Explorer
1 On the Edit menu, click Add, and click System.
or
Click Add System on the Cluster Explorer toolbar.
2 Enter the name of the system.
3 Click Show Command in the bottom left corner to view the command
associated with the system. Click Hide Command to close the view of the
command.
4 Click OK.
Deleting a system
This topic describes how to delete a system.
To delete a system from Command Center
1 In the Command Center configuration tree, expand Commands >
Configuration > Cluster Objects > Delete System.
2 Click the system.
3 Click Apply.
Freezing a system
Freeze a system to prevent service groups from coming online on the system.
To freeze a system from Cluster Explorer
1 Click the Systems tab of the configuration tree.
2 In the configuration tree, right-click the system, click Freeze, and click
Temporary or Persistent from the menu. The persistent option maintains the
frozen state after a reboot if the user saves this change to the configuration.
To freeze a system from Command Center
1 In the Command Center configuration tree, expand Commands > Operations
> Availability > Freeze System.
2 Click the system.
3 If necessary, select the persistent and evacuate check boxes. The evacuate
option moves all service groups to a different system before the freeze operation
takes place. The persistent option maintains the frozen state after a reboot if
the user saves this change to the configuration.
4 Click Apply.
Unfreezing a system
Unfreeze a frozen system to enable service groups to come online on the system.
To unfreeze a system from Cluster Explorer
1 Click the Systems tab of the configuration tree.
2 In the configuration tree, right-click the system and click Unfreeze.
To unfreeze a system from Command Center
1 In the Command Center configuration tree, expand Commands > Operations
> Availability > Unfreeze System.
2 Click the system.
3 Click Apply.
Administering clusters
Use the Java Console to specify the clusters you want to view from the console,
and to modify the VCS configuration. The configuration describes the parameters
of the entire cluster. Use Cluster Explorer or Command Center to open, save, and
"save and close" a configuration.
Running commands
Use Command Center to run commands on a cluster.
Commands are organized within the Command Center as "Configuration" commands
and "Operation" commands.
To run a command from Command Center
1 From Command Center, click the command from the command tree. If
necessary, expand the tree to view the command.
2 In the corresponding command interface, click the VCS objects and appropriate
options (if necessary).
3 Click Apply.
Editing attributes
Use the Java Console to edit attributes of VCS objects. By default, the Java Console
displays key attributes and type specific attributes. To view all attributes associated
with an object, click Show all attributes.
To edit an attribute from Cluster Explorer
1 From the Cluster Explorer configuration tree, click the object whose attributes
you want to edit.
2 In the view panel, click the Properties tab. If the attribute does not appear in
the Properties View, click Show all attributes.
3 In the Properties or Attributes View, click the icon in the Edit column of the
Key Attributes or Type Specific Attributes table. In the Attributes View, click
the icon in the Edit column of the attribute.
4 In the Edit Attribute dialog box, enter the changes to the attribute values as
follows:
■ To edit a scalar value:
Enter or click the value.
■ To edit a non-scalar value:
Use the + button to add an element. Use the - button to delete an element.
■ To change the attribute’s scope:
Click the Global or Per System option.
■ To change the system for a local attribute:
Click the system from the menu.
5 Click OK.
To edit an attribute from Command Center
1 In the Command Center configuration tree, expand Commands >
Configuration > Attributes > Modify vcs_object Attributes.
2 Click the VCS object from the menu.
3 In the attribute table, click the icon in the Edit column of the attribute.
4 In the Edit Attribute dialog box, enter the changes to the attribute values as
follows:
■ To edit a scalar value:
Enter or click the value.
■ To edit a non-scalar value:
Use the + button to add an element. Use the - button to delete an element.
■ To change the attribute’s scope:
Click the Global or Per System option.
■ To change the system for a local attribute:
Click the system from the menu.
5 Click OK.
but the Notifier resource is configured under another group, you can modify the
attributes of the existing Notifier resource and system list for that group. If the
ClusterService group is configured but the Notifier resource is not configured, the
Notifier resource will be created and added to the ClusterService group.
To set up event notification by using the Notifier wizard
1 From Cluster Explorer, click Notifier Wizard... on the Tools menu.
or
On the Cluster Explorer toolbar, click Launch Notifier Resource Configuration
Wizard.
2 Click Next.
3 In the Service Group Configuration for Notifier dialog box, do the following:
■ Enter the name of the notifier resource to be created. For example, "ntfr".
■ Click the target systems in the Available Systems box.
■ Click the right arrow to move the systems to the Systems for Service
Group table. To remove a system from the table, click the system and click
the left arrow.
■ Select the Startup check box to add the systems to the service group's
AutoStartList attribute. This enables the service group to automatically come
online on a system every time HAD is started.
■ The priority number (starting with 0) is assigned to indicate the order of
systems on which the service group will start in case of a failover. If
necessary, double-click the entry in the Priority column to enter a new
value.
4 Click Next.
5 Choose the mode of notification that needs to be configured. Select the check
boxes to configure SNMP and/or SMTP (if applicable).
7 Click Next.
9 Click Next.
11 Click Next.
12 Click the Bring the Notifier Resource Online check box, if desired.
13 Click Next.
14 Click Finish.
Administering logs
The Java Console enables you to customize the log display of messages that the
engine generates. In the Logs dialog box, you can set filter criteria to search and
view messages, and monitor and resolve alert messages.
To view the VCS Log pop-up, select View and Logs from the drop-down menu or
click Show the Logs from the toolbar.
To browse the logs for detailed views of each log message, double-click the event’s
description. Use the arrows in the VCS Log details pop-up window to navigate
backward and forward through the message list.
3 Click OK.
Monitoring alerts
The Java Console sends automatic alerts that require administrative action and
appear on the Alerts tab of the Logs dialog box. Use this tab to take action on the
alert or delete the alert.
4 Click OK.
To delete an alert
1 In the Alert tab or dialog box, click the alert to delete.
2 Click Delete Alert.
3 Provide the details for this operation:
■ Enter the reason for deleting the alert.
■ Click OK.
■ Administering LLT
■ Starting VCS
■ Stopping VCS
■ Logging on to VCS
■ Administering agents
■ Administering systems
...    Used to specify that the argument can have several values. For example:
hagrp -modify group attribute value ... [-sys system]
See “About administering VCS from the command line” on page 174.
# uname -n
The entries in this file must correspond to those in the files /etc/llthosts and
/etc/llttab.
Note: VCS must be in read-write mode before you can change the configuration.
Note: Do not use the vcsencrypt utility when you enter passwords from the Java
console.
To encrypt a password
1 Run the utility from the command line.
# vcsencrypt -vcs
2 The utility prompts you to enter the password twice. Enter the password and
press Return.
Enter Password:
Enter Again:
3 The utility encrypts the password and displays the encrypted password. Use
this password to edit the VCS configuration file main.cf.
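For illustration, the encrypted string typically goes into the UserNames attribute
of the cluster definition in main.cf. The cluster name and the placeholder value
shown below are examples only:
cluster example_clus (
UserNames = { admin = <encrypted_password> }
)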
Note: Do not use the vcsencrypt utility when you enter passwords from the Java
console.
# vcsencrypt -agent
2 The utility prompts you to enter the password twice. Enter the password and
press Return.
3 The utility encrypts the password and displays the encrypted password. Use
this password to edit the VCS configuration file main.cf.
# haconf -makerw
# vcsencrypt -gensecinfo
# haconf -dump
3 Encrypt the agent password with the security key that you generated.
■ On a node where VCS is running, enter the following command:
# vcsencrypt -agent -secinfo
■ When prompted, enter a password and press Return. The utility prompts
you to enter the password twice.
Enter Password:
Enter Again:
The utility encrypts the password and displays the encrypted password.
4 Verify that VCS uses the new encryption mechanism by doing the following:
■ Verify that the SecInfo cluster attribute is added to the main.cf file with the
security key as the value of the attribute.
■ Verify that the password that you encrypted resembles the following:
SApswd=7c:a7:4d:75:78:86:07:5a:de:9d:7a:9a:8c:6e:53:c6
# haconf -makerw
# cd /opt/VRTS/bin
# ./vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX-XXX
You must update licensing information on all nodes before proceeding to the
next step.
3 Update cluster-level licensing information:
# haclus -updatelic
# vxkeyless displayall
Administering LLT
You can use the LLT commands such as lltdump and lltconfig to administer
the LLT links. See the corresponding LLT manual pages for more information on
the commands.
See “About Low Latency Transport (LLT)” on page 47.
See “Displaying the cluster details and LLT version for LLT links” on page 182.
See “Adding and removing LLT links” on page 182.
See “Configuring aggregated interfaces under LLT” on page 184.
See “Configuring destination-based load balancing for LLT” on page 186.
Displaying the cluster details and LLT version for LLT links
You can use the lltdump command to display the LLT version for a specific LLT
link. You can also display the cluster ID and node ID details.
See the lltdump(1M) manual page for more details.
To display the cluster details and LLT version for LLT links
◆ Run the following command to display the details:
# /opt/VRTSllt/lltdump -D -f link
For example, if eth2 is connected to sys1, then the command displays a list of
all cluster IDs and node IDs present on the network link eth2.
# /opt/VRTSllt/lltdump -D -f eth2
lltdump : Configuration:
device : eth2
sap : 0xcafe
promisc sap : 0
promisc mac : 0
cidsnoop : 1
=== Listening for LLT packets ===
cid nid vmaj vmin
3456 1 5 0
3456 3 5 0
83 0 4 0
27 1 3 7
3456 2 5 0
Note: When you add or remove LLT links, you need not shut down GAB or the high
availability daemon, had. Your changes take effect immediately, but are lost on the
next restart. For changes to persist, you must also update the /etc/llttab file.
Where:
device    For link type ether, you can specify the device name as an interface
name, for example, eth0. Preferably, specify the device name as
eth-macaddress, for example, eth-xx:xx:xx:xx:xx:xx.
For link types udp and udp6, the device is the udp and udp6 device
name, respectively.
bcast Broadcast address for the link type udp and rdma
SAP SAP to bind on the network links for link type ether
For example:
■ For ether link type:
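A minimal sketch, assuming a spare Ethernet interface named eth1 and the default
SAP; confirm the exact options in the lltconfig(1M) manual page for your release:
# lltconfig -t eth1 -d eth1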
Note: If you want the addition of LLT links to be persistent after reboot, then
you must edit the /etc/llttab file with the LLT entries.
# lltconfig -u devtag
# /etc/init.d/llt stop
2 Add the following entry to the /etc/llttab file to configure an aggregated interface.
If the link command is valid for all systems, specify a dash (-).
Default is 0xcafe.
3 Restart LLT for the changes to take effect. Restart the other dependent modules
that you stopped in step 1.
# /etc/init.d/llt start
Default is 0xcafe.
# lltconfig -F linkburst:0
# /etc/sysconfig/amf
# /etc/init.d/amf start
# /etc/sysconfig/amf
# /etc/init.d/amf stop
2 If you want minimum downtime of the agents, use the following steps to unload
the AMF kernel driver:
■ Run the following command to disable the AMF driver even if agents are
still registered with it.
# amfconfig -Uof
Starting VCS
You can start VCS using one of the following approaches:
■ Using the installvcs -start command
■ Manually start VCS on each node
To start VCS
1 To start VCS using the installvcs program, perform the following steps on any
node in the cluster:
■ Log in as root user.
■ Run the following command:
# /opt/VRTS/install/installvcs -start
2 To start VCS manually, run the following commands on each node in the cluster:
■ Log in as root user.
■ Start LLT and GAB. Start I/O fencing if you have configured it. Skip this
step if you want to start VCS on a single-node cluster.
Optionally, you can start AMF if you want to enable intelligent monitoring.
■ Start the VCS engine:
# hastart
On a single-node cluster, start the VCS engine with:
# hastart -onenode
See “Starting the VCS engine (HAD) and related processes” on page 189.
# gabconfig -a
Make sure that port a and port h memberships exist in the output for all nodes
in the cluster. If you configured I/O fencing, port b membership must also exist.
When VCS is started, it checks the state of its local configuration file and registers
with GAB for cluster membership. If the local configuration is valid, and if no other
system is running VCS, it builds its state from the local configuration file and enters
the RUNNING state.
If the configuration on all nodes is invalid, the VCS engine waits for manual
intervention, or for VCS to be started on a system that has a valid configuration.
See “System states” on page 737.
To start the VCS engine
◆ Run the following command:
# hastart
To start the VCS engine when all systems are in the ADMIN_WAIT state
◆ Run the following command from any system in the cluster to force VCS to
use the configuration file from the system specified by the variable system:
# hasys -force system
Stopping VCS
You can stop VCS using one of the following approaches:
# /opt/VRTS/install/installvcs -stop
2 To stop VCS manually, run the following commands on each node in the cluster:
■ Log in as root user.
■ Take the VCS service groups offline and verify that the groups are offline.
# hastop -local
See “Stopping the VCS engine and related processes” on page 191.
■ Verify that the VCS engine port h is closed.
# gabconfig -a
■ Stop I/O fencing if you have configured it. Stop GAB and then LLT.
Option Description
-all Stops HAD on all systems in the cluster and takes all service groups
offline.
-local Stops HAD on the system on which you typed the command
-force Allows HAD to be stopped without taking service groups offline on the
system. The value of the EngineShutdown attribute does not influence
the behavior of the -force option.
-evacuate When combined with -local or -sys, migrates the system’s active
service groups to another system in the cluster, before the system is
stopped.
-noautodisable Ensures the service groups that can run on the node where the hastop
command was issued are not autodisabled. This option can be used
with -evacuate but not with -force.
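For example, to stop HAD on the local node without taking its service groups
offline (a common step before upgrading VCS packages), or to stop a specific node
after evacuating its groups, you might run commands such as the following; the
system name is an example:
# hastop -local -force
# hastop -sys sys1 -evacuate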
and the OnGrpCnt attributes are non-zero. VCS continues to wait for the service
groups to go offline before it shuts down.
See “Troubleshooting resources” on page 698.
About stopping VCS with options other than the -force option
When VCS is stopped by options other than -force on a system with online service
groups, the groups that run on the system are taken offline and remain offline. VCS
indicates this by setting the attribute IntentOnline to 0. Use the option -force to
enable service groups to continue being online while the VCS engine (HAD) is
brought down and restarted. The value of the IntentOnline attribute remains
unchanged after the VCS engine restarts.
Note: VCS does not consider this attribute when the hastop is issued with the
following options: -force or -local -evacuate -noautodisable.
Configure one of the following values for the attribute depending on the desired
functionality for the hastop command:
Table 6-3 shows the engine shutdown values for the attribute.
EngineShutdown Value    Description
DisableClusStop Do not process the hastop -all command; process all other hastop
commands.
PromptClusStop Prompt for user confirmation before you run the hastop -all
command; process all other hastop commands.
PromptLocal Prompt for user confirmation before you run the hastop -local
command; process all other hastop commands except hastop -sys
command.
PromptAlways Prompt for user confirmation before you run any hastop command.
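For example, to prevent an accidental cluster-wide shutdown, you might set the
attribute as follows while the configuration is writable:
# haconf -makerw
# haclus -modify EngineShutdown DisableClusStop
# haconf -dump -makero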
Logging on to VCS
VCS prompts for user name and password information when non-root users run
haxxx commands. Use the halogin command to save the authentication information
so that you do not have to enter your credentials every time you run a VCS
command. Note that you may need specific privileges to run VCS commands.
When you run the halogin command, VCS stores encrypted authentication
information in the user’s home directory. For clusters that run in secure mode, the
command also sets up a trust relationship and retrieves a certificate from an
authentication broker.
If you run the command for different hosts, VCS stores authentication information
for each host. After you run the command, VCS stores the information until you end
the session.
For clusters that run in secure mode, you also can generate credentials for VCS to
store the information for 24 hours or for eight years and thus configure VCS to not
prompt for passwords when you run VCS commands as non-root users.
2 Define the node on which the VCS commands will be run. Set the VCS_HOST
environment variable to the name of the node. To run commands in a remote
cluster, you set the variable to the virtual IP address that was configured in the
ClusterService group.
3 Log on to VCS:
# halogin vcs_username vcs_password
To end a session for a user, run the following command:
# halogin -endallsessions
After you end a session, VCS prompts you for credentials every time you run
a VCS command.
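For example, to direct commands at a remote node and save credentials for it, you
might run the following; the node name and user name are examples only:
# export VCS_HOST=node1
# halogin vcs_username vcs_password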
# hasys -state
Verifying a configuration
Use hacf to verify (check syntax of) the main.cf and the type definition file, types.cf.
VCS does not run if hacf detects errors in the configuration.
To verify a configuration
◆ Run the following command:
# hacf -verify config_directory
# haconf -makerw
Saving a configuration
When you save a configuration, VCS renames the file main.cf.autobackup to main.cf.
VCS also saves your running configuration to the file main.cf.autobackup.
If you have not configured the BackupInterval attribute, VCS saves the running
configuration.
See “Scheduling automatic backups for VCS configuration files” on page 196.
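For example, to have VCS back up the configuration every five minutes, you might
set the BackupInterval cluster attribute as follows while the configuration is
writable; the interval shown is an example:
# haconf -makerw
# haclus -modify BackupInterval 5
# haconf -dump -makero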
To save a configuration
◆ Run the following command:
# haconf -dump -makero
Specify the user name and the domain name to add a user on multiple nodes in
the cluster. This option requires multiple entries for a user, one for each node.
You cannot assign or change passwords for users when VCS is running in secure
mode.
The commands to add, modify, and delete a user must be executed only as root
or administrator and only if the VCS configuration is in read/write mode.
See “Setting the configuration to read or write” on page 197.
Note: You must add users to the VCS configuration to monitor and administer VCS
from the graphical user interface Cluster Manager.
Adding a user
Users in the category Cluster Guest cannot add users.
To add a user
1 Set the configuration to read/write mode:
# haconf -makerw
2 Add the user and assign privileges. See the example that follows.
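A minimal sketch, assuming a hypothetical user named smith who needs Operator
privileges; the utility then prompts for a password:
# hauser -add smith -priv Operator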
Modifying a user
Users in the category Cluster Guest cannot modify users.
You cannot modify a VCS user in clusters that run in secure mode.
To modify a user
1 Set the configuration to read or write mode:
# haconf -makerw
Deleting a user
You can delete a user from the VCS configuration.
To delete a user
1 Set the configuration to read or write mode:
# haconf -makerw
2 For users with Administrator and Operator access, remove their privileges:
Displaying a user
This topic describes how to display a list of users and their privileges.
To display a list of users
◆ Type the following command:
# hauser -list
# hauser -display
the VCS configuration or system states can be executed by all users: you do not
need root privileges.
The policy values can be any one of the following values: Priority, RoundRobin,
Load or BiggestAvailable.
Note: You cannot use the -forecast option when the service group state is
in transition. For example, VCS rejects the command if the service group is in
transition to an online state or to an offline state.
The -forecast option is supported only for failover service groups. In case of
offline failover service groups, VCS selects the target system based on the
service group’s failover policy.
The BiggestAvailable policy is applicable only when the service group attribute
Load is defined and cluster attribute Statistics is enabled.
The actual service group FailOverPolicy can be configured as any policy, but
the forecast is done as though FailOverPolicy is set to BiggestAvailable.
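For example, to see which system VCS would choose for a hypothetical failover
group named websg, you might run:
# hagrp -forecast websg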
Querying resources
This topic describes how to perform a query on resources.
To display a resource’s dependencies
◆ Type the following command:
# hares -dep resource
To display a list of resource types
◆ Type the following command:
# hatype -list
Querying agents
Table 6-4 lists the run-time status for the agents that the haagent -display
command displays.
Faults Indicates the number of agent faults within one hour of the time the
fault began and the time the faults began.
Querying systems
This topic describes how to perform a query on systems.
To display a list of systems in the cluster
◆ Type the following command:
# hasys -list
If you do not specify a system, the command displays attribute names and
values for all systems.
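For example, to see the attribute values for a single system (the system name is
an example), you might run:
# hasys -display sys1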
The -util option is applicable only if you set the cluster attribute Statistics to
Enabled and define at least one key in the cluster attribute HostMeters.
The command also indicates if the HostUtilization and HostAvailableForecast
values are stale.
Querying clusters
This topic describes how to perform a query on clusters.
To display the value of a specific cluster attribute
◆ Type the following command:
# haclus -display
Querying status
This topic describes how to perform a query on status of service groups in the
cluster.
Note: Run the hastatus command with the -summary option to prevent continuous
output of online state transitions. If you use the command without the option, it
repeatedly displays online state transitions until you interrupt it by pressing
CTRL+C.
To display the status of all service groups in the cluster, including resources
◆ Type the following command:
# hastatus
If you do not specify a service group, the status of all service groups appears.
The -sound option enables a bell to ring each time a resource faults.
The -time option prints the system time at which the status was received.
To display the status of service groups and resources on specific systems
◆ Type the following command:
# hastatus -sys system
To display the status of cluster faults, including faulted service groups, resources,
systems, links, and agents
◆ Type the following command:
# hastatus -summary
# hamsg -help
# hamsg -list
The option -path specifies where hamsg looks for the specified LDF. If not
specified, hamsg looks for files in the default directory:
/var/VRTSvcs/ldf
To display specific LDF data
◆ Type the following command:
-any Specifies hamsg return messages that match any of the specified
query options.
-otype Specifies hamsg return messages that match the specified object
type
-oname Specifies hamsg return messages that match the specified object
name.
-path Specifies where hamsg looks for the specified LDF. If not specified,
hamsg looks for files in the default directory /var/VRTSvcs/ldf.
Attribute=~Value (the specified value is a prefix of the attribute value; for
example, a query for State=~FAULTED returns all resources whose State attribute
begins with FAULTED.)
Multiple conditional statements can be used and imply AND logic.
You can only query attribute-value pairs that appear in the output of the command
hagrp -display.
To display the list of service groups whose values match a conditional statement
◆ Type the following command:
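A hedged example, assuming you want the groups whose Frozen attribute is set to 1:
# hagrp -list Frozen=1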
The variable service_group must be unique among all service groups defined
in the cluster.
This command initializes a service group that is ready to contain various
resources. To employ the group properly, you must populate its SystemList
attribute to define the systems on which the group may be brought online and
taken offline. (A system list is an association of names and integers that
represent priority values.)
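A minimal sketch that creates a group and populates its SystemList and
AutoStartList; the group and system names are examples only:
# hagrp -add newgroup
# hagrp -modify newgroup SystemList sys1 0 sys2 1
# hagrp -modify newgroup AutoStartList sys1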
Note that you cannot delete a service group until all of its resources are deleted.
You may also define a service group as parallel. To set the Parallel attribute
to 1, type the following command. (Note that the default for this attribute is 0,
which designates the service group as a failover group.):
# hagrp -modify service_group Parallel 1
You cannot modify this attribute if resources have already been added to the
service group.
You can modify the attributes SystemList, AutoStartList, and Parallel only by
using the command hagrp -modify. You cannot modify attributes created by
the system, such as the state of the service group.
You must take the service group offline on the system that is being modified.
When you add a system to a service group’s system list, the system must have
been previously added to the cluster. When you use the command line, you can
use the hasys -add command.
When you delete a system from a service group’s system list, the service group
must not be online on the system to be deleted.
If you attempt to change a service group’s existing system list by using hagrp
-modify without other options (such as -add or -update), the command fails.
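For example, to add a system to an existing system list with priority 2, you might
use the -add keyword; the names are examples only:
# hagrp -modify newgroup SystemList -add sys3 2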
To start a service group on a system and bring online only the resources already
online on another system
◆ Type the following command:
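The general form is along these lines (see the hagrp(1M) manual page for the exact
syntax on your release):
# hagrp -online service_group -sys system -checkpartial other_system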
If the service group does not have resources online on the other system, the
service group is brought online on the original system and the checkpartial
option is ignored.
Note that the checkpartial option is used by the Preonline trigger during
failover. When a service group that is configured with Preonline =1 fails over
to another system (system 2), the only resources brought online on system 2
are those that were previously online on system 1 prior to failover.
To bring a service group and its associated child service groups online
◆ Type one of the following commands:
■ # hagrp -online -propagate service_group -sys system
Note: See the man pages associated with the hagrp command for more information
about the -propagate option.
To take a service group offline only if all resources are probed on the system
◆ Type the following command:
To take a service group and its associated parent service groups offline
◆ Type one of the following commands:
■ # hagrp -offline -propagate service_group -sys system
Note: See the man pages associated with the hagrp command for more information
about the -propagate option.
A service group can be switched only if it is fully or partially online. The -switch
option is not supported for switching hybrid service groups across system
zones.
Switch parallel global groups across clusters by using the following command:
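The general form is along these lines (see the hagrp(1M) manual page for the exact
syntax on your release):
# hagrp -switch service_group -any -clus remote_cluster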
VCS brings the parallel service group online on all possible nodes in the remote
cluster.
A service group can be migrated only if it is fully online. The -migrate option
is supported only for failover service groups and for resource types that have
the SupportedOperations attribute set to migrate.
See “Resource type attributes” on page 750.
The service group must meet the following requirements regarding configuration:
■ A single mandatory resource that can be migrated, having the
SupportedOperations attribute set to migrate and the Operations attribute set
to OnOff
■ Other optional resources with Operations attribute set to None or OnOnly
The -migrate option is supported for the following configurations:
■ Stand alone service groups
■ Service groups having one or both of the following configurations:
■ Parallel child service groups with online local soft or online local firm
dependencies
■ Parallel or failover parent service group with online global soft or online
remote soft dependencies
The option -persistent enables the freeze to be remembered when the cluster
is rebooted.
Clearing a resource initiates the online process previously blocked while waiting
for the resource to become clear.
■ If system is specified, all faulted, non-persistent resources are cleared from
that system only.
■ If system is not specified, the service group is cleared on all systems in the
group’s SystemList in which at least one non-persistent resource has faulted.
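For example, to clear faulted resources in a group on one system or across its
entire SystemList, you might run:
# hagrp -clear service_group -sys system
# hagrp -clear service_group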
Note: The flush operation does not halt the resource operations (such as online,
offline, migrate, and clean) that are running. If a running operation succeeds after
a flush command was fired, the resource state might change depending on the
operation.
Use the command hagrp -flush to clear the internal state of VCS. The hagrp
-flush command transitions resource state from ‘waiting to go online’ to ‘not waiting’.
You must use the hagrp -flush -force command to transition resource state
from ‘waiting to go offline’ to ‘not waiting’.
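For example, the two forms might look like this; the group and system names are
examples only:
# hagrp -flush mygroup -sys sys1
# hagrp -flush -force mygroup -sys sys1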
#!/bin/ksh
PATH=/opt/VRTSvcs/bin:$PATH; export PATH
if [ $# -ne 1 ]; then
    echo "usage: $0 <system name>"
    exit 1
fi
hagrp -list |
while read grp sys junk
do
    locsys="${sys##*:}"
    case "$locsys" in
    "$1")
        hagrp -flush "$grp" -sys "$locsys"
        ;;
    esac
done
# haflush systemname
Administering agents
Under normal conditions, VCS agents are started and stopped automatically.
To start an agent
◆ Run the following command:
To stop an agent
◆ Run the following command:
The -force option stops the agent even if the resources for the agent are
online. Use the -force option when you want to upgrade an agent without taking
its resources offline.
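For example, the start and stop operations typically take the following form; the
agent and system names are examples only:
# haagent -start Mount -sys sys1
# haagent -stop Mount -force -sys sys1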
Note: The addition of resources on the command line requires several steps, and
the agent must be prevented from managing the resource until the steps are
completed. For resources defined in the configuration file, the steps are completed
before the agent is started.
Adding resources
This topic describes how to add resources to a service group or remove resources
from a service group.
To add a resource
◆ Type the following command:
# hares -add resource resource_type service_group
The resource name must be unique throughout the cluster. The resource type
must be defined in the configuration language. The resource belongs to the
group service_group.
Deleting resources
This topic describes how to delete resources from a service group.
To delete a resource
◆ Type the following command:
# hares -delete resource
VCS does not delete online resources. However, you can enable deletion of
online resources by changing the value of the DeleteOnlineResources attribute.
See “Cluster attributes” on page 801.
To delete a resource forcibly, use the -force option, which takes the resource
offline irrespective of the value of the DeleteOnlineResources attribute.
The agent managing the resource is started on a system when its Enabled
attribute is set to 1 on that system. Specifically, the VCS engine begins to
monitor the resource for faults. Agent monitoring is disabled if the Enabled
attribute is reset to 0.
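For example, to enable a resource that was added while VCS is running (the resource
name is an example), you might run:
# hares -modify webip Enabled 1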
Note that global attributes cannot be modified with the hares -local command.
Table 6-5 lists the commands to be used to localize attributes depending on their
dimension.
Note: If multiple values are specified and if one is invalid, VCS returns
an error for the invalid value, but continues to process the others. In
the following example, if sysb is part of the attribute SystemList, but
sysa is not, sysb is deleted and an error message is sent to the log
regarding sysa.
# haconf -makerw
3 If required, change the values of the MonitorFreq key and the RegisterRetryLimit
key of the IMF attribute.
See the Symantec Cluster Server Bundled Agents Reference Guide for
agent-specific recommendations to set these attributes.
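A hedged sketch of changing the keys at the resource type level; confirm the exact
syntax and the recommended values in the Bundled Agents Reference Guide:
# hatype -modify Mount IMF -update MonitorFreq 5 RegisterRetryLimit 3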
5 Make sure that the AMF kernel driver is configured on all nodes in the cluster.
# /etc/init.d/amf status
Configure the AMF driver if the command output returns that the AMF driver
is not loaded or not configured.
See “Administering the AMF kernel driver” on page 186.
6 Restart the agent. Run the following commands on each node.
# haconf -makerw
2 To disable intelligent resource monitoring for all the resources of a certain type,
run the following command:
Note: VCS provides the haimfconfig script to enable or disable the IMF functionality
for agents. You can use the script with VCS in a running or stopped state. Use the
script to enable or disable IMF for the IMF-aware bundled agents, enterprise agents,
and custom agents.
See “Enabling and disabling IMF for agents by using script” on page 227.
haimfconfig -enable
haimfconfig -disable
This command enables IMF for the specified agents. It also configures and
loads the AMF module on the system if the module is not already loaded. If
the agent is a custom agent, the command prompts you for the Mode and
MonitorFreq values if Mode value is not configured properly.
Note: The command prompts you whether you want to make the
configuration changes persistent. If you choose No, the command exits. If
you choose Yes, it enables IMF and dumps the configuration by using the
haconf -dump -makero command.
■ If VCS is not running, changes to the Mode value (for all specified agents)
and MonitorFreq value (for all specified custom agents only) need to be
made by modifying the VCS configuration files. Before the command makes
any changes to configuration files, it prompts you for a confirmation. If you
choose Yes, it modifies the VCS configuration files. IMF gets enabled for
the specified agent when VCS starts.
Example
The command enables IMF for the Mount agent and the Application agent.
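Such an invocation might look like the following; verify the option names with the
haimfconfig usage output:
# haimfconfig -enable -agent Mount Application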
To disable IMF for a set of agents
◆ Run the following command:
This command disables IMF for specified agents by changing the Mode value
to 0 for each agent and for all resources that had overridden the Mode values.
■ If VCS is running, the command changes the Mode value of the agents and
the overridden Mode values of all resources of these agents to 0.
Note: The command prompts you whether you want to make the
configuration changes persistent. If you choose No, the command exits. If
you choose Yes, it disables IMF and dumps the configuration by using the
haconf -dump -makero command.
■ If VCS is not running, any change to the Mode value needs to be made by
modifying the VCS configuration file. Before it makes any changes to
configuration files, the command prompts you for a confirmation. If you
choose Yes, it sets the Mode value to 0 in the configuration files.
Example
The command disables IMF for the Mount agent and Application agent.
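Such an invocation might look like the following; verify the option names with the
haimfconfig usage output:
# haimfconfig -disable -agent Mount Application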
This command sets the value of AMF_START to 1 in the AMF configuration file.
It also configures and loads the AMF module on the system.
To disable AMF on a system
◆ Run the following command:
This command unconfigures and unloads the AMF module on the system if
AMF is configured and loaded. It also sets the value of AMF_START to 0 in the
AMF configuration file.
Note: AMF is not directly unconfigured by this command if the agent is registered
with AMF. The script prompts you if you want to disable AMF for all agents forcefully
before it unconfigures AMF.
To view the changes made when the script disables IMF for an agent
◆ Run the following command:
haimfconfig -display
Examples:
If IMF is disabled for the Mount agent (when Mode is set to 0) and enabled for the
rest of the installed IMF-aware agents:
haimfconfig -display
#Agent STATUS
Application ENABLED
Mount DISABLED
Process ENABLED
DiskGroup ENABLED
If IMF is disabled for the Mount agent (when VCS is running, the agent is running
and is not registered with the AMF module) and enabled for the rest of the installed
IMF-aware agents:
haimfconfig -display
#Agent STATUS
Application ENABLED
Mount DISABLED
Process ENABLED
DiskGroup ENABLED
If IMF is disabled for all installed IMF-aware agents (when the AMF module is not
loaded):
haimfconfig -display
#Agent STATUS
Application DISABLED
Mount DISABLED
Process DISABLED
DiskGroup DISABLED
If IMF is partially enabled for the Mount agent (Mode is set to 3 at the type level
and to 0 at the resource level for some resources) and enabled fully for the rest
of the installed IMF-aware agents:
haimfconfig -display
#Agent STATUS
Application ENABLED
Mount ENABLED|PARTIAL
Process ENABLED
DiskGroup ENABLED
If IMF is partially enabled for the Mount agent (Mode is set to 0 at the type level
and to 3 at the resource level for some resources) and enabled fully for the rest
of the installed IMF-aware agents:
haimfconfig -display
#Agent STATUS
Application ENABLED
Mount ENABLED|PARTIAL
Process ENABLED
DiskGroup ENABLED
To unlink resources
◆ Type the following command:
# hares -unlink parent_resource child_resource
The command stops all parent resources in order before taking the specific
resource offline.
To take a resource offline and propagate the command to its children
◆ Type the following command:
# hares -offprop resource -sys system
Probing a resource
This topic describes how to probe a resource.
To prompt an agent to monitor a resource on a system
◆ Type the following command:
# hares -probe resource -sys system
Though the command may return immediately, the monitoring process may
not be completed by the time the command returns.
Clearing a resource
This topic describes how to clear a resource.
To clear a resource
◆ Type the following command to initiate a state change from RESOURCE_FAULTED
to RESOURCE_OFFLINE:
# hares -clear resource [-sys system]
Clearing a resource initiates the online process previously blocked while waiting
for the resource to become clear. If system is not specified, the fault is cleared
on each system in the service group’s SystemList attribute.
See “Clearing faulted resources in a service group” on page 216.
This command also clears the resource’s parents. Persistent resources whose
static attribute Operations is defined as None cannot be cleared with this
command and must be physically attended to, such as replacing a raw disk.
The agent then updates the status automatically.
You must delete all resources of the type before deleting the resource type.
To add or modify resource types in main.cf without shutting down VCS
◆ Type the following command:
type FileOnOff (
static str AgentClass = RT
static str AgentPriority = 10
static str ScriptClass = RT
static str ScriptPriority = 40
static str ArgList[] = { PathName }
str PathName
)
Note: For attributes AgentClass and AgentPriority, changes are effective immediately.
For ScriptClass and ScriptPriority, changes become effective for scripts fired after
the execution of the hatype command.
For example, to set the AgentPriority attribute of the FileOnOff resource type to
10, type:
# hatype -modify FileOnOff AgentPriority "10"
For example, to set the ScriptPriority attribute of the FileOnOff resource type to
40, type:
# hatype -modify FileOnOff ScriptPriority "40"
Administering systems
Administration of systems includes tasks such as modifying system attributes,
freezing or unfreezing systems, and running commands.
To modify a system’s attributes
◆ Type the following command:
# hasys -modify system attribute value
To freeze a system (prevent groups from being brought online or switched on the
system)
◆ Type the following command:
# hasys -freeze [-persistent] [-evacuate] system
-evacuate Fails over the system’s active service groups to another system
in the cluster before the freeze is enabled.
The utility configures the cluster UUID on the cluster nodes based on
whether a cluster UUID exists on any of the VCS nodes:
■ If no cluster UUID exists or if the cluster UUID is different on the cluster
nodes, then the utility does the following:
■ Generates a new cluster UUID by using the /opt/VRTSvcs/bin/osuuid utility.
■ Creates the /etc/vx/.uuids/clusuuid file where the utility stores the
cluster UUID.
■ Configures the cluster UUID on all nodes in the cluster.
■ If a cluster UUID exists and if the UUID is same on all the nodes, then the
utility retains the UUID.
Use the -force option to discard the existing cluster UUID and create new
cluster UUID.
■ If some nodes in the cluster have cluster UUID and if the UUID is the same,
then the utility configures the existing UUID on the remaining nodes.
The utility copies the cluster UUID from a system that is specified using
the -from_sys option to all the systems that are specified using the -to_sys
option.
had -version
hastart -version
2 Run one of the following commands to retrieve information about the engine
version.
had -v
hastart -v
6 Remove the entries for the node from the /etc/llthosts file on each remaining
node.
7 Change the node count entry in the /etc/gabtab file on each remaining
node.
8 Unconfigure GAB and LLT on the node leaving the cluster.
9 Remove VCS and other RPMs from the node.
10 Remove GAB and LLT configuration files from the node.
The following is a list of changes that you can make and information concerning
ports in a VCS environment:
■ Changing VCS's default port.
Add an entry for a VCS service name in the /etc/services file, for example:
vcs-app 3333/tcp
where 3333 in the example is the port number on which you want to run VCS.
When the engine starts, it listens on the port that you configured above (3333)
for the service. You need to modify the port in the /etc/services file on all the
nodes of the cluster.
■ You do not need to make changes for agents or HA commands. Agents and
HA commands use locally present UDS sockets to connect to the engine, not
TCP/IP connections.
■ You do not need to make changes for HA commands that you execute to talk
to a remotely running VCS engine (HAD), using the facilities that the VCS_HOST
environment variable provides. You do not need to change these settings
because the HA command queries the /etc/services file and connects to the
appropriate port.
■ For the Java Console GUI, you can specify the port number that you want the
GUI to connect to while logging into the GUI. You have to specify the port number
that is configured in the /etc/services file (for example 3333 above).
To change the default port
1 Stop VCS.
2 Add an entry for the service name vcs-app in the /etc/services file.
3 Modify the port in the /etc/services file on all the nodes of the cluster.
4 Restart VCS.
5 Check the port.
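One way to confirm that the engine is listening on the new port (3333 is the example
value used above):
# netstat -an | grep 3333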
Note: For the attributes EngineClass and EnginePriority, changes are effective
immediately. For ProcessClass and ProcessPriority, changes become effective only
for processes fired after the execution of the haclus command.
cluster vcs-india (
EngineClass = "RT"
EnginePriority = "20"
ProcessClass = "TS"
ProcessPriority = "40"
)
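For example, to change these values on a running cluster rather than by editing
main.cf directly, you might use haclus while the configuration is writable:
# haconf -makerw
# haclus -modify EngineClass "RT"
# haclus -modify EnginePriority "20"
# haconf -dump -makero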
# /opt/VRTS/install/installvcs -security
To enable secure mode with FIPS, start the installvcs program with the
-security -fips option.
If you already have secure mode enabled and need to move to secure mode
with FIPS, complete the steps in the following procedure.
See “Migrating from secure mode to secure mode with FIPS” on page 247.
The installer displays the directory where the logs are created.
3 Review the output as the installer verifies whether VCS configuration files exist.
The installer also verifies that VCS is running on all systems in the cluster.
4 The installer checks whether the cluster is in secure mode or non-secure mode.
If the cluster is in non-secure mode, the installer prompts whether you want to
enable secure mode.
5 Review the output as the installer modifies the VCS configuration files to enable
secure mode in the cluster, and restarts VCS.
To disable secure mode in a VCS cluster
1 Start the installvcs program with the -security option.
# /opt/VRTS/install/installvcs -security
The installer displays the directory where the logs are created.
2 Review the output as the installer proceeds with a verification.
3 The installer checks whether the cluster is in secure mode or non-secure mode.
If the cluster is in secure mode, the installer prompts whether you want to
disable secure mode.
4 Review the output as the installer modifies the VCS configuration files to disable
secure mode in the cluster, and restarts VCS.
# installvcs -security
■ /var/VRTSat
■ /var/VRTSat_lhc
Use the -sys option when the scope of the attribute is local.
See the man pages associated with these commands for more information.
The command runs the infrastructure check and verifies whether the system
<sysname> has the required infrastructure to host the resource <resname>,
should a failover require the resource to come online on the system. For the
variable <sysname>, specify the name of a system on which the resource is
offline. The variable <vfdaction> specifies the Action defined for the agent. The
"HA fire drill checks" for a resource type are defined in the SupportedActions
attribute for that resource and can be identified with the .vfd suffix.
The command runs the infrastructure check and verifies whether the system
<sysname> has the required infrastructure to host resources in the service
group <grpname> should a failover require the service group to come online
on the system. For the variable <sysname>, specify the name of a system on
which the group is offline
To fix detected errors
◆ Type the following command.
The variable <vfdaction> represents the check that reported errors for the
system <sysname>. The "HA fire drill checks" for a resource type are defined
in the SupportedActions attribute for that resource and can be identified with
the .vfd suffix.
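A hedged sketch of the check and fix commands described above, assuming the hares -action syntax with a .vfd action name; the -actionargs fix argument is an assumption:
# hares -action <resname> <vfdaction>.vfd -sys <sysname>
# hares -action <resname> <vfdaction>.vfd -actionargs fix -sys <sysname>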
Agent Description
DiskGroup Brings Veritas Volume Manager (VxVM) disk groups online and offline,
monitors them, and makes them highly available. The DiskGroup agent
supports both IMF-based monitoring and traditional poll-based
monitoring.
DiskGroupSnap Brings resources online and offline and monitors disk groups used for
fire drill testing. The DiskGroupSnap agent enables you to verify the
configuration integrity and data integrity in a Campus Cluster
environment with VxVM stretch mirroring. The service group that
contains the DiskGroupSnap agent resource has an offline local
dependency on the application’s service group. This is to ensure that
the fire drill service group and the application service group are not
online at the same site.
DiskReservation Enables you to reserve and monitor all SCSI disks or a percentage of
disks for a system. Such reservations prevent disk data corruption by
restricting other nodes from accessing and writing to the reserved disks,
giving the system exclusive access to the shared disks.
VolumeSet Brings Veritas Volume Manager (VxVM) volume sets online and offline,
and monitors them. Use the VolumeSet agent to make a volume set
highly available. VolumeSet resources depend on DiskGroup resources.
LVMLogicalVolume Brings resources online and offline, and monitors Logical Volume
Manager (LVM2) logical volumes. You can use this agent to make
logical volumes highly available and to monitor them. LVMLogicalVolume
resources depend on LVMVolumeGroup resources.
LVMVolumeGroup Brings Logical Volume Manager (LVM2) volume groups online and
offline, monitors them, and makes them highly available. No fixed
dependencies exist for the LVMVolumeGroup agent.
When you create a volume group on disks with a single path, Symantec
recommends that you use the DiskReservation agent.
Mount Brings resources online and offline, monitors file system or NFS client
mount points, and makes them highly available. The Mount agent
supports both IMF-based monitoring and traditional poll-based
monitoring.
Agent Description
DNS Updates and monitors the mapping of host names to IP addresses and
canonical names (CNAME). The DNS agent performs these tasks for
a DNS zone when it fails over nodes across subnets (a wide-area
failover). Use the DNS agent when the failover source and target nodes
are on different subnets. The DNS agent updates the name server and
allows clients to connect to the failed over instance of the application
service.
Agent Description
NFS Manages NFS daemons which process requests from NFS clients. The
NFS Agent manages the rpc.nfsd/nfsd daemon and the rpc.mountd
daemon on the NFS server. If NFSv4 support is enabled, it also
manages the rpc.idmapd/nfsmapid daemon. Additionally, the NFS Agent
also manages NFS lock and status daemons.
Share Shares, unshares, and monitors a single local resource for exporting
an NFS file system that is mounted by remote systems. Share resources
depend on NFS. In an NFS service group, the IP family of resources
depends on Share resources.
SambaServer Starts, stops, and monitors the smbd process as a daemon. You can
use the SambaServer agent to make an smbd daemon highly available
or to monitor it. The smbd daemon provides Samba share services.
The SambaServer agent, with SambaShare and NetBIOS agents, allows
a system running a UNIX or UNIX-like operating system to provide
services using the Microsoft network protocol. It has no dependent
resource.
SambaShare Adds, removes, and monitors a share by modifying the specified Samba
configuration file. You can use the SambaShare agent to make a Samba
Share highly available or to monitor it. SambaShare resources depend
on SambaServer, NetBios, and Mount resources.
NetBIOS Starts, stops, and monitors the nmbd daemon. You can use the NetBIOS
agent to make the nmbd daemon highly available or to monitor it. The
nmbd process broadcasts the NetBIOS name, or the name by which
the Samba server is known in the network. The NetBios resource
depends on the IP or the IPMultiNIC resource.
Agent Description
Apache Brings an Apache Server online, takes it offline, and monitors its
processes. Use the Apache Web server agent with other agents to
make an Apache Web server highly available. This type of resource
depends on IP and Mount resources. The Apache agent can detect
when an Apache Web server is brought down gracefully by an
administrator. When Apache is brought down gracefully, the agent does
not trigger a resource fault even though Apache is down.
Application Brings applications online, takes them offline, and monitors their status.
Use the Application agent to specify different executables for the online,
offline, and monitor routines for different programs. The executables
must exist locally on each node. You can use the Application agent to
provide high availability for applications that do not have bundled agents,
enterprise agents, or custom agents. This type of resource can depend
on IP, IPMultiNIC, and Mount resources. The Application agent supports
both IMF-based monitoring and traditional poll-based monitoring.
Process Starts, stops, and monitors a process that you specify. Use the Process
agent to make a process highly available. This type of resource can
depend on IP, IPMultiNIC, and Mount resources. The Process agent
supports both IMF-based monitoring and traditional poll-based
monitoring.
ProcessOnOnly Starts and monitors a process that you specify. Use the agent to make
a process highly available. No child dependencies exist for this resource.
Table 7-5 VCS infrastructure and support agents and their description
Agent Description
NotifierMngr Starts, stops, and monitors a notifier process, making it highly available.
The notifier process manages the reception of messages from VCS
and the delivery of those messages to SNMP consoles and SMTP
servers. The NotifierMngr resource can depend on the NIC resource.
Phantom Enables VCS to determine the state of parallel service groups that do
not include OnOff resources. No dependencies exist for the Phantom
resource.
Agent Description
ElifNone Monitors a file and checks for the file’s absence. You can use the
ElifNone agent to test service group behavior. No dependencies exist
for the ElifNone resource.
FileNone Monitors a file and checks for the file’s existence. You can use the
FileNone agent to test service group behavior. No dependencies exist
for the FileNone resource.
FileOnOff Creates, removes, and monitors files. You can use the FileOnOff agent
to test service group behavior. No dependencies exist for the FileOnOff
resource.
FileOnOnly Creates and monitors files but does not remove files. You can use the
FileOnOnly agent to test service group behavior. No dependencies
exist for the FileOnOnly resource.
About NFS
Network File System (NFS) allows network users to access shared files stored on
an NFS server. NFS lets users manipulate shared files transparently as if the files
were on a local disk.
NFS terminology
Key terms used in NFS operations include:
NFS Server The computer that makes the local file system accessible to users
on the network.
NFS Client The computer which accesses the file system that is made available
by the NFS server.
rpc.mountd A daemon that runs on NFS servers. It handles initial requests from
NFS clients. NFS clients use the mount command to make requests.
rpc.lockd/lockd A daemon that runs on NFS servers and NFS clients.
On the server side, it receives lock requests from the NFS client and
passes the requests to the kernel-based nfsd.
On the client side, it forwards the NFS lock requests from users to
the rpc.lockd/lockd on the NFS server.
rpc.idmapd/nfsmapid A userland daemon that maps the NFSv4 username and group to
the local username and group of the system. This daemon is specific
to NFSv4.
Note: You must set NFSLockFailover to 1 for the NFSRestart resource if you intend
to use NFSv4.
2 If you configure the backing store for the NFS exports using VxVM, create
DiskGroup and Mount resources for the mount point that you want to export.
If you configure the backing store for the NFS exports using LVM, configure
the LVMVolumeGroup resource and Mount resource for the mount point that
you want to export.
Refer to the Storage agents chapter in the Symantec Cluster Server Bundled
Agents Reference Guide for details.
3 Create an NFSRestart resource. Set the Lower attribute of this NFSRestart
resource to 1. Ensure that NFSRes attribute points to the NFS resource that
is on the system.
For NFS lock recovery, make sure that the NFSLockFailover attribute and the
LocksPathName attribute have appropriate values. The NFSRestart resource
depends on the Mount and NFS resources that you have configured for this
service group.
Note: With the NFSRestart resource configured, you do not need the preonline
and postoffline triggers for NFS.
4 Create a Share resource. Set the PathName to the mount point that you want
to export. In case of multiple shares, create multiple Share resources with
different values for their PathName attributes. All the Share resources
configured in the service group should have dependency on the NFSRestart
resource with a value of 1 for the Lower attribute.
5 Create an IP resource. The value of the Address attribute for this IP resource
is used to mount the NFS exports on the client systems. Make the IP resource
depend on the Share resources that are configured in the service group.
6 Create a DNS resource if you want NFS lock recovery. The DNS resource
depends on the IP resource. Refer to the sample configuration on how to
configure the DNS resource.
7 Create an NFSRestart resource. Set the NFSRes attribute to the NFS resource
(nfs) that is configured on the system. Set the Lower attribute of this NFSRestart
resource to 0. Make the NFSRestart resource depend on the IP resource or
the DNS resource (if you want to use NFS lock recovery.)
Note: Ensure that all attributes except the Lower attribute are identical for the two
NFSRestart resources.
Note: You must set NFSLockFailover to 1 for the NFSRestart resource if you intend
to use NFSv4.
5 You must create a Phantom resource in this service group to display the correct
state of the service group.
Creating the NFS exports service group for a multiple NFS environment
This service group contains the Share and IP resources for exports. The value for
the PathName attribute for the Share resource must be on shared storage and it
must be visible to all nodes in the cluster.
To create the NFS exports service group
1 Create an NFS Proxy resource inside the service group. This Proxy resource
points to the actual NFS resource that is configured on the system.
2 If you configure the backing store for the NFS exports with VxVM, create
DiskGroup and Mount resources for the mount point that you want to export.
If the backing store for the NFS exports is configured using LVM, configure the
LVMVolumeGroup resource and Mount resources for the mount points that
you want to export.
Refer to the Storage agents chapter in the Symantec Cluster Server Bundled Agents
Reference Guide for details.
3 Create an NFSRestart resource. Set the Lower attribute of this NFSRestart
resource to 1. Ensure that NFSRes attribute points to the NFS resource
configured on the system.
For NFS lock recovery, make sure that the NFSLockFailover attribute and the
LocksPathName attribute have appropriate values. The NFSRestart resource
depends on the Mount resources that you have configured for this service
group. With the NFSRestart resource configured, you do not need the preonline
and postoffline triggers for NFS.
4 Create a Share resource. Set the PathName attribute to the mount point that
you want to export. In case of multiple shares, create multiple Share resources
with different values for their PathName attributes. All the Share resources that
are configured in the service group need to have dependency on the
NFSRestart resource that has a value of 1 for its Lower attribute.
5 Create an IP resource. The value of the Address attribute for this IP resource
is used to mount the NFS exports on the client systems. Make the IP resource
depend on the Share resources that are configured in the service group.
6 Create a DNS resource if you want NFS lock recovery. The DNS resource
depends on the IP resource. Refer to the sample configuration on how to
configure the DNS resource.
7 Create an NFSRestart resource. Set the NFSRes attribute to the NFS resource
(nfs) that is configured on the system. Set the value of the Lower attribute for
this NFSRestart resource to 0. Make the NFSRestart resource depend on the
IP resource or the DNS resource to use NFS lock recovery.
Note: Ensure that all attributes except the Lower attribute are identical for the two
NFSRestart resources.
Note: You must set NFSLockFailover to 1 for the NFSRestart resource if you intend
to use NFSv4.
Note: Ensure that all attributes except the Lower attribute are identical for the two
NFSRestart resources.
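As an illustration of steps 3 through 5 above, a hedged command-line sketch that adds the NFSRestart, Share, and IP resources to a group named sg11; the resource names and attribute values mirror the sample configurations later in this chapter and are assumptions:
# haconf -makerw
# hares -add NFSRestart_sg11_L NFSRestart sg11
# hares -modify NFSRestart_sg11_L NFSRes n1
# hares -modify NFSRestart_sg11_L Lower 1
# hares -add share_dg1_r01_2 Share sg11
# hares -modify share_dg1_r01_2 PathName "/testdir/VITA_dg1_r01_2"
# hares -link share_dg1_r01_2 NFSRestart_sg11_L
# hares -add ip_sys1 IP sg11
# hares -modify ip_sys1 Address "10.198.90.198"
# hares -link ip_sys1 share_dg1_r01_2
# haconf -dump -makero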
recovery and it does not prevent potential NFS ACK storms. Symantec does not
recommend this configuration.
3 Create a Share resource. Set the PathName to the mount point that you want
to export. In case of multiple shares, create multiple Share resources with
different values for their PathName attributes.
4 Create an IP resource. The value of the Address attribute for this IP resource
is used to mount the NFS exports on the client systems. Make the IP resource
depend on the Share resources that are configured in the service group.
Sample configurations
The following are the sample configurations for some of the supported NFS
configurations.
■ See “Sample configuration for a single NFS environment without lock recovery”
on page 267.
■ See “Sample configuration for a single NFS environment with lock recovery”
on page 269.
■ See “Sample configuration for a single NFSv4 environment” on page 272.
■ See “Sample configuration for a multiple NFSv4 environment” on page 274.
■ See “Sample configuration for a multiple NFS environment without lock recovery”
on page 277.
■ See “Sample configuration for a multiple NFS environment with lock recovery”
on page 280.
■ See “Sample configuration for configuring NFS with separate storage”
on page 283.
■ See “Sample configuration when configuring all NFS services in a parallel service
group” on page 285.
AutoStartList = { sys1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
IP ip_sys1 (
Device @sys1 = eth0
Device @sys2 = eth0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = nfs
Lower = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = nfs
)
NIC nic_sg11_eth0 (
Device @sys1 = eth0
Device @sys2 = eth0
NetworkHosts = { "10.198.88.1" }
)
NFS nfs (
Nproc = 16
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
NFSRestart_sg11_L requires nfs
NFSRestart_sg11_L requires vcs_dg1_r01_2
NFSRestart_sg11_L requires vcs_dg1_r0_1
NFSRestart_sg11_U requires ip_sys1
ip_sys1 requires nic_sg11_eth0
ip_sys1 requires share_dg1_r01_2
ip_sys1 requires share_dg1_r0_1
share_dg1_r01_2 requires NFSRestart_sg11_L
share_dg1_r0_1 requires NFSRestart_sg11_L
vcs_dg1_r01_2 requires vol_dg1_r01_2
vcs_dg1_r0_1 requires vol_dg1_r0_1
vol_dg1_r01_2 requires vcs_dg1
vol_dg1_r0_1 requires vcs_dg1
cluster clus1 (
UseFence = SCSI3
)
system sys1 (
)
system sys2 (
)
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
IP ip_sys1 (
Device @sys1 = eth0
Device @sys2 = eth0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
DNS dns_11 (
Domain = "oradb.sym"
TSIGKeyFile = "/Koradb.sym.+157+13021.private"
StealthMasters = { "10.198.90.202" }
ResRecord @sys1 = { sys1 = "10.198.90.198" }
ResRecord @sys2 = { sys2 = "10.198.90.198" }
CreatePTR = 1
OffDelRR = 1
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFS nfs (
)
NFSRestart NFSRestart_sg11_L (
NFSRes = nfs
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
Lower = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = nfs
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NIC nic_sg11_eth0 (
Device @sys1 = eth0
Device @sys2 = eth0
NetworkHosts = { "10.198.88.1" }
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
DNS dns_11 (
Domain = "oradb.sym"
TSIGKeyFile = "/Koradb.sym.+157+13021.private"
StealthMasters = { "10.198.90.202" }
ResRecord @sys1 = { sys1 = "10.198.90.198" }
ResRecord @sys2 = { sys2 = "10.198.90.198" }
CreatePTR = 1
OffDelRR = 1
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFS nfs (
NFSv4Support = 1
)
NFSRestart NFSRestart_sg11_L (
NFSRes = nfs
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
Lower = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = nfs
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NIC nic_sg11_eth0 (
Device @sys1 = eth0
system sys1 (
)
system sys2 (
)
group nfs_sg (
SystemList = { sys1 = 0, sys2 = 1 }
Parallel = 1
AutoStartList = { sys1, sys2 }
)
NFS n1 (
Nproc = 6
NFSv4Support = 1
)
Phantom ph1 (
)
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
DNS dns_11 (
Domain = "oradb.sym"
TSIGKeyFile = "/Koradb.sym.+157+13021.private"
StealthMasters = { "10.198.90.202" }
ResRecord @sys1 = { sys1 = "10.198.90.198" }
ResRecord @sys2 = { sys2 = "10.198.90.198" }
CreatePTR = 1
OffDelRR = 1
)
IP ip_sys2 (
Device @sys1 = eth0
Device @sys2 = eth0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = n1
Lower = 1
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = n1
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NIC nic_sg11_eth0 (
Device @sys1 = eth0
Device @sys2 = eth0
NetworkHosts = { "10.198.88.1" }
)
Proxy p11 (
TargetResName = n1
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
include "types.cf"
cluster clus1 (
UseFence = SCSI3
)
system sys1 (
)
system sys2 (
)
group nfs_sg (
SystemList = { sys1 = 0, sys2 = 1 }
Parallel = 1
AutoStartList = { sys1, sys2 }
)
NFS n1 (
Nproc = 6
)
Phantom ph1 (
)
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
IP ip_sys1 (
Device @sys1 = eth0
Device @sys2 = eth0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = n1
Lower = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = n1
)
NIC nic_sg11_eth0 (
Device @sys1 = eth0
Device @sys2 = eth0
NetworkHosts = { "10.198.88.1" }
)
Proxy p11 (
TargetResName = n1
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
include "types.cf"
cluster clus1 (
UseFence = SCSI3
)
system sys1 (
)
system sys2 (
)
group nfs_sg (
SystemList = { sys1 = 0, sys2 = 1 }
Parallel = 1
AutoStartList = { sys1, sys2 }
)
NFS n1 (
Nproc = 6
)
Phantom ph1 (
)
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
DNS dns_11 (
Domain = "oradb.sym"
TSIGKeyFile = "/Koradb.sym.+157+13021.private"
StealthMasters = { "10.198.90.202" }
ResRecord @sys1 = { sys1 = "10.198.90.198" }
ResRecord @sys2 = { sys2 = "10.198.90.198" }
CreatePTR = 1
OffDelRR = 1
)
IP ip_sys1 (
Device @sys1 = eth0
Device @sys2 = eth0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = n1
Lower = 1
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = n1
LocksPathName = "/testdir/VITA_dg1_r01_2"
NFSLockFailover = 1
)
NIC nic_sg11_eth0 (
Device @sys1 = eth0
Device @sys2 = eth0
NetworkHosts = { "10.198.88.1" }
)
Proxy p11 (
TargetResName = n1
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
cluster clus1 (
UseFence = SCSI3
)
system sys1 (
)
system sys2 (
)
group nfs_sg (
SystemList = { sys1 = 0, sys2 = 1 }
Parallel = 1
AutoStartList = { sys1, sys2 }
)
NFS n1 (
Nproc = 6
)
Phantom ph1 (
)
group sg11storage (
SystemList = { sys1 = 0, sys2 = 1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
IP sys1 (
Device @sys1 = eth0
Device @sys2 = eth0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
NFSRestart NFSRestart_sg11_L (
NFSRes = n1
Lower = 1
)
NFSRestart NFSRestart_sg11_U (
NFSRes = n1
)
NIC nic_sg11_eth0 (
Device @sys1 = eth0
Device @sys2 = eth0
NetworkHosts = { "10.198.88.1" }
)
Proxy p11 (
TargetResName = n1
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
cluster clus1 (
UseFence = SCSI3
)
system sys1 (
)
system sys2 (
)
group nfs_sg (
SystemList = { sys1 = 0, sys2 = 1 }
Parallel = 1
AutoStartList = { sys1, sys2 }
)
NFS n1 (
Nproc = 6
)
NFSRestart nfsrestart (
NFSRes = n1
Lower = 2
)
nfsrestart requires n1
group sg11 (
SystemList = { sys1 = 0, sys2 = 1 }
AutoStartList = { sys1 }
)
IP ip_sys1 (
Device @sys1 = eth0
Device @sys2 = eth0
Address = "10.198.90.198"
NetMask = "255.255.248.0"
)
NIC nic_sg11_eth0 (
Device @sys1 = eth0
Device @sys2 = eth0
NetworkHosts = { "10.198.88.1" }
)
Proxy p11 (
TargetResName = n1
)
Share share_dg1_r01_2 (
PathName = "/testdir/VITA_dg1_r01_2"
Options = rw
)
Share share_dg1_r0_1 (
PathName = "/testdir/VITA_dg1_r0_1"
Options = rw
)
group sg11storage (
SystemList = { sys1 = 0, sys2 = 1 }
)
DiskGroup vcs_dg1 (
DiskGroup = dg1
StartVolumes = 0
StopVolumes = 0
)
Mount vcs_dg1_r01_2 (
MountPoint = "/testdir/VITA_dg1_r01_2"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r01_2"
FSType = vxfs
FsckOpt = "-y"
)
Mount vcs_dg1_r0_1 (
MountPoint = "/testdir/VITA_dg1_r0_1"
BlockDevice = "/dev/vx/dsk/dg1/dg1_r0_1"
FSType = vxfs
FsckOpt = "-y"
)
Volume vol_dg1_r01_2 (
Volume = dg1_r01_2
DiskGroup = dg1
)
Volume vol_dg1_r0_1 (
Volume = dg1_r0_1
DiskGroup = dg1
)
For information about Virtual Business Services, see the Virtual Business
Service–Availability User's Guide.
See “Adding a RemoteGroup resource from the Java Console” on page 146.
Note: When you set the value of ControlMode to OnlineOnly or to MonitorOnly, the
recommended value of the VCSSysName attribute of the RemoteGroup resource is
ANY. If you want one-to-one mapping between the local nodes and the remote
nodes, then a switch or failover of the local service group is not possible. It is
important to note that in both these configurations the RemoteGroup agent does
not take the remote service group offline.
Note: If the remote cluster runs in secure mode, you must set the value for
DomainType or BrokerIp attributes.
include "types.cf"
cluster clus1 (
)
system sys1 (
)
system sys2 (
)
group smbserver (
SystemList = { sys1 = 0, sys2 = 1 }
)
IP ip (
Device = eth0
Address = "10.209.114.201"
NetMask = "255.255.252.0"
)
NIC nic (
Device = eth0
NetworkHosts = { "10.209.74.43" }
)
NetBios nmb (
SambaServerRes = smb
NetBiosName = smb_vcs
Interfaces = { "10.209.114.201" }
)
SambaServer smb (
ConfFile = "/etc/samba/smb.conf"
LockDir = "/var/run"
SambaTopDir = "/usr"
)
SambaShare smb_share (
SambaServerRes = smb
ShareName = share1
ShareOptions = "path = /samba_share/; public = yes;
writable = yes"
)
ip requires nic
nmb requires smb
smb requires ip
smb_share requires nmb
The HA fire drill provides an option to fix specific errors detected during the
infrastructure check.
Simulator ports
Table 8-1 lists the ports that VCS Simulator uses to connect to the various cluster
configurations. You can modify cluster configurations to adhere to your network
policies. Also, Symantec might change port assignments or add new ports based
on the number of simulator configurations.
Port Usage
15552 SOL_ORA_SRDF_C1:simulatorport
15553 SOL_ORA_SRDF_C2:simulatorport
15554 SOL_ORACLE:simulatorport
15555 LIN_NFS:simulatorport
15556 HP_NFS:simulatorport
15557 AIX_NFS:simulatorport
15558 Consolidation:simulatorport
15559 SOL_NPLUS1:simulatorport
15572 AcmePrimarySite:simulatorport
15573 AcmeSecondarySite:simulatorport
15580 Win_Exch_2K7_primary:simulatorport
15581 Win_Exch_2K7_secondary:simulatorport
15582 WIN_NTAP_EXCH_CL1:simulatorport
15583 WIN_NTAP_EXCH_CL2:simulatorport
15611 WIN_SQL2K5_VVR_C1:simulatorport
15612 WIN_SQL2K5_VVR_C2:simulatorport
15613 WIN_SQL2K8_VVR_C1:simulatorport
15614 WIN_SQL2K8_VVR_C2:simulatorport
15615 WIN_E2K10_VVR_C1:simulatorport
15616 WIN_E2K10_VVR_C2:simulatorport
Table 8-2 lists the ports that the VCS Simulator uses for the wide area connector
(WAC) process. Set the WAC port to -1 to disable WAC simulation.
Port Usage
15562 SOL_ORA_SRDF_C1:wacport
15563 SOL_ORA_SRDF_C2:wacport
15566 Win_Exch_2K7_primary:wacport
15567 Win_Exch_2K7_secondary:wacport
15570 WIN_NTAP_EXCH_CL1:wacport
15571 WIN_NTAP_EXCH_CL2:wacport
15582 AcmePrimarySite:wacport
15583 AcmeSecondarySite:wacport
15661 WIN_SQL2K5_VVR_C1:wacport
15662 WIN_SQL2K5_VVR_C2:wacport
15663 WIN_SQL2K8_VVR_C1:wacport
15664 WIN_SQL2K8_VVR_C2:wacport
15665 WIN_E2K10_VVR_C1:wacport
15666 WIN_E2K10_VVR_C2:wacport
Deleting a cluster
Deleting a simulated cluster removes all configuration files that are associated with
the cluster. Before deleting a cluster, make sure that the cluster is not configured
as a global cluster. You can delete global clusters from the Global View.
To delete a simulated cluster
1 From Simulator Explorer, select the cluster and click Delete Cluster.
2 In the confirmation dialog box, click Yes.
■ Select an existing global cluster or enter the name for a new global cluster.
■ From the Available Clusters list, select the clusters to add to the global
cluster and click the right arrow. The clusters move to the Configured
Clusters list.
■ Click OK.
Bringing a system up
Bring a system up to simulate a running system.
To bring a system up
1 From Cluster Explorer, click the Systems tab of the configuration tree.
2 Right-click the system in an unknown state, and click Up.
Note: VCS Simulator treats clusters that are created from the command line and
the Java Console separately. Hence, clusters that are created from the command
line are not visible in the graphical interface. If you delete a cluster from the
command line, you may still see the cluster in the Java Console.
See “To simulate global clusters from the command line” on page 307.
Do not use default_clus as the cluster name when simulating a global cluster.
VCS Simulator copies the sample configurations to the path
%VCS_SIMULATOR_HOME%\clustername and creates a system named
clustername_sys1.
For example, to add cluster clus_a using ports 15555 and 15575, run the
following command:
4 Set the following environment variables to access VCS Simulator from the
command line:
■ set %VCS_SIM_PORT%=port_number
■ set %VCS_SIM_WAC_PORT%=wacport
Note that you must set these variables for each simulated cluster, otherwise
Simulator always connects to default_clus, the default cluster.
You can use the Java Console to link the clusters and to configure global
service groups.
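A hedged sketch of adding the clus_a cluster from the example above and pointing the command line at it, assuming the hasim -setupclus syntax:
hasim -setupclus clus_a -simport 15555 -wacport 15575
set VCS_SIM_PORT=15555
set VCS_SIM_WAC_PORT=15575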
Command Description
hasim -deleteclus <clus> Deletes the specified cluster. Deleting the cluster removes
all files and directories associated with the cluster.
hasim -disablel10n Disables localized inputs for attribute values. Use this option
when simulating UNIX configurations on Windows systems.
Figure: Agent architecture. The agent-specific code, the agent command-line utilities, and the agent GUI sit on top of the agent framework, which exchanges status and control information with HAD.
The agent uses the agent framework, which is compiled into the agent itself. For
each resource type configured in a cluster, an agent runs on each cluster system.
The agent handles all resources of that type. The engine passes commands to the
agent and the agent returns the status of command execution. For example, an
agent is commanded to bring a resource online. The agent responds back with the
success (or failure) of the operation. Once the resource is online, the agent
communicates with the engine only if this status changes.
The VCS high availability daemon (HAD) operates as a replicated state machine, which means all systems in the
cluster have a synchronized state of the cluster configuration. This is accomplished
by the following:
■ All systems run an identical version of HAD.
■ HAD on each system maintains the state of its own resources, and sends all
cluster information about the local system to all other machines in the cluster.
■ HAD on each system receives information from the other cluster systems to
update its own view of the cluster.
■ Each system follows the same code path for actions on the cluster.
The replicated state machine communicates over a purpose-built communications
package consisting of two components, Group Membership Services/Atomic
Broadcast (GAB) and Low Latency Transport (LLT).
See “About Group Membership Services/Atomic Broadcast (GAB)” on page 313.
See “About Low Latency Transport (LLT)” on page 314.
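To observe these layers on a running node, the standard status utilities can be used, for example:
# gabconfig -a
# lltstat -nvv
The gabconfig -a output lists the GAB port memberships and lltstat -nvv shows the LLT link status for each node.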
Figure 9-3 illustrates the overall communications paths between two systems of
the replicated state machine model.
■ Cluster communications
GAB’s second function is reliable cluster communications. GAB provides ordered
guaranteed delivery of messages to all cluster systems. The Atomic Broadcast
functionality is used by HAD to ensure that all systems within the cluster receive
all configuration change messages, or are rolled back to the previous state,
much like a database atomic commit. While the communications function in
GAB is known as Atomic Broadcast, no actual network broadcast traffic is
generated. An Atomic Broadcast message is a series of point to point unicast
messages from the sending system to each receiving system, with a
corresponding acknowledgement from each receiving system.
LLT can be configured to designate specific cluster interconnect links as either high
priority or low priority. High priority links are used for cluster communications to
GAB as well as heartbeat signals. Low priority links, during normal operation, are
used for heartbeat and link state maintenance only, and the frequency of heartbeats
is reduced to 50% of normal to reduce network overhead.
If there is a failure of all configured high priority links, LLT will switch all cluster
communications traffic to the first available low priority link. Communication traffic
will revert back to the high priority links as soon as they become available.
While not required, the best practice is to configure at least one low priority link
and two high priority links on dedicated cluster interconnects to provide redundancy
in the communications path. Low priority links are typically configured on the public
or administrative network.
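For example, a hedged /etc/llttab sketch with two high priority links on dedicated interconnects and one low priority link on the public network; the node name, cluster ID, and device names are assumptions:
set-node sys1
set-cluster 1042
# high priority links on dedicated interconnects
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -
# low priority link on the public or administrative network
link-lowpri eth0 eth0 - ether - -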
If the private NICs have different media speeds, Symantec recommends that
you configure the NICs with the lower speed as low-priority links to enhance LLT
performance. With this setting, LLT does active-passive load balancing across the
private links. At the time of configuration and failover, LLT automatically chooses
the link with high-priority as the active link and uses the low-priority links only when
a high-priority link fails.
LLT sends packets on all the configured links in a weighted round-robin manner. LLT
uses the linkburst parameter which represents the number of back-to-back packets
that LLT sends on a link before the next link is chosen. In addition to the default
weighted round-robin based load balancing, LLT also provides destination-based
load balancing. LLT implements destination-based load balancing where the LLT
link is chosen based on the destination node id and the port. With destination-based
load balancing, LLT sends all the packets of a particular destination on a link.
However, a potential problem with the destination-based load balancing approach
is that LLT may not fully utilize the available links if the ports have dissimilar traffic.
Symantec recommends destination-based load balancing when the setup has more
than two cluster nodes and more active LLT ports. You must manually configure
destination-based load balancing for your cluster to set up the port to LLT link
mapping.
See “Configuring destination-based load balancing for LLT” on page 186.
LLT on startup sends broadcast packets with LLT node id and cluster id information
onto the LAN to discover any node in the network that has the same node id and cluster
id pair. Each node in the network replies to this broadcast message with its cluster
id, node id, and node name.
LLT on the original node does not start and gives an appropriate error in the following
cases:
■ LLT on any other node in the same network is running with the same node id
and cluster id pair that it owns.
■ LLT on the original node receives response from a node that does not have a
node name entry in the /etc/llthosts file.
■ When the cluster initially boots, all systems in the cluster are unseeded.
■ GAB checks the number of systems that have been declared to be members
of the cluster in the /etc/gabtab file.
The number of systems declared in the cluster is denoted as follows:
/sbin/gabconfig -c -nN
where the variable N is replaced with the number of systems in the cluster.
Note: Symantec recommends that you replace N with the exact number of nodes
in the cluster.
■ When GAB on each system detects that the correct number of systems are
running, based on the number declared in /etc/gabtab and input from LLT, it
will seed.
■ If you have I/O fencing enabled in your cluster and if you have set the GAB
auto-seeding feature through I/O fencing, GAB automatically seeds the cluster
even when some cluster nodes are unavailable.
See “Seeding a cluster using the GAB auto-seed parameter through I/O fencing”
on page 317.
■ HAD will start on each seeded system. HAD will only run on a system that has
seeded.
HAD can provide the HA functionality only when GAB has seeded.
See “Manual seeding of a cluster” on page 318.
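For example, /etc/gabtab for a three-node cluster typically contains the single line:
/sbin/gabconfig -c -n3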
Warning: It is not recommended to seed the cluster manually unless the administrator
is aware of the risks and implications of the command.
Note: If you have I/O fencing enabled in your cluster, you can set the GAB auto-seeding
feature through I/O fencing so that GAB automatically seeds the cluster even when
some cluster nodes are unavailable.
See “Seeding a cluster using the GAB auto-seed parameter through I/O fencing”
on page 317.
Before manually seeding the cluster, do the following:
■ Check that systems that will join the cluster are able to send and receive
heartbeats to each other.
■ Confirm there is no possibility of a network partition condition in the cluster.
To manually seed the cluster, type the following command:
/sbin/gabconfig -x
Note there is no declaration of the number of systems in the cluster with a manual
seed. This command will seed all systems in communication with the system where
the command is run.
Make sure that you do not run this command on more than one node in the cluster.
See “Seeding and I/O fencing” on page 681.
■ When LLT informs GAB of a heartbeat loss, the systems that are remaining in
the cluster coordinate to agree which systems are still actively participating in
the cluster and which are not. This happens during a time period known as GAB
Stable Timeout (5 seconds).
VCS has specific error handling that takes effect in the case where the systems
do not agree.
■ GAB marks the system as DOWN, excludes the system from the cluster
membership, and delivers the membership change to the fencing module.
■ The fencing module performs membership arbitration to ensure that there is not
a split brain situation and only one functional cohesive cluster continues to run.
The fencing module is turned on by default.
Review the details on actions that occur if the fencing module has been deactivated:
See “About cluster membership and data protection without I/O fencing” on page 354.
split-brain when vxfen races for control of the coordination points and the winner
partition fences the ejected nodes from accessing the data disks.
Note: Typically, a fencing configuration for a cluster must have three coordination
points. Symantec also supports server-based fencing with a single CP server as
its only coordination point with a caveat that this CP server becomes a single point
of failure.
Note: The raw disk policy supports I/O fencing only when a single hardware
path from the node to the coordinator disks is available. If there are multiple
hardware paths from the node to the coordinator disks, then the dmp disk
policy is supported. If some coordinator disks have multiple hardware paths and
some have a single hardware path, then only the dmp disk policy is supported.
For new installations, Symantec recommends I/O fencing with the dmp disk policy
even for a single hardware path.
Note: With the CP server, the fencing arbitration logic still remains on the VCS
cluster.
■ The fencing start up script on each system uses VxVM commands to populate
the file /etc/vxfentab with the paths available to the coordinator disks.
See “About I/O fencing configuration files” on page 343.
■ When the fencing driver is started, it reads the physical disk names from the
/etc/vxfentab file. Using these physical disk names, it determines the serial
numbers of the coordinator disks and builds an in-memory list of the drives.
■ The fencing driver verifies that the systems that are already running in the cluster
see the same coordinator disks.
The fencing driver examines GAB port B for membership information. If no other
system is up and running, it is the first system up and is considered to have the
correct coordinator disk configuration. When a new member joins, it requests
the coordinator disks configuration. The system with the lowest LLT ID will
respond with a list of the coordinator disk serial numbers. If there is a match,
the new member joins the cluster. If there is not a match, vxfen enters an error
state and the new member is not allowed to join. This process ensures all
systems communicate with the same coordinator disks.
■ The fencing driver determines if a possible preexisting split brain condition exists.
This is done by verifying that any system that has keys on the coordinator disks
can also be seen in the current GAB membership. If this verification fails, the
fencing driver prints a warning to the console and system log and does not start.
■ If all verifications pass, the fencing driver on each system registers keys with
each coordinator disk.
See “About the I/O fencing registration key format” on page 373.
When there is a perceived change in membership, membership arbitration works
as follows:
■ GAB marks the system as DOWN, excludes the system from the cluster
membership, and delivers the membership change—the list of departed
systems—to the fencing module.
■ The system with the lowest LLT system ID in the cluster races for control of the
coordinator disks.
■ In the most common case, where departed systems are truly down or faulted,
this race has only one contestant.
■ In a split brain scenario, where two or more subclusters have formed, the
race for the coordinator disks is performed by the system with the lowest
LLT system ID of that subcluster. This system that races on behalf of all the
other systems in its subcluster is called the RACER node and the other
systems in the subcluster are called the SPECTATOR nodes.
■ During the I/O fencing race, if the RACER node panics or if it cannot reach the
coordination points, then the VxFEN RACER node re-election feature allows an
alternate node in the subcluster that has the next lowest node ID to take over
as the RACER node.
The racer re-election works as follows:
■ In the event of an unexpected panic of the RACER node, the VxFEN driver
initiates a racer re-election.
■ If the RACER node is unable to reach a majority of coordination points, then
the VxFEN module sends a RELAY_RACE message to the other nodes in
the subcluster. The VxFEN module then re-elects the next lowest node ID
as the new RACER.
■ With successive re-elections if no more nodes are available to be re-elected
as the RACER node, then all the nodes in the subcluster will panic.
■ The race consists of executing a preempt and abort command for each key of
each system that appears to no longer be in the GAB membership.
The preempt and abort command allows only a registered system with a valid
key to eject the key of another system. This ensures that even when multiple
systems attempt to eject each other, each race will have only one winner. The first
system to issue a preempt and abort command will win and eject the key of the
other system. When the second system issues a preempt and abort command,
it cannot perform the key eject because it is no longer a registered system with
a valid key.
Note: Forcing a manual seed at this point will allow the cluster to seed. However,
when the fencing module checks the GAB membership against the systems
that have keys on the coordinator disks, a mismatch will occur. vxfen will detect
a possible split brain condition, print a warning, and will not start. In turn, HAD
will not start. Administrative intervention is required.
See “Manual seeding of a cluster” on page 318.
operations are implemented in custom scripts. The vxfen driver invokes the custom
scripts.
The CP server-based coordination point uses a customized fencing framework.
Note that SCSI-3 PR based fencing arbitration can also be enabled using customized
fencing framework. This allows the user to specify a combination of SCSI-3 LUNs
and CP servers as coordination points using customized fencing. Customized
fencing can be enabled by specifying vxfen_mode=customized and
vxfen_mechanism=cps in the /etc/vxfenmode file.
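A hedged /etc/vxfenmode sketch for customized fencing that combines one CP server with a coordinator disk group; the host name, port, and disk group name are assumptions, and the vxfenmode_cps sample file shipped with the product shows the full syntax:
vxfen_mode=customized
vxfen_mechanism=cps
security=1
cps1=[cps1.example.com]:14250
vxfendg=vxfencoorddg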
Figure 9-6 displays a schematic of the customized fencing options.
Figure 9-6: On each client cluster node, the cpsadm and vxfenadm utilities, the customized scripts, and the vxfend daemon run in user space; the VXFEN driver runs in kernel space above GAB and LLT.
A user level daemon vxfend interacts with the vxfen driver, which in turn interacts
with GAB to get the node membership update. Upon receiving membership updates,
vxfend invokes various scripts to race for the coordination point and fence off data
disks. The vxfend daemon manages various fencing agents. The customized fencing
scripts are located in the /opt/VRTSvcs/vxfen/bin/customized/cps directory.
The scripts that are involved include the following:
■ generate_snapshot.sh: Retrieves the SCSI IDs of the coordinator disks and/or
the UUIDs of the CP servers
CP server uses the UUID stored in /etc/VRTScps/db/current/cps_uuid.
See “About the cluster UUID” on page 38.
■ join_local_node.sh: Registers the keys with the coordinator disks or CP servers
■ race_for_coordination_point.sh: Races to determine a winner after cluster
reconfiguration
■ unjoin_local_node.sh: Removes the keys that are registered in join_local_node.sh
■ fence_data_disks.sh: Fences the data disks from access by the losing nodes.
■ local_info.sh: Lists local node’s configuration parameters and coordination points,
which are used by the vxfen driver.
Figure: A two-site campus cluster (Node 1 at Site 1, Node 2 at Site 2) connected by the public network and a SAN, with three coordinator disks (#1, #2, and #3).
Note: Symantec recommends that you do not run any other applications on the
single node or SFHA cluster that is used to host CP server.
A single CP server can serve up to 128 VCS clusters. A common set of CP servers
can also serve up to 128 VCS clusters.
Warning: The CP server database must not be edited directly and should only be
accessed using cpsadm(1M). Manipulating the database manually may lead to
undesirable results including system panics.
points, the application cluster as well as the CP server use a Universally Unique
Identifier (UUID) to uniquely identify an application cluster.
Figure 9-8 displays a configuration using three CP servers that are connected to
multiple application clusters.
Figure 9-8: Multiple application clusters (clusters which run VCS, SFHA, SFCFS, or SF Oracle RAC to provide high availability for applications) connect to three CP servers over TCP/IP.
Figure 9-9 Single CP server with two coordinator disks for each application cluster
Figure 9-9: Each application cluster connects to the single CP server over TCP/IP on the public network and to its two coordinator disks over Fibre Channel.
Figure: Application clusters (clusters which run VCS, SFHA, SFCFS, or SF Oracle RAC) connect to CP servers over TCP/IP.
Figure 9-11 CPSSG group when CP server hosted on a single node VCS cluster
The group contains the vxcpserv, quorum, cpsvip, and cpsnic resources.
Figure 9-12 displays a schematic of the CPSSG group and its dependencies when
the CP server is hosted on an SFHA cluster.
Figure 9-12 CPSSG group when the CP server is hosted on an SFHA cluster
The group contains the vxcpserv, quorum, cpsvip, and cpsnic resources, along with the cpsmount, cpsvol, and cpsdg storage resources.
type Quorum (
static str ArgList[] = { QuorumResources, Quorum, State }
str QuorumResources[]
int Quorum = 1
)
A root user on a CP server is given all the administrator privileges, which can be
used to perform all the CP server-specific operations.
Figure: The CP server (vxcpserv) and the CP client (cpsadm) each communicate with their local vcsauthserver.
Communication flow between CP server and VCS cluster nodes with security
configured on them is as follows:
■ Initial setup:
Identities of CP server and VCS cluster nodes are configured on respective
nodes by the VCS installer.
Note: At the time of fencing configuration, the installer establishes trust between
the CP server and the application cluster so that vxcpserv process can
authenticate requests from the application cluster nodes. If you manually
configured I/O fencing, then you must set up trust between the CP server and
the application cluster.
The cpsadm command gets the user name and domain type from the environment
variables CPS_USERNAME and CPS_DOMAINTYPE. Export these variables before
you run the cpsadm command manually. The customized fencing framework
exports these environment variables internally before you run the cpsadm
commands.
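A hedged example of exporting these variables before running cpsadm manually; the values shown are assumptions, so substitute the user name and domain type registered for your cluster:
# export CPS_USERNAME=CPSADM@VCS_SERVICES@<cluster_uuid>
# export CPS_DOMAINTYPE=vx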
The CP server process (vxcpserv) uses its own user (CPSERVER) which is
added to the local vcsauthserver.
■ Getting credentials from authentication broker:
The cpsadm command tries to get the existing credentials that are present on
the local node. The installer generates these credentials during fencing
configuration.
The vxcpserv process tries to get the existing credentials that are present on
the local node. The installer generates these credentials when it enables security.
■ Communication between CP server and VCS cluster nodes:
After the CP server establishes its credential and is up, it becomes ready to
receive data from the clients. After the cpsadm command obtains its credentials
and authenticates CP server credentials, cpsadm connects to the CP server.
Data is passed over to the CP server.
■ Validation:
On receiving data from a particular VCS cluster node, vxcpserv validates its
credentials. If validation fails, then the connection request data is rejected.
Note: For secure communication using HTTPS, you do not need to establish
trust between the CP server and the application cluster.
The signed client certificate is used to establish the identity of the client. Once
the CP server authenticates the client, the client can issue the operational
commands that are limited to its own cluster.
■ Getting credentials from authentication broker:
The cpsadm command tries to get the existing credentials that are present on
the local node. The installer generates these credentials during fencing
configuration.
The vxcpserv process tries to get the existing credentials that are present on
the local node. The installer generates these credentials when it enables security.
■ Communication between CP server and VCS cluster nodes:
After the CP server establishes its credential and is up, it becomes ready to
receive data from the clients. After the cpsadm command obtains its credentials
and authenticates CP server credentials, cpsadm connects to the CP server.
Data is passed over to the CP server.
■ Validation:
On receiving data from a particular VCS cluster node, vxcpserv validates its
credentials. If validation fails, then the connection request data is rejected.
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/CPSERVER
# /opt/VRTScps/bin/cpsat showcred
Note: The CP server configuration file (/etc/vxcps.conf) must not contain a line
specifying security=0. If there is no line specifying "security" parameter or if
there is a line specifying security=1, CP server with security is enabled (which
is the default).
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/CPSADM
# /opt/VRTScps/bin/cpsat showcred
The users described above are used only for authentication for the communication
between the CP server and the VCS cluster nodes.
For CP server's authorization, customized fencing framework on the VCS cluster
uses the following user if security is configured:
CPSADM@VCS_SERVICES@cluster_uuid
where cluster_uuid is the application cluster's universal unique identifier.
For each VCS cluster node, this user must be registered on the CP server database
before fencing starts on the VCS cluster node(s). This can be verified by issuing
the following command:
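(A hedged form of the command, assuming the cpsadm list_users action:)
# cpsadm -s <cp_server> -a list_users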
Username/Domain Type
CPSADM@VCS_SERVICES@77a2549c-1dd2-11b2-88d6-00306e4b2e0b/vx
Note: The configuration file (/etc/vxfenmode) on each client node must not contain
a line specifying security=0. If there is no line specifying "security" parameter or if
there is a line specifying security=1, client node starts with security enabled (which
is the default).
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/CPSERVER
# /opt/VRTScps/bin/cpsat showcred
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/CPSADM
# /opt/VRTScps/bin/cpsat showcred
VCS logic determines when to online a service group on a particular system. If the
service group contains a disk group, the disk group is imported as part of the service
group being brought online. When using SCSI-3 PR, importing the disk group puts
registration and reservation on the data disks. Only the system that has imported
the storage with SCSI-3 reservation can write to the shared storage. This prevents
a system that did not participate in membership arbitration from corrupting the
shared storage.
SCSI-3 PR ensures persistent reservations across SCSI bus resets.
Note: Use of SCSI-3 PR protects against all elements in the IT environment that
might be trying to write illegally to storage, not only VCS-related elements.
File Description
/etc/sysconfig/vxfen This file stores the start and stop environment variables for I/O fencing:
■ VXFEN_START—Defines the startup behavior for the I/O fencing module after a system
reboot. Valid values include:
1—Indicates that I/O fencing is enabled to start up.
0—Indicates that I/O fencing is disabled to start up.
■ VXFEN_STOP—Defines the shutdown behavior for the I/O fencing module during a system
shutdown. Valid values include:
1—Indicates that I/O fencing is enabled to shut down.
0—Indicates that I/O fencing is disabled to shut down.
The installer sets the value of these variables to 1 at the end of VCS configuration.
If you manually configured VCS, you must make sure to set the values of these environment
variables to 1.
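For illustration, a minimal /etc/sysconfig/vxfen with both variables enabled (the values the installer sets) resembles the following:
VXFEN_START=1
VXFEN_STOP=1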
/etc/vxfenmode This file contains the I/O fencing mode and related configuration parameters, including the following:
Note: You must use the same SCSI-3 disk policy on all the nodes.
■ security
This parameter is applicable only for server-based fencing.
1—Indicates that communication with the CP server is in secure mode. This setting is the
default.
0—Indicates that communication with the CP server is in non-secure mode.
■ List of coordination points
This list is required only for server-based fencing configuration.
Coordination points in server-based fencing can include coordinator disks, CP servers, or
both. If you use coordinator disks, you must create a coordinator disk group containing the
individual coordinator disks.
Refer to the sample file /etc/vxfen.d/vxfenmode_cps for more information on how to specify
the coordination points and multiple IP addresses for each CP server.
■ single_cp
This parameter is applicable for server-based fencing which uses a single highly available
CP server as its coordination point. Also applicable for when you use a coordinator disk
group with single disk.
■ autoseed_gab_timeout
This parameter enables GAB automatic seeding of the cluster even when some cluster
nodes are unavailable. This feature requires that I/O fencing is enabled.
0—Turns the GAB auto-seed feature on. Any value greater than 0 indicates the number of
seconds that GAB must delay before it automatically seeds the cluster.
-1—Turns the GAB auto-seed feature off. This setting is the default.
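For illustration only, a server-based fencing configuration in /etc/vxfenmode might contain entries of the following form (the CP server name, port, and disk group are placeholders; see the sample file /etc/vxfen.d/vxfenmode_cps for the authoritative syntax):
vxfen_mode=customized
vxfen_mechanism=cps
security=1
cps1=[cps1.example.com]:14250
vxfendg=vxfencoorddg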
File Description
/etc/vxfentab When I/O fencing starts, the vxfen startup script creates this /etc/vxfentab file on each node.
The startup script uses the contents of the /etc/vxfendg and /etc/vxfenmode files. Any time a
system is rebooted, the fencing driver reinitializes the vxfentab file with the current list of all the
coordinator points.
Note: The /etc/vxfentab file is a generated file; do not modify this file.
For disk-based I/O fencing, the /etc/vxfentab file on each node contains a list of all paths to
each coordinator disk along with its unique disk identifier. A space separates the path and the
unique disk identifier. An example of the /etc/vxfentab file in a disk-based fencing configuration
on one node resembles the following:
■ Raw disk:
/dev/sdx HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F600A0B8000215A5D000006804E795D075
/dev/sdy HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F600A0B8000215A5D000006814E795D076
/dev/sdz HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F600A0B8000215A5D000006824E795D077
■ DMP disk:
/dev/vx/rdmp/sdx3 HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F600A0B8000215A5D000006804E795D0A3
/dev/vx/rdmp/sdy3 HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F600A0B8000215A5D000006814E795D0B3
/dev/vx/rdmp/sdz3 HITACHI%5F1724-100%20%20FAStT%5FDISKS%5F600A0B8000215A5D000006824E795D0C3
For server-based fencing, the /etc/vxfentab file also includes the security settings information.
For server-based fencing with single CP server, the /etc/vxfentab file also includes the single_cp
settings information.
See “Example: Four-system cluster where cluster interconnect fails” on page 347.
■ The GAB module on System0 determines System1 has failed due to loss of
heartbeat signal reported from LLT.
■ GAB passes the membership change to the fencing module on each system in
the cluster.
The only system that is still running is System0.
■ System0 gains control of the coordinator disks by ejecting the key registered
by System1 from each coordinator disk.
The ejection takes place one by one, in the order of the coordinator disk’s serial
number.
■ When the fencing module on System0 successfully controls the coordinator
disks, HAD carries out any associated policy connected with the membership
change.
■ System1 is blocked from accessing the shared storage if the shared storage was
configured in a service group that System0 has now taken over and imported.
Figure: System0 and System1, each connected to the coordinator disks
■ After LLT informs GAB of a heartbeat loss, the remaining systems wait out the
GAB Stable Timeout interval (5 seconds). In this example:
■ System0 and System1 agree that both of them do not see System2 and
System3
■ System2 and System3 agree that both of them do not see System0 and
System1
■ GAB marks the unreachable systems as DOWN and excludes them from the cluster
membership. In this example:
■ GAB on System0 and System1 marks System2 and System3 as DOWN and
excludes them from cluster membership.
■ GAB on System2 and System3 marks System0 and System1 as DOWN and
excludes them from cluster membership.
■ GAB on each of the four systems passes the membership change to the vxfen
driver for membership arbitration. Each subcluster races for control of the
coordinator disks. In this example:
■ System0 has the lower LLT ID, and races on behalf of itself and System1.
■ System2 has the lower LLT ID, and races on behalf of itself and System3.
■ GAB on each of the four systems also passes the membership change to HAD.
HAD waits for the result of the membership arbitration from the fencing module
before taking any further action.
■ If System0 is not able to reach a majority of the coordination points, then the
VxFEN driver will initiate a racer re-election from System0 to System1 and
System1 will initiate the race for the coordination points.
■ Assume System0 wins the race for the coordinator disks, and ejects the
registration keys of System2 and System3 off the disks. The result is as follows:
■ System0 wins the race for the coordinator disk. The fencing module on
System0 sends a WON_RACE to all other fencing modules in the current
cluster, in this case System0 and System1. On receiving a WON_RACE,
the fencing module on each system in turn communicates success to HAD.
System0 and System1 remain valid and current members of the cluster.
■ If System0 dies before it sends a WON_RACE to System1, then VxFEN will
initiate a racer re-election from System0 to System1 and System1 will initiate
the race for the coordination points.
System1, on winning a majority of the coordination points, remains a valid and
current member of the cluster, and the fencing module on System1 in turn
communicates success to HAD.
■ System2 loses the race for control of the coordinator disks. The fencing
module on System2 calls a kernel panic and the system restarts.
■ System3 sees another membership change from the kernel panic of System2.
Because that was the system that was racing for control of the coordinator
disks in this subcluster, System3 also panics.
■ HAD carries out any associated policy or recovery actions based on the
membership change.
■ System2 and System3 are blocked from accessing the shared storage (if the shared
storage was part of a service group that is now taken over by System0 or System1).
■ To rejoin System2 and System3 to the cluster, the administrator must do the
following:
■ Shut down System2 and System3
The following scenarios summarize the I/O fencing behavior on each node and the required operator action:

Event: Both private networks fail.
Node A: Races for a majority of coordination points. If Node A wins the race for the coordination points, Node A ejects Node B from the shared disks and continues.
Node B: Races for a majority of coordination points. If Node B loses the race for the coordination points, Node B panics and removes itself from the cluster.
Operator action: When Node B is ejected from the cluster, repair the private networks before attempting to bring Node B back.

Event: Both private networks function again after the event above.
Node A: Continues to work.
Node B: Has crashed. It cannot start the database since it is unable to write to the data disks.
Operator action: Restart Node B after the private networks are restored.

Event: Nodes A and B and the private networks lose power. Coordination points and data disks retain power. Power returns to the nodes and they restart, but the private networks still have no power.
Node A: Restarts, and the I/O fencing driver (vxfen) detects that Node B is registered with the coordination points. The driver does not see Node B listed as a member of the cluster because the private networks are down. This causes the I/O fencing device driver to prevent Node A from joining the cluster. The Node A console displays:
Potentially a preexisting split brain. Dropping out of the cluster. Refer to the user documentation for steps required to clear preexisting split brain.
Node B: Restarts, and the I/O fencing driver (vxfen) detects that Node A is registered with the coordination points. The driver does not see Node A listed as a member of the cluster because the private networks are down. This causes the I/O fencing device driver to prevent Node B from joining the cluster. The Node B console displays the same message.
Operator action: Resolve the preexisting split-brain condition. See "Fencing startup reports preexisting split-brain" on page 704.

Event: Node A crashes while Node B is down. Node B comes up and Node A is still down.
Node A: Is crashed.
Node B: Restarts and detects that Node A is registered with the coordination points. The driver does not see Node A listed as a member of the cluster. The I/O fencing device driver prints a message on the console:
Potentially a preexisting split brain. Dropping out of the cluster. Refer to the user documentation for steps required to clear preexisting split brain.
Operator action: Resolve the preexisting split-brain condition. See "Fencing startup reports preexisting split-brain" on page 704.

Event: The disk array containing two of the three coordination points is powered off. No node leaves the cluster membership.
Node A: Continues to operate as long as no nodes leave the cluster.
Node B: Continues to operate as long as no nodes leave the cluster.
Operator action: Power on the failed disk array so that a subsequent network partition does not cause cluster shutdown, or replace coordination points.

Event: The disk array containing two of the three coordination points is powered off. Node B gracefully leaves the cluster and the disk array is still powered off. Leaving gracefully implies a clean shutdown so that vxfen is properly unconfigured.
Node A: Continues to operate in the cluster.
Node B: Has left the cluster.
Operator action: Power on the failed disk array so that a subsequent network partition does not cause cluster shutdown, or replace coordination points. See "Replacing I/O fencing coordinator disks when the cluster is online" on page 382.

Event: The disk array containing two of the three coordination points is powered off. Node B abruptly crashes or a network partition occurs between node A and node B, and the disk array is still powered off.
Node A: Races for a majority of coordination points. Node A fails because only one of the three coordination points is available. Node A panics and removes itself from the cluster.
Node B: Has left the cluster due to crash or network partition.
Operator action: Power on the failed disk array and restart the I/O fencing driver to enable Node A to register with all coordination points, or replace coordination points. See "Replacing defective disks when the cluster is offline" on page 708.
the system is not down, and HAD does not attempt to restart the services on
another system.
In order for this differentiation to have meaning, it is important to ensure the cluster
interconnect links do not have a single point of failure, such as a network switch or
Ethernet card.
About jeopardy
In all cases, when LLT on a system no longer receives heartbeat messages from
another system on any of the configured LLT interfaces, GAB reports a change in
membership.
When a system has only one interconnect link remaining to the cluster, GAB can
no longer reliably discriminate between loss of a system and loss of the network.
The reliability of the system’s membership is considered at risk. A special
membership category takes effect in this situation, called a jeopardy membership.
This provides the best possible split-brain protection without membership arbitration
and SCSI-3 capable devices.
When a system is placed in jeopardy membership status, two actions occur if the
system loses the last interconnect link:
■ VCS places service groups running on the system in autodisabled state. A
service group in autodisabled state may fail over on a resource or group fault,
but cannot fail over on a system fault until the autodisabled flag is manually
cleared by the administrator.
■ VCS operates the system as a single system cluster. Other systems in the
cluster are partitioned off in a separate cluster membership.
You can use the GAB registration monitoring feature to detect DDNA conditions.
See “About registration monitoring” on page 644.
Figure: Four-system cluster over a public network. Regular membership: 0, 1, 2, 3
Figure: Membership: 0, 1, 2, 3; Jeopardy membership: 2
The cluster is reconfigured. Systems 0, 1, and 3 are in the regular membership and
System2 in a jeopardy membership. Service groups on System2 are autodisabled.
All normal cluster operations continue, including normal failover of service groups
due to resource fault.
Systems 0, 1, and 3 recognize that System2 has faulted. The cluster is reformed.
Systems 0, 1, and 3 are in a regular membership. When System2 went into jeopardy
membership, service groups running on System2 were autodisabled. Even though
the system is now completely failed, no other system can assume ownership of
these service groups unless the system administrator manually clears the
AutoDisabled flag on the service groups that were running on System2.
However, after the flag is cleared, these service groups can be manually brought
online on other systems in the cluster.
Other systems send all cluster status traffic to System2 over the remaining private
link and use both private links for traffic between themselves. The low priority link
continues carrying the heartbeat signal only. No jeopardy condition is in effect
because two links remain to determine system failure.
Systems 0, 1, and 3 recognize that System2 has faulted. The cluster is reformed.
Systems 0, 1, and 3 are in a regular membership. VCS attempts to bring the service
groups on System2 that are configured for failover on system fault online on another
target system, if one exists.
Note: An exception to this is if the cluster uses fencing along with Cluster File
Systems (CFS) or Oracle Real Application Clusters (RAC).
The reason for this is that low priority links are usually shared public network
links. In the case where the main cluster interconnects fail, and the low priority
link was the only remaining link, large amounts of data would be moved to the
low priority link. This would potentially slow down the public network to
unacceptable performance. Without a low priority link configured, membership
arbitration would go into effect in this case, and some systems may be taken
down, but the remaining systems would continue to run without impact to the
public network.
Symantec does not recommend running a cluster with CFS or RAC without I/O
fencing configured.
■ Disable the console-abort sequence
Most UNIX systems provide a console-abort sequence that enables the
administrator to halt and continue the processor. Continuing operations after
the processor has stopped may corrupt data and is therefore unsupported by
VCS.
When a system is halted with the abort sequence, it stops producing heartbeats.
The other systems in the cluster consider the system failed and take over its
services. If the system is later enabled with another console sequence, it
continues writing to shared storage as before, even though its applications have
been restarted on other systems.
Symantec recommends disabling the console-abort sequence or creating an
alias to force the go command to perform a restart on systems not running I/O
fencing.
■ Symantec recommends at least three coordination points to configure I/O fencing.
You can use coordinator disks, CP servers, or a combination of both.
Select the smallest possible LUNs for use as coordinator disks. No more than
three coordinator disks are needed in any configuration.
■ Do not reconnect the cluster interconnect after a network partition without shutting
down one side of the split cluster.
A common example of this happens during testing, where the administrator may
disconnect the cluster interconnect and create a network partition. Depending
on when the interconnect cables are reconnected, unexpected behavior can
occur.
Chapter 10
Administering I/O fencing
This chapter includes the following topics:
vxfendisk Generates the list of paths of disks in the disk group. This utility
requires that Veritas Volume Manager (VxVM) is installed and
configured.
The I/O fencing commands reside in the /opt/VRTS/bin folder. Make
sure you added this folder path to the PATH environment variable.
Refer to the corresponding manual page for more information on the commands.
Caution: The tests overwrite and destroy data on the disks, unless you use the
-r option.
■ The two nodes must have SSH (default) or rsh communication. If you use rsh,
launch the vxfentsthdw utility with the -n option.
After completing the testing process, you can remove permissions for
communication and restore public network connections.
■ To ensure both systems are connected to the same disk during the testing, you
can use the vxfenadm -i diskpath command to verify a disk’s serial number.
See “Verifying that the nodes see the same disk” on page 377.
■ For disk arrays with many disks, use the -m option to sample a few disks before
creating a disk group and using the -g option to test them all.
■ The utility indicates that a disk can be used for I/O fencing with a message stating
that the disk is ready to be configured for I/O fencing. If the utility does not show
a message stating a disk is ready, verification has failed.
■ The -o option overrides disk size-related errors and the utility proceeds with the
other tests. However, the disk might not be set up correctly because its size might
be smaller than the supported size. The supported disk size for data disks is 256 MB
and for coordinator disks is 128 MB.
■ If the disk you intend to test has existing SCSI-3 registration keys, the test issues
a warning before proceeding.
-f filename    The utility tests the system and device combinations listed in a text file. Can be used with the -r and -t options. For testing several disks.
See "Testing the shared disks listed in a file using the vxfentsthdw -f option" on page 371.
-g disk_group    The utility tests all disk devices in a specified disk group. Can be used with the -r and -t options. For testing many disks and arrays of disks. Disk groups may be temporarily created for testing purposes and destroyed (ungrouped) after testing.
Note: To test the coordinator disk group using the vxfentsthdw utility, the utility
requires that the coordinator disk group, vxfencoorddg, be accessible from two
nodes.
# vxfentsthdw -c vxfencoorddg
2 Enter the nodes you are using to test the coordinator disks:
3 Review the output of the testing process for both nodes for all disks in the
coordinator disk group. Each disk should display output that resembles:
4 After you test all disks in the disk group, the vxfencoorddg disk group is ready
for use.
# vxfentsthdw -rm
When invoked with the -r option, the utility does not use tests that write to the
disks. Therefore, it does not test the disks for all of the usual conditions of use.
If the failure is due to a bad disk, remove and replace it. The vxfentsthdw utility
indicates that a disk can be used for I/O fencing with a message stating that the disk is ready.
Note: For A/P arrays, run the vxfentsthdw command only on active enabled paths.
# vxfentsthdw [-n]
3 After reviewing the overview and warning that the tests overwrite data on the
disks, confirm to continue the process and enter the node names.
4 Enter the names of the disks you are checking. For each node, the disk may
be known by the same name.
If the serial numbers of the disks are not identical, then the test terminates.
5 Review the output as the utility performs the checks and reports its activities.
6 If a disk is ready for I/O fencing on each node, the utility reports success:
7 Run the vxfentsthdw utility for each disk you intend to verify.
Testing the shared disks listed in a file using the vxfentsthdw -f option
Use the -f option to test disks that are listed in a text file. Review the following
example procedure.
To test the shared disks listed in a file
1 Create a text file disks_test to test two disks shared by systems sys1 and
sys2 that might resemble:
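The file contents are not shown above; based on the description that follows, each line pairs a node name with the path by which that node sees the disk, for example:
sys1 /dev/sdz sys2 /dev/sdy
sys1 /dev/sdu sys2 /dev/sdw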
where the first disk is listed in the first line and is seen by sys1 as /dev/sdz
and by sys2 as /dev/sdy. The other disk, in the second line, is seen as
/dev/sdu from sys1 and /dev/sdw from sys2. Typically, the list of disks could
be extensive.
2 To test the disks, enter the following command:
# vxfentsthdw -f disks_test
The utility reports the test results one disk at a time, just as for the -m option.
Testing all the disks in a disk group using the vxfentsthdw -g option
Use the -g option to test all disks within a disk group. For example, you create a
temporary disk group consisting of all disks in a disk array and test the group.
Note: Do not import the test disk group as shared; that is, do not use the -s option
with the vxdg import command.
After testing, destroy the disk group and put the disks into disk groups as you need.
To test all the disks in a disk group
1 Create a disk group for the disks that you want to test.
2 Enter the following command to test the disk group test_disks_dg:
# vxfentsthdw -g test_disks_dg
There are Veritas I/O fencing keys on the disk. Please make sure
that I/O fencing is shut down on all nodes of the cluster before
continuing.
THIS SCRIPT CAN ONLY BE USED IF THERE ARE NO OTHER ACTIVE NODES
IN THE CLUSTER! VERIFY ALL OTHER NODES ARE POWERED OFF OR
INCAPABLE OF ACCESSING SHARED STORAGE.
The utility prompts you with a warning before proceeding. You may continue as
long as I/O fencing is not yet configured.
-s read the keys on a disk and display the keys in numeric, character, and
node format
Note: The -g and -G options are deprecated. Use the -s option.
-r read reservations
-x remove registrations
Refer to the vxfenadm(1M) manual page for a complete list of the command options.
Byte    0    1    2        3        4        5        6        7
Value   V    F    cID 0x   cID 0x   cID 0x   cID 0x   nID 0x   nID 0x
where:
■ VF is the unique identifier that carves out a namespace for the keys (consumes
two bytes)
■ cID 0x is the LLT cluster ID in hexadecimal (consumes four bytes)
■ nID 0x is the LLT node ID in hexadecimal (consumes two bytes)
The vxfen driver uses this key format in both sybase mode and customized mode of I/O fencing.
The key format of the data disks that are configured as failover disk groups under
VCS is as follows:
Byte 0 1 2 3 4 5 6 7
Value A+nID V C S
The key format of the data disks that are configured as parallel disk groups under
CVM is as follows:
Byte 0 1 2 3 4 5 6 7
where DGcount is the count of disk groups in the configuration (consumes four
bytes).
By default, CVM uses a unique fencing key for each disk group. However, some
arrays have a restriction on the total number of unique keys that can be registered.
In such cases, you can use the same_key_for_alldgs tunable parameter to change
the default behavior. The default value of the parameter is off. If your configuration
hits the storage array limit on total number of unique keys, you can change the
value to on using the vxdefault command as follows:
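For illustration, assuming the vxdefault utility in your VxVM release supports this tunable, the command takes the following form:
# vxdefault set same_key_for_alldgs on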
If the tunable is changed to on, all subsequent keys that the CVM generates on
disk group imports or creates have '0000' as their last four bytes (DGcount is 0).
You must deport and re-import all the disk groups that are already imported for the
changed value of the same_key_for_alldgs tunable to take effect.
The variables such as disk_7, disk_8, and disk_9 in the following procedure
represent the disk names in your setup.
To display the I/O fencing registration keys
1 To display the key for the disks, run the following command:
# vxfenadm -s disk_name
For example:
■ To display the key for the coordinator disk /dev/sdx from the system with
node ID 1, enter the following command:
# vxfenadm -s /dev/sdx
key[1]:
[Numeric Format]: 86,70,68,69,69,68,48,48
[Character Format]: VFDEED00
* [Node Format]: Cluster ID: 57069 Node ID: 0 Node Name: sys1
The -s option of vxfenadm displays all eight bytes of a key value in three
formats. In the numeric format:
■ The first two bytes represent the identifier VF and contain the ASCII values
86, 70.
■ The next four bytes contain the ASCII values of the cluster ID 57069
encoded in hexadecimal (0xDEED), which are 68, 69, 69, 68.
■ The remaining bytes contain the ASCII values of the node ID 0 (0x00),
which are 48, 48. Node ID 1 would be 01 and node ID 10 would be 0A.
An asterisk before the Node Format indicates that the vxfenadm command
is run from the node of a cluster where LLT is configured and is running.
■ To display the keys on a CVM parallel disk group:
# vxfenadm -s /dev/vx/rdmp/disk_7
■ To display the keys on a Symantec Cluster Server (VCS) failover disk group:
# vxfenadm -s /dev/vx/rdmp/disk_8
2 To display the keys that are registered in all the disks specified in a disk file:
For example:
To display all the keys on coordinator disks:
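The commands are not reproduced above; as an illustration (the file path is assumed to be the generated /etc/vxfentab), all keys on the listed disks can be displayed as follows:
# vxfenadm -s all -f /etc/vxfentab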
You can verify the cluster ID using the lltstat -C command, and the node
ID using the lltstat -N command. For example:
# lltstat -C
57069
If the disk has keys that do not belong to a specific cluster, then the vxfenadm
command cannot look up the node name for the node ID, and hence prints the
node name as unknown. For example:
For disks with arbitrary format of keys, the vxfenadm command prints all the
fields as unknown. For example:
# vxfenadm -i /dev/sdr
Vendor id : EMC
Product id : SYMMETRIX
Revision : 5567
Serial Number : 42031000a
The same serial number information should appear when you enter the
equivalent command on node B using the /dev/sdt path.
On a disk from another manufacturer, Hitachi Data Systems, the output is
different and may resemble:
# vxfenadm -i /dev/sdt
Vendor id : HITACHI
Product id : OPEN-3
Revision : 0117
Serial Number : 0401EB6F0002
You can also use this procedure to remove the registration and reservation keys
created by another node from a disk.
To clear keys after split-brain
1 Stop VCS on all nodes.
# hastop -all
2 Make sure that the port h is closed on all the nodes. Run the following command
on each node to verify that the port h is closed:
# gabconfig -a
3 Stop I/O fencing on all nodes. Run the following command on each node:
# /etc/init.d/vxfen stop
4 If you have any applications that run outside of VCS control that have access
to the shared storage, then shut down all other nodes in the cluster that have
access to the shared storage. This prevents data corruption.
5 Start the vxfenclearpre script:
# /opt/VRTSvcs/vxfen/bin/vxfenclearpre
6 Read the script’s introduction and warning. Then, you can choose to let the
script run.
The script cleans up the disks and displays status messages as it removes the keys.
7 Start the I/O fencing driver on all nodes:
# /etc/init.d/vxfen start
8 Start VCS on all nodes:
# hastart
Warning: The cluster might panic if any node leaves the cluster membership before
the vxfenswap script replaces the set of coordinator disks.
3 Estimate the number of coordination points you plan to use as part of the
fencing configuration.
4 Set the value of the FaultTolerance attribute to 0.
Note: It is necessary to set the value to 0 because later in the procedure you
need to reset the value of this attribute to a value that is lower than the number
of coordination points. This ensures that the Coordpoint Agent does not fault.
Note: Make a note of the attribute value before you proceed to the next step.
After migration, when you re-enable the attribute you want to set it to the same
value.
You can also run the hares -display coordpoint command to find out whether the
LevelTwoMonitorFreq value is set.
# vxfenadm -d
where:
-t specifies that the disk group is imported only until the node restarts.
-f specifies that the import is to be done forcibly, which is necessary if one or
more disks is not accessible.
-C specifies that any import locks are removed.
9 If your setup uses VRTSvxvm version, then skip to step 10. You need not set
coordinator=off to add or remove disks. For other VxVM versions, perform
this step:
Where version is the specific release version.
Turn off the coordinator attribute value for the coordinator disk group.
10 To remove disks from the coordinator disk group, use the VxVM disk
administrator utility vxdiskadm.
11 Perform the following steps to add new disks to the coordinator disk group:
■ Add new disks to the node.
■ Initialize the new disks as VxVM disks.
■ Check the disks for I/O fencing compliance.
■ Add the new disks to the coordinator disk group and set the coordinator
attribute value as "on" for the coordinator disk group.
See the Symantec Cluster Server Installation Guide for detailed instructions.
Note that although the disk group content changes, I/O fencing remains in
the same state.
12 From one node, start the vxfenswap utility. You must specify the disk group to
the utility.
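For example, a command of the following form starts the utility for the coordinator disk group (the -n option applies only if rsh is used instead of the default ssh):
# vxfenswap -g vxfencoorddg [-n]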
The utility performs the following tasks:
■ Backs up the existing /etc/vxfentab file.
■ Creates a test file /etc/vxfentab.test for the disk group that is modified
on each node.
■ Reads the disk group you specified in the vxfenswap command and adds
the disk group to the /etc/vxfentab.test file on each node.
■ Verifies that the serial numbers of the new disks are identical on all the nodes.
The script terminates if the check fails.
■ Verifies that the new disks can support I/O fencing on each node.
13 If the disk verification passes, the utility reports success and asks if you want
to commit the new set of coordinator disks.
14 Confirm whether you want to clear the keys on the coordination points and
proceed with the vxfenswap operation.
15 Review the message that the utility displays and confirm that you want to
commit the new set of coordinator disks. Else skip to step 16.
3 Estimate the number of coordination points you plan to use as part of the
fencing configuration.
4 Set the value of the FaultTolerance attribute to 0.
Note: It is necessary to set the value to 0 because later in the procedure you
need to reset the value of this attribute to a value that is lower than the number
of coordination points. This ensures that the Coordpoint Agent does not fault.
Note: Make a note of the attribute value before you proceed to the next step.
After migration, when you re-enable the attribute you want to set it to the same
value.
# haconf -makerw
# vxfenadm -d
8 Find the name of the current coordinator disk group (typically vxfencoorddg)
that is in the /etc/vxfendg file.
# cat /etc/vxfendg
vxfencoorddg
9 Find the alternative disk groups available to replace the current coordinator
disk group.
10 Validate the new disk group for I/O fencing compliance. Run the following
command:
# vxfentsthdw -c vxfendg
See “Testing the coordinator disk group using the -c option of vxfentsthdw”
on page 367.
11 If the new disk group is not already deported, run the following command to
deport the disk group:
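For illustration, assuming the new disk group is named vxfendg as in the later steps, the deport command takes the following form:
# vxdg deport vxfendg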
14 If the disk verification passes, the utility reports success and asks if you want
to replace the coordinator disk group.
15 Confirm whether you want to clear the keys on the coordination points and
proceed with the vxfenswap operation.
16 Review the message that the utility displays and confirm that you want to
replace the coordinator disk group. Else skip to step 21.
18 Set the coordinator attribute value as "on" for the new coordinator disk group.
# vxdg -g vxfendg set coordinator=on
Set the coordinator attribute value as "off" for the old disk group.
The swap operation for the coordinator disk group is complete now.
21 If you do not want to replace the coordinator disk group, answer n at the prompt.
The vxfenswap utility rolls back any changes to the coordinator disk group.
# haconf -makerw
# vxfenadm -d
■ Edit the existing /etc/vxfenmode file with the new fencing mode and disk policy
information, and remove any preexisting /etc/vxfenmode.test file.
Note that the format of the /etc/vxfenmode.test file and the /etc/vxfenmode file
is the same.
# cat /etc/vxfenmode
vxfen_mode=scsi3
scsi3_disk_policy=raw
# vxfenadm -d
* 0 (sys3)
1 (sys4)
2 (sys5)
3 (sys6)
disks. You can use the vxfenswap utility to add these disks to the coordinator disk
group.
See “About I/O fencing in campus clusters” on page 625.
To add new disks from a recovered site to the coordinator disk group
1 Make sure system-to-system communication is functioning properly.
2 Make sure that the cluster is online.
# vxfenadm -d
# cat /etc/vxfendg
vxfencoorddg
# vxfenconfig -l
I/O Fencing Configuration Information:
======================================
Count : 1
Disk List
Disk Name Major Minor Serial Number Policy
6 When the primary site comes online, start the vxfenswap utility on any node
in the cluster:
# vxfenconfig -l
I/O Fencing Configuration Information:
======================================
Single Disk Flag : 0
Count : 3
Disk List
Disk Name Major Minor Serial Number Policy
# vxfenadm -d
3 Run the following command to view the coordinator disks that do not have
keys:
5 On any node, run the following command to start the vxfenswap utility:
6 Verify that the keys are atomically placed on the coordinator disks.
cpsadm action           CP server Operator    CP server Admin
add_cluster             –                     ✓
rm_clus                 –                     ✓
add_node                ✓                     ✓
rm_node                 ✓                     ✓
add_user                –                     ✓
rm_user                 –                     ✓
add_clus_to_user        –                     ✓
rm_clus_from_user       –                     ✓
reg_node                ✓                     ✓
unreg_node              ✓                     ✓
preempt_node            ✓                     ✓
list_membership         ✓                     ✓
list_nodes              ✓                     ✓
list_users              ✓                     ✓
halt_cps                –                     ✓
db_snapshot             –                     ✓
ping_cps                ✓                     ✓
client_preupgrade       ✓                     ✓
server_preupgrade       ✓                     ✓
list_protocols          ✓                     ✓
list_version            ✓                     ✓
list_ports              –                     ✓
add_port                –                     ✓
rm_port                 –                     ✓
host Hostname
■ To remove a user
Type the following command:
domain_type The domain type, for example vx, unixpwd, nis, etc.
Preempting a node
Use the following command to preempt a node.
To preempt a node
◆ Type the following command:
■ To unregister a node
Type the following command:
domain_type The domain type, for example vx, unixpwd, nis, etc.
# /opt/VRTScps/bin/vxcpserv
To add and remove virtual IP addresses and ports for CP servers at run-time
1 To list all the ports that the CP server is configured to listen on, run the following
command:
If the CP server has not been able to successfully listen on a given port at least
once, then the Connect History in the output shows never. If the IP addresses
are down when the vxcpserv process starts, vxcpserv binds to the IP addresses
when the addresses come up later. For example:
CP server does not actively monitor port health. If the CP server successfully
listens on any IP:port at least once, then the Connect History for that IP:port
shows once even if the port goes down later during CP server's lifetime. You
can obtain the latest status of the IP address from the corresponding IP resource
state that is configured under VCS.
2 To add a new port (IP:port) for the CP server without restarting the CP server,
run the following command:
For example:
3 To stop the CP server from listening on a port (IP:port) without restarting the
CP server, run the following command:
For example:
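The exact commands are not shown above. As an illustration (the CP server name cps1 is a placeholder), the list operation uses the list_ports action; the add_port and rm_port actions additionally take the virtual IP address and port number as options (see the cpsadm(1M) manual page):
# cpsadm -s cps1 -a list_ports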
Where, DATE is the snapshot creation date, and TIME is the snapshot creation
time.
# hastop -all
2 Stop fencing on all the VCS cluster nodes of all the clusters.
# /etc/init.d/vxfen stop
3 Stop all the CP servers using the following command on each CP server:
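For illustration (the CP server name cps1 is a placeholder), the halt_cps action stops a CP server:
# cpsadm -s cps1 -a halt_cps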
6 After the mount resource comes online, move the credentials directory from
the default location to shared storage.
# mv /var/VRTSvcs/vcsauth/data/CPSERVER /etc/VRTScps/db/
# ln -s /etc/VRTScps/db/CPSERVER \
/var/VRTSvcs/vcsauth/data/CPSERVER
Note: If multiple clusters share the same CP server, you must perform this
replacement procedure in each cluster.
You can use the vxfenswap utility to replace coordination points when fencing is
running in customized mode in an online cluster, with vxfen_mechanism=cps. The
utility also supports migration from server-based fencing (vxfen_mode=customized)
to disk-based fencing (vxfen_mode=scsi3) and vice versa in an online cluster.
However, if the VCS cluster has fencing disabled (vxfen_mode=disabled), then you
must take the cluster offline to configure disk-based or server-based fencing.
See “Deployment and migration scenarios for CP server” on page 409.
You can cancel the coordination point replacement operation at any time using the
vxfenswap -a cancel command.
If the VCS cluster nodes are not present here, prepare the new CP server(s)
for use by the VCS cluster.
See the Symantec Cluster Server Installation Guide for instructions.
2 Ensure that fencing is running on the cluster using the old set of coordination
points and in customized mode.
For example, enter the following command:
# vxfenadm -d
3 Create a new /etc/vxfenmode.test file on each VCS cluster node with the
fencing configuration changes such as the CP server information.
Review and if necessary, update the vxfenmode parameters for security, the
coordination points, and if applicable to your configuration, vxfendg.
Refer to the text information within the vxfenmode file for additional information
about these parameters and their new possible values.
4 From one of the nodes of the cluster, run the vxfenswap utility.
The vxfenswap utility requires secure ssh connection to all the cluster nodes.
Use -n to use rsh instead of default ssh.
# vxfenswap [-n]
5 Review the message that the utility displays and confirm whether you want to
commit the change.
■ If you do not want to commit the new fencing configuration changes, press
Enter or answer n at the prompt.
# vxfenconfig -l
To refresh the registration keys on the coordination points for server-based fencing
1 Ensure that the VCS cluster nodes and users have been added to the new CP
server(s). Run the following commands:
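As an illustration (the CP server name cps1 is a placeholder), the following commands list the nodes and the users known to the CP server:
# cpsadm -s cps1 -a list_nodes
# cpsadm -s cps1 -a list_users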
If the VCS cluster nodes are not present here, prepare the new CP server(s)
for use by the VCS cluster.
See the Symantec Cluster Server Installation Guide for instructions.
2 Ensure that fencing is running on the cluster in customized mode using the
coordination points mentioned in the /etc/vxfenmode file.
If the /etc/vxfenmode.test file exists, ensure that the information in it and in the
/etc/vxfenmode file is the same. Otherwise, the vxfenswap utility uses the information
listed in the /etc/vxfenmode.test file.
For example, enter the following command:
# vxfenadm -d
================================
Fencing Protocol Version: 201
Fencing Mode: CUSTOMIZED
Cluster Members:
* 0 (sys1)
1 (sys2)
RFSM State Information:
node 0 in state 8 (running)
node 1 in state 8 (running)
# vxfenconfig -l
5 Run the vxfenswap utility from one of the nodes of the cluster.
The vxfenswap utility requires secure ssh connection to all the cluster nodes.
Use -n to use rsh instead of default ssh.
For example:
# vxfenswap [-n]
6 You are then prompted to commit the change. Enter y for yes.
The command returns a confirmation of successful coordination point
replacement.
7 Confirm the successful execution of the vxfenswap utility. If the CP agent is
configured, it reports ONLINE when it finds the registrations on the
coordination points. You can view the registrations on the CP server and the coordinator disks
by using the cpsadm and vxfenadm utilities respectively.
Note that a running online coordination point refreshment operation can be
canceled at any time using the command:
# vxfenswap -a cancel
The following scenarios describe CP server deployment and migration, the CP server and VCS cluster involved, and the action required:

Scenario: Set up a CP server for a VCS cluster for the first time.
CP server: New CP server.
VCS cluster: New VCS cluster using the CP server as a coordination point.
Action required: On the designated CP server, perform the following tasks:
1 Prepare to configure the new CP server.
2 Configure the new CP server.
3 Prepare the new CP server for use by the VCS cluster.

Scenario: Add a new VCS cluster to an existing and operational CP server.
CP server: Existing and operational CP server.
VCS cluster: New VCS cluster.
Action required: On the VCS cluster nodes, configure server-based I/O fencing.
See the Symantec Cluster Server Installation Guide for the procedures.

Scenario: Replace the coordination point from an existing CP server to a new CP server.
CP server: New CP server.
VCS cluster: Existing VCS cluster using the CP server as a coordination point.
Action required: On the designated CP server, perform the following tasks:
1 Prepare to configure the new CP server.
2 Configure the new CP server.

Scenario: Replace the coordination point from an existing CP server to an operational CP server coordination point.
CP server: Operational CP server.
VCS cluster: Existing VCS cluster using the CP server as a coordination point.
Action required: On the designated CP server, prepare to configure the new CP server manually.
See the Symantec Cluster Server Installation Guide for the procedures.
On a node in the VCS cluster, run the vxfenswap command to replace the CP server.

Scenario: Enable fencing in a VCS cluster with a new CP server coordination point.
CP server: New CP server.
VCS cluster: Existing VCS cluster with fencing configured in scsi3 mode.
Action required: On the designated CP server, perform the following tasks:
1 Prepare to configure the new CP server.
2 Configure the new CP server.
On each VCS cluster node, stop VCS and I/O fencing:
# hastop -local
# /etc/init.d/vxfen stop

Scenario: Enable fencing in a VCS cluster with an operational CP server coordination point.
CP server: Operational CP server.
VCS cluster: Existing VCS cluster with fencing configured in disabled mode.
Action required: On the designated CP server, prepare to configure the new CP server.
See the Symantec Cluster Server Installation Guide for this procedure.
Based on whether the cluster is online or offline, perform the appropriate procedures. To stop VCS and I/O fencing on a VCS cluster node:
# hastop -local
# /etc/init.d/vxfen stop

Scenario: Refresh registrations of VCS cluster nodes on coordination points (CP servers/coordinator disks) without incurring application downtime.
CP server: Operational CP server.
VCS cluster: Existing VCS cluster using the CP server as a coordination point.
Action required: On the VCS cluster, run the vxfenswap command to refresh the keys on the CP server.
See "Refreshing registration keys on the coordination points for server-based fencing" on page 407.
Warning: The cluster might panic if any node leaves the cluster membership before
the coordination points migration operation completes.
Migrating manually
3 Make sure that the VCS cluster is online and uses either disk-based or
server-based fencing.
# vxfenadm -d
4 Copy the response file to one of the cluster systems where you want to
configure I/O fencing.
Review the sample files to reconfigure I/O fencing.
See “Sample response file to migrate from disk-based to server-based fencing”
on page 418.
See “Sample response file to migrate from server-based fencing to disk-based
fencing” on page 419.
See “Sample response file to migrate from single CP server-based fencing to
server-based fencing” on page 419.
5 Edit the values of the response file variables as necessary.
See “Response file variables to migrate between fencing configurations”
on page 419.
6 Start the I/O fencing reconfiguration from the system to which you copied the
response file. For example:
$CFG{disks_to_remove}=[ qw(emc_clariion0_62) ];
$CFG{fencing_cps}=[ qw(10.198.89.251)];
$CFG{fencing_cps_ports}{"10.198.89.204"}=14250;
$CFG{fencing_cps_ports}{"10.198.89.251"}=14250;
$CFG{fencing_cps_vips}{"10.198.89.251"}=[ qw(10.198.89.251 10.198.89.204) ];
$CFG{fencing_ncp}=1;
$CFG{fencing_option}=4;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="VCS60";
$CFG{systems}=[ qw(sys1 sys2) ];
$CFG{vcs_clusterid}=22462;
$CFG{vcs_clustername}="clus1";
$CFG{fencing_disks}=[ qw(emc_clariion0_66) ];
$CFG{fencing_mode}="scsi3";
$CFG{fencing_ncp}=1;
$CFG{fencing_ndisks}=1;
$CFG{fencing_option}=4;
$CFG{opt}{configure}=1;
$CFG{opt}{fencing}=1;
$CFG{prod}="VCS60";
$CFG{servers_to_remove}=[ qw([10.198.89.251]:14250) ];
$CFG{systems}=[ qw(sys1 sys2) ];
$CFG{vcs_clusterid}=42076;
$CFG{vcs_clustername}="clus1";
CFG{fencing_scsi3_disk_policy}    Scalar    Specifies the disk policy that the disks must use. (Optional)
You can enable preferred fencing to use system-based race policy or group-based
race policy. If you disable preferred fencing, the I/O fencing configuration uses the
default count-based race policy.
See “About preferred fencing” on page 322.
See “How preferred fencing works” on page 326.
To enable preferred fencing for the I/O fencing configuration
1 Make sure that the cluster is running with I/O fencing set up.
# vxfenadm -d
2 Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
# haconf -makerw
■ Set the value of the system-level attribute FencingWeight for each node in
the cluster.
For example, in a two-node cluster, where you want to assign sys1 five
times more weight compared to sys2, run the following commands:
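The commands are not reproduced above; as an illustration (the weight values are placeholders), the system-level attribute can be set as follows:
# hasys -modify sys1 FencingWeight 50
# hasys -modify sys2 FencingWeight 10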
# vxfenconfig -a
# haconf -makerw
■ Set the value of the group-level attribute Priority for each service group.
For example, run the following command:
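As an illustration (the group name and priority value are placeholders):
# hagrp -modify service_group Priority 1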
Make sure that you assign a parent service group an equal or lower priority
than its child service group. In case the parent and the child service groups
are hosted in different subclusters, then the subcluster that hosts the child
service group gets higher preference.
■ Save the VCS configuration.
# haconf -dump -makero
■ Make the VCS configuration writable again:
# haconf -makerw
■ Set the value of the site-level attribute Preference for each site.
For example,
# hasite -modify Pune Preference 2
6 To view the fencing node weights that are currently set in the fencing driver,
run the following command:
# vxfenconfig -a
# vxfenadm -d
2 Make sure that the cluster-level attribute UseFence has the value set to SCSI3.
3 To disable preferred fencing and use the default race policy, set the value of
the cluster-level attribute PreferredFencingPolicy as Disabled.
# haconf -makerw
# haclus -modify PreferredFencingPolicy Disabled
# haconf -dump -makero
Chapter 11
Controlling VCS behavior
This chapter includes the following topics:
When resource R2 faults, the fault is propagated up the dependency tree to resource
R1. When the critical resource R1 goes offline, VCS must fault the service group
and fail it over elsewhere in the cluster. VCS takes other resources in the service
group offline in the order of their dependencies. After taking resources R3, R4, and
R5 offline, VCS fails over the service group to another node.
When resource R2 faults, the engine propagates the failure up the dependency
tree. Neither resource R1 nor resource R2 are critical, so the fault does not result
in the tree going offline or in service group failover.
Figure 11-4 Scenario 3: Resource with critical parent fails to come online
VCS calls the Clean function for resource R2 and propagates the fault up the
dependency tree. Resource R1 is set to critical, so the service group is taken offline
and failed over to another node in the cluster.
Table 11-1 Possible values of the AutoFailover attribute and their description
AutoFailover attribute value    Description
0 VCS does not fail over the service group when a system or service
group faults.
2 VCS automatically fails over the service group only if another suitable
node exists in the same system zone or sites.
If a suitable node does not exist in the same system zone or sites, VCS
brings the service group offline, and generates an alert for
administrator’s intervention. You can manually bring the group online
using the hagrp -online command.
Note: If SystemZones attribute is not defined, the failover behavior is
similar to AutoFailOver=1.
Table 11-2 Possible values of the FailOverPolicy attribute and their description
FailOverPolicy attribute value    Description
Priority VCS selects the system with the lowest priority as the failover target.
The Priority failover policy is ideal for simple two-node clusters or
small clusters with few service groups.
RoundRobin VCS selects the system running the fewest service groups as the
failover target. This policy is ideal for large clusters running many
service groups with similar server load characteristics (for example,
similar databases or applications).
BiggestAvailable VCS selects a system based on the forecasted available capacity for
all systems in the SystemList. The system with the highest forecasted
available capacity is selected.
This policy can be set only if the cluster attribute Statistics is set to
Enabled. The service group attribute Load is defined in terms of CPU,
Memory, and Swap in absolute units. The unit can be of the following
values:
About AdaptiveHA
When you set FailOverPolicy to BiggestAvailable, AdaptiveHA enables VCS to
dynamically select the cluster node with the most available resources to fail over
an application. VCS monitors and forecasts the unused capacity of systems in terms
of CPU, Memory, and Swap, to select the largest available system.
After you complete the above steps, AdaptiveHA is enabled. The service group
follows the BiggestAvailable policy during a failover.
The following table provides information on various attributes and the values they
can take to enable AdaptiveHA:
■ To turn off host metering: set the cluster attribute Statistics to Disabled.
■ To turn on host metering and forecasting: set the cluster attribute Statistics to Enabled.
■ To enable the hagrp -forecast CLI option: set the cluster attribute Statistics to Enabled and also set the service group attribute Load based on your application's CPU, Mem, or Swap usage.
■ To check the meters supported for a given host: verify the value of the cluster attribute HostAvailableMeters.
■ To enable host metering, forecast, and policy decisions using forecast: set the cluster attribute Statistics to Enabled.
■ To change metering or forecast frequency: set the MeterInterval and ForecastCycle keys in the cluster attribute MeterControl accordingly.
■ To check the available capacity and its forecast: use the following commands to check values for available capacity and its forecast:
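As an illustration (system and group names are placeholders; verify the attribute and option names for your release):
# hasys -value sys1 AvailableCapacity
# hagrp -forecast service_group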
is not same for these groups, then the engine refers to the cluster attribute
MeterWeight.
Note: The MeterWeight settings enable VCS to decide the target system for a child
service group based on the selected system for the parent service group.
Limitations of AdaptiveHA
AdaptiveHA enables VCS to make dynamic decisions about failing over an
application to the biggest available system. Due to the overcommitting behavior of
virtualization technologies such as KVM, RHEV, and VMware, a user can create
multiple guest systems on a host with system resource (Memory, CPU, or Swap)
values higher than the values actually available on the host. Due to this behavior,
the VCS engine considers a guest system with overcommitted resources as highly
available when it selects a target for failing over an application.
For example, on a host machine with 4 GB of built-in physical memory, if you create
two guest systems with 4 GB and 8 GB of memory respectively, the VCS engine
selects the guest system with the overcommitted 8 GB memory value for failing over
an application, irrespective of the physical memory available on the host.
cluster aha_oracls (
UserNames = { admin = dqrJqlQnrMrrPzrLqo }
Administrators = { admin }
UseFence = SCSI3
HacliUserLevel = COMMANDROOT
Statistics = Enabled
)
■ The Capacity attribute of system and Load attribute of service group are changed
from scalar integer to integer-association (multidimensional) type attributes.
For example, if the System has Capacity attribute defined and service group
has Load attribute set in the main.cf file as:
Group Gx (
....
Load = 20
)
System N1 (
....
Capacity = 30
)
To update the main.cf file, change the Load and Capacity values to the association
format as follows:
Group Gx (
....
Load = { Units = 20 }
)
System N1 (
....
Capacity = { Units = 30 }
)
■ If the cluster attribute HostMonLogLvl is defined in the main.cf file, then replace
it with Statistics and make the appropriate change from the following:
■ Replace HostMonLogLvl = ALL with Statistics = MeterHostOnly.
■ Replace HostMonLogLvl = AgentOnly with Statistics = MeterHostOnly.
About sites
The SiteAware attribute enables you to create sites to use in an initial failover
decision in campus clusters. A service group can fail over to another site even though
a failover target is available within the same site. If the SiteAware attribute is set to
1, you cannot configure SystemZones. You can define site dependencies to restrict
dependent applications to fail over within the same site. If the SiteAware attribute
is configured and set to 2, then the service group fails over within the same site.
For example, in a campus cluster with two sites, siteA and siteB, you can define a
site dependency among service groups in a three-tier application infrastructure
that consists of Web, application, and database tiers, to restrict the service group
failover to within the same site.
See “ How VCS campus clusters work” on page 621.
Load-based autostart
VCS provides a method to determine where a service group comes online when
the cluster starts. Setting the AutoStartPolicy to Load instructs the VCS engine,
HAD, to determine the best system on which to start the groups. VCS places service
groups in an AutoStart queue for load-based startup as soon as the groups probe
all running systems. VCS creates a subset of systems that meet all prerequisites
and then chooses the system with the highest AvailableCapacity.
Set AutoStartPolicy = Load and configure the SystemZones attribute to establish
a list of preferred systems on which to initially run a group.
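For example, a command of the following form (the group name is a placeholder) sets the policy:
# hagrp -modify service_group AutoStartPolicy Load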
Table 11-4 Possible events when the ManageFaults attribute is set to NONE
This command clears the ADMIN_WAIT state for all resources. If VCS continues
to detect resources that are not in the required state, it resets the resources
to the ADMIN_WAIT state.
3 If resources continue in the ADMIN_WAIT state, repeat step 1 and step 2, or
issue the following command to stop VCS from setting the resource to the
ADMIN_WAIT state:
When a service group has a resource in the ADMIN_WAIT state, the following
service group operations cannot be performed on the resource: online, offline,
switch, and flush. Also, you cannot use the hastop command when resources
are in the ADMIN_WAIT state. In this situation, you must issue the hastop
command with the -force option.
When resource R2 faults, the Clean function is called and the resource is marked
as faulted. The fault is not propagated up the tree, and the group is not taken offline.
Note: You cannot set the ProPCV attribute for parallel service groups and for hybrid
service groups.
You can set the ProPCV attribute when the service group is inactive on all the nodes
or when the group is active (ONLINE, PARTIAL, or STARTING) on one node in the
cluster. You cannot set the ProPCV attribute if the service group is already online
on multiple nodes in the cluster. See “Service group attributes” on page 765.
If ProPCV is set to 1, you cannot bring online processes that are listed in the
MonitorProcesses attribute or the StartProgram attribute of the application resource
on any other node in the cluster. If you try to start a process that is listed in the
MonitorProcesses attribute or the StartProgram attribute on any other node, that
process is killed before it starts. Therefore, the service group does not get into a
concurrency violation.
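ProPCV is a service group attribute. As a sketch (sg1 is a hypothetical group name), it is typically enabled from the command line while the configuration is writable:
haconf -makerw
hagrp -modify sg1 ProPCV 1
haconf -dump -makero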
In situations where the propcv action agent function times out, you can use the
amfregister command to manually mark a resource as one of the following:
Limitations of ProPCV
The following limitations apply:
■ ProPCV feature is supported only when the Mode value for the IMF attribute of
the Application type resource is set to 3 on all nodes in the cluster.
■ The ProPCV feature does not protect against concurrency in the following cases:
■ When you modify the IMFRegList attribute for the resource type.
■ When you modify any value that is part of the IMFRegList attribute for the
resource type.
■ If you configure the application type resource for ProPCV, consider the following:
■ If you run the process with changed order of arguments, the ProPCV feature
does not prevent the execution of the process.
For example, a single command can be run in multiple ways:
/usr/bin/tar -c -f a.tar
/usr/bin/tar -f a.tar -c
The ProPCV feature works only if you run the process the same way as it
is configured in the resource configuration.
■ If there are multiple ways or commands to start a process, ProPCV prevents
the startup of the process only if the process is started in the way specified
in the resource configuration.
■ You can bring processes online outside VCS control on another node when a
failover service group is auto-disabled.
Examples are:
■ When you use the hastop -local command or the hastop -local -force
command on a node.
■ When a node is detected as FAULTED after its ShutdownTimeout value has
elapsed because HAD exited.
In such situations, you can bring processes online outside VCS control on a
node even if the failover service group is online on another node on which VCS
engine is not running.
■ Before you set ProPCV to 1 for a service group, you must ensure that none of
the processes specified in the MonitorProcesses attribute or the StartProgram
attribute of the application resource of the group are running on any node where
the resource is offline. If an application resource lists two processes in its
MonitorProcesses attribute, both processes need to be offline on all nodes in
the cluster. If a node has only one process running and you set ProPCV to 1
for the group, you can still start the second process on another node because
the Application agent cannot perform selective offline monitoring or online
monitoring of individual processes for an application resource.
■ If a ProPCV-enabled service group has some application resources and some
non-application type resources (that cannot be configured for ProPCV), the
group can still get into concurrency violation for the non-application type
resources. You can bring the non-application type resources online outside VCS
control on a node when the service group is active on another node. In such
cases, the concurrency violation trigger is invoked.
■ When ProPCV is enabled for a group, the AMF driver prevents certain processes
from starting, based on the process offline registrations with the AMF driver. If
a process whose pathname and arguments match a registered event starts, and
the prevent action is set for this registered event, that process is prevented from
starting. In addition, if the arguments match and even if only the basename of
the starting process matches the basename of the pathname of the registered
event, the AMF driver prevents that process from starting.
■ Even with ProPCV enabled, the AMF driver can prevent only those processes
from starting whose pathname and arguments match the events registered
with the AMF driver. If the same process is started in some other manner (for
example, with a totally different pathname), the AMF driver does not prevent the
process from starting. This behavior is in line with how the AMF driver works for
process offline monitoring.
You can configure the IntentionalOffline attribute with the following possible values:
■ If you set the attribute to 1: When the application is intentionally stopped outside
of VCS control, the resource enters an OFFLINE state. This attribute does not
affect VCS behavior on application failure. VCS continues to fault the resource
if the managed application fails.
■ If you set the attribute to 0: When the application is intentionally stopped outside
of VCS control, the resource enters a FAULTED state.
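IntentionalOffline is defined at the resource type level; as a hedged sketch (app1 is a hypothetical Application resource), it can be overridden and set for an individual resource:
hares -override app1 IntentionalOffline
hares -modify app1 IntentionalOffline 1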
OnlineGroup: If the configured application is started outside of VCS control, VCS
brings the corresponding service group online. If you attempt to start the
application on a frozen node or service group, VCS brings the corresponding
service group online once the node or the service group is unfrozen.
OfflineGroup: If the configured application is stopped outside of VCS control, VCS
takes the corresponding service group offline.
■ If the ToleranceLimit (TL) attribute is set to a non-zero value, the Monitor cycle
can return offline (exit code = 100) up to the number of times specified by the
ToleranceLimit; each such cycle increments the ToleranceCount (TC). When the
ToleranceCount equals the ToleranceLimit (TC = TL), the agent declares the
resource as faulted.
■ If the Monitor routine returns online (exit code = 110) during a monitor cycle,
the agent takes no further action. The ToleranceCount attribute is reset to 0
when the resource is online for a period of time specified by the ConfInterval
attribute.
If the resource is detected as being offline the number of times specified by the
ToleranceLimit before the ToleranceCount is reset (TC = TL), the resource is
considered faulted.
■ After the agent determines the resource is not online, VCS checks the Frozen
attribute for the service group. If the service group is frozen, VCS declares the
resource faulted and calls the resfault trigger. No further action is taken.
■ If the service group is not frozen, VCS checks the ManageFaults attribute. If
ManageFaults=NONE, VCS marks the resource state as ONLINE|ADMIN_WAIT
and calls the resadminwait trigger. If ManageFaults=ALL, VCS calls the Clean
function with the CleanReason set to Unexpected Offline.
■ If the Clean function fails (exit code = 1) the resource remains online with the
state UNABLE TO OFFLINE. VCS fires the resnotoff trigger and monitors the
resource again. The resource enters a cycle of alternating Monitor and Clean
functions until the Clean function succeeds or a user intervenes.
■ If the Clean function is successful, VCS examines the value of the RestartLimit
(RL) attribute. If the attribute is set to a non-zero value, VCS increments the
RestartCount (RC) attribute and invokes the Online function. This continues until
the value of the RestartLimit equals that of the RestartCount. At this point, VCS
attempts to monitor the resource.
■ If the Monitor returns an online status, VCS considers the resource online and
resumes periodic monitoring. If the monitor returns an offline status, the resource
is faulted and VCS takes actions based on the service group configuration.
■ If the Online function does not time out, VCS invokes the Monitor function. The
Monitor routine returns an exit code of 110 if the resource is online. Otherwise,
the Monitor routine returns an exit code of 100.
■ VCS examines the value of the OnlineWaitLimit (OWL) attribute. This attribute
defines how many monitor cycles can return an offline status before the agent
framework declares the resource faulted. Each successive Monitor cycle
increments the OnlineWaitCount (OWC) attribute. When OWL= OWC (or if
OWL= 0), VCS determines the resource has faulted.
■ VCS then examines the value of the ManageFaults attribute. If the ManageFaults
is set to NONE, the resource state changes to OFFLINE|ADMIN_WAIT.
If the ManageFaults is set to ALL, VCS calls the Clean function with the
CleanReason set to Online Ineffective.
■ If the Clean function is not successful (exit code = 1), the agent monitors the
resource. It determines the resource is offline, and calls the Clean function with
the CleanReason set to Online Ineffective. This cycle continues until the Clean
function is successful, after which VCS resets the OnlineWaitCount value.
■ If the OnlineRetryLimit (ORL) is set to a non-zero value, VCS increments the
OnlineRetryCount (ORC) and invokes the Online function. This starts the cycle
all over again. If ORL = ORC, or if ORL = 0, VCS assumes that the Online
operation has failed and declares the resource as faulted.
Figure: Flow of VCS behavior when a resource fails to come online. Starting from an offline resource, the agent waits for the online to complete; if the online times out or the OnlineWaitCount (OWC) exceeds the OnlineWaitLimit (OWL), VCS checks the ManageFaults attribute. With NONE, the resource is placed in the Offline|Admin_Wait state and the resadminwait trigger is called; with ALL, the Clean function is called with "Online Ineffective" or "Online Hung". After a successful Clean, OWC is reset and the online is retried while the OnlineRetryCount (ORC) is below the OnlineRetryLimit (ORL); otherwise the resource faults and the resfault trigger is called.
Figure: Flow of service group behavior after a resource fault. If FaultPropagation is 0, no other resources are affected and no group failover occurs. Otherwise all resources in the dependent path are taken offline; if AutoFailover is 0, the service group remains offline in the Faulted state. If no target system is available, the service group remains offline in the Faulted state and the nofailover trigger is called; otherwise the group fails over based on the FailOverPolicy.
Note: Disabling a resource is not an option when the entire service group requires
disabling. In that case, set the service group attribute Enabled to 0.
Use the following command to disable the resource when VCS is running:
To have the resource disabled initially when VCS is started, set the resource’s
Enabled attribute to 0 in main.cf.
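As a sketch (webres is a hypothetical resource name), both approaches manipulate the same Enabled attribute:
hares -modify webres Enabled 0    # disable the resource while VCS is running
hares -modify webres Enabled 1    # enable it again later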
When the PolicyIntention value of 2 for the parent service group is cleared, the
PolicyIntention value of all its child service groups in the dependency tree is also
cleared.
This section shows how a service group containing disabled resources is brought
online.
Figure 11-8 shows Resource_3 is disabled. When the service group is brought
online, the only resources brought online by VCS are Resource_1 and Resource_2
(Resource_2 is brought online first) because VCS recognizes Resource_3 is
disabled. In accordance with online logic, the transaction is not propagated to the
disabled resource.
Figure 11-8: The service group going online. Resource_1 depends on Resource_2 and Resource_3. Resource_3 is disabled. Resource_4 is offline. Resource_5 is offline.
Figure 11-9 shows that Resource_2 is disabled. When the service group is brought
online, resources 1, 3, and 4 are also brought online (Resource_4 is brought online
first). Note that Resource_3, the child of the disabled resource, is brought online because
Resource_1 is enabled and is dependent on it.
Figure 11-9: The service group going online with Resource_2 disabled. The dependency tree contains Resource_1, Resource_2, Resource_3, and Resource_4.
Table 11-6 Disk group state and the failover attribute define VCS behavior
Figure: Flow of VCS behavior on disk group I/O loss. Depending on whether I/O fencing is in use and on the value of the PanicSystemOnDGLoss attribute (0, 1, or 2), VCS either logs a DG error with no failover or panics the system (dump, crash, and halt processes) before the Clean entry point runs.
When a service group is brought online, its load is subtracted from the system's
capacity to determine available capacity. VCS maintains this information in the
AvailableCapacity attribute.
When a failover occurs, VCS determines which system has the highest available
capacity and starts the service group on that system. During a failover involving
multiple service groups, VCS makes failover decisions serially to facilitate a proper
load-based choice.
System capacity is a soft restriction; in some situations, the value of the Capacity
attribute could be less than zero. During some operations, including cascading
failures, the value of the AvailableCapacity attribute could be negative.
The LoadTimeCounter attribute, which determines how many seconds the system
load has been above LoadWarningLevel, is reset.
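As a quick sketch (Server1 is a placeholder system name), the current values of these system attributes can be inspected from the command line:
hasys -value Server1 AvailableCapacity
hasys -value Server1 LoadTimeCounter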
Prerequisites = { GroupWeight = 1 }
Limits = { GroupWeight = 1 }
include "types.cf"
cluster SGWM-demo (
)
system LargeServer1 (
Capacity = 200
Limits = { ShrMemSeg=20, Semaphores=10, Processors=12 }
LoadWarningLevel = 90
LoadTimeThreshold = 600
)
system LargeServer2 (
Capacity = 200
Limits = { ShrMemSeg=20, Semaphores=10, Processors=12 }
LoadWarningLevel=70
LoadTimeThreshold=300
)
system MedServer1 (
Capacity = 100
Limits = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
system MedServer2 (
Capacity = 100
Limits = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
group G1 (
SystemList = { LargeServer1 = 0, LargeServer2 = 1,
MedServer1 = 2 , MedServer2 = 3 }
SystemZones = { LargeServer1=0, LargeServer2=0,
MedServer1=1, MedServer2=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2 }
FailOverPolicy = Load
Load = 100
Prerequisites = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
include "types.cf"
cluster SGWM-demo (
)
system Server1 (
Capacity = 100
)
system Server2 (
Capacity = 100
)
system Server3 (
Capacity = 100
)
system Server4 (
Capacity = 100
)
group G1 (
SystemList = { Server1, Server2, Server3, Server4 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = 20
)
group G2 (
SystemList = { Server1, Server2, Server3, Server4 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = 40
)
group G3 (
SystemList = { Server1, Server2, Server3, Server4 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = 30
)
group G4 (
SystemList = { Server1, Server2, Server3, Server4 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = 10
)
group G5 (
SystemList = { Server1, Server2, Server3, Server4 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = 50
)
group G6 (
SystemList = { Server1, Server2, Server3, Server4 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = 30
)
group G7 (
SystemList = { Server1, Server2, Server3, Server4 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = 20
)
group G8 (
SystemList = { Server1, Server2, Server3, Server4 }
AutoStartPolicy = Load
AutoStartList = { Server1, Server2, Server3, Server4 }
FailOverPolicy = Load
Load = 40
)
System     Available capacity   Online groups
Server1    80                   G1
Server2    60                   G2
Server3    70                   G3
Server4    90                   G4
As the next groups come online, group G5 starts on Server4 because this server
has the highest AvailableCapacity value. Group G6 then starts on Server1 with
AvailableCapacity of 80. Group G7 comes online on Server3 with AvailableCapacity
of 70 and G8 comes online on Server2 with AvailableCapacity of 60.
Table 11-8 shows the Autostart cluster configuration for a basic four-node cluster
with the other service groups online.
System     Available capacity   Online groups
Server1    50                   G1 and G6
Server2    20                   G2 and G8
Server3    50                   G3 and G7
Server4    40                   G4 and G5
In this configuration, Server2 fires the loadwarning trigger after 600 seconds because
it is at the default LoadWarningLevel of 80 percent.
In this configuration, Server3 fires the loadwarning trigger to notify that the server
is overloaded. An administrator can then switch group G7 to Server1 to balance
the load across groups G1 and G3. When Server4 is repaired, it rejoins the cluster
with an AvailableCapacity value of 100, making it the most eligible target for a
failover group.
Table 11-10 Cascading failure scenario for a basic four node cluster
include "types.cf"
cluster SGWM-demo (
)
system LargeServer1 (
Capacity = 200
Limits = { ShrMemSeg=20, Semaphores=10, Processors=12 }
LoadWarningLevel = 90
LoadTimeThreshold = 600
)
system LargeServer2 (
Capacity = 200
Limits = { ShrMemSeg=20, Semaphores=10, Processors=12 }
LoadWarningLevel=70
LoadTimeThreshold=300
)
system MedServer1 (
Capacity = 100
Limits = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
system MedServer2 (
Capacity = 100
Limits = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
group G1 (
SystemList = { LargeServer1, LargeServer2, MedServer1,
MedServer2 }
SystemZones = { LargeServer1=0, LargeServer2=0, MedServer1=1,
MedServer2=1 }
AutoStartPolicy = Load
AutoStartList = { LargeServer1, LargeServer2 }
FailOverPolicy = Load
Load = 100
Prerequisites = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
group G2 (
SystemList = { LargeServer1, LargeServer2, MedServer1,
MedServer2 }
SystemZones = { LargeServer1=0, LargeServer2=0, MedServer1=1,
MedServer2=1 }
AutoStartPolicy = Load
AutoStartList = { LargeServer1, LargeServer2 }
FailOverPolicy = Load
Load = 100
Prerequisites = { ShrMemSeg=10, Semaphores=5, Processors=6 }
)
group G3 (
SystemList = { LargeServer1, LargeServer2, MedServer1, MedServer2 }
SystemZones = { LargeServer1=0, LargeServer2=0, MedServer1=1,
MedServer2=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2 }
FailOverPolicy = Load
Load = 30
)
group G4 (
SystemList = { LargeServer1, LargeServer2, MedServer1, MedServer2 }
SystemZones = { LargeServer1=0, LargeServer2=0, MedServer1=1,
MedServer2=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2 }
FailOverPolicy = Load
Load = 20
)
System       Available capacity   Current limits                             Online groups
MedServer1   70                   ShrMemSeg=10, Semaphores=5, Processors=6   G3
MedServer2   80                   ShrMemSeg=10, Semaphores=5, Processors=6   G4
include "types.cf"
cluster SGWM-demo (
)
system LargeServer1 (
Capacity = 200
Limits = { ShrMemSeg=15, Semaphores=30, Processors=18 }
LoadWarningLevel = 80
LoadTimeThreshold = 900
)
system LargeServer2 (
Capacity = 200
Limits = { ShrMemSeg=15, Semaphores=30, Processors=18 }
LoadWarningLevel=80
LoadTimeThreshold=900
)
system LargeServer3 (
Capacity = 200
Limits = { ShrMemSeg=15, Semaphores=30, Processors=18 }
LoadWarningLevel=80
LoadTimeThreshold=900
)
system MedServer1 (
Capacity = 100
Limits = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
system MedServer2 (
Capacity = 100
Limits = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
system MedServer3 (
Capacity = 100
Limits = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
system MedServer4 (
Capacity = 100
Limits = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
system MedServer5 (
Capacity = 100
Limits = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
group Database1 (
SystemList = { LargeServer1, LargeServer2, LargeServer3,
MedServer1, MedServer2, MedServer3, MedServer4,
MedServer5 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { LargeServer1, LargeServer2, LargeServer3 }
FailOverPolicy = Load
Load = 100
Prerequisites = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
group Database2 (
SystemList = { LargeServer1, LargeServer2, LargeServer3,
MedServer1, MedServer2, MedServer3, MedServer4,
MedServer5 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { LargeServer1, LargeServer2, LargeServer3 }
FailOverPolicy = Load
Load = 100
Prerequisites = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
group Database3 (
SystemList = { LargeServer1, LargeServer2, LargeServer3,
MedServer1, MedServer2, MedServer3, MedServer4,
MedServer5 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { LargeServer1, LargeServer2, LargeServer3 }
FailOverPolicy = Load
Load = 100
Prerequisites = { ShrMemSeg=5, Semaphores=10, Processors=6 }
)
group Application1 (
SystemList = { LargeServer1, LargeServer2, LargeServer3,
MedServer1, MedServer2, MedServer3, MedServer4,
MedServer5 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2, MedServer3,
MedServer4,
MedServer5 }
FailOverPolicy = Load
Load = 50
)
group Application2 (
SystemList = { LargeServer1, LargeServer2, LargeServer3,
MedServer1, MedServer2, MedServer3, MedServer4,
MedServer5 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2, MedServer3,
MedServer4,
MedServer5 }
FailOverPolicy = Load
Load = 50
)
group Application3 (
SystemList = { LargeServer1, LargeServer2, LargeServer3,
MedServer1, MedServer2, MedServer3, MedServer4,
MedServer5 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2, MedServer3,
MedServer4,
MedServer5 }
FailOverPolicy = Load
Load = 50
)
group Application4 (
SystemList = { LargeServer1, LargeServer2, LargeServer3,
MedServer1, MedServer2, MedServer3, MedServer4,
MedServer5 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2, MedServer3,
MedServer4,
MedServer5 }
FailOverPolicy = Load
Load = 50
)
group Application5 (
SystemList = { LargeServer1, LargeServer2, LargeServer3,
MedServer1, MedServer2, MedServer3, MedServer4,
MedServer5 }
SystemZones = { LargeServer1=0, LargeServer2=0,
LargeServer3=0,
MedServer1=1, MedServer2=1, MedServer3=1,
MedServer4=1,
MedServer5=1 }
AutoStartPolicy = Load
AutoStartList = { MedServer1, MedServer2, MedServer3,
MedServer4,
MedServer5 }
FailOverPolicy = Load
Load = 50
)
Service group   System
Database1       LargeServer1
Database2       LargeServer2
Database3       LargeServer3
Application1    MedServer1
Application2    MedServer2
Application3    MedServer3
Application4    MedServer4
Application5    MedServer5
Table 11-14 Failure scenario for a complex eight-node cluster running multiple
applications and large databases
In this scenario, further failure of either system can be tolerated because each has
sufficient Limits available to accommodate the additional service group.
Dependency categories
Dependency categories determine the relationship of the parent group with the
state of the child group.
Table 12-1 shows dependency categories and relationships between parent and
child service groups.
Online group dependency: The parent group must wait for the child group to be
brought online before it can start.
Offline group dependency: The parent group can be started only if the child group
is offline, and vice versa. This behavior prevents conflicting applications from
running on the same system.
Dependency location
The relative location of the parent and child service groups determines whether the
dependency between them is a local, global, remote, or site dependency.
Table 12-2 shows the dependency locations for local, global, remote and site
dependencies.
Local dependency: The parent group depends on the child group being online or
offline on the same system.
Global dependency: An instance of the parent group depends on one or more
instances of the child group being online on any system in the cluster.
Site dependency: An instance of the parent group depends on one or more instances
of the child group being online on any system in the same site.
Dependency rigidity
The type of dependency defines the rigidity of the link between parent and child
groups. A soft dependency means minimum constraints, whereas a hard dependency
means maximum constraints.
Table 12-3 shows dependency rigidity and associated constraints.
Soft dependency: Specifies the minimum constraints while bringing parent and child
groups online. The only constraint is that the child group must be online before
the parent group is brought online.
■ If the child group faults, VCS does not immediately take the parent
offline. If the child group cannot fail over, the parent remains online.
■ When both groups are online, either group, child or parent, may be
taken offline while the other remains online.
■ If the parent group faults, the child group remains online.
■ When the link is created, the child group need not be online if the
parent is online. However, when both groups are online, their online
state must not conflict with the type of link.
Firm dependency: Imposes more constraints when VCS brings the parent or child
groups online or takes them offline. In addition to the constraint that the child
group must be online before the parent group is brought online, the constraints
include:
■ If the child group faults, the parent is taken offline. If the parent is
frozen at the time of the fault, the parent remains in its original state.
If the child cannot fail over to another system, the parent remains
offline.
■ If the parent group faults, the child group may remain online.
■ The child group cannot be taken offline if the parent group is online.
The parent group can be taken offline while the child is online.
■ When the link is created, the parent group must be offline. However,
if both groups are online, their online state must not conflict with the
type of link.
Hard dependency: Imposes the maximum constraints when VCS brings the parent
or child service groups online or takes them offline. For example:
■ If a child group faults, the parent is taken offline before the child
group is taken offline. If the child group fails over, the parent fails
over to another system (or the same system for a local dependency).
If the child group cannot fail over, the parent group remains offline.
■ If the parent faults, the child is taken offline. If the child fails over,
the parent fails over. If the child group cannot fail over, the parent
group remains offline.
Note: When the child faults, if the parent group is frozen, the parent
remains online. The faulted child does not fail over.
■ You cannot link two service groups whose current states violate the relationship.
For example, all link requests are accepted if all instances of parent group are
offline.
All link requests are rejected if parent group is online and child group is offline,
except in offline dependencies and in soft dependencies.
All online global link requests, online remote link requests, and online site link
requests to link two parallel groups are rejected.
All online local link requests to link a parallel parent group to a failover child
group are rejected.
■ Linking service groups using site dependencies:
■ If the service groups to be linked are online on different sites, you cannot
use site dependency to link them.
■ All link requests to link parallel or hybrid parent groups to a failover or hybrid
child service group are rejected.
■ If two service groups are already linked using a local, site, remote, or global
dependency, you must unlink the existing dependency and use site
dependency. However, you can configure site dependency with other online
dependencies in multiple child or multiple parent configurations.
online local soft
■ Parent depends on: Failover Child online on same system.
■ Parent is online if: Child is online on same system.
■ If the child faults: Parent stays online. If Child fails over to another system, Parent migrates to the same system. If Child cannot fail over, Parent remains online.
■ If the parent faults: Child stays online.
online local firm
■ Parent depends on: Failover Child online on same system.
■ Parent is online if: Child is online on same system.
■ If the child faults: Parent taken offline. If Child fails over to another system, Parent migrates to the same system. If Child cannot fail over, Parent remains offline.
■ If the parent faults: Child stays online.
online local hard
■ Parent depends on: Failover Child online on same system.
■ Parent is online if: Child is online on same system.
■ If the child faults: Parent taken offline before Child is taken offline. If Child fails over to another system, Parent migrates to the same system. If Child cannot fail over, Parent remains offline.
■ If the parent faults: Child taken offline. If Child fails over, Parent migrates to the same system. If Child cannot fail over, Parent remains offline.
online global soft
■ Parent depends on: Failover Child online somewhere in the cluster.
■ Parent is online if: Child is online somewhere in the cluster.
■ If the child faults: Parent stays online. If Child fails over to another system, Parent remains online. If Child cannot fail over, Parent remains online.
■ If the parent faults: Child stays online. Parent fails over to any available system. If no failover target system is available, Parent remains offline.
online global firm
■ Parent depends on: Failover Child online somewhere in the cluster.
■ Parent is online if: Child is online somewhere in the cluster.
■ If the child faults: Parent taken offline after Child is taken offline. If Child fails over to another system, Parent is brought online on any system. If Child cannot fail over, Parent remains offline.
■ If the parent faults: Child stays online. Parent fails over to any available system. If no failover target system is available, Parent remains offline.
online remote soft
■ Parent depends on: Failover Child online on another system in the cluster.
■ Parent is online if: Child is online on another system in the cluster.
■ If the child faults: If Child fails over to the system on which Parent was online, Parent migrates to another system. If Child fails over to another system, Parent continues to run on original system. If Child cannot fail over, Parent remains online.
■ If the parent faults: Child stays online. Parent fails over to a system where Child is not online. If the only system available is where Child is online, Parent is not brought online. If no failover target system is available, Child remains online.
online remote firm
■ Parent depends on: Failover Child online on another system in the cluster.
■ Parent is online if: Child is online on another system in the cluster.
■ If the child faults: If Child fails over to the system on which Parent was online, Parent switches to another system. If Child fails over to another system, Parent restarts on original system. If Child cannot fail over, VCS takes the parent offline.
■ If the parent faults: Parent fails over to a system where Child is not online. If the only system available is where Child is online, Parent is not brought online. If no failover target system is available, Child remains online.
online site soft
■ Parent depends on: Failover Child online on same site.
■ Parent is online if: Child is online in the same site.
■ If the child faults: Parent stays online. If another Child instance is online or Child fails over to a system within the same site, Parent stays online.
■ If the parent faults: Child remains online. Parent fails over to another system in the same site maintaining dependency on Child instances in the same site.
online site firm
■ Parent depends on: Failover Child online in the same site.
■ Parent is online if: Child is online in the same site.
■ If the child faults: Parent taken offline. If another instance of child is online in the same site or child fails over to another system in the same site, Parent migrates to a system in the same site. If no Child instance is online or Child cannot fail over, Parent remains offline.
■ If the parent faults: Child remains online. Parent fails over to another system in the same site maintaining dependence on Child instances in the same site. If Parent cannot fail over to a system within the same site, then Parent fails over to a system in another site where at least one instance of child is online.
offline local
■ Parent depends on: Failover Child offline on the same system.
■ Parent is online if: Child is offline on the same system.
■ If the child faults: If Child fails over to the system on which parent is not running, parent continues running. If child fails over to the system on which parent is running, parent switches to another system, if available. If no failover target system is available for Child to fail over to, Parent continues running.
■ If the parent faults: Parent fails over to a system on which Child is not online. If no failover target system is available, Child remains online.
online local soft
■ Parent depends on: Instance of parallel Child group on same system.
■ Parent is online if: Instance of Child is online on same system.
■ If the child faults: If Child instance fails over to another system, the Parent also fails over to the same system. If Child instance cannot fail over to another system, Parent remains online.
■ If the parent faults: Parent fails over to other system and depends on Child instance there. Child instance remains online where the Parent faulted.
online local firm
■ Parent depends on: Instance of parallel Child group on same system.
■ Parent is online if: Instance of Child is online on same system.
■ If the child faults: Parent is taken offline. Parent fails over to other system and depends on Child instance there.
■ If the parent faults: Parent fails over to other system and depends on Child instance there. Child instance remains online where Parent faulted.
online global soft
■ Parent depends on: All instances of parallel Child group online in the cluster.
■ Parent is online if: At least one instance of Child group is online somewhere in the cluster.
■ If the child faults: Parent remains online if Child faults on any system. If Child cannot fail over to another system, Parent remains online.
■ If the parent faults: Parent fails over to another system, maintaining dependence on all Child instances.
online global firm
■ Parent depends on: One or more instances of parallel Child group remaining online.
■ Parent is online if: An instance of Child group is online somewhere in the cluster.
■ If the child faults: Parent is taken offline. If another Child instance is online or Child fails over, Parent fails over to another system or the same system. If no Child instance is online or Child cannot fail over, Parent remains offline.
■ If the parent faults: Parent fails over to another system, maintaining dependence on all Child instances.
online remote soft
■ Parent depends on: One or more instances of parallel Child group remaining online on other systems.
■ Parent is online if: One or more instances of Child group are online on other systems.
■ If the child faults: Parent remains online. If Child fails over to the system on which Parent is online, Parent fails over to another system.
■ If the parent faults: Parent fails over to another system, maintaining dependence on the Child instances.
online remote firm
■ Parent depends on: All instances of parallel Child group remaining online on other systems.
■ Parent is online if: All instances of Child group are online on other systems.
■ If the child faults: Parent is taken offline. If Child fails over to the system on which Parent is online, Parent fails over to another system.
■ If the parent faults: Parent fails over to another system, maintaining dependence on all Child instances.
online site soft
■ Parent depends on: One or more instances of parallel Child group in the same site.
■ Parent is online if: At least one instance of Child is online in the same site.
■ If the child faults: Parent stays online if child is online on any system in the same site. If Child fails over to a system in another site, Parent stays in the same site. If Child cannot fail over, Parent remains online.
■ If the parent faults: Child stays online. Parent fails over to a system with Child online in the same site. If the parent group cannot fail over, child group remains online.
online site firm
■ Parent depends on: One or more instances of parallel Child group in the same site.
■ Parent is online if: At least one instance of Child is online in the same site.
■ If the child faults: Parent stays online if any instance of child is online in the same site. If Child fails over to another system, Parent migrates to a system in the same site. If Child cannot fail over, Parent remains offline.
■ If the parent faults: Child stays online. Parent fails over to a system with Child online in the same site. If the parent group cannot fail over, child group remains online.
online global soft
■ Parent depends on: Failover Child group online somewhere in the cluster.
■ Parent is online if: Failover Child is online somewhere in the cluster.
■ If the child faults: Parent remains online.
■ If the parent faults: Child remains online.
online global firm
■ Parent depends on: Failover Child group online somewhere in the cluster.
■ Parent is online if: Failover Child is online somewhere in the cluster.
■ If the child faults: All instances of Parent taken offline. After Child fails over, Parent instances are failed over or restarted on the same systems.
■ If the parent faults: Child stays online.
online remote soft
■ Parent depends on: Failover Child group on another system.
■ Parent is online if: Failover Child is online on another system.
■ If the child faults: If Child fails over to the system on which Parent is online, Parent fails over to other systems. If Child fails over to another system, Parent remains online.
■ If the parent faults: Child remains online. Parent tries to fail over to another system where child is not online.
online remote firm
■ Parent depends on: Failover Child group on another system.
■ Parent is online if: Failover Child is online on another system.
■ If the child faults: All instances of Parent taken offline. If Child fails over to the system on which Parent was online, Parent fails over to other systems.
■ If the parent faults: Child remains online. Parent tries to fail over to another system where child is not online.
offline local
■ Parent depends on: Failover Child offline on same system.
■ Parent is online if: Failover Child is not online on same system.
■ If the child faults: Parent remains online if Child fails over to another system.
■ If the parent faults: Child remains online.
Table 12-7 shows service group dependency configurations for parallel parent /
parallel child.
online local soft
■ Parent depends on: Parallel Child instance online on same system.
■ Parent is online if: Parallel Child instance is online on same system.
■ If the child faults: If Child fails over to another system, Parent migrates to the same system as the Child. If Child cannot fail over, Parent remains online.
■ If the parent faults: Child instance stays online. Parent instance can fail over only to a system where Child instance is running and other instance of Parent is not running.
online local firm
■ Parent depends on: Parallel Child instance online on same system.
■ Parent is online if: Parallel Child instance is online on same system.
■ If the child faults: Parent taken offline. If Child fails over to another system, VCS brings an instance of the Parent online on the same system as Child. If Child cannot fail over, Parent remains offline.
■ If the parent faults: Child stays online. Parent instance can fail over only to a system where Child instance is running and other instance of Parent is not running.
Frequently asked questions about group dependencies
Online local Can child group be taken offline when parent group is online?
Online global Can child group be taken offline when parent group is online?
Soft=Yes Firm=No.
Soft=Yes Firm=Yes.
Soft=Yes Firm=No
Online remote Can child group be taken offline when parent group is online?
Soft=Yes Firm=No.
Offline local Can parent group be brought online when child group is offline?
Yes.
Yes.
Online site Can child group be taken offline when parent group is online?
Soft=Yes Firm=No.
Can parent group be switched to a system in the same site while child
group is running?
Soft=Yes Firm=Yes.
Soft=Yes Firm=No
Some child groups can be online local soft and other child groups can be online
local firm.
■ A combination of local, global, remote, and site dependencies
One child group can be online local soft, another child group can be online
remote soft, and yet another child group can be online global firm.
■ For a parallel child group linked online local with failover/parallel parent, multiple
instances of child group online are acceptable.
■ For a parallel child group linked online remote with failover parent, multiple
instances of child group online are acceptable, as long as child group does not
go online on the system where parent is online.
■ For a parallel child group linked offline local with failover/parallel parent, multiple
instances of child group online are acceptable, as long as child group does not
go online on the system where parent is online.
is equal to or greater than the level specified. For example, if you configure recipients
for notifications and specify the severity level as Warning, VCS notifies the recipients
about events with the severity levels Warning, Error, and SevereError but not about
events with the severity level Information.
See “About attributes and their definitions” on page 740.
Figure 13-1 shows the severity levels of VCS events.
SevereError: Critical errors that can lead to data loss or corruption; SevereError is
the highest severity level.
Error: Faults.
Information: Important events that exhibit normal behavior; Information is the lowest
severity level.
Figure: VCS event notification components. HAD on System A and System B sends messages to the notifier process, which delivers SNMP traps to SNMP consoles and SMTP email to recipients, filtered by severity (Information, Error, SevereError).
SNMP traps are forwarded to the SNMP console. Typically, traps are predefined
for events such as service group or resource faults. You can use the hanotify utility
to send additional traps.
one of the recipients, it deletes the message from its queue. For example, if two
SNMP consoles and two email recipients are designated, notifier sends an
acknowledgement to HAD even if the message reached only one of the four
recipients. If HAD does not receive an acknowledgement for some messages, it
continues to send these notifications to notifier every 180 seconds until it gets an
acknowledgement of delivery from notifier. An error message is printed to the log
file when a delivery error occurs.
HAD deletes messages under the following conditions too:
■ The message has been in the queue for the time (in seconds) specified in the
MessageExpiryInterval attribute (default value: one hour) and notifier is unable
to deliver the message to the recipient.
■ The message queue is full and to make room for the latest message, the earliest
message is deleted.
addresses. You can specify more than one manager or server, and the severity
level of messages that are sent to each.
Note: If you start the notifier outside of VCS control, use the absolute path of the
notifier in the command. VCS cannot monitor the notifier process if it is started
outside of VCS control using a relative path.
/opt/VRTSvcs/bin/notifier -s m=north -s
m=south,p=2000,l=Error,c=your_company
-t m=north,e="abc@your_company.com",l=SevereError
Figure: The notifier process and the hanotify utility communicate with HAD, which runs on System A and System B.
Remote cluster is in RUNNING state (Global Cluster Option). Severity: Information. Local cluster has complete snapshot of the remote cluster, indicating the remote cluster is in the RUNNING state.
User has logged on to VCS. Severity: Information. A user log on has been recognized because a user logged on by Cluster Manager, or because a haxxx command was invoked.
Agent is faulted. Severity: Warning. The agent has faulted on one node in the cluster.
Resource monitoring has timed out. Severity: Warning. Monitoring mechanism for the resource has timed out.
Resource is not going offline. Severity: Warning. VCS cannot take the resource offline.
Resource went online by itself. Severity: Warning (not for first probe). The resource was brought online on its own.
The health of cluster resource improved. Severity: Information. Used by agents to give extra information about state of resource. An improvement in the health of the resource was identified during monitoring.
Resource monitor time has changed. Severity: Warning. This trap is generated when statistical analysis for the time taken by the monitor function of an agent is enabled for the agent.
VCS is up on the first node in the cluster. Severity: Information. VCS is up on the first node.
A node running VCS has joined cluster. Severity: Information. The cluster has a new node that runs VCS.
CPU usage exceeded threshold on the system. Severity: Warning. The system's CPU usage exceeded the Warning threshold level set in the CPUThreshold attribute.
Swap usage exceeded threshold on the system. Severity: Warning. The system's swap usage exceeded the Warning threshold level set in the SwapThreshold attribute.
Service group has faulted and cannot be failed over anywhere. Severity: SevereError. Specified service group faulted on all nodes where group could be brought online. There are no nodes to which the group can fail over.
Attributes for global service groups are mismatched (Global Cluster Option). Severity: Error. The attributes ClusterList, AutoFailOver, and Parallel are mismatched for the same global service group on different clusters.
SNMP-specific files
VCS includes two SNMP-specific files: vcs.mib and vcs_trapd, which are created
in:
/etc/VRTSvcs/snmp
The file vcs.mib is the textual MIB for built-in traps that are supported by VCS. Load
this MIB into your SNMP console to add it to the list of recognized traps.
The file vcs_trapd is specific to the HP OpenView Network Node Manager (NNM)
SNMP console. The file includes sample events configured for the built-in SNMP
traps supported by VCS. To merge these events with those configured for SNMP
traps:
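The exact merge procedure is not reproduced above; as a hedged sketch, the HP OpenView NNM event configuration utility is typically invoked as follows, assuming vcs_trapd is in the directory listed above:
cd /etc/VRTSvcs/snmp
xnmevents -merge vcs_trapd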
When you merge events, the SNMP traps sent by VCS by way of notifier are
displayed in the HP OpenView NNM SNMP console.
About severityId
This variable indicates the severity of the trap being sent.
Table 13-7 shows the values that the variable severityId can take.
Information    0
Warning        1
Error          2    A fault
Severe Error   3    Critical error that can lead to data loss or corruption
Group: String
Heartbeat: String
VCS: String
GCO: String
About entityState
This variable describes the state of the entity.
Table 13-9 shows the various states.
Entity States
GCO heartbeat states ■ Cluster has lost heartbeat with remote cluster
■ Heartbeat with remote cluster is alive
Note: You must configure appropriate severity for the notifier to receive these
notifications. To receive VCS notifications, the minimum acceptable severity level
is Information.
See “Setting up VCS event notification by using the Notifier wizard” on page 165.
Chapter 14
VCS event triggers
This chapter includes the following topics:
VCS does not wait for the trigger to complete execution. VCS calls the trigger and
continues normal operation.
VCS invokes event triggers on the system where the event occurred, with the
following exceptions:
■ VCS invokes the sysoffline and nofailover event triggers on the lowest-numbered
system in the RUNNING state.
■ VCS invokes the violation event trigger on all systems on which the service
group was brought partially or fully online.
By default, the hatrigger script invokes the trigger script(s) from the default path
$VCS_HOME/bin/triggers. You can customize the trigger path by using the
TriggerPath attribute.
See “Resource attributes” on page 741.
See “Service group attributes” on page 765.
The same path is used on all nodes in the cluster. The trigger path must exist on
all the cluster nodes. On each cluster node, the trigger scripts must be installed in
the trigger path.
Note: As a good practice, ensure that one script does not affect the functioning of
another script. If script2 takes the output of script1 as an input, script2 must be
capable of handling any exceptions that arise out of the behavior of script1.
List of event triggers
Description The dumptunables trigger is invoked when HAD goes into the RUNNING state.
When this trigger is invoked, it uses the HAD environment variables that it
inherited, and other environment variables to process the event. Depending on
the value of the to_log parameter, the trigger then redirects the environment
variables to either stdout or the engine log.
Description On the system having lowest NodeId in the cluster, VCS periodically broadcasts
an update of GlobalCounter. If a node does not receive the broadcast for an
interval greater than CounterMissTolerance, it invokes the
globalcounter_not_updated trigger if CounterMissAction is set to Trigger. This
event is considered critical since it indicates a problem with underlying cluster
communications or cluster interconnects. Use this trigger to notify administrators
of the critical events.
Description Invoked when a system becomes overloaded because the load of the
system’s online groups exceeds the system’s LoadWarningLevel
attribute for an interval exceeding the LoadTimeThreshold attribute.
Use this trigger to notify the administrator of the critical event. The
administrator can then switch some service groups to another system,
ensuring that no one system is overloaded.
Description This event trigger is invoked on the system where the group went offline
from a partial or fully online state. This trigger is invoked when the group
faults, or is taken offline manually.
Description This event trigger is invoked on the system where the group went online
from an offline state.
Description Indicates when the HAD should call a user-defined script before bringing
a service group online in response to the hagrp -online command
or a fault.
If the trigger does not exist, VCS continues to bring the group online.
If the script returns 0 without an exit code, VCS runs the hagrp
-online -nopre command, with the -checkpartial option if
appropriate.
If you do want to bring the group online, define the trigger to take no
action. This event trigger is configurable.
To enable the Set the PreOnline attribute in the service group definition to 1.
trigger
You can set a local (per-system) value for the attribute to control
behavior on each node in the cluster.
To disable the Set the PreOnline attribute in the service group definition to 0.
trigger
You can set a local (per-system) value for the attribute to control
behavior on each node in the cluster.
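As a sketch (sg1 is a placeholder group name), the attribute is usually set from the command line; a per-system value can be set after localizing the attribute:
hagrp -modify sg1 PreOnline 1    # enable the preonline trigger for the group
hagrp -modify sg1 PreOnline 0    # disable it again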
0 = The offline function did not complete within the expected time.
2 = The online function did not complete within the expected time.
Description Invoked on the system where a resource has faulted. Note that when
a resource is faulted, resources within the upward path of the faulted
resource are also brought down.
To enable the To invoke the trigger when a resource faults, set the TriggerResFault
trigger attribute to 1.
Description This trigger is invoked when a resource is restarted by an agent because resource
faulted and RestartLimit was greater than 0.
To enable This event trigger is not enabled by default. You must enable resrestart by setting
the trigger the attribute TriggerResRestart to 1 in the main.cf file, or by issuing the command:
To enable the This event trigger is not enabled by default. You must enable
trigger resstatechange by setting the attribute TriggerResStateChange to 1 in
the main.cf file, or by issuing the command:
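The commands referred to above are not reproduced here; as a hedged sketch (sg1 is a placeholder group name), these trigger-enabling attributes are typically set with hagrp -modify:
haconf -makerw
hagrp -modify sg1 TriggerResRestart 1
hagrp -modify sg1 TriggerResStateChange 1
haconf -dump -makero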
Description The sysup trigger is invoked when the first node joins the cluster.
Description The sysjoin trigger is invoked when a peer node joins the cluster.
Description This trigger is invoked when an agent faults more than a predetermined
number of times within an hour. When this occurs, VCS gives up trying
to restart the agent. VCS invokes this trigger on the node where the
agent faults.
You can use this trigger to notify the administrators that an agent has
faulted, and that VCS is unable to restart the agent. The administrator
can then take corrective action.
To disable the Remove the files associated with the trigger from the
trigger $VCS_HOME/bin/triggers directory.
Usage -unable_to_restart_had
Description This trigger is invoked only on the system that caused the concurrency
violation. Specifically, it takes the service group offline on the system
where the trigger was invoked. Note that this trigger applies to failover
groups only. The default trigger takes the service group offline on the
system that caused the concurrency violation.
■ Start Virtual Business Services from the Veritas Operations Manager console:
When a virtual business service starts, its associated service groups are brought
online.
■ Stop Virtual Business Services from the Veritas Operations Manager console:
When a virtual business service stops, its associated service groups are taken
offline.
■ Applications that are under the control of Symantec ApplicationHA can be part
of a virtual business service. Symantec ApplicationHA enables starting, stopping,
and monitoring of an application within a virtual machine. If applications are
hosted on VMware virtual machines, you can configure the virtual machines to
automatically start or stop when you start or stop the virtual business service,
provided the vCenter for those virtual machines has been configured in Veritas
Operations Manager.
■ Define dependencies between service groups within a virtual business service:
The dependencies define the order in which service groups are brought online
and taken offline. Setting the correct order of service group dependency is critical
to achieve business continuity and high availability. You can define the
dependency types to control how a tier reacts to high availability events in the
underlying tier. The configured reaction could be execution of a predefined policy
or a custom script.
■ Manage the virtual business service from Veritas Operations Manager or from
the clusters participating in the virtual business service.
■ Recover the entire virtual business service to a remote site when a disaster
occurs.
Each tier can have its own high availability mechanism. For example, you can
use Symantec Cluster Server for the databases and middleware applications,
and Symantec ApplicationHA for the Web servers.
Each time you start the Finance business application, typically you need to bring
the components online in the following order – Oracle database, WebSphere,
Apache and IIS. In addition, you must bring the virtual machines online before you
start the Web tier. To stop the Finance application, you must take the components
offline in the reverse order. From the business perspective, the Finance service is
unavailable if any of the tiers becomes unavailable.
When you configure the Finance application as a virtual business service, you can
specify that the Oracle database must start first, followed by WebSphere and the
Web servers. The reverse order automatically applies when you stop the virtual
business service. When you start or stop the virtual business service, the
components of the service are started or stopped in the defined order.
For more information about Virtual Business Services, refer to the Virtual Business
Service–Availability User’s Guide.
■ Chapter 17. Administering global clusters from Cluster Manager (Java console)
Figure: A global cluster with two clusters, each with its own storage and an Oracle
service group. Data is replicated between the clusters, and public clients are
redirected across the network when the application fails over. A wac process runs
in each cluster and communicates with its peer.
The wac process runs on one system in each cluster and connects with peers in
remote clusters. It receives and transmits information about the status of the cluster,
service groups, and systems. This communication enables VCS to create a
consolidated view of the status of all the clusters configured as part of the global
cluster. The process also manages wide-area heartbeating to determine the health
of remote clusters. The process also transmits commands between clusters and
returns the result to the originating cluster.
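For example, you can verify that the wide-area connector resource is online in the
local cluster:
# hares -state wac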
VCS provides the option of securing the communication between the wide-area
connectors.
See “ Secure communication in global clusters” on page 548.
Heartbeat Icmp (
ClusterList = { priclus }
Arguments @priclus = { "10.209.134.1" }
)
command enables you to fail over an application to another cluster when a disaster
occurs.
Note: A cluster assuming authority for a group does not guarantee the group will
be brought online on the cluster. The attribute merely specifies the right to attempt
bringing the service group online in the cluster. The presence of Authority does not
override group settings like frozen, autodisabled, non-probed, and so on, that prevent
service groups from going online.
DNS agent: The DNS agent updates the canonical name-mapping in the
domain name server after a wide-area failover.
VCS agents for VVR: You can use the following VCS agents for VVR in a VCS
global cluster setup:
■ RVG agent
The RVG agent manages the Replicated Volume Group
(RVG). Specifically, it brings the RVG online, monitors
read-write access to the RVG, and takes the RVG offline.
Use this agent when using VVR for replication.
■ RVGPrimary agent
The RVGPrimary agent attempts to migrate or take over
a Secondary site to a Primary site following an application
failover. The agent has no actions associated with the
offline and monitor routines.
■ RVGShared agent
The RVGShared agent monitors the RVG in a shared
environment. This is a parallel resource. The RVGShared
agent enables you to configure parallel applications to
use an RVG in a cluster. The RVGShared agent monitors
the RVG in a shared disk group environment.
■ RVGSharedPri agent
The RVGSharedPri agent enables migration and takeover
of a VVR replicated data set in parallel groups in a VCS
environment. Bringing a resource of type RVGSharedPri
online causes the RVG on the local host to become a
primary if it is not already.
■ RVGLogowner agent
The RVGLogowner agent assigns and unassigns a node
as the logowner in the CVM cluster; this is a failover
resource. The RVGLogowner agent assigns or unassigns
a node as a logowner in the cluster. In a shared disk group
environment, only one node, that is, the logowner, can
replicate data to the Secondary.
■ RVGSnapshot agent
The RVGSnapshot agent, used in fire drill service groups,
takes space-optimized snapshots so that applications can
be mounted at secondary sites during a fire drill operation.
See the Symantec Storage Foundation and High
Availability Solutions Replication Administrator's Guide
for more information.
VCS agents for third-party replication technologies: VCS provides agents for
other third-party array-based or application-based replication solutions. These
agents are available in the Symantec High Availability Agent Pack software.
Figure: A Steward process deployed at a third location that both Cluster A and
Cluster B can reach.
When all communication links between any two clusters are lost, each cluster
contacts the Steward with an inquiry message. The Steward sends an ICMP ping
to the cluster in question and responds with a negative inquiry if the cluster is running
or with a positive inquiry if the cluster is down. The Steward can also be used in
configurations with more than two clusters. VCS provides the option of securing
communication between the Steward process and the wide-area connectors.
See “ Secure communication in global clusters” on page 548.
In non-secure configurations, you can configure the steward process on a platform
that is different from that of the global cluster nodes. Secure configurations have not
been tested for running the steward process on a different platform.
For example, you can run the steward process on a Windows system for a global
cluster running on Linux systems. However, the VCS release for Linux contains the
steward binary for Linux only. You must copy the steward binary for Windows from
the VCS installation directory on a Windows cluster, typically C:\Program
Files\VERITAS\Cluster Server.
A Steward is effective only if there are independent paths from each cluster to the
host that runs the Steward. If there is only one path between the two clusters, you
must prevent split-brain by confirming manually via telephone or some messaging
system with administrators at the remote site if a failure has occurred. By default,
VCS global clusters fail over an application across cluster boundaries with
administrator confirmation. You can configure automatic failover by setting the
ClusterFailOverPolicy attribute to Auto.
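For example, a minimal sketch of setting automatic cross-cluster failover for a global
group named appgroup (a placeholder name):
# haconf -makerw
# hagrp -modify appgroup ClusterFailOverPolicy Auto
# haconf -dump -makero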
The default port for the steward is 14156.
name = WAC
domain = VCS_SERVICES@cluster_uuid
failover occurred, except for some downtime while the administrator initiates or
confirms the failover.
See the Symantec High Availability agent pack documentation for a list of replication
technologies that VCS supports.
A0. VCS is already installed and configured, and the application is set up for HA
on s1. (Reference: VCS documentation and application documentation)
A2. Set up data replication (VVR or third-party) on s1 and s2. (Reference: VVR:
SFHA Installation Guide and VVR Administrator’s Guide; third-party: replication
documentation)
C2. For VVR: set up replication resources on s1 and s2. For third-party: install and
configure the replication agent on s1 and s2. (Reference: VVR: VCS Agents for
VVR Configuration Guide; third-party: VCS Agent for <Replication Solution>
Installation and Configuration Guide)
D1. Make the service groups global on s1 and s2. (Reference: VCS Administrator’s
Guide)
Table 16-1 lists the high-level tasks to set up VCS global clusters.
Task Reference
Task A See “Configuring application and replication for global cluster setup”
on page 553.
Task B See “Configuring clusters for global cluster setup” on page 554.
Task C See “Configuring service groups for global cluster setup” on page 561.
Task D See “Configuring a service group as a global service group” on page 565.
See “Installing and configuring VCS at the secondary site” on page 556.
Run the GCO Configuration wizard to create or update the ClusterService group.
The wizard verifies your configuration and validates it for a global cluster setup.
You must have installed the required licenses on all nodes in the cluster.
See “About installing a VCS license” on page 179.
To configure global cluster components at the primary site
1 Start the GCO Configuration wizard.
# gcoconfig
The wizard discovers the NIC devices on the local system and prompts you to
enter the device to be used for the global cluster.
2 Specify the name of the device and press Enter.
3 If you do not have NIC resources in your configuration, the wizard asks you
whether the specified NIC will be the public NIC used by all systems.
Enter y if it is the public NIC; otherwise enter n. If you entered n, the wizard
prompts you to enter the names of NICs on all systems.
4 Enter the virtual IP to be used for the global cluster.
You must use either an IPv4 or an IPv6 address. VCS does not support configuring
clusters that use different Internet Protocol versions in a global cluster.
5 If you do not have IP resources in your configuration, the wizard does the
following:
■ For IPv4 address:
The wizard prompts you for the netmask associated with the virtual IP. The
wizard detects the netmask; you can accept the suggested value or enter
another value.
■ For IPv6 address:
The wizard prompts you for the prefix associated with the virtual IP.
For more information, see the Symantec Cluster Server Installation Guide.
2 Establish trust between the clusters.
For example in a VCS global cluster environment with two clusters, perform
the following steps to establish trust between the clusters:
■ On each node of the first cluster, enter the following command:
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/WAC;
/opt/VRTSvcs/bin/vcsat setuptrust -b
IP_address_of_any_node_from_the_second_cluster:14149 -s high
The command obtains and displays the security certificate and other details
of the root broker of the second cluster.
If the details are correct, enter y at the command prompt to establish trust.
For example:
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/WAC
/opt/VRTSvcs/bin/vcsat setuptrust -b
IP_address_of_any_node_from_the_first_cluster:14149 -s high
The command obtains and displays the security certificate and other details
of the root broker of the first cluster.
If the details are correct, enter y at the command prompt to establish trust.
Alternatively, if you have passwordless communication set up on the cluster,
you can use the installvcs -securitytrust option to set up trust with
a remote cluster.
3 ■ Skip the remaining steps in this procedure if you used the installvcs
-security command after the global cluster was set up.
■ Complete the remaining steps in this procedure if you had a secure cluster
and then used the gcoconfig command.
4 On each cluster, take the wac resource offline on the node where the wac
resource is online. Then, for each cluster, run the following commands:
# haconf -makerw
hares -modify wac StartProgram \
"/opt/VRTSvcs/bin/wacstart -secure"
hares -modify wac MonitorProcesses \
"/opt/VRTSvcs/bin/wac -secure"
haconf -dump -makero
5 On each cluster, bring the wac resource online. For each cluster, run the
following command on any node:
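For example, assuming sysA is a node name in the cluster:
# hares -online wac -sys sysA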
To configure the Steward process for clusters not running in secure mode
1 Identify a system that will host the Steward process.
2 Make sure that both clusters can connect to the system through a ping
command.
3 Copy the file steward from a node in the cluster to the Steward system. The
file resides at the following path:
/opt/VRTSvcs/bin/
4 In both clusters, set the Stewards attribute to the IP address of the system
running the Steward process.
For example:
cluster cluster1938 (
UserNames = { admin = gNOgNInKOjOOmWOiNL }
ClusterAddress = "10.182.147.19"
Administrators = { admin }
CredRenewFrequency = 0
CounterInterval = 5
Stewards = {"10.212.100.165"}
}
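Alternatively, a minimal command-line sketch of setting the same attribute, using
the Steward IP address from the example above:
# haconf -makerw
# haclus -modify Stewards 10.212.100.165
# haconf -dump -makero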
5 On the system designated to host the Steward, start the Steward process:
# steward -start
■ On the system that is designated to run the Steward process, run the
installvcs -securityonenode command.
The installer prompts for a confirmation if VCS is not configured or if VCS
is not running on all nodes of the cluster. Enter y when the installer prompts
whether you want to continue configuring security.
For more information about the -securityonenode option, see the Symantec
Cluster Server Installation Guide.
# unset EAT_DATA_DIR
# unset EAT_HOME_DIR
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat createpd -d
VCS_SERVICES -t ab
# /opt/VRTSvcs/bin/vcsauth/vcsauthserver/bin/vssat addprpl -t ab
-d VCS_SERVICES -p STEWARD -s password
# mkdir -p /var/VRTSvcs/vcsauth/data/STEWARD
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/STEWARD
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/WAC
# export EAT_DATA_DIR=/var/VRTSvcs/vcsauth/data/STEWARD
8 On the system designated to run the Steward, start the Steward process:
# steward -start -secure
To stop the Steward process running in secure mode, open a new command
window and run the following command:
# steward -stop -secure
3 If your setup uses BIND DNS, add a resource of type DNS to the application
service group at each site.
Refer to the Symantec Cluster Server Bundled Agent’s Reference Guide for
more details.
4 At each site, perform the following depending on the replication technology
you have set up:
■ Volume Replicator
Configure VCS to manage and monitor VVR Replicated Volume Groups
(RVGs).
See “Configuring VCS service group for VVR-based replication” on page 562.
■ A supported third-party replication technology
Install and configure the corresponding VCS agent for replication.
See the Installation and Configuration Guide for the corresponding VCS
replication agent for instructions.
6 Click Next.
7 Enter or review the connection details for each cluster.
Click the Configure icon to review the remote cluster information for each
cluster.
8 Enter the IP address of the remote cluster, the IP address of a cluster system,
or the host name of a cluster system.
9 Enter the user name and the password for the remote cluster and click OK.
10 Click Next.
11 Click Finish.
12 Save the configuration.
The appgroup service group is now a global group and can be failed over
between clusters.
To switch the service group when the primary site has failed and the secondary did
a takeover
1 In the Service Groups tab of the configuration tree, right-click the resource.
2 Click Actions.
3 Specify the details of the action:
■ From the Action list, choose fbsync.
■ Click the system on which to execute the action.
■ Click OK.
This begins a fast-failback of the replicated data set. You can monitor the value
of the ResourceInfo attribute for the RVG resource to determine when the
resynchronization has completed.
4 Once the resynchronization completes, switch the service group to the primary
cluster.
■ In the Service Groups tab of the Cluster Explorer configuration tree,
right-click the service group.
■ Click Switch To, and click Remote switch.
■ In the Switch global group dialog box, click the cluster to switch the group.
Click the specific system, or click Any System, and click OK.
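The same sequence can be sketched from the command line; the resource, group,
system, and cluster names below are placeholders:
# hares -action rvg_res fbsync -sys sysA
# hares -value rvg_res ResourceInfo sysA
# hagrp -switch appgroup -any -clus cluster1
Run the switch only after the ResourceInfo value shows that the resynchronization
has completed.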
Before you perform a fire drill in a disaster recovery setup that uses VVR, perform
the following steps:
■ Configure the fire drill service group.
See “About creating and configuring the fire drill service group manually”
on page 568.
See “About configuring the fire drill service group using the Fire Drill Setup
wizard” on page 571.
If you configure the fire drill service group manually using the command line or
the Cluster Manager (Java Console), set an offline local dependency between
the fire drill service group and the application service group to make sure a fire
drill does not block an application failover in case a disaster strikes the primary
site. If you use the Fire Drill Setup (fdsetup) wizard, the wizard creates this
dependency.
■ Set the value of the ReuseMntPt attribute to 1 for all Mount resources.
■ After the fire drill service group is taken offline, reset the value of the ReuseMntPt
attribute to 0 for all Mount resources.
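For example, assuming a Mount resource named oradata_mnt (a placeholder name):
# haconf -makerw
# hares -modify oradata_mnt ReuseMntPt 1
# haconf -dump -makero
After the fire drill service group is taken offline, run the same command with a
value of 0.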
VCS also supports HA fire drills to verify a resource can fail over to another node
in the cluster.
See “About testing resource failover by using HA fire drills” on page 295.
Note: You can conduct fire drills only on regular VxVM volumes; volume sets (vset)
are not supported.
VCS provides hardware replication agents for array-based solutions, such as Hitachi
TrueCopy, EMC SRDF, and so on. If you are using hardware replication agents to
monitor the replicated data clusters, refer to the VCS replication agent documentation
for details on setting up and configuring fire drill.
About creating and configuring the fire drill service group manually
You can create the fire drill service group using the command line or Cluster
Manager (Java Console). The fire drill service group uses the duplicated copy of
the application data.
Creating and configuring the fire drill service group involves the following tasks:
■ See “Creating the fire drill service group” on page 569.
■ See “Linking the fire drill and replication service groups” on page 569.
■ See “Adding resources to the fire drill service group” on page 570.
■ See “Configuring the fire drill service group” on page 570.
About configuring the fire drill service group using the Fire Drill Setup
wizard
Use the Fire Drill Setup Wizard to set up the fire drill configuration.
The wizard performs the following specific tasks:
■ Creates a Cache object to store changed blocks during the fire drill, which
minimizes disk space and disk spindles required to perform the fire drill.
■ Configures a VCS service group that resembles the real application group.
The wizard works only with application groups that contain one disk group. The
wizard sets up the first RVG in an application. If the application has more than one
RVG, you must create space-optimized snapshots and configure VCS manually,
using the first RVG as reference.
You can schedule the fire drill for the service group using the fdsched script.
See “Scheduling a fire drill” on page 573.
# /opt/VRTSvcs/bin/fdsetup
2 Read the information on the Welcome screen and press the Enter key.
3 The wizard identifies the global service groups. Enter the name of the service
group for the fire drill.
4 Review the list of volumes in the disk group that could be used for a
space-optimized snapshot. Enter the volumes to be selected for the snapshot.
Typically, all volumes used by the application, whether replicated or not, should
be prepared; otherwise, a snapshot might not succeed.
Press the Enter key when prompted.
5 Enter the cache size to store writes when the snapshot exists. The size of the
cache must be large enough to store the expected number of changed blocks
during the fire drill. However, the cache is configured to grow automatically if
it fills up. Enter disks on which to create the cache.
Press the Enter key when prompted.
6 The wizard starts running commands to create the fire drill setup.
Press the Enter key when prompted.
The wizard creates the application group with its associated resources. It also
creates a fire drill group with resources for the application (Oracle, for example),
the Mount, and the RVGSnapshot types.
The application resources in both service groups define the same application,
the same database in this example. The wizard sets the FireDrill attribute for
the application resource to 1 to prevent the agent from reporting a concurrency
violation when the actual application instance and the fire drill service group
are online at the same time.
Just as a two-tier, two-site environment is possible, you can also tie a three-tier
environment together.
Figure 16-6 represents a two-site, three-tier environment. The application cluster,
which is globally clustered between L.A. and Denver, has cluster dependencies up
and down the tiers. Cluster 1 (C1) depends on the RemoteGroup resource on the
DB tier for cluster 3 (C3), and then on the remote service group for cluster 5 (C5).
The stack for C2, C4, and C6 functions the same.
Figure: The two sites in this scenario, Site A (Stockton) and Site B (Denver).
include "types.cf"
cluster C1 (
ClusterAddress = "10.182.10.145"
)
remotecluster C2 (
ClusterAddress = "10.182.10.146"
)
heartbeat Icmp (
ClusterList = { C2 }
AYATimeout = 30
Arguments @C2 = { "10.182.10.146" }
)
system sysA (
)
system sysB (
)
group LSG (
SystemList = { sysA = 0, sysB = 1 }
ClusterList = { C2 = 0, C1 = 1 }
AutoStartList = { sysA, sysB }
ClusterFailOverPolicy = Auto
)
FileOnOff filec1 (
PathName = "/tmp/c1"
)
RemoteGroup RGR (
IpAddress = "10.182.6.152"
// The above IPAddress is the highly available address of C3—
// the same address that the wac uses
Username = root
Password = xxxyyy
GroupName = RSG
VCSSysName = ANY
ControlMode = OnOff
)
include "types.cf"
cluster C2 (
ClusterAddress = "10.182.10.146"
)
remotecluster C1 (
ClusterAddress = "10.182.10.145"
)
heartbeat Icmp (
ClusterList = { C1 }
AYATimeout = 30
Arguments @C1 = { "10.182.10.145" }
)
system sysC (
)
system sysD (
)
group LSG (
SystemList = { sysC = 0, sysD = 1 }
ClusterList = { C2 = 0, C1 = 1 }
Authority = 1
AutoStartList = { sysC, sysD }
ClusterFailOverPolicy = Auto
)
FileOnOff filec2 (
PathName = filec2
)
RemoteGroup RGR (
IpAddress = "10.182.6.154"
// The above IPAddress is the highly available address of C4—
// the same address that the wac uses
Username = root
Password = vvvyyy
GroupName = RSG
VCSSysName = ANY
ControlMode = OnOff
)
include "types.cf"
cluster C3 (
ClusterAddress = "10.182.6.152"
)
remotecluster C4 (
ClusterAddress = "10.182.6.154"
)
heartbeat Icmp (
ClusterList = { C4 }
AYATimeout = 30
Arguments @C4 = { "10.182.6.154" }
)
system sysW (
)
system sysX (
)
group RSG (
SystemList = { sysW = 0, sysX = 1 }
ClusterList = { C3 = 1, C4 = 0 }
AutoStartList = { sysW, sysX }
ClusterFailOverPolicy = Auto
)
FileOnOff filec3 (
PathName = "/tmp/filec3"
)
include "types.cf"
cluster C4 (
ClusterAddress = "10.182.6.154"
)
remotecluster C3 (
ClusterAddress = "10.182.6.152"
)
heartbeat Icmp (
ClusterList = { C3 }
AYATimeout = 30
Arguments @C3 = { "10.182.6.152" }
)
system sysY (
)
system sysZ (
)
group RSG (
SystemList = { sysY = 0, sysZ = 1 }
ClusterList = { C3 = 1, C4 = 0 }
Authority = 1
AutoStartList = { sysY, sysZ }
ClusterFailOverPolicy = Auto
)
FileOnOff filec4 (
PathName = "/tmp/filec4"
)
Chapter 17
Administering global
clusters from Cluster
Manager (Java console)
This chapter includes the following topics:
Note: Symantec does not support adding a cluster that is already part of a global
cluster environment. To merge the clusters of one global cluster environment (for
example, cluster A and cluster B) with the clusters of another global environment
(for example, cluster C and cluster D), separate cluster C and cluster D into
standalone clusters and add them one by one to the environment containing cluster
A and cluster B.
5 Enter the details of the existing remote clusters; this information on administrator
rights enables the wizard to connect to all the clusters and make changes to
the configuration.
■ Click OK.
7 Click Next.
8 Click Finish. After running the wizard, the configurations on all the relevant
clusters are opened and changed; the wizard does not close the configurations.
To add a remote cluster to a global cluster environment in Command Center
1 Click Commands > Configuration > Cluster Objects > Add Remote Cluster.
2 Enter the name of the cluster.
3 Enter the IP address of the cluster.
4 Click Apply.
Note: Command Center enables you to perform operations on the local cluster;
this does not affect the overall global cluster configuration.
Note: You cannot delete a remote cluster if the cluster is part of a cluster list for
global service groups or global heartbeats, or if the cluster is in the RUNNING,
BUILD, INQUIRY, EXITING, or TRANSITIONING states.
4 Enter or review the connection details for each cluster. Click the Configure
icon to review the remote cluster information for each cluster.
If the cluster is not running in secure mode, do the following:
■ Enter the IP address of the remote cluster, the IP address of a cluster
system, or the host name of a cluster system.
■ Verify the port number.
■ Enter the user name.
■ Enter the password.
■ Click OK.
If the cluster is running in secure mode, do the following:
■ Enter the IP address of the remote cluster, the IP address of a cluster
system, or the host name of a cluster system.
■ Verify the port number.
5 Click Next.
6 Click Finish.
To delete a remote cluster from the local cluster
1 Do one of the following:
From Cluster Explorer, click Add/Delete Remote Cluster on the Edit menu.
or
From the Cluster Explorer configuration tree, right-click the cluster name, and
click Add/Delete Remote Clusters.
2 Review the required information for the Remote Cluster Configuration Wizard
and click Next.
3 In the Wizard Options dialog box, click Delete Cluster and click Next:
4 In the Delete Cluster dialog box, click the name of the remote cluster to delete,
and then click Next:
5 Review the connection details for each cluster. Click the Configure icon to
review the remote cluster information for each cluster.
6 Click Finish.
■ Click the name of the service group that will be converted from a local group
to a global group, or vice versa.
■ From the Available Clusters box, click the clusters on which the group
can come online. Click the right arrow to move the cluster name to the
Clusters for Service Group box; for global to local cluster conversion,
click the left arrow to move the cluster name back to the Available Clusters
box. A priority number (starting with 0) indicates the cluster in which the
group will attempt to come online. If necessary, double-click the entry in
the Priority column to enter a new value.
■ Select one of the following policies for cluster failover:
■ Manual prevents a group from automatically failing over to another
cluster.
■ Auto enables a group to automatically fail over to another cluster if it is
unable to fail over within the cluster, or if the entire cluster faults.
■ Connected enables a group to automatically fail over to another cluster
if it is unable to fail over within the cluster.
■ Click Next.
Click the Configure icon to review the remote cluster information for each
cluster.
If the cluster is not running in secure mode, do the following:
■ Enter the IP address of the remote cluster, the IP address of a cluster
system, or the host name of a cluster system.
■ Verify the port number.
■ Enter the user name and password.
■ Click OK.
Repeat these steps for each cluster in the global environment.
If the cluster is running in secure mode, do the following:
■ Enter the IP address of the remote cluster, the IP address of a cluster
system, or the host name of a cluster system.
■ Verify the port number.
■ Choose to connect to the remote cluster with the credentials used for the
current cluster connection, or enter new credentials, including the user
name, password, and the domain.
If you have connected to the remote cluster using the wizard earlier, you
can use the credentials from the previous connection.
■ Click OK.
Repeat these steps for each cluster in the global environment.
5 In the Remote cluster information dialog box, click Next.
6 Click Finish.
■ Click the specific system, or click Any System, to switch the group.
If you specify a system to switch the group and if the PreSwitch attribute
value is set to 1, the VCS engine invokes the PreSwitch actions for the
resources that support the action. If you want to skip these actions, you
must temporarily set the PreSwitch attribute value to 0.
See “Service group attributes” on page 765.
■ Click OK.
■ Click OK on the Heartbeat Configuration dialog box.
The option -clus displays the attribute value on the cluster designated by the
variable cluster; the option -localclus specifies the local cluster.
If the attribute has local scope, you must specify the system name, except
when querying the attribute on the system from which you run the command.
The option -clus displays the state of all service groups on a cluster designated
by the variable cluster; the option -localclus specifies the local cluster.
To display service group information across clusters
◆ Use the following command to display service group information across clusters:
The option -clus applies to global groups only. If the group is local, the cluster
name must be the local cluster name, otherwise no information is displayed.
To display service groups in a cluster
◆ Use the following command to display service groups in a cluster:
The option -clus lists all service groups on the cluster designated by the
variable cluster; the option -localclus specifies the local cluster.
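As a sketch, the query forms described in this section typically look like the following,
with appgroup and cluster2 as placeholder names:
# hagrp -state appgroup -clus cluster2
# hagrp -display appgroup -clus cluster2
# hagrp -list -clus cluster2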
To display usage for the service group command
◆ Use the following command to display usage for the service group command:
The option -clus displays the attribute value on the cluster designated by the
variable cluster; the option -localclus specifies the local cluster.
If the attribute has local scope, you must specify the system name, except
when querying the attribute on the system from which you run the command.
To display the state of a resource across clusters
◆ Use the following command to display the state of a resource across clusters:
The option -clus displays the state of all resources on the specified cluster;
the option -localclus specifies the local cluster. Specifying a system displays
resource state on a particular system.
To display resource information across clusters
◆ Use the following command to display resource information across clusters:
The option -clus lists all service groups on the cluster designated by the
variable cluster; the option -localclus specifies the local cluster.
To display a list of resources across clusters
◆ Use the following command to display a list of resources across clusters:
The option -clus lists all resources that meet the specified conditions in global
service groups on a cluster as designated by the variable cluster.
To display usage for the resource command
◆ Use the following command to display usage for the resource command:
Querying systems
This topic describes how to perform queries on systems:
To display system attribute values across clusters
◆ Use the following command to display system attribute values across clusters:
The option -clus displays the values of a system attribute in the cluster as
designated by the variable cluster; the option -localclus specifies the local
cluster.
To display the state of a system across clusters
◆ Use the following command to display the state of a system across clusters:
Displays the current state of the specified system. The option -clus displays
the state in a cluster designated by the variable cluster; the option -localclus
specifies the local cluster. If you do not specify a system, the command displays
the states of all systems.
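For example, to display the states of all systems in a remote cluster named cluster2
(a placeholder name):
# hasys -state -clus cluster2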
For information about each system across clusters
◆ Use the following command to display information about each system across
clusters:
The option -clus displays the attribute values on systems (if specified) in a
cluster designated by the variable cluster; the option -localclus specifies the
local cluster.
For a list of systems across clusters
◆ Use the following command to display a list of systems across clusters:
Displays a list of systems whose values match the given conditional statements.
The option -clus displays the systems in a cluster designated by the variable
cluster; the option -localclus specifies the local cluster.
Querying clusters
This topic describes how to perform queries on clusters:
The attribute must be specified in this command. If you do not specify the
cluster name, the command displays the attribute value on the local cluster.
To display the state of a local or remote cluster
◆ Use the following command to display the state of a local or remote cluster:
The variable cluster represents the cluster. If a cluster is not specified, the state
of the local cluster and the state of all remote cluster objects as seen by the
local cluster are displayed.
For information on the state of a local or remote cluster
◆ Use the following command for information on the state of a local or remote
cluster:
Lists the clusters that meet the specified conditions, beginning with the local
cluster.
To display usage for the cluster command
◆ Use the following command to display usage for the cluster command:
Querying status
This topic describes how to perform queries on status of remote and local clusters:
For the status of local and remote clusters
◆ Use the following command to obtain the status of local and remote clusters:
hastatus
Querying heartbeats
The hahb command is used to manage WAN heartbeats that emanate from the
local cluster. Administrators can monitor the health of the remote cluster via
heartbeat commands and mechanisms such as Internet, satellites, or storage
replication technologies. Heartbeat commands are applicable only on the cluster
from which they are issued.
Note: You must have Cluster Administrator privileges to add, delete, and modify
heartbeats.
The variable conditionals represents the conditions that must be met for the
heartbeat to be listed.
For example, to get the state of heartbeat Icmp from the local cluster to the
remote cluster phoenix:
The -value option provides the value of a single attribute for a specific
heartbeat. The cluster name must be specified for cluster-specific attribute
values, but not for global.
For example, to display the value of the ClusterList attribute for heartbeat Icmp:
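Based on the description of the -value option above, the command typically takes
the following form:
# hahb -value Icmp ClusterList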
If the -modify option is specified, the usage for the hahb -modify option is
displayed.
The option -nopre indicates that the VCS engine must switch
the service group regardless of the value of the PreSwitch service
group attribute.
The -any option specifies that the VCS engine switches a service
group to the best possible system on which it is currently not online,
based on the value of the group's FailOverPolicy attribute. The
VCS engine switches a global service group from a system to
another system in the local cluster or a remote cluster.
If you do not specify the -clus option, the VCS engine by default
assumes -localclus option and selects an available system
within the local cluster.
The option -clus identifies the remote cluster to which the service
group will be switched. The VCS engine then selects the target
system on which to switch the service group.
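For example, assuming a global group named appgroup and a remote cluster named
cluster2:
# hagrp -switch appgroup -any
# hagrp -switch appgroup -any -clus cluster2
The first form lets the VCS engine pick the best system in the local cluster; the
second switches the group to the remote cluster cluster2.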
To change the cluster name: See “Changing the cluster name in a global cluster
setup” on page 608.
haalert -display: For each alert, the command displays the following
information:
■ alert ID
■ time when alert occurred
■ cluster on which alert occurred
■ object name for which alert occurred (cluster name, group name, and so on)
■ informative message about alert
haalert -list: For each alert, the command displays the following information:
haalert -delete alert_id -notes "description": Deletes a specific alert. You must
enter a text message within quotes describing the reason for deleting the alert.
The comment is written to the engine log as well as sent to any connected GUI
clients.
2 Remove the remote cluster from the ClusterList of all the global groups using
the following command:
3 Remove the remote cluster from the ClusterList of all the heartbeats using the
following command:
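A sketch of the commands for steps 2 and 3, assuming a global group named
appgroup, a heartbeat named Icmp, and a remote cluster named cluster2:
# haconf -makerw
# hagrp -modify appgroup ClusterList -delete cluster2
# hahb -modify Icmp ClusterList -delete cluster2
# haconf -dump -makero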
If the attribute is local, that is, it has a separate value for each
remote cluster in the ClusterList attribute, the option -clus
cluster must be specified. Use -delete -keys to clear the
value of any list attributes.
Note: VVR supports multiple replication secondary targets for any given primary.
However, RDC for VCS supports only one replication secondary for a primary.
Note: You must use dual dedicated LLT links between the replicated nodes.
Figure: A replicated data cluster with two zones (zone 0 and zone 1) connected by
a private network. Each zone runs a service group on separate storage, data is
replicated between the zones, and public clients are redirected over the network
when the application fails over.
In the event of a system or application failure, VCS attempts to fail over the
application service group to another system within the same RDC site. However,
in the event that VCS fails to find a failover target node within the primary RDC site,
VCS switches the service group to a node in the current secondary RDC site (site
1).
Figure: Resource dependencies for the application service group (Application,
Mount, DNS, RVGPrimary, IP, and NIC resources) and the replication service group
(DiskGroup, IP, and NIC resources).
6 Set the SystemZones attribute of the child group, oragrp_rep, such that all
nodes in the primary RDC zone are in system zone 0 and all nodes in the
secondary RDC zone are in system zone 1.
To configure the application service group
1 In the original Oracle service group (oragroup), delete the DiskGroup resource.
2 Add an RVGPrimary resource and configure its attributes.
Set the value of the RvgResourceName attribute to the name of the RVG type
resource that will be promoted and demoted by the RVGPrimary agent.
Set the AutoTakeover and AutoResync attributes from their defaults as desired.
3 Set resource dependencies such that all Mount resources depend on the
RVGPrimary resource. If there are a lot of Mount resources, you can set the
TypeDependencies attribute for the group to denote that the Mount resource
type depends on the RVGPrimary resource type.
4 Set the SystemZones attribute of the Oracle service group such that all nodes
in the primary RDC zone are in system zone 0 and all nodes in the secondary
RDC zone are in zone 1. The SystemZones attribute of both the parent and
the child group must be identical.
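For example, a command-line sketch of setting the attribute on both groups,
assuming sys1 and sys2 are in the primary RDC zone and sys3 and sys4 are in the
secondary RDC zone (placeholder node names):
# haconf -makerw
# hagrp -modify oragroup SystemZones sys1 0 sys2 0 sys3 1 sys4 1
# hagrp -modify oragrp_rep SystemZones sys1 0 sys2 0 sys3 1 sys4 1
# haconf -dump -makero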
5 If your setup uses BIND DNS, add a resource of type DNS to the oragroup
service group. Set the Hostname attribute to the canonical name of the host
or virtual IP address that the application uses on that cluster. This ensures
DNS updates to the site when the group is brought online. A DNS resource
would be necessary only if the nodes in the primary and the secondary RDC
zones are in different IP subnets.
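For example, a minimal sketch of adding the DNS resource from the command line;
ora_dns and the host name are placeholders, and your environment may require
additional DNS agent attributes:
# haconf -makerw
# hares -add ora_dns DNS oragroup
# hares -modify ora_dns Hostname oraprod.example.com
# hares -modify ora_dns Enabled 1
# haconf -dump -makero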
You must enable the HA/DR license if you want to manually control a service
group failover across sites or system zones.
■ You must have a single VCS cluster with at least one node in each of the two
sites, where the sites are separated by a physical distance of no more than 80
kilometers. When the sites are separated by more than 80 kilometers, you can run
a Global Cluster Option (GCO) configuration.
■ You must have redundant network connections between nodes. All paths to
storage must also be redundant.
Symantec recommends the following in a campus cluster setup:
■ A common cross-site physical infrastructure for storage and LLT private
networks.
■ Technologies such as Dense Wavelength Division Multiplexing (DWDM) for
network and I/O traffic across sites. Use redundant links to minimize the
impact of network failure.
■ You must install Veritas Volume Manager with the FMR license and the Site
Awareness license.
■ Symantec recommends that you configure I/O fencing to prevent data corruption
in the event of link failures.
See the Symantec Cluster Server Installation Guide for more details.
■ You must configure storage to meet site-based allocation and site-consistency
requirements for VxVM.
■ All the nodes in the site must be tagged with the appropriate VxVM site
names.
■ All the disks must be tagged with the appropriate VxVM site names.
■ The VxVM site names of both the sites in the campus cluster must be added
to the disk groups.
■ The allsites attribute for each volume in the disk group must be set to on.
(By default, the value is set to on.)
■ The siteconsistent attribute for the disk groups must be set to on.
Figure: Campus cluster layout. The sites are connected through switches on public
and private networks, and a disk array at a third site (Site C) serves as a
coordination point.
■ The volumes that are required for the application have mirrors on both the sites.
■ All nodes in the cluster are tagged with the VxVM site name. All disks that belong
to a site are tagged with the corresponding VxVM site name.
■ The disk group is configured in VCS as a resource of type DiskGroup and is
mounted using the Mount resource type.
0 VCS does not fail over the service group or the node.
1 VCS fails over the service group to another suitable node. For a hybrid
service group, VCS chooses to fail over the service group within the
same site before choosing a node in the other site. For a failover service
group, VCS chooses a system in another site before choosing a system
in the same site.
2 VCS fails over the service group if another suitable node exists in the
same site. Otherwise, VCS waits for administrator intervention to initiate
the service group failover to a suitable node in the other site.
Sample definition for these service group attributes in the VCS main.cf is as follows:
cluster VCS_CLUS (
PreferredFencingPolicy = Site
SiteAware = 1
)
site MTV (
SystemList = { sys1, sys2 }
)
site SFO (
Preference = 2
SystemList = { sys3, sys4 }
)
Table 20-1 lists the possible failure scenarios and how VCS campus cluster recovers
from these failures.
Storage failure (one or more disks at a site fail): VCS does not fail over the
service group when such a storage failure occurs.
VxVM detaches the site from the disk group if any volume in that disk group
does not have at least one valid plex at the site where the disks failed.
VxVM does not detach the site from the disk group in the following cases:
■ If only some of the disks that failed come online and if the vxrelocd daemon
is running, VxVM relocates the remaining failed disks to any available disks.
Then, VxVM automatically reattaches the site to the disk group and
resynchronizes the plexes to recover the volumes.
■ If all the disks that failed come online, VxVM automatically reattaches the
site to the disk group and resynchronizes the plexes to recover the volumes.
Storage failure (all disks at both sites fail): VCS acts based on the DiskGroup
agent's PanicSystemOnDGLoss attribute value.
See the Symantec Cluster Server Bundled Agents Reference Guide for more
information.
■ If the value is set to 1, VCS fails over the service group to a system
in the other site that is defined in the SystemZones attribute or by
Veritas Operations Manager.
■ If the value is set to 2, VCS requires administrator intervention to
initiate the service group failover to a system in the other site.
Because the storage at the failed site is inaccessible, VCS imports the
disk group in the application service group with all devices at the failed
site marked as NODEVICE.
When the storage at the failed site comes online, VxVM automatically
reattaches the site to the disk group and resynchronizes the plexes to
recover the volumes.
Network failure (LLT interconnect failure): Nodes at each site lose connectivity
to the nodes at the other site.
The failure of all private interconnects between the nodes can result in a
split-brain scenario and cause data corruption.
Review the details on other possible causes of split brain and how I/O fencing
protects shared data from corruption.
Network failure (LLT and storage interconnect failure): Nodes at each site lose
connectivity to the storage and the nodes at the other site.
Symantec recommends that you configure I/O fencing to prevent split brain and
serial split brain conditions.
Note: You can also configure VxVM disk groups for remote mirroring using Veritas
Operations Manager.
The site name is stored in the /etc/vx/volboot file. Use the following command
to display the site names:
# vxdisk listtag
5 Configure site-based allocation on the disk group that you created for each
site that is registered to the disk group.
With the Site Awareness license installed on all hosts, the volume that you
create has the following characteristics by default:
■ The allsites attribute is set to on; the volumes have at least one plex at
each site.
2 Set up the system zones or sites. Configure the SystemZones attribute for the
service group. Skip this step when sites are configured through Veritas
Operations Manager.
3 Set up the group failover policy. Set the value of the AutoFailOver attribute
for the service group.
4 For the disk group you created for campus clusters, add a DiskGroup resource
to the VCS service group app_sg.
5 Configure the application and other related resources to the app_sg service
group.
6 Bring the service group online.
The process involves creating a fire drill service group, which is similar to the original
application service group. Bringing the fire drill service group online on the remote
node demonstrates the ability of the application service group to fail over and come
online at the site, should the need arise.
Fire drill service groups do not interact with outside clients or with other instances
of resources, so they can safely come online even when the application service
group is online. Conduct a fire drill only at the remote site; do not bring the fire drill
service group online on the node hosting the original application.
Note: To perform fire drill, the application service group must be online at the primary
site.
Note: For the applications for which you want to perform fire drill, you must set the
value of the FireDrill attribute for those application resource types to 1. After you
complete fire drill, reset the value to 0.
Warning: You must take the fire drill service group offline after you complete
the fire drill so that the failover behavior of the application service group is not
impacted. Otherwise, when a disaster strikes at the primary site, the application
service group cannot fail over to the secondary site due to resource conflicts.
3 After you complete the fire drill, take the fire drill service group offline.
4 Reset the FireDrill attribute for the application resource type to 0.
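For example, assuming the application resource type is Oracle (used here only as
an illustration), you could set the attribute before the fire drill:
# haconf -makerw
# hatype -modify Oracle FireDrill 1
# haconf -dump -makero
After the fire drill completes, repeat the hatype -modify command with a value of 0.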
Section 6
Troubleshooting and
performance
The number of clients connected to VCS can affect performance if several events
occur simultaneously. For example, if five GUI processes are connected to VCS,
VCS sends state updates to all five. Maintaining fewer client connections to VCS
reduces this overhead.
You can also adjust how often VCS monitors various functions by modifying their
associated attributes. The attributes MonitorTimeout, OnlineTimeout, and
OfflineTimeout indicate the maximum time (in seconds) within which the monitor,
online, and offline functions must complete or else be terminated. The default for
the MonitorTimeout attribute is 60 seconds. The default for the OnlineTimeout and
OfflineTimeout attributes is 300 seconds. For best results, Symantec recommends
measuring the time it takes to bring a resource online, take it offline, and monitor
before modifying the defaults. Issue an online or offline command to measure the
time it takes for each action. To measure how long it takes to monitor a resource,
fault the resource and issue a probe, or bring the resource online outside of VCS
control and issue a probe.
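For example, a sketch of raising the monitor timeout for the Mount resource type
to 120 seconds (an illustrative value):
# haconf -makerw
# hatype -modify Mount MonitorTimeout 120
# haconf -dump -makero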
Agents typically run with normal priority. When you develop agents, consider the
following:
■ If you write a custom agent, write the monitor function using C or C++. If you
write a script-based monitor, VCS must invoke a new process each time it runs
the monitor function. This can be costly if you have many resources of that type.
■ If monitoring the resources is proving costly, you can divide it into cursory, or
shallow monitoring, and the more extensive deep (or in-depth) monitoring.
Whether to use shallow or deep monitoring depends on your configuration
requirements.
As an additional consideration for agents, properly configure the attribute SystemList
for your service group. For example, if you know that a service group can go online
on SystemA and SystemB only, do not include other systems in the SystemList.
This saves additional agent processes and monitoring overhead.
A cluster system boots: See “VCS performance consideration when booting a cluster system” on page 638.
A resource goes offline: See “VCS performance consideration when a resource goes offline” on page 639.
A service group comes online: See “VCS performance consideration when a service group comes online” on page 639.
A service group goes offline: See “VCS performance consideration when a service group goes offline” on page 640.
A network link fails: See “VCS performance consideration when a network link fails” on page 642.
A service group switches over: See “VCS performance consideration when a service group switches over” on page 645.
A service group fails over: See “VCS performance consideration when a service group fails over” on page 645.
Note: Bringing service groups online as part of AutoStart occurs after VCS transitions
to RUNNING mode.
Complex service group trees do not allow much parallelism and serialize the
group online operation.
set-timer peerinact:1200
Note: After modifying the peer inactive timeout, you must unconfigure, then restart
LLT before the change is implemented. To unconfigure LLT, type lltconfig -U.
To restart LLT, type lltconfig -c.
gabconfig -t timeout_value_milliseconds
Though this can be done, we do not recommend changing the values of the LLT
peer inactive timeout and GAB stable timeout.
If a system boots, it becomes unavailable until the reboot is complete. The reboot
process kills all processes, including HAD. When the VCS process is killed, other
systems in the cluster mark all service groups that can go online on the rebooted
system as autodisabled. The AutoDisabled flag is cleared when the system goes
offline. As long as the system goes offline within the interval specified in the
ShutdownTimeout value, VCS treats this as a system reboot. You can modify the
default value of the ShutdownTimeout attribute.
See “System attributes” on page 790.
set-timer peerinact:1200
Note: After modifying the peer inactive timeout, you must unconfigure, then restart
LLT before the change is implemented. To unconfigure LLT, type lltconfig -U.
To restart LLT, type lltconfig -c.
HAD heartbeats with GAB at regular intervals. It registers with GAB for a heartbeat
timeout of 30 seconds (default value). You can configure the VCS environment
variables, VCS_GAB_TIMEOUT_SECS and VCS_GAB_PEAKLOAD_TIMEOUT_SECS, to ensure
that GAB exhibits a dynamic behavior to determine the load average of a node (per
CPU load). Using the variable values and the average system load, GAB decides
the grace period after which it kills HAD.
If the average load on the node is at a minimum and HAD hangs in the kernel such
that it cannot heartbeat with GAB within the VCS_GAB_TIMEOUT_SECS timeout, GAB
tries to kill HAD by sending a SIGABRT signal. Upon an unsuccessful attempt, GAB
retries until the number of retries reaches the gab_kill_ntries-1 value. If GAB
cannot kill HAD with a SIGABRT signal, GAB sends a SIGKILL and closes the port.
When the average load is at a minimum, GAB does not dynamically adapt to the load
and hence does not consider the VCS_GAB_PEAKLOAD_TIMEOUT_SECS timeout value
to determine the grace period to keep HAD alive.
If the average load on the node is high, HAD cannot communicate with GAB because
of CPU load or delays in its I/O path with file systems. Depending on the average
load, the operating system sends a load average number to GAB. The load average
number ranges from 5 (minimum load) through 10 (maximum load). GAB uses the
load average number to compute a grace period that adapts exponentially based
on the load within the user specified bounds of the VCS_GAB_TIMEOUT_SECS and
VCS_GAB_PEAKLOAD_TIMEOUT_SECS variables. GAB waits for HAD to send heartbeats
during the grace period after which it kills HAD by sending a SIGABRT signal. Even
after a SIGABRT signal, if GAB does not succeed, it sends a SIGKILL and closes
the port.
Tunables considered by GAB to calculate the timeout period for HAD:
■ GAB considers the VCS_GAB_TIMEOUT_SECS timeout to calculate the timeout
period for HAD if both the VCS_GAB_TIMEOUT_SECS and VCS_GAB_TIMEOUT
timeouts are set.
■ GAB considers the VCS_GAB_TIMEOUT timeout if the VCS_GAB_TIMEOUT_SECS
timeout is not set.
■ GAB cannot exponentially adapt to determine the grace period for HAD if the
VCS_GAB_PEAKLOAD_TIMEOUT_SECS timeout is not set or if its value is the same
as the VCS_GAB_TIMEOUT_SECS timeout.
By default, GAB tries to kill HAD five times before closing the port. The number of
times GAB tries to kill HAD is a kernel tunable parameter, gab_kill_ntries, and is
configurable. The minimum value for this tunable is 3 and the maximum is 10.
Port closure is an indication to other nodes that HAD on this node has been killed.
Should HAD recover from its stuck state, it first processes pending signals. Here it
receives the SIGKILL first and gets killed.
After GAB sends a SIGKILL signal, it waits for a specific amount of time for HAD
to get killed. If HAD survives beyond this time limit, GAB panics the system. This
time limit is a kernel tunable parameter, gab_isolate_time, and is configurable. The
minimum value for this timer is 16 seconds and maximum is 4 minutes.
VCS_GAB_RMACTION=panic
In this configuration, killing the HAD and hashadow processes results in a panic
unless you start HAD within the registration monitoring timeout interval.
■ To configure GAB to log a message in this situation, set:
VCS_GAB_RMACTION=SYSLOG
The default value of this parameter is SYSLOG, which configures GAB to log a
message when HAD does not reconnect after the specified time interval.
In this scenario, you can choose to restart HAD (using hastart) or unconfigure
GAB (using gabconfig -U).
When you enable registration monitoring, GAB takes no action if the HAD process
unregisters with GAB normally, that is if you stop HAD using the hastop command.
■ The time it takes to offline the dependent service groups on other running
systems
■ The time it takes for the VCS policy module to select the target system
■ The time it takes to bring the service group online on the target system
The time it takes the VCS policy module to determine the target system is negligible
in comparison to the other factors.
If you have a firm group dependency and the child group faults, VCS offlines all
immediate and non-immediate parent groups before bringing the child group online
on the target system. Therefore, the time it takes a parent group to be brought
online also depends on the time it takes the child group to be brought online.
Max: 99
When priority is set to "" (empty string), VCS converts the priority to a default value.
For RT, the default priority equals two less than the strongest priority supported by
the RealTime class. So, if the strongest priority supported by the RealTime class
is 59, the default priority for the RT class is 57.
Note: For standard configurations, Symantec recommends using the default values
for scheduling unless specific configuration requirements dictate otherwise.
Note that the default priority value is platform-specific. When priority is set to ""
(empty string), VCS converts the priority to a value specific to the platform on which
the system is running. For TS, the default priority equals the strongest priority
supported by the TimeSharing class. For RT, the default priority equals two less
than the strongest priority supported by the RealTime class. So, if the strongest
priority supported by the RealTime class is 59, the default priority for the RT class
is 57.
VCS uses the following parameters to compute the average monitor time and to
detect increasing trends in monitor cycle times:
■ Frequency: The number of monitor cycles after which the monitor time average
is computed and sent to the VCS engine.
For example, if Frequency is set to 10, VCS computes the average monitor time
after every 10 monitor cycles.
■ ExpectedValue: The expected monitor time (in milliseconds) for a resource.
VCS sends a notification if the actual monitor time exceeds the expected monitor
time by the ValueThreshold. So, if you set this attribute to 5000 for a FileOnOff
resource, and if ValueThreshold is set to 40%, VCS will send a notification only
when the monitor cycle for the FileOnOff resource exceeds the expected time
by over 40%, that is 7000 milliseconds.
■ ValueThreshold: The maximum permissible deviation (in percent) from the
expected monitor time. When the time for a monitor cycle exceeds this limit,
VCS sends a notification about the sudden increase or decrease in monitor
time.
For example, a value of 100 means that VCS sends a notification if the actual
monitor time deviates from the expected time by over 100%.
VCS sends these notifications conservatively. If 12 consecutive monitor cycles
exceed the threshold limit, VCS sends a notification for the first spike, and then
a collective notification for the next 10 consecutive spikes.
■ AvgThreshold: The threshold value (in percent) for increase in the average
monitor cycle time for a resource.
VCS maintains a running average of the time taken by the monitor cycles of a
resource. The first such computed running average is used as a benchmark
average. If the current running average for a resource differs from the benchmark
average by more than this threshold value, VCS regards this as a sign of gradual
increase or decrease in monitor cycle times and sends a notification about it for
the resource. Whenever such an event occurs, VCS resets the internally
maintained benchmark average to this new average. VCS sends notifications
regardless of whether the deviation is an increase or decrease in the monitor
cycle time.
For example, a value of 25 means that if the actual average monitor time is 25%
more than the benchmark monitor time average, VCS sends a notification.
VCS marks gradual changes in monitor times by comparing the benchmark average
and the moving average of monitor cycle times. VCS computes the benchmark
average after a certain number of monitor cycles and computes the moving average
after every monitor cycle. If the current moving average exceeds the benchmark
average by more than the AvgThreshold, VCS sends a notification about this gradual
change in the monitor cycle time.
MonitorTimeStats
Stores the average time taken by a number of monitor cycles specified by the Frequency attribute, along with a timestamp value of when the average was computed.
ComputeStats
A flag that specifies whether VCS keeps track of the monitor times for the resource.
boolean ComputeStats = 0
The value 0 indicates that VCS will not keep track of the time taken by the monitor routine for the resource. The value 1 indicates that VCS keeps track of the monitor time for the resource.
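For example, a minimal sketch of enabling monitor-time tracking for a resource and then reading the collected statistics (the resource name myres is a placeholder):
# hares -modify myres ComputeStats 1
# hares -value myres MonitorTimeStats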
Warning: Do not adjust the VCS tunable parameters for kernel modules such as
VXFEN without assistance from Symantec support personnel.
peerinact
Description: LLT marks a link of a peer node as "inactive" if it does not receive any packet on that link for this timer interval. Once a link is marked as "inactive," LLT will not send any data on that link.
Default: 1600
When to change:
■ Change this value to delay or speed up the node/link inactive notification mechanism as per the client's notification processing logic.
■ Increase the value for planned replacement of a faulty network cable or switch.
■ In some circumstances, when the private network links are very slow or the network traffic becomes very bursty, increase this value to avoid false notifications of peer death. Set the value to a high value for planned replacement of a faulty network cable or faulty switch.
Dependency with other LLT tunables: The timer value should always be higher than the peertrouble timer value.
rpeerinact
Description: Marks the RDMA channel of an RDMA link as "inactive" if the node does not receive any packet on that link for this timer interval. Once the RDMA channel is marked as "inactive", LLT does not send any data on the RDMA channel of that link; however, it may continue to send data over the non-RDMA channel of that link until peerinact expires. You can view the status of the RDMA channel of an RDMA link using the lltstat -nvv -r command. This parameter is supported only on selected versions of Linux.
Default: 700
When to change: Decrease the value of this tunable to speed up RDMA link failure recovery. If the links are unstable and go up and down frequently, do not decrease this value.
Dependency with other LLT tunables: This timer value should always be greater than the peertrouble timer value and less than the peerinact value.
peertrouble
Description: LLT marks a high-pri link of a peer node as "troubled" if it does not receive any packet on that link for this timer interval. Once a link is marked as "troubled", LLT will not send any data on that link till the link is up.
Default: 200
When to change:
■ In some circumstances, when the private network links are very slow or nodes in the cluster are very busy, increase the value.
■ Increase the value for planned replacement of a faulty network cable or faulty switch.
Dependency with other LLT tunables: This timer value should always be lower than the peerinact timer value. Also, it should be close to its default value.
peertroublelo
Description: LLT marks a low-pri link of a peer node as "troubled" if it does not receive any packet on that link for this timer interval. Once a link is marked as "troubled", LLT will not send any data on that link till the link is available.
Default: 400
When to change:
■ In some circumstances, when the private network links are very slow or nodes in the cluster are very busy, increase the value.
■ Increase the value for planned replacement of a faulty network cable or faulty switch.
Dependency with other LLT tunables: This timer value should always be lower than the peerinact timer value. Also, it should be close to its default value.
heartbeat
Description: LLT sends heartbeat packets repeatedly to peer nodes after every heartbeat timer interval on each high-pri link.
Default: 50
When to change: In some circumstances, when the private network links are very slow (or congested) or nodes in the cluster are very busy, increase the value.
Dependency with other LLT tunables: This timer value should be lower than the peertrouble timer value. Also, it should not be close to the peertrouble timer value.
heartbeatlo
Description: LLT sends heartbeat packets repeatedly to peer nodes after every heartbeatlo timer interval on each low-pri link.
Default: 100
When to change: In some circumstances, when the network links are very slow or nodes in the cluster are very busy, increase the value.
Dependency with other LLT tunables: This timer value should be lower than the peertroublelo timer value. Also, it should not be close to the peertroublelo timer value.
timetoreqhb
Description: If LLT does not receive any packet from the peer node on a particular link for the "timetoreqhb" time period, it attempts to request heartbeats by sending 5 special heartbeat requests (hbreqs) to the peer node on the same link. If the peer node does not respond to the special heartbeat requests, LLT marks the link as "expired" for that peer node. The value can be set in the range of 0 to (peerinact - 200). The value 0 disables the request heartbeat mechanism.
Default: 1400
When to change: Decrease the value of this tunable to speed up the node/link inactive notification mechanism as per the client's notification processing logic. Disable the request heartbeat mechanism by setting the value of this timer to 0 for planned replacement of a faulty network cable or switch. In some circumstances, when the private network links are very slow or the network traffic becomes very bursty, do not change the value of this tunable.
Dependency with other LLT tunables: This timer is set to 'peerinact - 200' automatically every time the peerinact timer is changed.
reqhbtime
Description: This value specifies the time interval between two successive special heartbeat requests. See the timetoreqhb parameter for more information on special heartbeat requests.
Default: 40
When to change: Symantec does not recommend changing this value.
Dependency with other LLT tunables: Not applicable.
timetosendhb
Description: LLT sends out-of-timer-context heartbeats to keep the node alive when the LLT timer does not run at a regular interval. This option specifies the amount of time to wait before sending a heartbeat in case the timer is not running. If this timer tunable is set to 0, the out-of-timer-context heartbeating mechanism is disabled.
Default: 200
When to change: Disable the out-of-timer-context heartbeating mechanism by setting the value of this timer to 0 for planned replacement of a faulty network cable or switch. In some circumstances, when the private network links are very slow or nodes in the cluster are very busy, increase the value.
Dependency with other LLT tunables: This timer value should not be more than the peerinact timer value. Also, it should not be close to the peerinact timer value.
oos
Description: If the out-of-sequence timer has expired for a node, LLT sends an appropriate NAK to that node. LLT does not send a NAK as soon as it receives an oos packet. It waits for the oos timer value before sending the NAK.
Default: 10
When to change: Do not change this value for performance reasons. Lowering the value can result in unnecessary retransmissions/negative acknowledgement traffic. You can increase the value of oos if the round trip time is large in the cluster (for example, campus cluster).
Dependency with other LLT tunables: Not applicable.
retrans
Description: LLT retransmits a packet if it does not receive its acknowledgement for this timer interval value.
Default: 10
When to change: Do not change this value. Lowering the value can result in unnecessary retransmissions.
Dependency with other LLT tunables: Not applicable.
service
Description: LLT calls its service routine (which delivers messages to LLT clients) after every service timer interval.
Default: 100
When to change: Do not change this value for performance reasons.
Dependency with other LLT tunables: Not applicable.
arp
Description: LLT flushes stored address of peer nodes when this timer expires and relearns the addresses.
Default: 0
When to change: This feature is disabled by default.
Dependency with other LLT tunables: Not applicable.
arpreq
Description: LLT sends an arp request when this timer expires to detect other peer nodes in the cluster.
Default: 3000
When to change: Do not change this value for performance reasons.
Dependency with other LLT tunables: Not applicable.
highwater
Description: When the number of packets in the transmit queue for a node reaches highwater, LLT is flow controlled.
Default: 200
When to change: If a client generates data in a bursty manner, increase this value to match the incoming data rate. Note that increasing the value means more memory consumption, so set an appropriate value to avoid wasting memory unnecessarily.
Dependency with other LLT tunables: This flow control value should always be higher than the lowwater flow control value.
lowwater
Description: When LLT has flow controlled the client, it will not start accepting packets again till the number of packets in the port transmit queue for a node drops to lowwater.
Default: 100
When to change: Symantec does not recommend changing this tunable.
Dependency with other LLT tunables: This flow control value should be lower than the highwater flow control value. The value should not be close to the highwater flow control value.
rporthighwater
Description: When the number of packets in the receive queue for a port reaches highwater, LLT is flow controlled.
Default: 200
When to change: If a client generates data in a bursty manner, increase this value to match the incoming data rate. Note that increasing the value means more memory consumption, so set an appropriate value to avoid wasting memory unnecessarily.
Dependency with other LLT tunables: This flow control value should always be higher than the rportlowwater flow control value.
rportlowwater
Description: When LLT has flow controlled the client on the peer node, it will not start accepting packets for that client again till the number of packets in the port receive queue for the port drops to rportlowwater.
Default: 100
When to change: Symantec does not recommend changing this tunable.
Dependency with other LLT tunables: This flow control value should be lower than the rporthighwater flow control value. The value should not be close to the rporthighwater flow control value.
window
Description: This is the maximum number of un-ACKed packets LLT will put in flight.
Default: 50
When to change: Change the value as per the private network's speed. Lowering the value irrespective of network speed may result in unnecessary retransmission of out-of-window sequence packets. The value of this parameter (window) should be aligned with the value of the bandwidth delay product.
Dependency with other LLT tunables: This flow control value should not be higher than the difference between the highwater flow control value and the lowwater flow control value.
linkburst
Description: It represents the number of back-to-back packets that LLT sends on a link before the next link is chosen.
Default: 32
When to change: For performance reasons, its value should be either 0 or at least 32.
Dependency with other LLT tunables: This flow control value should not be higher than the difference between the highwater flow control value and the lowwater flow control value.
ackval
Description: LLT sends acknowledgement of a packet by piggybacking an ACK packet on the next outbound data packet to the sender node. If there are no data packets on which to piggyback the ACK packet, LLT waits for ackval number of packets before sending an explicit ACK to the sender.
Default: 10
When to change: Do not change this value for performance reasons. Increasing the value can result in unnecessary retransmissions.
Dependency with other LLT tunables: Not applicable.
sws
Description: To avoid Silly Window Syndrome, LLT transmits more packets only when the count of un-ACKed packets falls below this tunable value.
Default: 40
When to change: For performance reasons, its value should be changed whenever the value of the window tunable is changed, as per the formula: sws = window * 4/5.
Dependency with other LLT tunables: Its value should be lower than that of window. Its value should be close to the value of the window tunable.
largepktlen
Description: When LLT has packets to deliver to multiple ports, LLT delivers one large packet or up to five small packets to a port at a time. This parameter specifies the size of the large packet.
Default: 1024
When to change: Symantec does not recommend changing this tunable.
Dependency with other LLT tunables: Not applicable.
# lltconfig -T query
# lltconfig -F query
Range: 1-64
Range: 1-32
Range: 8100-65400
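For example, a hedged sketch of raising the peerinact timer at runtime and persisting the change across LLT restarts (assuming your /etc/llttab supports the set-timer directive; the value 1800 is illustrative):
# lltconfig -T peerinact:1800
In /etc/llttab:
set-timer peerinact:1800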
gabconfig -c -n4
After adding the option, the /etc/gabtab file looks similar to the following:
gabconfig -c -n4 -k
Table 21-6 describes the GAB dynamic tunable parameters as seen with the
gabconfig -l command, and specifies the command to modify them.
Control port seed This option defines the minimum number of nodes that can form the
cluster. This option controls the forming of the cluster. If the number of
nodes in the cluster is less than the number specified in the gabtab
file, then the cluster will not form. For example: if you type gabconfig
-c -n4, then the cluster will not form until all four nodes join the cluster.
If this option is enabled using the gabconfig -x command then the
node will join the cluster even if the other nodes in the cluster are not
yet part of the membership.
Use the following command to set the number of nodes that can form
the cluster:
gabconfig -n count
Use the following command to enable control port seed. Node can form
the cluster without waiting for other nodes for membership:
gabconfig -x
gabconfig -p
gabconfig -P
This GAB option controls whether or not GAB can panic the node when the VCS
engine or the vxconfigd daemon fails to heartbeat with GAB.
If the VCS engine experiences a hang and is unable to heartbeat with GAB, then
GAB will NOT PANIC the system immediately. GAB first tries to abort the process
by sending SIGABRT kill_ntries times (default value 5) at intervals of
"iofence_timeout" (default value 15 seconds). If this fails, then GAB waits for the
"isolate timeout" period, which is controlled by a global tunable called isolate_time
(default value 2 minutes). If the process is still alive, then GAB PANICS the system.
If this option is enabled, GAB immediately HALTS the system in case of a missed
heartbeat from a client.
gabconfig -b
gabconfig -B
When GAB has kernel clients (such as fencing, VxVM, or VxFS), then
the node will always PANIC when it rejoins the cluster after a network
partition. The PANIC is mandatory since this is the only way GAB can
clear ports and remove old messages.
gabconfig -j
gabconfig -J
gabconfig -k
gabconfig -q
gabconfig -d
The GAB queue limit option controls the number of pending messages beyond which
GAB sets flow control. The send queue limit controls the number of pending
messages in the GAB send queue; once GAB reaches this limit, it sets flow control
for the sender process of the GAB client. The GAB receive queue limit controls the
number of pending messages in the GAB receive queue before GAB sets flow
control for the receive side.
gabconfig -Q sendq:value
gabconfig -Q recvq:value
This parameter specifies the timeout (in milliseconds) for which GAB
waits for the clients to respond to an IOFENCE message before
taking the next action. Based on the value of kill_ntries, GAB attempts
to kill the client process by sending the SIGABRT signal. If the client process
is still registered after GAB has attempted to kill it kill_ntries times,
GAB halts the system after waiting for the additional isolate_timeout value.
gabconfig -f value
gabconfig -t stable
This tunable specifies the timeout value for which GAB waits for the
client process to unregister in response to GAB sending the SIGKILL signal.
If the process still exists after the isolate timeout, GAB halts the system.
gabconfig -S isolate_time:value
Kill_ntries Default: 5
This tunable specifies the number of attempts GAB will make to kill the
process by sending SIGABRT signal.
gabconfig -S kill_ntries:value
Driver state This parameter shows whether GAB is configured. GAB may not have
seeded and formed any membership yet.
Partition arbitration This parameter shows whether GAB is asked to specifically ignore
jeopardy.
See the gabconfig (1M) manual page for details on the -s flag.
■ Values
Default: 60
Minimum: 1
Maximum: 600
■ Values
Default: 1
Minimum: 1
Maximum: 600
vxfen_vxfnd_tmt Specifies the time in seconds that the I/O fencing driver VxFEN
waits for the I/O fencing daemon VXFEND to return after
completing a given task.
■ Values
Default: 60
Minimum: 10
Maximum: 600
panic_timeout_offst Specifies the time in seconds based on which the I/O fencing
driver VxFEN computes the delay to pass to the GAB module to
wait until fencing completes its arbitration before GAB implements
its decision in the event of a split-brain. You can set this parameter
in the vxfenmode file and use the vxfenadm command to check
the value. Depending on the vxfen_mode, the GAB delay is
calculated as follows:
In the event of a network partition, the smaller sub-cluster delays before racing for
the coordinator disks. The time delay allows a larger sub-cluster to win the race for
the coordinator disks. The vxfen_max_delay and vxfen_min_delay parameters
define the delay in seconds.
Note: You must restart the VXFEN module to put any parameter change into effect.
# hastop -local
# /etc/init.d/vxfen stop
vxfen_min_delay=1
to:
vxfen_min_delay=30
# /etc/init.d/vxfen start
6 Start all the applications that are not configured under VCS. Use native
application commands to start the applications.
# amfconfig -T tunable_name=tunable_value,
tunable_name=tunable_value...
Table 21-8 lists the possible tunable parameters for the AMF kernel:
The parameter values that you update are reflected after you reconfigure the AMF
driver. Note that if you unload the module, the updated values are lost. You must
unconfigure the module using the amfconfig -U or equivalent command and then
reconfigure using the amfconfig -c command for the updated tunables to be
effective. If you want to set the tunables at module load time, you can write these
amfconfig commands in the amftab file.
■ Troubleshooting resources
■ Troubleshooting sites
■ Troubleshooting notification
■ Troubleshooting licensing
Note that the logs on all nodes may not be identical because
■ VCS logs local events on the local nodes.
■ All nodes may not be running when an event occurs.
VCS prints the warning and error messages to STDERR.
If the VCS engine, Command Server, or any of the VCS agents encounter some
problem, then First Failure Data Capture (FFDC) logs are generated and dumped
along with other core dumps and stack traces to the following location:
■ For VCS engine: $VCS_DIAG/diag/had
/opt/VRTSgab/gabread_ffdc-kernel_version binary_logs_files_location
You can change the values of the following environment variables that control the
GAB binary log files:
■ GAB_FFDC_MAX_INDX: Defines the maximum number of GAB binary log files
The GAB logging daemon collects the defined number of log files, each eight
MB in size. The default value is 20, and the files are named gablog.1 through
gablog.20. At any point in time, the most recent file is the gablog.1 file.
■ GAB_FFDC_LOGDIR: Defines the log directory location for GAB binary log files
The default location is:
/var/log/gab_ffdc
Note that the gablog daemon writes its log to the glgd_A.log and glgd_B.log
files in the same directory.
You can either define these variables in the following GAB startup file or use the
export command. You must restart GAB for the changes to take effect.
/etc/sysconfig/gab
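For example, a minimal sketch of the kind of entries you might add to /etc/sysconfig/gab (the values shown are illustrative assumptions; shell-style assignments are assumed):
GAB_FFDC_MAX_INDX=40
GAB_FFDC_LOGDIR=/var/log/gab_ffdc
export GAB_FFDC_MAX_INDX GAB_FFDC_LOGDIR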
# haconf -makerw
2 Enable logging and set the desired log levels. Use the following command
syntax:
The following example shows the command line for the IPMultiNIC resource
type.
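A hedged illustration of such a command line (assuming agent debug levels are set through the LogDbg type-level attribute; the particular tags listed are arbitrary examples):
# hatype -modify IPMultiNIC LogDbg DBG_1 DBG_2 DBG_4 DBG_21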
If DBG_AGDEBUG is set, the agent framework logs for an instance of the agent
appear in the agent log on the node on which the agent is running.
4 For CVMvxconfigd agent, you do not have to enable any additional debug logs.
5 For AMF driver in-memory trace buffer:
If you had enabled AMF driver in-memory trace buffer, you can view the
additional logs using the amfconfig -p dbglog command.
# export VCS_DEBUG_LOG_TAGS="DBG_IPM"
# hagrp -list
Note: Debug log messages are verbose. If you enable debug logs, log files might
fill up quickly.
DBG_AGDEBUG
DBG_AGINFO
DBG_HBFW_INFO
# /opt/VRTSvcs/bin/hagetcf
The command prompts you to specify an output directory for the gzip file. You
may save the gzip file to either the default /tmp directory or a different directory.
Troubleshoot and fix the issue.
See “Troubleshooting the VCS engine” on page 680.
See “Troubleshooting VCS startup” on page 687.
See “Troubleshooting service groups” on page 692.
See “Troubleshooting resources” on page 698.
See “Troubleshooting notification” on page 714.
See “Troubleshooting and recovery for global clusters” on page 714.
See “Troubleshooting the steward process” on page 718.
If the issue cannot be fixed, then contact Symantec technical support with the
file that the hagetcf command generates.
# /opt/VRTSvcs/bin/vcsstatlog --dump\
/var/VRTSvcs/stats/copied_vcs_host_stats
■ To get the forecasted available capacity for CPU, Mem, and Swap for a
system in cluster, run the following command on the system on which you
copied the statlog database:
# /opt/VRTSgab/getcomms
The script uses rsh by default. Make sure that you have configured
passwordless rsh. If you have passwordless ssh between the cluster nodes,
you can use the -ssh option. To gather information on the node on which you run
the command, use the -local option.
Troubleshoot and fix the issue.
See “Troubleshooting Low Latency Transport (LLT)” on page 682.
See “Troubleshooting Group Membership Services/Atomic Broadcast (GAB)”
on page 686.
If the issue cannot be fixed, then contact Symantec technical support with the
file /tmp/commslog.<time_stamp>.tar that the getcomms script generates.
# /opt/VRTSamf/bin/getimf
Message catalogs
VCS includes multilingual support for message catalogs. These binary message
catalogs (BMCs), are stored in the following default locations. The variable language
represents a two-letter abbreviation.
/opt/VRTS/messages/<language>/module_name
HAD diagnostics
When the VCS engine HAD dumps core, the core is written to the directory
$VCS_DIAG/diag/had. The default value for variable $VCS_DIAG is /var/VRTSvcs/.
When HAD core dumps, review the contents of the $VCS_DIAG/diag/had directory.
See the following logs for more information:
of systems in the cluster, or the cluster is forced to seed with the gabconfig -x
command, it is likely that this check will not match. In this case, the fencing module
will detect a possible split-brain condition, print an error, and HAD will not start.
It is recommended to let the cluster automatically seed when all members of the
cluster can exchange heartbeat signals with each other. In this case, all systems
perform the I/O fencing key placement after they are already in the GAB
membership.
Preonline IP check
You can enable a preonline check of a failover IP address to protect against network
partitioning. The check pings a service group's configured IP address to verify that
it is not already in use. If it is, the service group is not brought online.
A second check verifies that the system is connected to its public network and
private network. If the system receives no response from a broadcast ping to the
public network and a check of the private networks, it determines the system is
isolated and does not bring the service group online.
To enable the preonline IP check, do one of the following:
■ If preonline trigger script is not already present, copy the preonline trigger script
from the sample triggers directory into the triggers directory:
# cp /opt/VRTSvcs/bin/sample_triggers/VRTSvcs/preonline_ipc
/opt/VRTSvcs/bin/triggers/preonline
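Depending on your configuration, you may also need to enable the trigger for each service group that should run the check; a hedged sketch (service_group is a placeholder):
# hagrp -modify service_group PreOnline 1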
Recommended action: Ensure that all systems on the network have a unique
clusterid-nodeid pair. You can use the lltdump -f device -D command to get
the list of unique clusterid-nodeid pairs connected to the network. This utility is
available only for LLT over Ethernet.
LLT INFO V-14-1-10205 link 1 (link_name) node 1 in trouble
This message implies that LLT did not receive any heartbeats on the indicated link from the indicated peer node for LLT peertrouble time. The default LLT peertrouble time is 2s for hipri links and 4s for lo-pri links.
Recommended action: If these messages sporadically appear
in the syslog, you can ignore them. If these messages flood
the syslog, then perform one of the following:
lltconfig -T peertrouble:<value>
for hipri link
lltconfig -T peertroublelo:<value>
for lopri links.
LLT INFO V-14-1-10024 link 0 (link_name) node 1 active
This message implies that LLT started seeing heartbeats on this link from that node.
LLT INFO V-14-1-10032 link 1 (link_name) node 1 inactive 5 sec (510)
This message implies that LLT did not receive any heartbeats on the indicated link from the indicated peer node for the indicated amount of time.
If the peer node has not actually gone down, check for the following:
LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (link_name) node 1. 4 more to go.
LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (link_name) node 1. 3 more to go.
LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (link_name) node 1. 2 more to go.
LLT INFO V-14-1-10032 link 1 (link_name) node 1 inactive 6 sec (510)
LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (link_name) node 1. 1 more to go.
LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (link_name) node 1. 0 more to go.
LLT INFO V-14-1-10032 link 1 (link_name) node 1 inactive 7 sec (510)
LLT INFO V-14-1-10509 link 1 (link_name) node 1 expired
This message implies that LLT did not receive any heartbeats on the indicated link from the indicated peer node for more than LLT peerinact time. LLT attempts to request heartbeats (sends 5 hbreqs to the peer node) and if the peer node does not respond, LLT marks this link as "expired" for that peer node.
Recommended action: If the peer node has not actually gone down, check for the following:
■ Check if the link has got physically disconnected from the system or switch.
■ Check for the link health and replace the link if necessary.
See "Adding and removing LLT links" on page 182.
LLT INFO V-14-1-10499 recvarpreq link 0 for node 1 addr change from 00:00:00:00:00:00 to 00:18:8B:E4:DE:27
This message is logged when LLT learns the peer node's address.
Recommended action: No action is required. This message is informational.
On peer nodes:
Recommended Action: If this issue occurs during a GAB reconfiguration, and does
not recur, the issue is benign. If the issue persists, collect commslog from each
node, and contact Symantec support.
GAB's attempt (five retries) to kill the VCS daemon fails if the VCS daemon is stuck
in the kernel in an uninterruptible state or the system is so heavily loaded that the
VCS daemon cannot die with a SIGKILL.
Recommended Action:
■ In case of performance issues, increase the value of the VCS_GAB_TIMEOUT
environment variable to allow VCS more time to heartbeat.
See “ VCS environment variables” on page 74.
■ In case of a kernel problem, configure GAB to not panic but continue to attempt
killing the VCS daemon.
Do the following:
■ Run the following command on each node:
gabconfig -k
■ Add the “-k” option to the gabconfig command in the /etc/gabtab file:
gabconfig -c -k -n 6
■ In case the problem persists, collect sar or similar output, collect crash dumps,
run the Symantec Operations and Readiness Tools (SORT) data collector on
all nodes, and contact Symantec Technical Support.
Another method is to install the configuration file on the local system and force VCS
to reread the configuration file. If the file appears valid, verify that it is not an earlier
version.
Type the following commands to verify the configuration:
# cd /etc/VRTSvcs/conf/config
# hacf -verify .
GAB can become unregistered if LLT is set up incorrectly. Verify that the
configuration is correct in /etc/llttab. If the LLT configuration is incorrect, make the
appropriate changes and reboot.
Intelligent resource monitoring has not reduced system utilization
If the system is busy even after intelligent resource monitoring is enabled, troubleshoot as follows:
■ Check the agent log file to see whether the imf_init agent function has failed.
If the imf_init agent function has failed, then do the following:
■ Make sure that the AMF_START environment variable value is set to 1.
See “Environment variables to start and stop VCS modules” on page 78.
■ Make sure that the AMF module is loaded.
See “Administering the AMF kernel driver” on page 186.
■ Make sure that the IMF attribute values are set correctly for the following attribute keys:
■ The value of the Mode key of the IMF attribute must be set to 1, 2, or 3.
■ The value of the MonitorFreq key of the IMF attribute must be set to either 0 or a
value greater than 0.
For example, the value of the MonitorFreq key can be set to 0 for the Process agent.
Refer to the appropriate agent documentation for configuration recommendations
corresponding to the IMF-aware agent.
Note that the IMF attribute can be overridden. So, if the attribute is set for individual
resource, then check the value for individual resource.
Enabling the agent's intelligent monitoring does not provide immediate performance results
The actual intelligent monitoring for a resource starts only after a steady state is achieved. So, it takes some time before you can see a positive performance effect after you enable IMF. This behavior is expected.
For more information on when a steady state is reached, see the following topic:
See "How intelligent resource monitoring works" on page 43.
Agent does not perform intelligent monitoring despite setting the IMF mode to 3
For the agents that use the AMF driver for IMF notification, if intelligent resource monitoring has not taken effect, do the following:
■ Make sure that the IMF attribute's Mode key value is set to three (3).
See "Resource type attributes" on page 750.
■ Review the agent log to confirm that imf_init() agent registration with AMF has
succeeded. AMF driver must be loaded before the agent starts because the agent
registers with AMF at the agent startup time. If this was not the case, start the AMF
module and restart the agent.
See “Administering the AMF kernel driver” on page 186.
AMF module fails to unload despite changing the IMF mode to 0
Even after you change the value of the Mode key to zero, the agent still continues to have a hold on the AMF driver until you kill the agent. To unload the AMF module, all holds on it must get released.
If the AMF module fails to unload after changing the IMF mode value to zero, do the following:
■ Run the amfconfig -Uof command. This command forcefully removes all holds on
the module and unconfigures it.
■ Then, unload AMF.
See “Administering the AMF kernel driver” on page 186.
When you try to enable IMF for an agent, the haimfconfig -enable -agent <agent_name> command returns a message that IMF is enabled for the agent. However, when VCS and the respective agent are running, the haimfconfig -display command shows the status for agent_name as DISABLED.
A few possible reasons for this behavior are as follows:
■ The agent might require some manual steps to make it IMF-aware. Refer to the agent documentation for these manual steps.
■ The agent is a custom agent and is not IMF-aware. For information on how to make a custom agent IMF-aware, see the Symantec Cluster Server Agent Developer's Guide.
■ If the preceding steps do not resolve the issue, contact Symantec technical support.
Warning: To bring a group online manually after VCS has autodisabled the group,
make sure that the group is not fully or partially active on any system that has the
AutoDisabled attribute set to 1 by VCS. Specifically, verify that all resources that
may be corrupted by being active on multiple systems are brought down on the
designated systems. Then, clear the AutoDisabled attribute for each system: #
hagrp -autoenable service_group -sys system
engine and agent logs in /var/VRTSvcs/log for information on why the resource is
unable to be brought online or be taken offline.
To clear this state, make sure all resources waiting to go online/offline do not bring
themselves online/offline. Use the hagrp -flush command or the hagrp -flush
-force command to clear the internal state of VCS. You can then bring the service
group online or take it offline on another system.
For more information, see the description of the hagrp -flush and hagrp -flush -force commands.
Warning: Exercise caution when you use the -force option. It can lead to situations
where a resource status is unintentionally returned as FAULTED. In the time interval
that a resource transitions from ‘waiting to go offline’ to ‘not waiting’, if the agent
has not completed the offline agent function, the agent may return the state of the
resource as OFFLINE. VCS considers such unexpected offline of the resource as
FAULT and starts recovery action that was not intended.
Service group does not fail over to the BiggestAvailable system even
if FailOverPolicy is set to BiggestAvailable
Sometimes, a service group might not fail over to the biggest available system even
when FailOverPolicy is set to BiggestAvailable.
To troubleshoot this issue, check the engine log located in
/var/VRTSvcs/log/engine_A.log to find out the reasons for not failing over to the
biggest available system. This may be due to the following reasons:
■ If one or more of the systems in the service group's SystemList did not have
forecasted available capacity, you see the following message in the engine log:
One of the systems in SystemList of group group_name, system
system_name does not have forecasted available capacity updated
■ If the hautil –sys command does not list forecasted available capacity for the
systems, you see the following message in the engine log:
Failed to forecast due to insufficient data
This message is displayed due to insufficient recent data to be used for
forecasting the available capacity.
The default value for the MeterInterval key of the cluster attribute MeterControl
is 120 seconds. There will be enough recent data available for forecasting after
3 metering intervals (6 minutes) from the time the VCS engine was started on the
system. After this, the forecasted values are updated every ForecastCycle *
MeterInterval seconds. The ForecastCycle and MeterInterval values are specified
in the cluster attribute MeterControl.
■ If one or more of the systems in the service group’s SystemList have stale
forecasted available capacity, you can see the following message in the engine
log:
System system_name has not updated forecasted available capacity since
last 2 forecast cycles
This issue is caused when the HostMonitor agent stops functioning. Check if
HostMonitor agent process is running by issuing one of the following commands
on the system which has stale forecasted values:
■ # ps -aef | grep HostMonitor
Even if the HostMonitor agent is running and you still see the above message in the
engine log, it means that the HostMonitor agent is not able to forecast, and
it logs error messages in the HostMonitor_A.log file in the
/var/VRTSvcs/log/ directory.
# rm /var/VRTSvcs/stats/.vcs_host_stats.data\
/var/VRTSvcs/stats/.vcs_host_stats.index
# cp /var/VRTSvcs/stats/.vcs_host_stats_bkup.data\
/var/VRTSvcs/stats/.vcs_host_stats.data
# cp /var/VRTSvcs/stats/.vcs_host_stats_bkup.index\
/var/VRTSvcs/stats/.vcs_host_stats.index
# /opt/VRTSvcs/bin/vcsstatlog --setprop\
/var/VRTSvcs/stats/.vcs_host_stats rate 120
# /opt/VRTSvcs/bin/vcsstatlog --setprop\
/var/VRTSvcs/stats/.vcs_host_stats compressmode avg
# /opt/VRTSvcs/bin/vcsstatlog --setprop\
/var/VRTSvcs/stats/.vcs_host_stats compressfreq 24h
# rm /var/VRTSvcs/stats/.vcs_host_stats.data\
/var/VRTSvcs/stats/.vcs_host_stats.index
Troubleshooting resources
This topic cites the most common problems associated with bringing resources
online and taking them offline. Bold text provides a description of the problem.
Recommended action is also included, where applicable.
The Monitor entry point of the disk group agent returns ONLINE even
if the disk group is disabled
This is expected agent behavior. VCS assumes that data is being read from or
written to the volumes and does not declare the resource as offline. This prevents
potential data corruption that could be caused by the disk group being imported on
two hosts.
You can deport a disabled disk group when all I/O operations are completed or
when all volumes are closed. You can then reimport the disk group to the same
system. Reimporting a disabled disk group may require a system reboot.
Note: A disk group is disabled if data including the kernel log, configuration copies,
or headers in the private region of a significant number of disks is invalid or
inaccessible. Volumes can perform read-write operations if no changes are required
to the private regions of the disks.
Troubleshooting sites
The following sections discuss troubleshooting the sites. Bold text provides a
description of the problem. Recommended action is also included, where applicable.
Renaming a Site
Recommended Action: To rename the site, re-run the Stretch Cluster
Configuration flow with the changed site names.
If you see these messages when the new node is booting, the vxfen startup script
on the node makes up to five attempts to join the cluster.
To manually join the node to the cluster when I/O fencing attempts fail
◆ If the vxfen script fails in the attempts to allow the node to join the cluster,
restart vxfen driver with the command:
# /etc/init.d/vxfen stop
# /etc/init.d/vxfen start
The vxfentsthdw utility fails when SCSI TEST UNIT READY command
fails
While running the vxfentsthdw utility, you may see a message that resembles the
following:
The disk array does not support returning success for a SCSI TEST UNIT READY
command when another host has the disk reserved using SCSI-3 persistent
reservations. This happens with the Hitachi Data Systems 99XX arrays if bit 186
of the system mode option is not enabled.
Note: If you want to clear all the pre-existing keys, use the vxfenclearpre utility.
See “About the vxfenclearpre utility” on page 378.
# vi /tmp/disklist
For example:
/dev/sdu
3 If you know on which node the key (say A1) was created, log in to that node
and enter the following command:
# vxfenadm -m -k A2 -f /tmp/disklist
6 Remove the first key from the disk by preempting it with the second key:
/dev/sdu
A node experiences the split-brain condition when it loses the heartbeat with its
peer nodes due to failure of all private interconnects or node hang. Review the
behavior of I/O fencing under different scenarios and the corrective measures to
be taken.
See “How I/O fencing works in different event scenarios” on page 350.
Cluster ID on the I/O fencing key of coordinator disk does not match
the local cluster’s ID
If you accidentally assign coordinator disks of a cluster to another cluster, then the
fencing driver displays an error message similar to the following when you start I/O
fencing:
The warning implies that the local cluster with the cluster ID 57069 has keys.
However, the disk also has keys for cluster with ID 48813 which indicates that nodes
from the cluster with cluster id 48813 potentially use the same coordinator disk.
You can run the following commands to verify whether these disks are used by
another cluster. Run the following commands on one of the nodes in the local
cluster. For example, on sys1:
sys1> # lltstat -C
57069
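A hedged sketch of the key-listing step implied by the sentence that follows (assuming the vxfenadm -s option is used to read the registration keys on each coordinator disk):
sys1> # vxfenadm -s disk_7
sys1> # vxfenadm -s disk_8
sys1> # vxfenadm -s disk_9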
Where disk_7, disk_8, and disk_9 represent the disk names in your setup.
Recommended action: You must use a unique set of coordinator disks for each
cluster. If the other cluster does not use these coordinator disks, then clear the keys
using the vxfenclearpre command before you use them as coordinator disks in the
local cluster.
See “About the vxfenclearpre utility” on page 378.
Given this conflicting information about system 2, system 1 does not join the cluster
and returns an error from vxfenconfig that resembles:
However, the same error can occur when the private network links are working and
both systems go down, system 1 restarts, and system 2 fails to come back up. From
the view of the cluster from system 1, system 2 may still have the registrations on
the coordination points.
Assume the following situations to understand preexisting split-brain in server-based
fencing:
■ There are three CP servers acting as coordination points. One of the three CP
servers then becomes inaccessible. While in this state, one client node leaves
the cluster, whose registration cannot be removed from the inaccessible CP
server. When the inaccessible CP server restarts, it has a stale registration from
the node which left the VCS cluster. In this case, no new nodes can join the
cluster. Each node that attempts to join the cluster gets a list of registrations
from the CP server. One CP server includes an extra registration (of the node
which left earlier). This makes the joiner node conclude that there exists a
preexisting split-brain between the joiner node and the node which is represented
by the stale registration.
■ All the client nodes have crashed simultaneously, due to which fencing keys
are not cleared from the CP servers. Consequently, when the nodes restart, the
vxfen configuration fails reporting preexisting split brain.
These situations are similar to that of preexisting split-brain with coordinator disks,
where you can solve the problem running the vxfenclearpre command. A similar
solution is required in server-based fencing using the cpsadm command.
See “Clearing preexisting split-brain condition” on page 706.
Scenario Solution
2 Clear the keys on the coordinator disks as well as the data disks in all shared disk
groups using the vxfenclearpre command. The command removes SCSI-3
registrations and reservations.
4 Restart system 2.
Scenario Solution
2 Clear the keys on the CP servers using the cpsadm command. The cpsadm command
clears a registration on a CP server:
where cp_server is the virtual IP address or virtual hostname on which the CP server
is listening, cluster_name is the VCS name for the VCS cluster, and nodeid specifies
the node id of VCS cluster node. Ensure that fencing is not already running on a node
before clearing its registration on the CP server.
After removing all stale registrations, the joiner node will be able to join the cluster.
4 Restart system 2.
# hastop -all
Make sure that the port h is closed on all the nodes. Run the following command
to verify that the port h is closed:
# gabconfig -a
# /etc/init.d/vxfen stop
4 Import the coordinator disk group. The file /etc/vxfendg includes the name of
the disk group (typically, vxfencoorddg) that contains the coordinator disks, so
use the command:
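A hedged reconstruction of the import command implied here (the options are explained just below; the disk group name is read from /etc/vxfendg):
# vxdg -tfC import `cat /etc/vxfendg`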
where:
-t specifies that the disk group is imported only until the node restarts.
-f specifies that the import is to be done forcibly, which is necessary if one or
more disks is not accessible.
-C specifies that any import locks are removed.
5 To remove disks from the disk group, use the VxVM disk administrator utility,
vxdiskadm.
You may also destroy the existing coordinator disk group. For example:
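A hedged sketch, assuming the coordinator disk group uses the typical name vxfencoorddg:
# vxdg destroy vxfencoorddg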
■ Verify whether the coordinator attribute is set to on.
6 Add the new disk to the node and initialize it as a VxVM disk.
Then, add the new disk to the vxfencoorddg disk group:
■ If you destroyed the disk group in step 5, then create the disk group again
and add the new disk to it.
See the Symantec Cluster Server Installation Guide for detailed instructions.
■ If the diskgroup already exists, then add the new disk to it.
7 Test the recreated disk group for SCSI-3 persistent reservations compliance.
See “Testing the coordinator disk group using the -c option of vxfentsthdw”
on page 367.
8 After replacing disks in a coordinator disk group, deport the disk group:
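A hedged sketch of the deport command (assuming the disk group name is again taken from /etc/vxfendg):
# vxdg deport `cat /etc/vxfendg`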
# /etc/init.d/vxfen start
10 Verify that the I/O fencing module has started and is enabled.
# gabconfig -a
Make sure that port b membership exists in the output for all nodes in the
cluster.
# vxfenadm -d
Make sure that I/O fencing mode is not disabled in the output.
11 If necessary, restart VCS on each node:
# hastart
The vxfenswap utility exits if rcp or scp commands are not functional
The vxfenswap utility displays an error message if rcp or scp commands are not
functional.
To recover the vxfenswap utility fault
◆ Verify whether rcp or scp functions properly.
Make sure that you do not use echo or cat to print messages in the .bashrc
file for the nodes.
If the vxfenswap operation is unsuccessful, use the vxfenswap -a cancel
command if required to roll back any changes that the utility made.
See “About the vxfenswap utility” on page 381.
Troubleshooting CP server
All CP server operations and messages are logged in the /var/VRTScps/log directory
in a detailed and easy to read format. The entries are sorted by date and time. The
logs can be used for troubleshooting purposes or to review for any possible security
issue on the system that hosts the CP server.
The following files contain logs and text files that may be useful in understanding
and troubleshooting a CP server:
■ /var/VRTScps/log/cpserver_[ABC].log
■ /var/VRTSvcs/log/vcsauthserver.log (Security related)
■ If the vxcpserv process fails on the CP server, then review the following
diagnostic files:
■ /var/VRTScps/diag/FFDC_CPS_pid_vxcpserv.log
■ /var/VRTScps/diag/stack_pid_vxcpserv.txt
Note: If the vxcpserv process fails on the CP server, these files are present in
addition to a core file. VCS restarts vxcpserv process automatically in such
situations.
cpsadm command on the VCS cluster gives connection error
If you receive a connection error message after issuing the cpsadm command on the VCS cluster, perform the following actions:
■ Ensure that the CP server is reachable from all the VCS cluster nodes.
■ Check the /etc/vxfenmode file and ensure that the VCS cluster nodes use the correct
CP server virtual IP or virtual hostname and the correct port number.
■ For HTTPS communication, ensure that the virtual IP and ports listed for the server can
listen to HTTPS requests.
Authorization failure
Authorization failure occurs when the nodes on the client clusters or the users are not added
in the CP server configuration. Therefore, fencing on the VCS cluster (client cluster) node
is not allowed to access the CP server and register itself on the CP server. Fencing fails to
come up if it fails to register with a majority of the coordination points.
To resolve this issue, add the client cluster node and user in the CP server configuration
and restart fencing.
Table 22-4 Fencing startup issues on VCS cluster (client cluster) nodes
(continued)
Authentication failure
If you had configured secure communication between the CP server and the VCS cluster
(client cluster) nodes, authentication failure can occur due to the following causes:
■ The client cluster requires its own private key, a signed certificate, and a Certification
Authority's (CA) certificate to establish secure communication with the CP server. If any
of the files are missing or corrupt, communication fails.
■ If the client cluster certificate does not correspond to the client's private key,
communication fails.
■ If the CP server and client cluster do not have a common CA in their certificate chain of
trust, then communication fails.
See “About secure communication between the VCS cluster and CP server” on page 336.
Thus, during vxfenswap, when the vxfenmode file is being changed by the user,
the Coordination Point agent does not move to FAULTED state but continues
monitoring the old set of coordination points.
As long as the changes to vxfenmode file are not committed or the new set of
coordination points are not reflected in vxfenconfig -l output, the Coordination
Point agent continues monitoring the old set of coordination points it read from
vxfenconfig -l output in every monitor cycle.
The status of the Coordination Point agent (either ONLINE or FAULTED) depends
upon the accessibility of the coordination points, the registrations on these
coordination points, and the fault tolerance value.
When the changes to vxfenmode file are committed and reflected in the vxfenconfig
-l output, then the Coordination Point agent reads the new set of coordination
points and proceeds to monitor them in its new monitor cycle.
Troubleshooting notification
Occasionally you may encounter problems when using VCS notification. This section
cites the most common problems and the recommended actions. Bold text provides
a description of the problem.
Disaster declaration
When a cluster in a global cluster transitions to the FAULTED state because it can
no longer be contacted, failover executions depend on whether the cause was due
to a split-brain, temporary outage, or a permanent disaster at the remote cluster.
If you choose to take action on the failure of a cluster in a global cluster, VCS
prompts you to declare the type of failure.
■ Disaster, implying permanent loss of the primary data center
■ Outage, implying the primary may return to its current form in some time
■ Disconnect, implying a split-brain condition; both clusters are up, but the link
between them is broken
■ Replica, implying that data on the takeover target has been made consistent
from a backup source and that the RVGPrimary can initiate a takeover when
the service group is brought online. This option applies to VVR environments
only.
You can select the groups to be failed over to the local cluster, in which case VCS
brings the selected groups online on a node based on the group's FailOverPolicy
attribute. It also marks the groups as being offline in the other cluster. If you do not
select any service groups to fail over, VCS takes no action except implicitly marking
the service groups as offline on the downed cluster.
VCS alerts
VCS alerts are identified by the alert ID, which consists of the following
elements:
■ alert_type—The type of the alert
■ object—The name of the VCS object for which this alert was generated. This
could be a cluster or a service group.
Alerts are generated in the following format:
alert_type-cluster-system-object
For example:
GNOFAILA-Cluster1-oracle_grp
This is an alert of type GNOFAILA generated on cluster Cluster1 for the service
group oracle_grp.
Types of alerts
VCS generates the following types of alerts.
■ CFAULT—Indicates that a cluster has faulted
■ GNOFAILA—Indicates that a global group is unable to fail over within the cluster
where it was online. This alert is displayed if the ClusterFailOverPolicy attribute
is set to Manual and the wide-area connector (wac) is properly configured and
running at the time of the fault.
■ GNOFAIL—Indicates that a global group is unable to fail over to any system
within the cluster or in a remote cluster.
Some reasons why a global group may not be able to fail over to a remote
cluster:
■ The ClusterFailOverPolicy is set to either Auto or Connected and VCS is
unable to determine a valid remote cluster to which to automatically fail the
group over.
■ The ClusterFailOverPolicy attribute is set to Connected and the cluster in
which the group has faulted cannot communicate with one or more remote
clusters in the group's ClusterList.
■ The wide-area connector (wac) is not online or is incorrectly configured in
the cluster in which the group has faulted.
Managing alerts
Alerts require user intervention. You can respond to an alert in the following ways:
■ If the reason for the alert can be ignored, use the Alerts dialog box in the Java
console or the haalert command to delete the alert. You must provide a
comment as to why you are deleting the alert; VCS logs the comment to engine
log.
■ Take an action on administrative alerts that have actions associated with them.
■ VCS deletes or negates some alerts when a negating event for the alert occurs.
An administrative alert will continue to live if none of the above actions are performed
and the VCS engine (HAD) is running on at least one node in the cluster. If HAD
is not running on any node in the cluster, the administrative alert is lost.
Negating events
VCS deletes a CFAULT alert when the faulted cluster goes back to the running
state.
VCS deletes the GNOFAILA and GNOFAIL alerts in response to the following
events:
■ The faulted group's state changes from FAULTED to ONLINE.
■ The group's fault is cleared.
■ The group is deleted from the cluster where the alert was generated.
Recommended Action: Verify the state of the service group in each cluster before
making the service group global.
Troubleshooting licensing
This section cites problems you may encounter with VCS licensing. It provides
instructions on how to validate license keys and lists the error messages associated
with licensing.
Platform = Linux
Version = 6.1
Tier = Unused
Reserved = 0
Mode = VCS
Global Cluster Option = Enabled
■ Administration matrices
Administration matrices
Review the matrices in the following topics to determine which command options
can be executed within a specific user role. Checkmarks denote that the command
and option can be executed. A dash indicates they cannot.
Start agent – – – ✓ ✓
Stop agent – – – ✓ ✓
Display info ✓ ✓ ✓ ✓ ✓
List agents ✓ ✓ ✓ ✓ ✓
Add – – – – ✓
Change default value – – – – ✓
Delete – – – – ✓
Display ✓ ✓ ✓ ✓ ✓
Display ✓ ✓ ✓ ✓ ✓
Modify – – – – ✓
Add – – – – ✓
Delete – – – – ✓
Declare – – – ✓ ✓
View state or status ✓ ✓ ✓ ✓ ✓
Update license – – – – ✓
Make configuration read-write – – ✓ – ✓
Save configuration – – ✓ – ✓
Make configuration read-only – – ✓ – ✓
Clear – ✓ ✓ ✓ ✓
Bring online – ✓ ✓ ✓ ✓
Take offline – ✓ ✓ ✓ ✓
View state ✓ ✓ ✓ ✓ ✓
Switch – ✓ ✓ ✓ ✓
Freeze/unfreeze – ✓ ✓ ✓ ✓
Freeze/unfreeze persistent – – ✓ – ✓
Enable – – ✓ – ✓
Disable – – ✓ – ✓
Modify – – ✓ – ✓
Display ✓ ✓ ✓ ✓ ✓
View dependencies ✓ ✓ ✓ ✓ ✓
View resources ✓ ✓ ✓ ✓ ✓
List ✓ ✓ ✓ ✓ ✓
Enable resources – – ✓ – ✓
Disable resources – – ✓ – ✓
Flush – ✓ ✓ ✓ ✓
Autoenable – ✓ ✓ ✓ ✓
Ignore – ✓ ✓ ✓ ✓
Add – – – – ✓
Delete – – – – ✓
Make local – – – – ✓
Make global – – – – ✓
Display ✓ ✓ ✓ ✓ ✓
View state ✓ ✓ ✓ ✓ ✓
List ✓ ✓ ✓ ✓ ✓
Add messages to log file – – ✓ – ✓
Display ✓ ✓ ✓ ✓ ✓
Add – – ✓ – ✓
Delete – – ✓ – ✓
Make attribute local – – ✓ – ✓
Make attribute global – – ✓ – ✓
Clear – ✓ ✓ ✓ ✓
Bring online – ✓ ✓ ✓ ✓
Take offline – ✓ ✓ ✓ ✓
Modify – – ✓ – ✓
View state ✓ ✓ ✓ ✓ ✓
Display ✓ ✓ ✓ ✓ ✓
View dependencies ✓ ✓ ✓ ✓ ✓
List, Value ✓ ✓ ✓ ✓ ✓
Probe – ✓ ✓ ✓ ✓
Override attribute – – ✓ – ✓
Remove overrides – – ✓ – ✓
Run an action – ✓ ✓ ✓ ✓
Refresh info – ✓ ✓ ✓ ✓
Flush info – ✓ ✓ ✓ ✓
Add – – – – ✓
Delete – – – – ✓
Freeze and unfreeze – – – ✓ ✓
Freeze and unfreeze persistent – – – – ✓
Freeze and evacuate – – – – ✓
Display ✓ ✓ ✓ ✓ ✓
Start forcibly – – – – ✓
Modify – – – – ✓
View state ✓ ✓ ✓ ✓ ✓
List ✓ ✓ ✓ ✓ ✓
Update license – – – – ✓
Add – – – – ✓
Delete – – – – ✓
Display ✓ ✓ ✓ ✓ ✓
View resources ✓ ✓ ✓ ✓ ✓
Modify – – – – ✓
List ✓ ✓ ✓ ✓ ✓
Add – – – – ✓
Delete – – – – ✓
Update ✓ ✓ ✓ ✓ ✓ (for three of the five roles, only if the configuration is read/write)
Display ✓ ✓ ✓ ✓ ✓
List ✓ ✓ ✓ ✓ ✓
Modify privileges – – ✓ – ✓
Appendix B
VCS commands: Quick
reference
This appendix includes the following topics:
Table B-2 lists the VCS commands for service group, resource, and site operations.
Table B-2 VCS commands for service group, resource, and site operations
Table B-3 lists the VCS commands for status and verification.
lltconfig -a list
lltstat
lltstat -nvv
lltconfig -U
gabconfig -U
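For a quick health check of the cluster communication stack and the VCS engine, you might combine commands such as the following (a sketch; output formats vary by release):
lltstat -nvv        # verbose LLT link status for all configured nodes
gabconfig -a        # GAB port memberships
hastatus -summary   # engine, system, and service group status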
■ System states
Remote cluster states
State Definition
INIT The initial state of the cluster. This is the default state.
BUILD The local cluster is receiving the initial snapshot from the remote cluster.
RUNNING Indicates the remote cluster is running and connected to the local
cluster.
LOST_HB The connector process on the local cluster is not receiving heartbeats from the remote cluster.
LOST_CONN The connector process on the local cluster has lost the TCP/IP
connection to the remote cluster.
UNKNOWN The connector process on the local cluster determines the remote
cluster is down, but another remote cluster sends a response indicating
otherwise.
INQUIRY The connector process on the local cluster is querying other clusters
on which heartbeats were lost.
TRANSITIONING The connector process on the remote cluster is failing over to another
node in the cluster.
System states
Whenever the VCS engine is running on a system, it is in one of the states described
in the table below. States indicate a system’s current mode of operation. When the
engine is started on a new system, it identifies the other systems available in the
cluster and their states of operation. If a cluster system is in the state of RUNNING,
the new system retrieves the configuration information from that system. Changes
made to the configuration while it is being retrieved are applied to the new system
before it enters the RUNNING state.
If no other systems are up and in the state of RUNNING or ADMIN_WAIT, and the
new system has a configuration that is not invalid, the engine transitions to the state
LOCAL_BUILD, and builds the configuration from disk. If the configuration is invalid,
the system transitions to the state of STALE_ADMIN_WAIT.
See “Examples of system state transitions” on page 739.
Table C-2 provides a list of VCS system states and their descriptions.
State Definition
ADMIN_WAIT The running configuration was lost. A system transitions into this state
for the following reasons:
CURRENT_DISCOVER_WAIT The system has joined the cluster and its configuration file is valid. The system is waiting for information from other systems before it determines how to transition to another state.
CURRENT_PEER_WAIT The system has a valid configuration file and another system is doing a build from disk (LOCAL_BUILD). When its peer finishes the build, this system transitions to the state REMOTE_BUILD.
EXITING_FORCIBLY An hastop -force command has forced the system to leave the
cluster.
INITING The system has joined the cluster. This is the initial state for all
systems.
LEAVING The system is leaving the cluster gracefully. When the agents have
been stopped, and when the current configuration is written to disk,
the system transitions to EXITING.
LOCAL_BUILD The system is building the running configuration from the disk
configuration.
STALE_ADMIN_WAIT The system has an invalid configuration and there is no other system
in the state of RUNNING from which to retrieve a configuration. If a
system with a valid configuration is started, that system enters the
LOCAL_BUILD state.
STALE_DISCOVER_WAIT The system has joined the cluster with an invalid configuration file. It is waiting for information from any of its peers before determining how to transition to another state.
STALE_PEER_WAIT The system has an invalid configuration file and another system is
doing a build from disk (LOCAL_BUILD). When its peer finishes the
build, this system transitions to the state REMOTE_BUILD.
UNKNOWN The system has not joined the cluster because it does not have a
system entry in the configuration.
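To see which of these states each system is currently in, you can run, for example:
hasys -state
Adding a system name as an argument typically limits the output to that system.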
■ Resource attributes
■ System attributes
■ Cluster attributes
■ Site attributes
The values of attributes labelled system use only are set by VCS and are read-only.
They contain important information about the state of the cluster.
The values labeled agent-defined are set by the corresponding agent and are also
read-only.
Attribute values are case-sensitive.
See “About VCS attributes” on page 70.
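For example, to check how an attribute is classified and what value it currently holds, you can display it from the command line (the resource name webip is illustrative):
hares -display webip
hares -value webip State
Attributes marked system use only or agent-defined appear in this output but, being read-only, cannot be changed with hares -modify.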
Resource attributes
Table D-1 lists resource attributes.
Resource attributes Description
ArgListValues List of arguments passed to the resource’s agent on each system. This attribute is resource-specific and system-specific, meaning that the list of values passed to the agent depends on the system and resource for which they are intended.
(agent-defined)
The number of values in the ArgListValues should not exceed 425. This requirement becomes a consideration if an attribute in the ArgList is a keylist, a vector, or an association. Such non-scalar attributes can typically take any number of values, and when they appear in the ArgList, the agent has to compute ArgListValues from the value of those attributes. If the non-scalar attribute contains many values, it increases the size of ArgListValues. Hence, when developing an agent, keep this consideration in mind when adding a non-scalar attribute to the ArgList. Notify users of the agent that the attribute should not be configured to be so large that it pushes the number of values in the ArgListValues attribute above 425.
AutoStart Indicates if a resource should be brought online as part of a service group online, or if it needs the hares -online command.
(user-defined)
For example, you have two resources, R1 and R2, both in group G1. R1 has an AutoStart value of 0 and R2 has an AutoStart value of 1. When G1 is brought online, VCS brings only R2 to an ONLINE state. The group state is ONLINE and not PARTIAL; R1 remains OFFLINE.
Resources with a value of zero for AutoStart contribute to the group's state only in their ONLINE state and not in their OFFLINE state.
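A minimal sketch of changing the value for the R1 resource from the example above (the configuration must be writable first):
haconf -makerw
hares -modify R1 AutoStart 0
haconf -dump -makero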
ComputeStats Indicates to the agent framework whether or not to calculate the resource’s monitor statistics.
ConfidenceLevel Indicates the level of confidence in an online resource. Values range from 0–100. Note that some VCS agents may not take advantage of this attribute and may always set it to 0. Set the level to 100 if the attribute is not used.
(agent-defined)
Critical Indicates whether a fault of this resource should trigger a failover of the entire group or not. If Critical is 0 and no parent above has Critical = 1, then the resource fault will not cause group failover.
(user-defined)
Enabled
(user-defined)
If a resource is created dynamically while VCS is running, you must enable the resource before VCS monitors it. For more information on how to add or enable resources, see the chapters on administering VCS from the command line and graphical user interfaces.
Flags Provides additional information for the state of a resource. Primarily this attribute raises flags pertaining to the resource. Values:
(system use only)
ADMIN WAIT—The running configuration of a system is lost.
RESTARTING—The agent is attempting to restart the resource because the resource was unexpectedly detected as offline in the latest monitor cycle. See the RestartLimit attribute for more information.
STATE UNKNOWN—The latest monitor call by the agent could not determine if the resource was online or offline.
MONITOR TIMEDOUT—The latest monitor call by the agent was terminated because it exceeded the maximum time specified by the static attribute MonitorTimeout.
UNABLE TO OFFLINE—The agent attempted to offline the resource but the resource did not go offline. This flag is also set when a resource faults and the clean function completes successfully, but the subsequent monitor hangs or is unable to determine resource status.
Group String name of the service group to which the resource belongs.
IState The internal state of a resource. In addition to the State attribute, this attribute shows to which
state the resource is transitioning. Values:
(system use only)
NOT WAITING—Resource is not in transition.
WAITING TO GO ONLINE—Agent notified to bring the resource online but procedure not yet
complete.
WAITING TO GO OFFLINE—Agent notified to take the resource offline but procedure not yet
complete.
WAITING TO GO OFFLINE (path) - Agent notified to take the resource offline but procedure
not yet complete. When the procedure completes, the resource’s children which are a member
of the path in the dependency tree will also be offline.
WAITING FOR PARENT OFFLINE – Resource waiting for parent resource to go offline. When
parent is offline the resource is brought offline.
Note: Although this attribute accepts integer types, the command line indicates the text
representations.
IStates on the source system for migration operations:
■ WAITING FOR OFFLINE VALIDATION (migrate) – This state is applicable for resource on
source system and indicates that migration operation has been accepted and VCS is validating
whether migration is possible.
■ WAITING FOR MIGRATION OFFLINE – This state is applicable for resource on source
system and indicates that migration operation has passed the prerequisite checks and
validations on the source system.
■ WAITING TO COMPLETE MIGRATION – This state is applicable for resource on source
system and indicates that migration process is complete on the source system and the VCS
engine is waiting for the resource to come online on target system.
IStates on the target system for migration operations:
■ WAITING FOR ONLINE VALIDATION (migrate) – This state is applicable for resource on
target system and indicates that migration operations are accepted and VCS is validating
whether migration is possible.
■ WAITING FOR MIGRATION ONLINE – This state is applicable for resource on target system
and indicates that migration operation has passed the prerequisite checks and validations
on the source system.
■ WAITING TO COMPLETE MIGRATION (online) – This state is applicable for resource on
target system and indicates that migration process is complete on the source system and
the VCS engine is waiting for the resource to come online on target system.
LastOnline Indicates the system name on which the resource was last online. This attribute is set by VCS.
MonitorMethod Specifies the monitoring method that the agent uses to monitor the resource: Traditional (poll-based monitoring) or IMF (intelligent resource monitoring).
Default: Traditional
MonitorOnly Indicates if the resource can be brought online or taken offline. If set to 0, resource can be
brought online or taken offline. If set to 1, resource can only be monitored.
(system use only)
Note: This attribute can only be affected by the command hagrp -freeze.
MonitorTimeStats Valid keys are Average and TS. Average is the average time taken by the monitor function over the last Frequency number of monitor cycles. TS is the timestamp indicating when the engine updated the resource’s Average value.
(system use only)
Path Set to 1 to identify a resource as a member of a path in the dependency tree to be taken offline
on a specific system after a resource faults.
(system use only)
■ Type and dimension: boolean-scalar
■ Default: 0
Probed Indicates whether the state of the resource has been determined by the agent by running the
monitor function.
(system use only)
■ Type and dimension: boolean-scalar
■ Default: 0
ResourceInfo This attribute has three predefined keys: State: values are Valid, Invalid, or Stale; Msg: output of the info agent function of the resource on stdout by the agent framework; TS: timestamp indicating when the ResourceInfo attribute was updated by the agent framework.
(system use only)
ResourceOwner This attribute is used for VCS email notification and logging. VCS sends email notification to the
person that is designated in this attribute when events occur that are related to the resource.
(user-defined)
Note that while VCS logs most events, not all events trigger notifications. VCS also logs the
owner name when certain events occur.
Make sure to set the severity level at which you want notifications to be sent to ResourceOwner
or to at least one recipient defined in the SmtpRecipients attribute of the NotifierMngr agent.
ResourceRecipients This attribute is used for VCS email notification. VCS sends email notification to persons designated in this attribute when events related to the resource occur and when the event's severity level is equal to or greater than the level specified in the attribute.
(user-defined)
Make sure to set the severity level at which you want notifications to be sent to ResourceRecipients or to at least one recipient defined in the SmtpRecipients attribute of the NotifierMngr agent.
Signaled Indicates whether a resource has been traversed. Used when bringing a service group online
or taking it offline.
(system use only)
■ Type and dimension: integer-association
■ Default: Not applicable.
Start Indicates whether a resource was started (the process of bringing it online was initiated) on a
system.
(system use only)
■ Type and dimension: integer-scalar
■ Default: 0
State Resource state displays the state of the resource and the flags associated with the resource. (Flags are also captured by the Flags attribute.) This attribute and Flags present a comprehensive view of the resource’s current state. Values:
(system use only)
ONLINE
OFFLINE
FAULTED
OFFLINE|MONITOR TIMEDOUT
OFFLINE|STATE UNKNOWN
OFFLINE|ADMIN WAIT
ONLINE|RESTARTING
ONLINE|MONITOR TIMEDOUT
ONLINE|STATE UNKNOWN
ONLINE|UNABLE TO OFFLINE
ONLINE|ADMIN WAIT
FAULTED|MONITOR TIMEDOUT
FAULTED|STATE UNKNOWN
Note: Although this attribute accepts integer types, the command line indicates the text
representations.
Default: 0
TriggerPath
(user-defined)
If a trigger is enabled but the trigger path at the service group level and at the resource level is "" (default), VCS invokes the trigger from the $VCS_HOME/bin/triggers directory.
The TriggerPath value is case-sensitive. VCS does not trim the leading spaces or trailing spaces
in the Trigger Path value. If the path contains leading spaces or trailing spaces, the trigger might
fail to get executed. The path that you specify is relative to $VCS_HOME and the trigger path
defined for the service group.
ServiceGroupTriggerPath/Resource/Trigger
If TriggerPath for service group sg1 is mytriggers/sg1 and TriggerPath for resource res1 is "",
you must store the trigger script in the $VCS_HOME/mytriggers/sg1/res1 directory. For example,
store the resstatechange trigger script in the $VCS_HOME/mytriggers/sg1/res1 directory. You can thus manage triggers for all resources of a service group more easily.
If TriggerPath for resource res1 is mytriggers/sg1/vip1 in the preceding example, you must store
the trigger script in the $VCS_HOME/mytriggers/sg1/vip1 directory. For example, store the
resstatechange trigger script in the $VCS_HOME/mytriggers/sg1/vip1 directory.
Modification of TriggerPath value at the resource level does not change the TriggerPath value
at the service group level. Likewise, modification of TriggerPath value at the service group level
does not change the TriggerPath value at the resource level.
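For example, continuing the scenario above, you could set the resource-level trigger path as follows (a sketch; the configuration must be writable):
hares -modify res1 TriggerPath "mytriggers/sg1/vip1"
The resstatechange trigger script for res1 would then be stored in the $VCS_HOME/mytriggers/sg1/vip1 directory.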
TriggerResRestart Determines whether or not to invoke the resrestart trigger if the resource restarts.
If this attribute is enabled at the group level, the resrestart trigger is invoked irrespective of the
value of this attribute at the resource level.
TriggerResStateChange Determines whether or not to invoke the resstatechange trigger if the resource changes state.
(user-defined)
See “About the resstatechange event trigger” on page 530.
If this attribute is enabled at the group level, then the resstatechange trigger is invoked irrespective of the value of this attribute at the resource level.
TriggersEnabled
(user-defined)
Triggers are disabled by default. You can enable specific triggers on all nodes or only on selected nodes. Valid values are RESFAULT, RESNOTOFF, RESSTATECHANGE, RESRESTART, and RESADMINWAIT.
To enable triggers on a specific node, add trigger keys in the following format:
TriggersEnabled@node1 = {RESADMINWAIT, RESNOTOFF}
The resadminwait trigger and resnotoff trigger are enabled on node1.
To enable triggers on all nodes in the cluster, add trigger keys in the following format:
TriggersEnabled = {RESADMINWAIT, RESNOTOFF}
The resadminwait trigger and resnotoff trigger are enabled on all nodes.
■ Type and dimension: string-keylist
■ Default: {}
Resource type attributes
For information about the AdvDbg attribute, see the Symantec Cluster Server Agent
Developer's Guide.
AgentClass Indicates the scheduling class for the VCS agent process.
(user-defined) Use only one of the following sets of attributes to configure scheduling class and priority
for VCS:
AgentDirectory Complete path of the directory in which the agent binary and scripts are located.
(user-defined) Agents look for binaries and scripts in the following directories:
If none of the above directories exist, the agent does not start.
Use this attribute in conjunction with the AgentFile attribute to specify a different location
or different binary for the agent.
AgentFailedOn A list of systems on which the agent for the resource type has failed.
AgentFile Complete name and path of the binary for an agent. If you do not specify a value for this attribute, VCS uses the agent binary at the path defined by the AgentDirectory attribute.
(user-defined)
AgentPriority Indicates the priority in which the agent process runs.
(user-defined)
Use only one of the following sets of attributes to configure scheduling class and priority for VCS:
Default: 0
AgentReplyTimeout The number of seconds the engine waits to receive a heartbeat from the agent before
restarting the agent.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 130 seconds
AgentStartTimeout The number of seconds after starting the agent that the engine waits for the initial agent
"handshake" before restarting the agent.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 60 seconds
AlertOnMonitorTimeouts When a monitor times out as many times as the value or a multiple of the value specified by this attribute, VCS sends an SNMP notification to the user. If this attribute is set to a value, say N, then after sending the notification at the first monitor timeout, VCS also sends an SNMP notification at each N-consecutive monitor timeout, including the first monitor timeout for the second-time notification.
(user-defined)
Note: This attribute can be overridden.
ArgList An ordered list of attributes whose values are passed to the open, close, online, offline,
monitor, clean, info, and action functions.
(user-defined)
■ Type and dimension: string-vector
■ Default: Not applicable.
AttrChangedTimeout Maximum time (in seconds) within which the attr_changed function must complete or be terminated.
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 60 seconds
CleanRetryLimit Number of times to retry the clean function before moving a resource to ADMIN_WAIT
state. If set to 0, clean is re-tried indefinitely.
(user-defined)
The valid values of this attribute are in the range of 0-1024.
CleanTimeout Maximum time (in seconds) within which the clean function must complete or else be terminated.
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 60 seconds
CloseTimeout Maximum time (in seconds) within which the close function must complete or else be terminated.
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 60 seconds
ConfInterval When a resource has remained online for the specified time (in seconds), previous faults and restart attempts are ignored by the agent. (See the ToleranceLimit and RestartLimit attributes for details.)
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 600 seconds
EPClass Enables you to control the scheduling class for the agent functions (entry points) other
than the online entry point whether the entry point is in C or scripts.
(user-defined)
The following values are valid for this attribute:
■ RT (Real Time)
■ TS (Time Sharing)
■ -1—indicates that VCS does not use this attribute to control the scheduling class
of entry points.
Use only one of the following sets of attributes to configure scheduling class and priority
for VCS:
EPPriority Enables you to control the scheduling priority for the agent functions (entry points) other than the online entry point. The attribute controls the agent function priority whether the entry point is in C or scripts.
(user-defined)
The following values are valid for this attribute:
■ 0—indicates the default priority value for the configured scheduling class as given
by the EPClass attribute for the operating system.
■ Greater than 0—indicates a value greater than the default priority for the operating
system. Symantec recommends a value of greater than 0 for this attribute. A system
that has a higher load requires a greater value.
■ -1—indicates that VCS does not use this attribute to control the scheduling priority
of entry points.
Use only one of the following sets of attributes to configure scheduling class and priority
for VCS:
ExternalStateChange Defines how VCS handles service group state when resources are intentionally brought online or taken offline outside of VCS control.
(user-defined)
Note: This attribute can be overridden.
The attribute can take the following values:
OnlineGroup: If the configured application is started outside of VCS control, VCS brings the corresponding service group online.
FaultOnMonitorTimeouts When a monitor times out as many times as the value specified, the corresponding resource is brought down by calling the clean function. The resource is then marked FAULTED, or it is restarted, depending on the value set in the RestartLimit attribute.
(user-defined)
Note: This attribute can be overridden.
When FaultOnMonitorTimeouts is set to 0, monitor failures are not considered indicative of a resource fault. A low value may lead to spurious resource faults, especially on heavily loaded systems.
FaultPropagation Specifies if VCS should propagate the fault up to parent resources and take the entire service group offline when a resource faults.
(user-defined)
Note: This attribute can be overridden.
The value 1 indicates that when a resource faults, VCS fails over the service group if the group’s AutoFailOver attribute is set to 1. The value 0 indicates that when a resource faults, VCS does not take other resources offline, regardless of the value of the Critical attribute. The service group does not fail over on resource fault.
FireDrill Specifies whether or not fire drill is enabled for resource type. If set to 1, fire drill is
enabled. If set to 0, it is disabled.
(user-defined)
■ Type and dimension: boolean-scalar
■ Default: 0
IMF Determines whether the IMF-aware agent must perform intelligent resource monitoring. You can also override the value of this attribute at resource-level.
(user-defined)
Note: This attribute can be overridden.
Type and dimension: integer-association
This attribute includes the following keys:
■ Mode
Define this attribute to enable or disable intelligent resource monitoring.
Valid values are as follows:
■ 0—Does not perform intelligent resource monitoring
■ 1—Performs intelligent resource monitoring for offline resources and performs
poll-based monitoring for online resources
■ 2—Performs intelligent resource monitoring for online resources and performs
poll-based monitoring for offline resources
■ 3—Performs intelligent resource monitoring for both online and for offline
resources
■ MonitorFreq
This key value specifies the frequency at which the agent invokes the monitor agent
function. The value of this key is an integer.
You can set this key to a non-zero value in cases where the agent needs to perform poll-based resource monitoring in addition to intelligent resource monitoring. See the Symantec Cluster Server Bundled Agents Reference Guide for agent-specific recommendations.
After the resource registers with the IMF notification module, the agent calls the
monitor agent function as follows:
■ After every (MonitorFreq x MonitorInterval) number of seconds for online
resources
■ After every (MonitorFreq x OfflineMonitorInterval) number of seconds for offline
resources
■ RegisterRetryLimit
If you enable IMF, the agent invokes the imf_register agent function to register the
resource with the IMF notification module. The value of the RegisterRetryLimit key
determines the number of times the agent must retry registration for a resource. If
the agent cannot register the resource within the limit that is specified, then intelligent
monitoring is disabled until the resource state changes or the value of the Mode
key changes.
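For example, to enable intelligent monitoring of both online and offline resources for a resource type, you might set the IMF keys as follows (a sketch with illustrative values; Mount is used only as an example type, and the -update syntax should be verified against the hatype manual page):
haconf -makerw
hatype -modify Mount IMF -update Mode 3 MonitorFreq 5 RegisterRetryLimit 3
haconf -dump -makero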
IMFRegList An ordered list of attributes whose values are registered with the IMF notification
module.
InfoInterval Duration (in seconds) after which the info function is invoked by the agent framework
for ONLINE resources of the particular resource type.
(user-defined)
If set to 0, the agent framework does not periodically invoke the info function. To
manually invoke the info function, use the command hares -refreshinfo. If the value
you designate is 30, for example, the function is invoked every 30 seconds for all
ONLINE resources of the particular resource type.
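For example, to refresh the cached information for a resource on one system and then read it back (a sketch; the resource and system names are illustrative, and the -sys option is an assumption to verify against the hares manual page):
hares -refreshinfo oradata_mnt -sys sysA
hares -value oradata_mnt ResourceInfo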
IntentionalOffline Defines how VCS reacts when a configured application is intentionally stopped outside
of VCS control.
(user-defined)
Add this attribute for agents that support detection of an intentional offline outside of
VCS control. Note that the intentional offline feature is available for agents registered
as V51 or later.
The value 0 instructs the agent to register a fault and initiate the failover of a service
group when the supported resource is taken offline outside of VCS control.
The value 1 instructs VCS to take the resource offline when the corresponding
application is stopped outside of VCS control.
InfoTimeout Timeout value for info function. If function does not complete by the designated time,
the agent framework cancels the function’s thread.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 30 seconds
LevelTwoMonitorFreq Specifies the frequency at which the agent for this resource type must perform
second-level or detailed monitoring.
Default: 0
LogDbg Indicates the debug severities enabled for the resource type or agent framework. Debug severities used by the agent functions are in the range of DBG_1–DBG_21. The debug messages from the agent framework are logged with the severities DBG_AGINFO, DBG_AGDEBUG and DBG_AGTRACE, representing the least to most verbose.
(user-defined)
LogFileSize Specifies the size (in bytes) of the agent log file. Minimum value is 64 KB. Maximum
value is 134217728 bytes (128MB).
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 33554432 (32MB)
MigrateWaitLimit Number of monitor intervals to wait for a resource to migrate after the migrating procedure is complete. MigrateWaitLimit is applicable for the source as well as the target node, because the migrate operation takes the resource offline on the source node and brings it online on the target node. You can also define MigrateWaitLimit as the number of monitor intervals to wait for the resource to go offline on the source node after the migrate procedure completes, and the number of monitor intervals to wait for the resource to come online on the target node after the resource is offline on the source node.
(user-defined)
Probes fired manually are counted when MigrateWaitLimit is set and the resource is waiting to migrate. For example, if the MigrateWaitLimit of a resource is set to 5 and the MonitorInterval is set to 60 (seconds), the resource waits for a maximum of five monitor intervals (that is, 5 x 60 seconds), and if all five monitors within MigrateWaitLimit report the resource as online on the source node, VCS sets the ADMIN_WAIT flag. If you run another probe, the resource waits for four monitor intervals (that is, 4 x 60 seconds), and if the fourth monitor does not report the state as offline on the source node, VCS sets the ADMIN_WAIT flag. This procedure is repeated for five complete cycles. Similarly, if the resource does not come online on the target node within MigrateWaitLimit, VCS sets the ADMIN_WAIT flag.
MigrateTimeout Maximum time (in seconds) within which the migrate procedure must complete or else
be terminated.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 600 seconds
MonitorInterval Duration (in seconds) between two consecutive monitor calls for an ONLINE or transitioning resource.
(user-defined)
Note: This attribute can be overridden.
Note: The value of this attribute for the MultiNICB type must be less than its value for the IPMultiNICB type. See the Symantec Cluster Server Bundled Agents Reference Guide for more information.
A low value may impact performance if many resources of the same type exist. A high value may delay detection of a faulted resource.
MonitorStatsParam Stores the required parameter values for calculating monitor time statistics.
Frequency: The number of monitor cycles after which the average monitor cycle time should be computed and sent to the engine. If configured, the value for this attribute must be between 1 and 30. The value 0 indicates that the monitor cycle time should not be computed. Default=0.
ExpectedValue: The expected monitor time in milliseconds for all resources of this type. Default=100.
MonitorTimeout Maximum time (in seconds) within which the monitor function must complete or else be terminated.
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 60 seconds
NumThreads Number of threads used within the agent process for managing resources. This number does not include threads used for other internal purposes.
(user-defined)
If the number of resources being managed by the agent is less than or equal to the NumThreads value, only that many threads are created in the agent. Addition of more resources does not create more service threads. Similarly, deletion of resources causes service threads to exit. Thus, setting NumThreads to 1 forces the agent to use a single service thread no matter what the resource count is. The agent framework limits the value of this attribute to 30.
OfflineMonitorInterval Duration (in seconds) between two consecutive monitor calls for an OFFLINE resource. If set to 0, OFFLINE resources are not monitored.
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 300 seconds
OfflineTimeout Maximum time (in seconds) within which the offline function must complete or else be terminated.
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 300 seconds
OfflineWaitLimit Number of monitor intervals to wait for the resource to go offline after completing the offline procedure. Increase the value of this attribute if the resource is likely to take a longer time to go offline.
(user-defined)
Note: This attribute can be overridden.
Probes fired manually are counted when OfflineWaitLimit is set and the resource is waiting to go offline. For example, say the OfflineWaitLimit of a resource is set to 5 and the MonitorInterval is set to 60. The resource waits for a maximum of five monitor intervals (five times 60), and if all five monitors within OfflineWaitLimit report the resource as online, it calls the clean agent function. If the user fires a probe, the resource waits for four monitor intervals (four times 60), and if the fourth monitor does not report the state as offline, it calls the clean agent function. If the user fires another probe, one more monitor cycle is consumed and the resource waits for three monitor intervals (three times 60), and if the third monitor does not report the state as offline, it calls the clean agent function.
OnlineClass Enables you to control the scheduling class for the online agent function (entry point).
This attribute controls the class whether the entry point is in C or scripts.
The following values are valid for this attribute:
■ RT (Real Time)
■ TS (Time Sharing)
■ -1—indicates that VCS does not use this attribute to control the scheduling class
of entry points.
Use only one of the following sets of attributes to configure scheduling class and priority
for VCS:
OnlinePriority Enables you to control the scheduling priority for the online agent function (entry point).
This attribute controls the priority whether the entry point is in C or scripts.
The following values are valid for this attribute:
■ 0—indicates the default priority value for the configured scheduling class as given
by the OnlineClass for the operating system.
Symantec recommends that you set the value of the OnlinePriority attribute to 0.
■ Greater than 0—indicates a value greater than the default priority for the operating
system.
■ -1—indicates that VCS does not use this attribute to control the scheduling priority
of entry points.
Use only one of the following sets of attributes to configure scheduling class and priority
for VCS:
OnlineRetryLimit Number of times to retry the online operation if the attempt to online a resource is unsuccessful. This parameter is meaningful only if the clean operation is implemented.
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 0
OnlineTimeout Maximum time (in seconds) within which the online function must complete or else be terminated.
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 300 seconds
OnlineWaitLimit Number of monitor intervals to wait for the resource to come online after completing the online procedure. Increase the value of this attribute if the resource is likely to take a longer time to come online.
(user-defined)
Note: This attribute can be overridden.
Each probe command fired from the user is considered as one monitor interval. For example, say the OnlineWaitLimit of a resource is set to 5. This means that the resource will be moved to a faulted state after five monitor intervals. If the user fires a probe, then the resource will be faulted after four monitor cycles, if the fourth monitor does not report the state as ONLINE. If the user again fires a probe, then one more monitor cycle is consumed and the resource will be faulted if the third monitor does not report the state as ONLINE.
OpenTimeout Maximum time (in seconds) within which the open function must complete or else be terminated.
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 60 seconds
Operations Indicates valid operations for resources of the resource type. Values are OnOnly (can
online only), OnOff (can online and offline), None (cannot online or offline).
(user-defined)
■ Type and dimension: string-scalar
■ Default: OnOff
RestartLimit Number of times to retry bringing a resource online when it is taken offline unexpectedly and before VCS declares it FAULTED.
(user-defined)
Note: This attribute can be overridden.
■ Type and dimension: integer-scalar
■ Default: 0
ScriptClass Indicates the scheduling class of the script processes (for example, online) created by
the agent.
(user-defined)
Use only one of the following sets of attributes to configure scheduling class and priority
for VCS:
ScriptPriority Indicates the priority of the script processes created by the agent.
(user-defined) Use only one of the following sets of attributes to configure scheduling class and priority
for VCS:
SourceFile File from which the configuration is read. Do not configure this attribute in main.cf.
(user-defined) Make sure the path exists on all nodes before running a command that configures this
attribute.
SupportedOperations Indicates the additional operations for a resource type or an agent. Only migrate
keyword is supported.
(user-defined)
■ Type and dimension: string-keylist
■ Default: {}
ToleranceLimit After a resource goes online, the number of times the monitor function should return OFFLINE before declaring the resource FAULTED.
(user-defined)
Note: This attribute can be overridden.
A large value could delay detection of a genuinely faulted resource.
■ Type and dimension: integer-scalar
■ Default: 0
TypeOwner This attribute is used for VCS notification. VCS sends notifications to persons designated in this attribute when an event occurs related to the agent's resource type. If the agent of that type faults or restarts, VCS sends notification to the TypeOwner. Note that while VCS logs most events, not all events trigger notifications.
(user-defined)
Make sure to set the severity level at which you want notifications to be sent to TypeOwner or to at least one recipient defined in the SmtpRecipients attribute of the NotifierMngr agent.
TypeRecipients This attribute is used for VCS email notification. VCS sends email notification to persons designated in this attribute when events related to the agent's resource type occur and when the event's severity level is equal to or greater than the level specified in the attribute.
(user-defined)
Make sure to set the severity level at which you want notifications to be sent to TypeRecipients or to at least one recipient defined in the SmtpRecipients attribute of the NotifierMngr agent.
Service group attributes
AdministratorGroups List of operating system user account groups that have administrative privileges on the service group.
(user-defined)
This attribute applies to clusters running in secure mode.
Authority Indicates whether or not the local cluster is allowed to bring the service group online. If set to 0, it is not; if set to 1, it is. Only one cluster can have this attribute set to 1 for a specific global group.
(user-defined)
AutoDisabled Indicates that VCS does not know the status of a service group (or of a specified system for parallel service groups). This could occur because the group is not probed (on the specified system for parallel groups) in the SystemList attribute, or because the VCS engine is not running on a node designated in the SystemList attribute, but the node is visible.
(system use only)
When VCS does not know the status of a service group on a node but you want VCS to consider the service group enabled, perform this command to change the AutoDisabled value to 0.
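In practice, the command referred to above is hagrp -autoenable; for example (group and system names are illustrative):
hagrp -autoenable websg -sys sysA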
AutoRestart
(user-defined)
■ 0—Autorestart is disabled.
■ 1—Autorestart is enabled.
■ 2—When a faulted persistent resource recovers from a fault, the
VCS engine clears the faults on all non-persistent faulted
resources on the system. It then restarts the service group.
AutoStartList List of systems on which, under specific conditions, the service group
will be started with VCS (usually at system boot). For example, if a
(user-defined)
system is a member of a failover service group’s AutoStartList
attribute, and if the service group is not already running on another
system in the cluster, the group is brought online when the system
is started.
AutoStartPolicy Sets the policy VCS uses to determine the system on which a service
group is brought online during an autostart operation if multiple
(user-defined)
systems exist.
Possible values:
1: Capacity is reserved.
To list this attribute, use the -all option with the hagrp -display
command.
ClusterFailOverPolicy Determines how a global service group behaves when a cluster faults
or when a global group faults. The attribute can take the following
(user-defined)
values:
ClusterList Specifies the list of clusters on which the service group is configured
to run.
(user-defined)
■ Type and dimension: integer-association
■ Default: {} (none)
DeferAutoStart Indicates whether HAD defers the auto-start of a global group in the
local cluster in case the global cluster is not fully connected.
(system use only)
■ Type and dimension: boolean-scalar
■ Default: Not applicable
Enabled
(user-defined)
The attribute can have global or local scope. If you define local (system-specific) scope for this attribute, VCS prevents the service group from coming online on specified systems that have a value of 0 for the attribute. You can use this attribute to prevent failovers on a system when performing maintenance on the system.
Evacuating Indicates the node ID from which the service group is being
evacuated.
(system use only)
■ Type and dimension: integer-scalar
■ Default: Not applicable
FailOverPolicy Defines the failover policy used by VCS to determine the system to
which a group fails over. It is also used to determine the system on
(user-defined)
which a service group has been brought online through manual
operation.
The policy is defined only for clusters that contain multiple systems:
FromQ Indicates the system name from which the service group is failing over. This attribute is specified when service group failover is a direct consequence of the group event, such as a resource fault within the group or a group switch.
(system use only)
Frozen Disables all actions, including autostart, online and offline, and failover, except for monitor actions performed by agents. (This convention is observed by all agents supplied with VCS.)
(system use only)
GroupOwner This attribute is used for VCS email notification and logging. VCS sends email notification to the person designated in this attribute when events occur that are related to the service group. Note that while VCS logs most events, not all events trigger notifications.
(user-defined)
Make sure to set the severity level at which you want notifications to be sent to GroupOwner or to at least one recipient defined in the SmtpRecipients attribute of the NotifierMngr agent.
GroupRecipients This attribute is used for VCS email notification. VCS sends email notification to persons designated in this attribute when events related to the service group occur and when the event's severity level is equal to or greater than the level specified in the attribute.
(user-defined)
Make sure to set the severity level at which you want notifications to be sent to GroupRecipients or to at least one recipient defined in the SmtpRecipients attribute of the NotifierMngr agent.
Guests List of operating system user accounts that have Guest privileges
on the service group.
(user-defined)
This attribute applies to clusters running in secure mode.
(system use only) VCS sets this attribute to 1 if an attempt has been made to bring the
service group online.
For failover groups, VCS sets this attribute to 0 when the group is
taken offline.
For parallel groups, it is set to 0 for the system when the group is
taken offline or when the group faults and can fail over to another
system.
VCS sets this attribute to 2 for service groups if VCS attempts to
autostart a service group; for example, attempting to bring a service
group online on a system from AutoStartList.
IntentionalOnlineList Lists the nodes where a resource that can be intentionally brought online is found ONLINE at first probe. IntentionalOnlineList is used along with AutoStartList to determine the node on which the service group should go online when a cluster starts.
(system use only)
LastSuccess Indicates the time when the service group was last brought online.
Load
(user-defined)
When the cluster attribute Statistics is not enabled, the allowed key value is Units.
■ You cannot change this attribute when the service group attribute
CapacityReserved is set to 1 in the cluster and when the
FailOverPolicy is set to BiggestAvailable. This is because the
VCS engine reserves system capacity based on the service group
attribute Load.
When the service group's online transition completes and after
the next forecast cycle, CapacityReserved is reset.
■ If the FailOverPolicy is set to BiggestAvailable for a service group,
the attribute Load must be specified with at least one of the
following keys, CPU, Mem, or Swap.
ManageFaults Specifies if VCS manages resource failures within the service group by calling the Clean function for the resources. This attribute can take the following values.
(user-defined)
NONE—VCS does not call the Clean function for any resource in the group. You must manually handle resource faults.
MeterWeight Represents the weight given for the cluster attribute’s HostMeters key to determine a target system for a service group when more than one system meets the group attribute’s Load requirements.
(user-defined)
MigrateQ Indicates the system from which the service group is migrating. This attribute is specified when group failover is an indirect consequence (in situations such as a system shutdown, or when another group that is linked to this group faults).
(system use only)
OnlineClearParent When this attribute is enabled for a service group and the service
group comes online or is detected online, VCS clears the faults on
all online type parent groups, such as online local, online global, and
online remote.
For example, assume that both the parent group and the child group
faulted and both cannot failover. Later, when VCS tries again to bring
the child group online and the group is brought online or detected
online, the VCS engine clears the faults on the parent group, allowing
VCS to restart the parent group too.
OnlineRetryInterval Indicates the interval, in seconds, during which a service group that has successfully restarted on the same system and faults again should be failed over, even if the attribute OnlineRetryLimit is non-zero. This prevents a group from continuously faulting and restarting on the same system.
(user-defined)
OnlineRetryLimit If non-zero, specifies the number of times the VCS engine tries to restart a faulted service group on the same system on which the group faulted, before it gives up and tries to fail over the group to another system.
(user-defined)
OperatorGroups List of operating system user groups that have Operator privileges on the service group. This attribute applies to clusters running in secure mode.
(user-defined)
Operators List of VCS users with privileges to operate the group. A Group Operator can only perform online/offline, and temporary freeze/unfreeze operations pertaining to a specific group.
(user-defined)
PathCount Number of resources in the path not yet taken offline. When this number drops to zero, the engine may take the entire service group offline if a critical fault has occurred.
(system use only)
■ The value 0 indicates that the service group is not part of the
hagrp -online -propagate operation or the hagrp
-offline -propagate operation.
■ The value 1 indicates that the service group is part of the hagrp
-online -propagate operation.
■ The value 2 indicates that the service group is part of the hagrp
-offline -propagate operation.
PreOnline Indicates that the VCS engine should not bring online a service group
in response to a manual group online, group autostart, or group
(user-defined)
failover. The engine should instead run the PreOnline trigger.
You can set a local (per-system) value or a global value for this
attribute. A per-system value enables you to control the firing of
PreOnline triggers on specific nodes in the cluster.
You can change the attribute scope from local to global as follows:
You can change the attribute scope from global to local as follows:
For more information about the -local option and the -global
option, see the man pages associated with the hagrp command.
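A sketch of those scope-change commands, assuming the hagrp -local and hagrp -global syntax of this release (verify against the hagrp manual page; the group name is illustrative):
hagrp -local websg PreOnline
hagrp -global websg PreOnline 0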
PreOnlining Indicates that VCS engine invoked the preonline script; however, the
script has not yet returned with group online.
(system use only)
■ Type and dimension: integer-scalar
■ Default: Not applicable
PreonlineTimeout Defines the maximum amount of time in seconds the preonline script
takes to run the command hagrp -online -nopre for the group.
(user-defined)
Note that HAD uses this timeout during evacuation only. For example,
when a user runs the command hastop -local -evacuate and the
Preonline trigger is invoked on the system on which the service
groups are being evacuated.
PreSwitch
(user-defined)
If you set the value as 1, the VCS engine looks for any resource in
the service group that supports PreSwitch action. If the action is not
defined for any resource, the VCS engine switches a service group
normally.
If the action is defined for one or more resources, then the VCS
engine invokes PreSwitch action for those resources. If all the actions
succeed, the engine switches the service group. If any of the actions
fail, the engine aborts the switch operation.
The engine invokes the PreSwitch action in parallel and waits for all
the actions to complete to decide whether to perform a switch
operation. The VCS engine reports the action’s output to the engine
log. The PreSwitch action does not change the configuration or the
cluster state.
PreSwitching Indicates that the VCS engine invoked the agent’s PreSwitch action;
however, the action is not yet complete.
(system use only)
■ Type and dimension: integer-scalar
■ Default: Not applicable
Priority Enables users to designate and prioritize the service group. VCS does not interpret the value; rather, this attribute enables the user to configure the priority of a service group and the sequence of actions required in response to a particular event.
(user-defined)
VCS assigns the following node weight based on the priority of the service group:
Probed Indicates whether all enabled resources in the group have been
detected by their respective agents.
(system use only)
■ Type and dimension: boolean-scalar
■ Default: Not applicable
SourceFile File from which the configuration is read. Do not configure this attribute in main.cf.
(user-defined)
Make sure the path exists on all nodes before running a command that configures this attribute.
SystemList List of systems on which the service group is configured to run and their priorities. Lower numbers indicate a preference for the system as a failover target.
(user-defined)
Note: You must define this attribute prior to setting the AutoStartList attribute.
SystemZones Indicates the virtual sublists within the SystemList attribute that grant priority in failing over. Values are string/integer pairs. The string key is the name of a system in the SystemList attribute, and the integer is the number of the zone. Systems with the same zone number are members of the same zone. If a service group faults on one system in a zone, it is granted priority to fail over to another system within the same zone, despite the policy granted by the FailOverPolicy attribute.
(user-defined)
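For example, a main.cf fragment (group and system names are illustrative) that prefers failover within a zone before crossing zones might look like this:
group websg (
    SystemList = { sysA = 0, sysB = 1, sysC = 2, sysD = 3 }
    AutoStartList = { sysA, sysC }
    SystemZones = { sysA = 0, sysB = 0, sysC = 1, sysD = 1 }
    )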
TargetCount Indicates the number of target systems on which the service group
should be brought online.
(system use only)
■ Type and dimension: integer-scalar
■ Default: Not applicable.
ToQ Indicates the node name to which the service is failing over. This attribute is specified when service group failover is a direct consequence of the group event, such as a resource fault within the group or a group switch.
(system use only)
(user-defined) If a trigger is enabled but the trigger path is "" (default), VCS invokes
the trigger from the $VCS_HOME/bin/triggers directory. If you specify
an alternate directory, VCS invokes the trigger from that path. The
value is case-sensitive. VCS does not trim the leading spaces or
trailing spaces in the Trigger Path value. If the path contains leading
spaces or trailing spaces, the trigger might fail to get executed.
$VCS_HOME/TriggerPath/Trigger
TriggerResFault Defines whether VCS invokes the resfault trigger when a resource
faults. The value 0 indicates that VCS does not invoke the trigger.
(user-defined)
■ Type and dimension: boolean-scalar
■ Default: 1
TriggersEnabled
(user-defined)
Triggers are disabled by default. You can enable specific triggers on all nodes or on selected nodes. Valid values are VIOLATION,
NOFAILOVER, PREONLINE, POSTONLINE, POSTOFFLINE,
RESFAULT, RESSTATECHANGE, and RESRESTART.
To enable triggers on all nodes in the cluster, add trigger keys in the
following format:
TriggersEnabled = {POSTOFFLINE, POSTONLINE}
The postoffline trigger and postonline trigger are enabled on all nodes.
You can change the attribute scope from local to global as follows:
You can change the attribute scope from global to local as follows:
For more information about the -local option and the -global
option, see the man pages associated with the hagrp command.
To list this attribute, use the -all option with the hagrp -display
command.
UserAssoc Use this attribute for any purpose. It is not used by VCS.
You can change the attribute scope from local to global as follows:
You can change the attribute scope from global to local as follows:
For more information about the -local option and -global option,
see the man pages associated with the hagrp command.
UserIntGlobal Use this attribute for any purpose. It is not used by VCS.
UserStrGlobal VCS uses this attribute in the ClusterService group. Do not modify this attribute in the ClusterService group. Use the attribute for any purpose in other service groups.
(user-defined)
UserIntLocal Use this attribute for any purpose. It is not used by VCS.
UserStrLocal Use this attribute for any purpose. It is not used by VCS.
System attributes
Table D-4 lists the system attributes.
System attributes Definition
AgentsStopped This attribute is set to 1 on a system when all agents running on the
system are stopped.
(system use only)
■ Type and dimension: integer-scalar
■ Default: Not applicable
ConfigBlockCount Number of 512-byte blocks in configuration when the system joined the
cluster.
(system use only)
■ Type and dimension: integer-scalar
■ Default: Not applicable
ConfigDiskState State of configuration on the disk when the system joined the cluster.
ConfigModDate Last modification date of configuration when the system joined the
cluster.
(system use only)
■ Type and dimension: integer-scalar
■ Default: Not applicable
CPUThresholdLevel Determines the threshold values for CPU utilization based on which various levels of logs are generated.
(user-defined)
The notification levels are Critical, Warning, Note, and Info, and the logs are stored in the file engine_A.log. If the Warning level is crossed, a notification is generated. The values are configurable at a system level in the cluster.
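For example, a minimal main.cf sketch that raises the thresholds on a hypothetical system named sysA; the key names follow the notification levels listed above, and the numbers are illustrative percentages:
system sysA (
    CPUThresholdLevel = { Critical = 95, Warning = 85, Note = 75, Info = 65 }
    )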
CurrentLimits
(system use only)
CurrentLimits = Limits - (additive value of all service group Prerequisites).
EngineRestarted Indicates whether the VCS engine (HAD) was restarted by the hashadow process on a node in the cluster.
(system use only)
The value 1 indicates that the engine was restarted; 0 indicates it was not restarted.
FencingWeight Indicates the system priority for preferred fencing.
(user-defined)
This value is relative to other systems in the cluster and does not reflect any real value associated with a particular system.
If the cluster-level attribute value for PreferredFencingPolicy is set to
System, VCS uses this FencingWeight attribute to determine the node
weight to ascertain the surviving subcluster during I/O fencing race.
Frozen Indicates if service groups can be brought online on the system. Groups
cannot be brought online if the attribute value is 1.
(system use only)
■ Type and dimension: boolean-scalar
■ Default: 0
GUIIPAddr Determines the local IP address that VCS uses to accept connections.
(user-defined)
Incoming connections over other IP addresses are dropped. If GUIIPAddr is not set, the default behavior is to accept external connections over all configured local IP addresses.
HostMonitor List of host resources that the HostMonitor agent monitors.
(system use only)
The values of keys such as Mem and Swap are measured in MB or GB, and CPU is measured in MHz or GHz.
LicenseType Indicates the license type of the base VCS key used by the system.
Possible values are:
(system use only)
0—DEMO
1—PERMANENT
2—PERMANENT_NODE_LOCK
3—DEMO_NODE_LOCK
4—NFR
5—DEMO_EXTENSION
6—NFR_NODE_LOCK
7—DEMO_EXTENSION_NODE_LOCK
LinkHbStatus Indicates the status of private network links on the system. For example:
(system use only)
LinkHbStatus = { nic1 = UP, nic2 = DOWN }
Where the value UP for nic1 means there is at least one peer in the cluster that is visible on nic1.
Where the value DOWN for nic2 means no peer in the cluster is visible on nic2.
LoadTimeThreshold How long the system load must remain at or above LoadWarningLevel before the LoadWarning trigger is fired.
(user-defined)
If set to 0, overload calculations are disabled.
LoadWarningLevel A percentage of total capacity at which load has reached a critical limit.
(user-defined)
If set to 0, overload calculations are disabled.
For example, setting LoadWarningLevel = 80 sets the warning level to
80 percent.
The value of this attribute can be set from 1 to 100. If set to 1, system
load must equal 1 percent of system capacity to begin incrementing
the LoadTimeCounter. If set to 100, system load must equal system
capacity to increment the LoadTimeCounter.
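For example, a minimal sketch that sets an 80 percent warning level and a 10-minute threshold on a hypothetical system named sysA (make the configuration writable with haconf -makerw first, and dump it afterward with haconf -dump -makero):
# hasys -modify sysA LoadWarningLevel 80
# hasys -modify sysA LoadTimeThreshold 600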
MemThresholdLevel Determines the threshold values for memory utilization based on which various levels of logs are generated.
(user-defined)
The notification levels are Critical, Warning, Note, and Info, and the logs are stored in the file engine_A.log. If the Warning level is crossed, a notification is generated. The values are configurable at a system level in the cluster.
MeterRecord Acts as an internal system attribute with predefined keys.
(system use only)
This attribute is updated only when the Cluster attribute AdaptivePolicy is set to Enabled.
NoAutoDisable When set to 0, VCS autodisables service groups when the VCS engine is taken down.
(system use only)
Groups remain autodisabled until the engine is brought up (regular membership).
This attribute’s value is updated whenever a node joins (gets into
RUNNING state) or leaves the cluster. This attribute cannot be set
manually.
ReservedCapacity Indicates the capacity that is reserved on a system for service groups that are coming online and whose FailOverPolicy is set to BiggestAvailable.
(system use only)
It has all of the keys specified in HostMeters, such as CPU, Mem, and
Swap. The values for keys are set in corresponding units as specified
in the Cluster attribute MeterUnit.
When the service group completes online transition and after the next
forecast cycle, ReservedCapacity is updated.
ShutdownTimeout Determines whether to treat system reboot as a fault for service groups
running on the system.
(user-defined)
On many systems, when a reboot occurs the processes are stopped
first, then the system goes down. When the VCS engine is stopped,
service groups that include the failed system in their SystemList
attributes are autodisabled. However, if the system goes down within
the number of seconds designated in ShutdownTimeout, service groups
previously online on the failed system are treated as faulted and failed
over. Symantec recommends that you set this attribute depending on
the average time it takes to shut down the system.
If you do not want to treat the system reboot as a fault, set the value
for this attribute to 0.
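For example, if your systems typically take about two minutes to shut down, a sketch like the following (sysA is a hypothetical system name, and 120 is an illustrative value) treats only systems that go down within 120 seconds as faulted:
# hasys -modify sysA ShutdownTimeout 120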
SourceFile File from which the configuration is read. Do not configure this attribute
in main.cf.
(user-defined)
Make sure the path exists on all nodes before running a command that
configures this attribute.
SwapThresholdLevel Determines the threshold values for swap space utilization based on which various levels of logs are generated.
(user-defined)
The notification levels are Critical, Warning, Note, and Info, and the logs are stored in the file engine_A.log. If the Warning level is crossed, a notification is generated. The values are configurable at a system level in the cluster.
SystemOwner Use this attribute for VCS email notification and logging.
(user-defined)
VCS sends email notification to the person designated in this attribute when an event occurs related to the system. Note that while VCS logs most events, not all events trigger notifications.
Make sure to set the severity level at which you want notifications to be sent to SystemOwner or to at least one recipient defined in the SmtpRecipients attribute of the NotifierMngr agent.
SystemRecipients This attribute is used for VCS email notification.
(user-defined)
VCS sends email notification to persons designated in this attribute when events related to the system occur and when the event's severity level is equal to or greater than the level specified in the attribute.
Make sure to set the severity level at which you want notifications to
be sent to SystemRecipients or to at least one recipient defined in the
SmtpRecipients attribute of the NotifierMngr agent.
TRSE Indicates in seconds the time to Regular State Exit.
(system use only)
Time is calculated as the duration between the events of VCS losing port h membership and of VCS losing port a membership of GAB.
UpDownState
(system use only)
Down (0): System is powered off, or GAB and LLT are not running on the system.
Up but not in cluster membership (1): GAB and LLT are running but the VCS engine is not.
UserInt Stores integer values you want to use. VCS does not interpret the value
of this attribute.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 0
VCSFeatures Indicates which VCS features are enabled. Possible values are:
1—L3+ is enabled
Cluster attributes
Table D-5 lists the cluster attributes.
AdministratorGroups List of operating system user account groups that have administrative privileges on
the cluster. This attribute applies to clusters running in secure mode.
(user-defined)
■ Type and dimension: string-keylist
■ Default: ""
AutoStartTimeout If the local cluster cannot communicate with one or more remote clusters, this attribute specifies the number of seconds the VCS engine waits before initiating the AutoStart process for an AutoStart global service group.
(user-defined)
AutoAddSystemtoCSG Indicates whether newly joined or added systems in the cluster become part of the SystemList of the ClusterService service group if the service group is configured.
(user-defined)
The value 1 (default) indicates that the new systems are added to the SystemList of ClusterService. The value 0 indicates that the new systems are not added to the SystemList of ClusterService.
BackupInterval Time period in minutes after which VCS backs up the configuration files if the
configuration is in read-write mode.
(user-defined)
The value 0 indicates VCS does not back up configuration files. Set this attribute to
at least 3.
See “Scheduling automatic backups for VCS configuration files” on page 196.
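For example, a minimal sketch that backs up the configuration files every five minutes:
# haconf -makerw
# haclus -modify BackupInterval 5
# haconf -dump -makero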
CID Once VCS receives the snapshot from the engine, it reads the /etc/vx/.uuids/clusuuid file. VCS uses the file’s contents as the value for the CID attribute. The clusuuid file’s first line must not be empty. If the file does not exist or is empty, VCS exits gracefully and throws an error.
A node that joins a cluster in the RUNNING state receives the CID attribute as part of the REMOTE_BUILD snapshot. Once the node has joined completely, it receives the snapshot. The node reads the /etc/vx/.uuids/clusuuid file to compare the value that it received from the snapshot with the value that is present in the file. If the value does not match or if the file does not exist, the joining node exits gracefully and does not join the cluster.
See “Configuring and unconfiguring the cluster UUID value” on page 239.
You cannot change the value of this attribute with the haclus –modify command.
ClusterAddress Specifies the cluster’s virtual IP address (used by a remote cluster when connecting
to the local cluster).
(user-defined)
■ Type and dimension: string-scalar
■ Default: ""
ClusterOwner This attribute is used for VCS notification.
(user-defined)
VCS sends notifications to persons designated in this attribute when an event occurs related to the cluster. Note that while VCS logs most events, not all events trigger notifications.
Make sure to set the severity level at which you want notifications to be sent to
ClusterOwner or to at least one recipient defined in the SmtpRecipients attribute of
the NotifierMngr agent.
ClusterRecipients This attribute is used for VCS email notification.
(user-defined)
VCS sends email notification to persons designated in this attribute when events related to the cluster occur and when the event's severity level is equal to or greater than the level specified in the attribute.
Make sure to set the severity level at which you want notifications to be sent to
ClusterRecipients or to at least one recipient defined in the SmtpRecipients attribute
of the NotifierMngr agent.
ClusterTime The number of seconds since January 1, 1970. This is defined by the lowest node in
running state.
(system use only)
■ Type and dimension: string-scalar
■ Default: Not applicable
CompareRSM Indicates if the VCS engine is to verify that the replicated state machine is consistent. This can be set by running the hadebug command.
(system use only)
■ Type and dimension: integer-scalar
■ Default: 0
ConnectorState Indicates the state of the wide-area connector (wac). If 0, wac is not running. If 1, wac
is running and communicating with the VCS engine.
(system use only)
■ Type and dimension: integer-scalar
■ Default: Not applicable.
CounterInterval Intervals counted by the attribute GlobalCounter indicating approximately how often
a broadcast occurs that will cause the GlobalCounter attribute to increase.
(user-defined)
The default value of the GlobalCounter increment can be modified by changing
CounterInterval. If you increase this attribute to exceed five seconds, consider
increasing the default value of the ShutdownTimeout attribute.
CounterMissAction Specifies the action that must be performed when the GlobalCounter is not updated for CounterMissTolerance times the CounterInterval.
(user-defined)
Possible values are LogOnly and Trigger. If you set CounterMissAction to LogOnly, the system logs the message in Engine Log and Syslog. If you set CounterMissAction to Trigger, the system invokes a trigger whose default action is to collect the comms tar file.
CounterMissTolerance Specifies the time interval that can lapse since the last update of GlobalCounter before VCS reports an issue.
(user-defined)
If the GlobalCounter does not update within CounterMissTolerance times CounterInterval, VCS reports the issue. Depending on the CounterMissAction value, the appropriate action is performed.
CredRenewFrequency The number of days after which the VCS engine renews its credentials with the authentication broker.
(user-defined)
For example, the value 5 indicates that credentials are renewed every 5 days; the value 0 indicates that credentials are not renewed.
DeleteOnlineResource Defines whether you can delete online resources. Set this value to 1 to enable deletion
of online resources. Set this value to 0 to disable deletion of online resources.
(user-defined)
You can override this behavior by using the -force option with the hares -delete
command.
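For example, a sketch that enables deletion of online resources cluster-wide, followed by the forced per-resource override mentioned above (webip is a hypothetical resource name):
# haclus -modify DeleteOnlineResource 1
# hares -delete webip -force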
DumpingMembership Indicates that the engine is writing or dumping the configuration to disk.
RT, TS
EnableVMAutoDiscovery Enables or disables auto discovery of virtual machines. By default, auto discovery of
virtual machines is disabled.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 0
EnginePriority The priority in which HAD runs. Generally, a greater priority value indicates higher
scheduling priority. A range of priority values is assigned to each scheduling class.
(user-defined)
For more information on the range of priority values, see the operating system
documentation.
EngineShutdown Defines the options for the hastop command. The attribute can assume the following
values:
(user-defined)
Enable—Process all hastop commands. This is the default behavior.
DisableClusStop—Do not process the hastop -all command; process all other hastop
commands.
PromptLocal—Prompt for user confirmation before running the hastop -local command;
reject all other hastop commands.
PromptAlways—Prompt for user confirmation before running any hastop command.
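For example, a sketch that blocks the cluster-wide hastop -all command while still processing other hastop commands:
# haclus -modify EngineShutdown DisableClusStop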
FipsMode Indicates whether FIPS mode is enabled for the cluster.
(system use only)
The value depends on the mode of the broker on the system. If FipsMode is set to 1, FIPS mode is enabled. If FipsMode is set to 0, FIPS mode is disabled.
GlobalCounter This counter increases incrementally by one for each counter interval. It increases when the broadcast is received.
(system use only)
VCS uses the GlobalCounter attribute to measure the time it takes to shut down a system. By default, the GlobalCounter attribute is updated every five seconds. This default value, combined with the 600-second default value of the ShutdownTimeout attribute, means that if a system goes down within 120 increments of GlobalCounter, it is treated as a fault. Change the value of the CounterInterval attribute to modify the default value of the GlobalCounter increment.
Guests List of operating system user accounts that have Guest privileges on the cluster.
HostAvailableMeters Lists the meters that are available for measuring system resources. You cannot
configure this attribute in main.cf.
(system use only)
■ Type and dimension: string-association
Keys are the names of parameters and values are the names of meter libraries.
■ Default: HostAvailableMeters = { CPU = “libmeterhost_cpu.so”, Mem =
“libmeterhost_mem.so”, Swap = “libmeterhost_swap.so”}
HostMeters Indicates the parameters (CPU, Mem, or Swap) that are currently metered in the
cluster.
(user-defined)
■ Type and dimension: string-keylist
■ Default: HostMeters = {“CPU”, “Mem”, “Swap”}
You can configure this attribute in main.cf. The keys must be one or more from
CPU, Mem, and Swap. You cannot modify the value at run time.
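For example, a minimal main.cf sketch (clus1 is a hypothetical cluster name; other cluster attributes are omitted) that meters only CPU and memory:
cluster clus1 (
    HostMeters = { "CPU", "Mem" }
    )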
LockMemory Controls the locking of VCS engine pages in memory. This attribute has the following
values. Values are case-sensitive:
(user-defined)
ALL: Locks all current and future pages.
LogClusterUUID Enables or disables logging of the cluster UUID in each log message. By default,
cluster UUID is not logged.
(user-defined)
■ Type and dimension: boolean-scalar
■ Default: 0
MeterControl Indicates the intervals at which metering and forecasting for the system attribute
AvailableCapacity are done for the keys specified in HostMeters.
(user-defined)
■ Type and dimension: integer-association
This attribute includes the following keys:
■ MeterInterval
Frequency in seconds at which metering is done by the HostMonitor agent.
The value for this key can equal or exceed 30. The default value is 120, indicating that the HostMonitor agent meters available capacity and updates the System attribute AvailableCapacity every 120 seconds. The HostMonitor agent checks for changes in the available capacity in every monitoring cycle, and when there is a change, the HostMonitor agent updates the values in the same monitoring cycle. The MeterInterval value applies only if Statistics is set to Enabled or MeterHostOnly.
■ ForecastCycle
The number of metering cycles after which forecasting of available capacity is
done. The value for this key can equal or exceed 1. The default value is 3
indicating that forecasting of available capacity is done after every 3 metering
cycles. Assuming the default MeterInterval value of 120 seconds, forecasting
is done after 360 seconds or 6 minutes. The ForecastCycle value applies only
if Statistics is set to Enabled.
You can configure this attribute in main.cf. You cannot modify the value at run time.
The values of MeterInterval and ForecastCycle apply to all keys of HostMeters.
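For example, a minimal main.cf sketch (clus1 is a hypothetical cluster name; other cluster attributes are omitted) that meters every 150 seconds and forecasts after every 3 metering cycles:
cluster clus1 (
    MeterControl = { MeterInterval = 150, ForecastCycle = 3 }
    )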
MeterUnit Represents the units for the parameters that are metered.
(user-defined)
You can configure this attribute in main.cf; if it is configured in main.cf, it must contain units for all the keys specified in HostMeters. You cannot modify the value at run time.
When Statistics is set to Enabled, the service group attribute Load and the following system attributes are represented in corresponding units for parameters such as CPU, Mem, or Swap:
■ AvailableCapacity
■ HostAvailableForecast
■ Capacity
■ ReservedCapacity
The values of keys such as Mem and Swap can be represented in MB or GB, and
CPU can be represented in CPU, MHz or GHz.
MeterWeight Indicates the default meter weight for the service groups in the cluster.
(user-defined)
You can configure this attribute in main.cf but you cannot modify the value at run time. If the attribute is defined in main.cf, it must have at least one key defined. The weight for the key must be in the range of 0 to 10. Only keys from HostAvailableMeters are allowed in this attribute.
Notifier Indicates the status of the notifier in the cluster; specifically:
(system use only)
State—Current state of notifier, such as whether or not it is connected to VCS.
Host—The host on which notifier is currently running or was last running. Default = None
Severity—The severity level of messages queued by VCS for notifier. Values include Information, Warning, Error, and SevereError. Default = Warning
Queue—The size of queue for messages queued by VCS for notifier.
OpenExternalCommunicationPort Indicates whether communication over the external communication port for VCS is
allowed or not. By default, the external communication port for VCS is 14141.
(user-defined)
■ Type and dimension: string-scalar
■ Valid values: YES, NO
■ Default: YES
■ YES: The external communication port for VCS is open.
■ NO: The external communication port for VCS is not open.
Note: When the external communication port for VCS is not open, RemoteGroup
resources created by the RemoteGroup agent cannot access VCS.
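For example, assuming this attribute can be changed with haclus -modify like other user-defined cluster attributes, the following sketch closes the external communication port:
# haconf -makerw
# haclus -modify OpenExternalCommunicationPort NO
# haconf -dump -makero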
OperatorGroups List of operating system user groups that have Operator privileges on the cluster.
PanicOnNoMem Indicates the action that you want the VCS engine (HAD) to take if it cannot receive messages from GAB due to low memory.
(user-defined)
■ If the value is 0, VCS exits with warnings.
■ If the value is 1, VCS calls the GAB library routine to panic the system.
■ Default: 0
PreferredFencingPolicy The I/O fencing race policy to determine the surviving subcluster in the event of a
network partition. Valid values are Disabled, System, Group, or Site.
Disabled: Preferred fencing is disabled. The fencing driver favors the subcluster with
maximum number of nodes during the race for coordination points.
System: The fencing driver gives preference to the system that is more powerful than
others in terms of architecture, number of CPUs, or memory during the race for
coordination points. VCS uses the system-level attribute FencingWeight to calculate
the node weight.
Group: The fencing driver gives preference to the node with higher priority service
groups during the race for coordination points. VCS uses the group-level attribute
Priority to determine the node weight.
Site: The fencing driver gives preference to the node with higher site priority during
the race for coordination points. VCS uses the site-level attribute Preference to
determine the node weight.
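For example, a minimal sketch that selects system-based preferred fencing and weights a hypothetical system named sysA more heavily than its peers (the weight value is illustrative):
# haconf -makerw
# hasys -modify sysA FencingWeight 50
# haclus -modify PreferredFencingPolicy System
# haconf -dump -makero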
ProcessClass Indicates the scheduling class of processes created by the VCS engine, for example, triggers.
(user-defined)
■ Type and dimension: string-scalar
■ Default = TS
ProcessPriority The priority of processes created by the VCS engine, for example, triggers.
SecInfo Enables creation of secure passwords, when the SecInfo attribute is added to the
main.cf file with the security key as the value of the attribute.
(user-defined)
■ Type and dimension: string-scalar
■ Default: ""
SecureClus Indicates whether the cluster runs in secure mode. The value 1 indicates the cluster
runs in secure mode. This attribute cannot be modified when VCS is running.
(user-defined)
■ Type and dimension: boolean-scalar
■ Default: 0
SiteAware Indicates whether sites are configured for the cluster.
(user-defined)
You can configure a site from Veritas Operations Manager. This attribute is automatically set to 1 when a site is configured using Veritas Operations Manager. If site information is not configured for some nodes in the cluster, those nodes are placed under a default site that has the lowest preference.
SourceFile File from which the configuration is read. Do not configure this attribute in main.cf.
(user-defined) Make sure the path exists on all nodes before running a command that configures
this attribute.
Statistics Indicates if statistics gathering is enabled and whether the FailOverPolicy can be set to BiggestAvailable.
(user-defined)
You need to manually configure this attribute by adding Statistics = Enabled in the main.cf file.
Stewards The IP address and hostname of systems running the steward process.
SystemRebootAction Determines whether frozen service groups are ignored on system reboot.
(user-defined)
■ Type and dimension: string-keylist
■ Default: ""
If the SystemRebootAction value is "", VCS tries to take all service groups offline.
Because VCS cannot be gracefully stopped on a node where a frozen service group
is online, applications on the node might get killed.
Note: The SystemRebootAction attribute applies only on system reboot and system
shutdown.
UseFence Indicates whether the cluster uses SCSI-3 I/O fencing.
(user-defined)
The value SCSI3 indicates that the cluster uses either disk-based or server-based I/O fencing. The value NONE indicates it does not use either.
UserNames List of VCS users. The installer uses admin as the default user name.
VCSFeatures Indicates which VCS features are enabled. Possible values are:
1—L3+ is enabled
WACPort The TCP port on which the wac (Wide-Area Connector) process on the local cluster listens for connections from remote clusters.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 14155
Heartbeat attributes (for global clusters)
Table D-6 lists the heartbeat attributes.
Heartbeat Definition
Attributes
Arguments List of arguments to be passed to the agent functions. For the Icmp
agent, this attribute can be the IP address of the remote cluster.
(user-defined)
■ Type and dimension: string-vector
■ Default: ""
AYARetryLimit The maximum number of lost heartbeats before the agent reports that the heartbeat to the cluster is down.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 3
AYATimeout The maximum time (in seconds) that the agent will wait for a heartbeat
AYA function to return ALIVE or DOWN before being canceled.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 30
CleanTimeOut Number of seconds within which the Clean function must complete or
be canceled.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 300 seconds
InitTimeout Number of seconds within which the Initialize function must complete
or be canceled.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 300 seconds
StartTimeout Number of seconds within which the Start function must complete or
be canceled.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 300 seconds
StopTimeout Number of seconds within which the Stop function must complete or
be canceled without stopping the heartbeat.
(user-defined)
■ Type and dimension: integer-scalar
■ Default: 300 seconds
Remote cluster attributes
Table D-7 lists the remote cluster attributes.
DeclaredState Specifies the declared state of the remote cluster after its
cluster state is transitioned to FAULTED.
(user-defined)
See “Disaster declaration” on page 715.
■ Disaster
■ Outage
■ Disconnect
■ Replica
VCSFeatures Indicates which VCS features are enabled. Possible values are:
(system use only)
1—L3+ is enabled
VCSMode
(system use only)
Even though VCSMode is an integer attribute, when you query the value with the haclus -value command or the haclus -display command, it displays as the string UNKNOWN_MODE for value 0 and VCS for value 7.
Site attributes
Table D-8 lists the site attributes.
Site Definition
Attributes
Site preference    Node weight
1    10000
2    1000
3    100
4    10
J
Java Console
   administering clusters 91
   administering logs 168
   administering resources 144
   administering service groups 127
   administering systems 159
   administering user profiles 123
   administering VCS Simulator 299
   arranging icons 111
   Cluster Explorer 102
   Cluster Manager 95
   Cluster Monitor 97
   Cluster Query 117
   components of 95
   customizing display 100
   disability compliance 91
   icons 95
   impact on performance 637
   logging off a cluster 123
   logging on to a cluster 121
   overview 91
   running commands from 162
   running virtual fire drill 158
   setting initial display 93
   starting 94
   user profiles 123
   using with ssh 93
   viewing server credentials 119
   viewing user credentials 119
L
LastOnline attribute 750
LastSuccess attribute 790
license keys
   about 179
   installing 179
   troubleshooting 718
LicenseType attribute 801
licensing issues 33
Limits attribute 801
LinkHbStatus attribute 801
LLT 47
   about 314
   tunable parameters 650
LLT timer tunable parameters
   setting 657
LLTNodeId attribute 801
Load attribute 790
Load policy for SGWM 430
LoadTimeCounter attribute 801
LoadTimeThreshold attribute 801
loadwarning event trigger 524
LoadWarningLevel attribute 801
local attributes 72
LockMemory attribute 815
log files 710
LogClusterUUID attribute 815
LogDbg attribute 765
LogFileSize attribute 765
logging
   agent log 671
   engine log 671
   message tags 671
logs
   customizing display in Java Console 169
   searching from Java Console 168
   viewing from Java Console 118
LogSize attribute 815
Low Latency Transport (LLT) 47
M
main.cf
   about 64
   cluster definition 64
   group dependency clause 64
   include clauses 64
   resource definition 64
   resource dependency clause 64
   service group definition 64
   system definition 64
ManageFaults attribute
   about 436
   definition 790
ManualOps attribute 790
MemThresholdLevel attribute 801
message tags
   about 671
MeterRecord attribute 801
MeterWeight attribute 790
MigrateQ attribute 790
migrating
   service groups 214
MonitorInterval attribute 765
MonitorMethod attribute 750
MonitorOnly attribute 750
MonitorStartParam attribute 765
MonitorTimeout attribute 765
MonitorTimeStats attribute 750
N
N+1 configuration 56
N-to-1 configuration 55
N-to-N configuration 57
Name attribute 750
network failure 112
network links
   detecting failure 642
networks
   detecting failure 645
NoAutoDisable attribute 801
NodeId attribute 801
nofailover event trigger 525
notification
   about 505
   deleting messages 507
   error messages 507
   error severity levels 507
   event triggers 521
   hanotify utility 509
   message queue 507
   notifier process 508
   setting using wizard 166
   SNMP files 515
   troubleshooting 714
Notifier attribute 815
notifier process 508
Notifier Resource Configuration wizard 165
NumRetries attribute 790
NumThreads attribute
   definition 765
   modifying for performance 636
O
OfflineMonitorInterval attribute 765
OfflineTimeout attribute 765
OfflineWaitLimit attribute 765
On-Off resource 36
On-Only resource 36
OnGrpCnt attribute 801
OnlineAtUnfreeze attribute 790
OnlineClass attribute 765
OnlinePriority attribute 765
OnlineRetryInterval attribute 790
OnlineRetryLimit attribute
   for resource types 765
   for service groups 790
OnlineTimeout attribute 765
OnlineWaitLimit attribute 765
Open IMF
   overview 43
OpenExternalCommunicationPort attribute 815
OpenTimeout attribute 765
Operations attribute 765
OperatorGroups attribute
   for clusters 815
   for service groups 790
VCS (continued)
   stopping with other options 192
   stopping without -force 191
   troubleshooting resources 698
   troubleshooting service groups 692
   troubleshooting sites 699
VCS agent statistics 647
VCS attributes 70
VCS Simulator
   administering from Java Console 299
   bringing systems online 303
   creating power outages 303
   description of 249
   faulting resources 304
   saving offline configurations 303
   simulating cluster faults from command line 308
   simulating cluster faults from Java Console 301
   starting from command line 299
VCSFeatures attribute
   for clusters 815
   for systems 801
VCSMode attribute 815
vector attribute dimension 70
version information
   retrieving 241
violation event trigger 533
Virtual Business Services
   features 534
   overview 534
   sample configuration 535
virtual fire drill
   about 295
vxfen. See fencing module
vxkeyless utility 179–180
vxlicinst utility 179
W
wac 542
WACPort attribute 815
wide-area connector 542
wide-area failover 61
   VCS agents 544
Wide-Area Heartbeat agent 543
wizards
   Notifier Resource Configuration 165
   Remote Cluster Configuration 582