How To Add A New Resource Group To An Active Cluster

April 30, 2013
Contents

1. Introduction
2. Create new Volume Groups
3. Add Resources: New Service IP Addresses
4. Add Resources: New Application Start/Stop Scripts
5. Create a New Resource Group
6. Add the Resources to the New Resource Group
7. Final Synchronization of the Cluster

A. Related Information
1. Introduction
This article describes how to add a new resource group RES_GRP_03 to an HACMP cluster. The primary
node for the new resource group shall be betty. We find the following situation:
betty# clRGinfo
-----------------------------------------------------------------------------
Group Name                   Group State                Node
-----------------------------------------------------------------------------
RES_GRP_01                   ONLINE                     barney
                             OFFLINE                    betty
2. Create new Volume Groups

After the new LUNs have been assigned to both nodes we run the configuration manager on betty:

betty# cfgmgr
Now we are ready to run the configuration manager on the other node and set the reservation policy of the
disks:
barney# cfgmgr
barney# chdev -l hdiskA1 -a reserve_policy=no_reserve
barney# chdev -l hdiskA2 -a reserve_policy=no_reserve
barney# chdev -l hdiskB1 -a reserve_policy=no_reserve
:
:
Since the new resource group will be primarily online on betty, we create the volume groups and the
filesystems on betty. However, there is no technical reason for doing this on the primary node; we could
do it on the other node as well. The volume groups are created as follows:
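A minimal sketch, assuming the volume group name vg03 and 60 as a major number that is free on both
nodes:

betty# mkvg -C -V 60 -y vg03 hdiskA1 hdiskA2 hdiskB1 hdiskB2
betty# varyonvg vg03

The -C flag makes the volume group enhanced-concurrent-capable.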
Please note: The volume groups we create are enhanced-concurrent-capable. By default these volume
groups have the following properties: Quorum is on, auto-varyon is off. Further note the volume group's
major number we set with the -V option. Get free major numbers on both nodes with the lvlstmajor
command.
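For example:

betty# lvlstmajor
barney# lvlstmajor

Pick a major number that shows up as free in both outputs.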
With the volume groups created and varied on we can go on to create logical volumes. Usually we would use
logical volumes mirrored over two datacenters; in our example hdiskA1 and hdiskA2 reside in datacenter A,
while hdiskB1 and hdiskB2 reside in datacenter B. So we ensure that all first LV copies (PV1 in lslv -m
output) reside in datacenter A and all second LV copies (PV2 in lslv -m output) reside in datacenter B. We
use superstrictness and the most narrow upper bound possible to reduce the risk of a mirroring mess, i.e.
both copies of a logical partition ending up in the same datacenter.
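A minimal sketch, assuming the logical volume name lv_app3 and a size of 100 logical partitions in our
vg03:

betty# mklv -y lv_app3 -t jfs2 -c 2 -s s -u 1 vg03 100 hdiskA1 hdiskB1
betty# lslv -m lv_app3

Here -c 2 requests two copies, -s s sets the superstrict allocation policy, and -u 1 limits each mirror
copy to a single disk - the first copy goes to hdiskA1, the second to hdiskB1. With lslv -m we verify the
PV1/PV2 placement.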
Filesystems on top of our logical volumes should be created with the mount = false option, so they are not
mounted automatically at boot time - mounting them is the cluster's job.
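A sketch, assuming the logical volume lv_app3 from above and the mount point /app3; the -A no flag of
crfs sets mount = false in /etc/filesystems:

betty# crfs -v jfs2 -d lv_app3 -m /app3 -A no
betty# mount /app3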
Once all filesystems are created we unmount everything we just created and close the volume group:
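With the names assumed above:

betty# umount /app3
betty# varyoffvg vg03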
Now we are ready to import all volume groups we created on the other node. For the import we need the
volume group's major number and the PVID of one of its disks:
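A sketch with the assumed names. ls -l /dev/vg03 shows the major number and lspv shows the PVIDs of the
member disks; since the hdisk numbering may differ between the nodes, we identify the disk on barney by
its PVID:

betty# ls -l /dev/vg03
betty# lspv | grep vg03

barney# lspv
barney# importvg -V 60 -y vg03 hdiskA1
barney# varyoffvg vg03

importvg varies the volume group on, so we vary it off again - the cluster will activate it later.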
3. Add Resources: New Service IP Addresses

The new service addresses need to be known on both nodes, so we add them to /etc/hosts:
# vi /etc/hosts
111.111.111.130 haservice3 haservice3.domain.com
111.111.111.131 haservice31 haservice31.domain.com
+--------------------------------------------------------------------------+
|                 Select a Service IP Label/Address type                   |
|                                                                          |
| Move cursor to desired item and press Enter.                             |
|                                                                          |
|   Configurable on Multiple Nodes                                         |
|                                                                          |
| F1=Help                 F2=Refresh              F3=Cancel                |
| F8=Image                F10=Exit                Enter=Do                 |
| /=Find                  n=Find Next                                      |
+--------------------------------------------------------------------------+
The service address needs to move with the application - so we select "Configurable on Multiple Nodes"
here.
                                                       [Entry Fields]
* IP Label/Address                                     haservice3          +
  Netmask(IPv4)/Prefix Length(IPv6)                    []
* Network Name                                         net_ether_01
  Alternate HW Address to accompany IP Label/Address   []
4. Add Resources: New Application Start/Stop Scripts

For the new application server we first need application start and stop scripts. We assume that we've got
these scripts and copied them to /etc/hacmp on betty. Then we can add a new application server:
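A sketch of the corresponding entry fields; the server name app_03 and the script names are assumptions:

                                                       [Entry Fields]
* Server Name                                          [app_03]
* Start Script                                         [/etc/hacmp/start_app3]
* Stop Script                                          [/etc/hacmp/stop_app3]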
Since the scripts are local we have to copy them over to barney:
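With the script names assumed above:

betty# scp -p /etc/hacmp/*app3* barney:/etc/hacmp/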
5. Create a New Resource Group

Now we can create the new resource group:

                                                       [Entry Fields]
* Resource Group Name                                  [RES_GRP_03]
* Participating Nodes (Default Node Priority)          [betty barney]
If you pick the participating nodes with <F7> from the list, you might end up with the wrong order. The
home node must be the first in the list!
betty# clRGinfo
-----------------------------------------------------------------------------
Group Name                   Group State                Node
-----------------------------------------------------------------------------
RES_GRP_01                   ONLINE                     barney
                             OFFLINE                    betty
clRGinfo: Operation failed, error = 0.
clRGinfo will show the new resource group once we have synchronized the cluster.
Before we start to add resources to our new resource group it might be a good idea to synchronize the
cluster first.
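One way to do this from SMIT - a sketch; the exact menu wording differs between HACMP versions:

betty# smitty hacmp
  -> Extended Configuration
    -> Extended Verification and Synchronization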
6. Add the Resources to the New Resource Group

The last step is to add resources to our new resource group. If you haven't run the HACMP discovery yet,
you have to run it now - otherwise HACMP can't find our newly created volume groups. If you followed the
steps exactly in the order of this article, discovery has already been run. We pick our new resource group
from the list:
+--------------------------------------------------------------------------+
|        Change/Show Resources and Attributes for a Resource Group         |
|                                                                          |
| Move cursor to desired item and press Enter.                             |
|                                                                          |
|   RES_GRP_01                                                             |
|   RES_GRP_02                                                             |
|   RES_GRP_03                                                             |
|                                                                          |
| F1=Help                 F2=Refresh              F3=Cancel                |
| F8=Image                F10=Exit                Enter=Do                 |
| /=Find                  n=Find Next                                      |
+--------------------------------------------------------------------------+
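A sketch of the entry fields that matter for our example, filled in with the names assumed earlier in this
article:

                                                       [Entry Fields]
  Resource Group Name                                  RES_GRP_03
  Participating Nodes (Default Node Priority)          betty barney
  Service IP Labels/Addresses                          [haservice3 haservice31] +
  Application Servers                                  [app_03]                 +
  Volume Groups                                        [vg03]                   +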
7. Final Synchronization of the Cluster

With all resources in place we synchronize the cluster once more. After a successful synchronization the
new resource group is brought online:

betty# clRGinfo
-----------------------------------------------------------------------------
Group Name                   Group State                Node
-----------------------------------------------------------------------------
RES_GRP_01                   ONLINE                     barney
                             OFFLINE                    betty

RES_GRP_03                   ONLINE                     betty
                             OFFLINE                    barney
You will also notice that the service IP addresses have been acquired and that all our filesystems have been
mounted on the primary node.
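A quick check - the grep patterns are merely illustrative:

betty# netstat -in | grep 111.111.111
betty# mount | grep app3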
A. Related Information
AIX 6.1 Information Center > PowerHA SystemMirror > HACMP Version 6.1
Also on unixwerk
HACMP: Cluster Commandline
How to Add a New VG to an Active HACMP Resource Group
How to Add a Node to an HACMP Cluster
How to Remove a Node from an HACMP Cluster
Setup a Two-Node Cluster with HACMP