
Redhat Cluster FTP Installation and Administration Guide
I. Cluster Information
1. Redhat Cluster Installation
Attribute / Property    Value
System 1 host name      vcr-bgw01
System 2 host name      vcr-bgw02
Operating System        Red Hat Enterprise Linux 6.4
Cluster Packages        rgmanager, lvm2-cluster, gfs2-utils
License                 Resilient Storage (1-2 sockets)
2. Configuring a Cluster
Information                Node 1                   Node 2
Host name                  vcr-bgw01                vcr-bgw02
Cluster node name          node1                    node2
Cluster name               vcr-bgw                  vcr-bgw
Cluster interconnect       eth3: 192.168.10.1/24    eth2: 192.168.10.2/24
                           (vlan100)                (vlan100)
Public network interface   eth2: 10.124.248.4/25    eth3: 10.124.248.11/25
ricci password             ricci123                 ricci123
Web GUI address            vcr-bgw01:8084           vcr-bgw02:8084
3. Preparing Application Services
Object                          Value
System 1 host name              vcr-bgw01
System 2 host name              vcr-bgw02
Disk assigned to volume group   /dev/mapper/36005076802810c30f00000000000004c
Volume group name               vg1
Logical volume names            lv_data1 (data), lv_data2 (config)
Mount points                    /u02 (data), /ftpconfig (config)
Public network interface        vcr-bgw01: eth2, vcr-bgw02: eth3
Virtual IP address              vcr-bgw: 10.124.248.5

II. Configure Redhat Cluster


1. Configure static IP addresses, hostnames, and the /etc/hosts file on both nodes.
On both nodes:
Configure a static IP address on each cluster node.
Edit the /etc/hosts file:
vi /etc/hosts
10.124.248.4    vcr-bgw01    # node1 hostname
10.124.248.11   vcr-bgw02    # node2 hostname
192.168.10.1    node1        # node1 cluster node name
192.168.10.2    node2        # node2 cluster node name
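
Before going further, it is worth confirming name resolution and reachability over both networks (a suggested check, not part of the original procedure):

# On node1: check the public and interconnect paths to node2
ping -c 2 vcr-bgw02
ping -c 2 node2
# Repeat from node2 against vcr-bgw01 and node1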

2. Start Ricci and Luci Service


On both nodes:
Start the luci and ricci services (these provide the web-based cluster configuration interface):
service ricci start
service luci start
chkconfig ricci on
chkconfig luci on
Set the ricci password (this guide uses ricci123):
passwd ricci
ricci123
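
To confirm both daemons are up, check their listening ports (a quick sketch; 8084 for luci and 11111 for ricci are the RHEL 6 defaults):

netstat -tlnp | egrep ':8084|:11111'
# Expect one LISTEN line for luci (8084) and one for ricci (11111)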

3. Create /etc/cluster/cluster.conf file


3.1 Method 1: By command line
On both nodes:
Create and edit the /etc/cluster/cluster.conf file:
vi /etc/cluster/cluster.conf
<?xml version="1.0"?>
<!-- cluster name: vcr-bgw; cluster nodes: node1, node2 -->
<cluster config_version="2" name="vcr-bgw">
    <clusternodes>
        <clusternode name="node1" nodeid="1">
            <fence/>
        </clusternode>
        <clusternode name="node2" nodeid="2"/>
    </clusternodes>
    <cman two_node="1" expected_votes="1"/>
    <rm/>
    <totem token="170000"/>
    <logging debug="on">
        <logging_daemon debug="on" name="rgmanager" syslog_priority="debug" to_logfile="no"/>
    </logging>
</cluster>
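
Before starting the cluster stack, the file can be checked against the cluster schema (a suggested step; ccs_config_validate ships with the RHEL 6 cluster packages):

ccs_config_validate
# Configuration validates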

3.2 Method 2: By Luci web interface


Connect to https://vcr-bgw01:8084 and build the same configuration through the Luci web interface.

4. Start cman, rgmanager service


On both nodes:
service cman start        (cluster manager: handles membership and quorum)
service rgmanager start   (resource group manager: handles clustered services)
5. Create GFS2 Cluster File System
5.1 Enable Clustering on LVM2
On both nodes:
lvmconf --enable-cluster    (switches LVM to clustered locking so volumes can be shared between nodes)
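
This command sets the LVM locking type to 3 (clustered) in /etc/lvm/lvm.conf, which can be confirmed with:

grep locking_type /etc/lvm/lvm.conf
#     locking_type = 3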

5.2 Start clvmd


On both nodes:
service clvmd start

5.3 Create Physical Volume on Shared Storage


On one node:
pvcreate /dev/mapper/36005076802810c30f00000000000004c
  Physical volume "/dev/mapper/36005076802810c30f00000000000004c" successfully created

5.4 Create Volume Group


On one node:
vgcreate -c y vg1 /dev/mapper/36005076802810c30f00000000000004c
Clustered volume group "vg1" successfully created

5.5 Create Logical Volume


On one node:
lvcreate -n lv_data1 -L 790G vg1
Logical volume "lv_data1" created
lvcreate -n lv_data2 -L 1G vg1
Logical volume "lv_data2" created
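
The new volumes can be verified with lvs (illustrative output; the attribute column will vary by LVM version):

lvs vg1
#   LV       VG   Attr       LSize
#   lv_data1 vg1  -wi-a----- 790.00g
#   lv_data2 vg1  -wi-a-----   1.00g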

5.6 Create GFS2 Filesystem


On one node. The lock_dlm locking protocol lets multiple nodes read and write the same mounted file system safely; note that the second -t in the commands below is mkfs.gfs2's lock-table option (clustername:fsname), not the file-system type flag.
mkfs -t gfs2 -p lock_dlm -t vcr-bgw:mygfs2 -j 2 /dev/mapper/vg1-lv_data1
This will destroy any data on /dev/mapper/vg1-lv_data1.
It appears to contain: symbolic link to `../dm-8'

Are you sure you want to proceed? [y/n] y

Device: /dev/mapper/vg1-lv_data1
Blocksize: 4096
Device Size 790.00 GB (207093760 blocks)
Filesystem Size: 790.00 GB (207093758 blocks)
Journals: 2
Resource Groups: 3160
Locking Protocol: "lock_dlm"
Lock Table: "vcr-bgw:mygfs2"
UUID: 725d0119-2a4c-cecd-66d7-fe9fe4d7c788
mkfs -t gfs2 -p lock_dlm -t vcr-bgw:mygfs2_config -j 2 /dev/mapper/vg1-lv_data2
This will destroy any data on /dev/mapper/vg1-lv_data2.
It appears to contain: symbolic link to `../dm-9'

Are you sure you want to proceed? [y/n] y

Device: /dev/mapper/vg1-lv_data2
Blocksize: 4096
Device Size 1.00 GB (262144 blocks)
Filesystem Size: 1.00 GB (262142 blocks)
Journals: 2
Resource Groups: 4
Locking Protocol: "lock_dlm"
Lock Table: "vcr-bgw:mygfs2_config"
UUID: 53a7ba70-e8b7-814f-a6ac-4443e979fd0c
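
-j 2 creates one journal per cluster node. If a third node is ever added, each file system needs an extra journal first; a sketch using gfs2_jadd from gfs2-utils (run against the mounted file system):

gfs2_jadd -j 1 /u02
gfs2_jadd -j 1 /ftpconfig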

5.7 Mount GFS2 file system


On both nodes:

mkdir /u02
mkdir /ftpconfig
# Mountpoint /u02 is used for ftp data
mount /dev/mapper/vg1-lv_data1 /u02
# Mountpoint /ftpconfig is used to hold ftp config script file
mount /dev/mapper/vg1-lv_data2 /ftpconfig

Add these lines to the end of /etc/fstab:

/dev/mapper/vg1-lv_data1   /u02         gfs2   defaults   0 0
/dev/mapper/vg1-lv_data2   /ftpconfig   gfs2   defaults   0 0

service gfs2 start
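
To confirm both file systems are mounted on each node (a suggested check):

mount -t gfs2
# /dev/mapper/vg1-lv_data1 on /u02 type gfs2 (rw, ...)
# /dev/mapper/vg1-lv_data2 on /ftpconfig type gfs2 (rw, ...)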

6. Configure FTP Service in the Cluster


6.1 Create FTP Group and User
On both nodes:
groupadd ftpgroup
useradd -g ftpgroup ftptest1
chgrp ftpgroup /u02
chown root:ftpgroup /u02
chmod 774 /u02
passwd ftptest1
ftptest1
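
The resulting ownership and mode on the shared data directory can be confirmed with ls (illustrative output; link count, size, and date will differ):

ls -ld /u02
# drwxrwxr-- 2 root ftpgroup 3864 Sep  7 19:00 /u02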

6.2 Create FTP Service Script


From one node (/ftpconfig is a shared GFS2 file system, so the script only needs to be created once):
mkdir /ftpconfig/log/
mkdir /ftpconfig/srv/
touch /ftpconfig/log/srv_ftp.log
vi /ftpconfig/srv/srv_ftp.sh
 
#!/bin/sh
# ---------------------------------------------------------------------------
# Script to stop/start and give the status of the ftp service in the cluster.
# This script is built to receive 3 parameters:
#  - start : Executed by the cluster to start the application(s) or service(s)
#  - stop  : Executed by the cluster to stop the application(s) or service(s)
#  - status: Executed by the cluster every 30 seconds to check service status
# ---------------------------------------------------------------------------
#set -x
CDIR="/ftpconfig"            ; export CDIR     # Root directory for services
CSVC="$CDIR/srv"             ; export CSVC     # Service scripts directory
INST="srv_ftp"               ; export INST     # Service instance name
LOG="$CDIR/log/${INST}.log"  ; export LOG      # Service log file name
HOSTNAME=`hostname`          ; export HOSTNAME # Hostname
VSFTPD="/usr/sbin/vsftpd"    ; export VSFTPD   # Service program name
RC=0                         ; export RC       # Service return code
DASH="---------------------" ; export DASH     # Dash line

# Where the action starts
# ---------------------------------------------------------------------------
case "$1" in
start)  echo -e "\n${DASH}" >> $LOG 2>&1
        echo -e "Starting service $INST on $HOSTNAME at `date`" >> $LOG 2>&1
        echo -e "${VSFTPD}" >> $LOG 2>&1
        ${VSFTPD} >> $LOG 2>&1
        RC=$?
        FPID=`ps -ef | grep -v grep | grep ${VSFTPD} | awk '{ print $2 }' | head -1`
        echo "Service $INST started on $HOSTNAME - PID=${FPID} RC=$RC" >> $LOG
        echo "${DASH}" >> $LOG 2>&1
        ;;
stop)   echo -e "\n${DASH}" >> $LOG 2>&1
        echo -e "Stopping service $INST on $HOSTNAME at `date`" >> $LOG
        ps -ef | grep ${VSFTPD} | grep -v grep >> $LOG 2>&1
        FPID=`ps -ef | grep -v grep | grep ${VSFTPD} | awk '{ print $2 }' | head -1`
        echo -e "Killing PID ${FPID}" >> $LOG 2>&1
        kill $FPID > /dev/null 2>&1
        echo -e "Service $INST is stopped ..." >> $LOG 2>&1
        RC=0
        echo "${DASH}" >> $LOG 2>&1
        ;;
status) COUNT=`ps -ef | grep ${VSFTPD} | grep -v grep | wc -l`
        FPID=`ps -ef | grep -v grep | grep ${VSFTPD} | awk '{ print $2 }' | head -1`
        echo -n "`date` Service $INST ($COUNT) on $HOSTNAME" >> $LOG 2>&1
        if [ $COUNT -gt 0 ]
        then echo " - PID=${FPID} - OK" >> $LOG 2>&1
             RC=0
        else echo " - NOT RUNNING" >> $LOG 2>&1
             ps -ef | grep -i ${VSFTPD} | grep -v grep >> $LOG 2>&1
             RC=1
        fi
        ;;
esac
exit $RC

chmod 774 /ftpconfig/srv/srv_ftp.sh
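
Before handing the script to rgmanager, it is worth exercising it by hand (a suggested smoke test; an exit code of 0 from status means vsftpd is running):

/ftpconfig/srv/srv_ftp.sh start
/ftpconfig/srv/srv_ftp.sh status ; echo "status RC=$?"
/ftpconfig/srv/srv_ftp.sh stop
tail -5 /ftpconfig/log/srv_ftp.log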

7. Update Cluster Configuration


7.1 Edit /etc/cluster/cluster.conf file
On both nodes:
Edit the /etc/cluster/cluster.conf file (config_version must be higher than the previous version, so it is bumped to 3 here):
vi /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="vcr-bgw">
    <clusternodes>
        <clusternode name="node1" nodeid="1"/>
        <clusternode name="node2" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices/>
    <rm>
        <resources>
            <ip address="10.124.248.5/25"/>
            <script file="/ftpconfig/srv/srv_ftp.sh" name="ftp_resource"/>
        </resources>
        <failoverdomains>
            <failoverdomain name="fail_over_domain1" ordered="1">
                <failoverdomainnode name="node1" priority="1"/>
                <failoverdomainnode name="node2" priority="2"/>
            </failoverdomain>
        </failoverdomains>
        <service domain="fail_over_domain1" name="srv_ftp" recovery="relocate">
            <ip ref="10.124.248.5/25">
                <script ref="ftp_resource"/>
            </ip>
        </service>
    </rm>
</cluster>
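
The edited file can be validated again with ccs_config_validate; when cman is already running, cman_tool version -r pushes the new config_version to the other node (an alternative to the full restart described in 7.2 below):

ccs_config_validate
cman_tool version -r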

7.2 Restart Cluster Services

On both nodes:
Stop services:
service rgmanager stop
service gfs2 stop
service clvmd stop
service cman stop
Start services:
service cman start (cluster manager)
service clvmd start (clustered logical volume manager)
service gfs2 start (GFS2 mount service)
service rgmanager start (cluster resource group manager)

7.3 Configure all services to start at boot


chkconfig cman on
chkconfig rgmanager on
chkconfig clvmd on
chkconfig gfs2 on

III. Monitor and Administer the Cluster


1. Verify Cluster Status
To verify the status of nodes, quorum, and services:
clustat
Cluster Status for vcr-bgw @ Sat Sep  7 19:18:18 2013
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 node1                                  1 Online, Local, rgmanager
 node2                                  2 Online, rgmanager
 /dev/block/253:8                       0 Online, Quorum Disk

 Service Name              Owner (Last)              State
 ------- ----              ------ ------             -----
 service:srv_ftp           node1                     started
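
With srv_ftp started on node1, an end-to-end login through the virtual IP is a useful check (a sketch; it assumes vsftpd permits local users, the default in the shipped RHEL 6 configuration):

# List the home directory of ftptest1 via the cluster virtual IP
curl -u ftptest1:ftptest1 ftp://10.124.248.5/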

Verify cluster service status


service rgmanager status
rgmanager (pid 32495) is running...
service gfs2 status
Configured GFS2 mountpoints:
/u02
/ftpconfig
Active GFS2 mountpoints:
/u02
/ftpconfig
service clvmd status
clvmd (pid 32330) is running...
Clustered Volume Groups: vg_gfs2_1
Active clustered Logical Volumes: lv_gfs2_1 lv_gfs2_2
service cman status
cluster is running
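
cman_tool offers a lower-level view of membership and quorum (suggested supplementary checks):

cman_tool status   # quorum, votes, and cluster generation
cman_tool nodes    # per-node membership state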

2. Manage Cluster Service


Enable service
clusvcadm -e srv_ftp
Disable service
clusvcadm -d srv_ftp
Relocate service
clusvcadm -r srv_ftp
Stop service
clusvcadm -s srv_ftp
Freeze service
clusvcadm -Z srv_ftp
Unfreeze service
clusvcadm -U srv_ftp
Migrate service (live migration; applies to virtual machine services only)
clusvcadm -M srv_ftp
Restart service
clusvcadm -R srv_ftp
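
For example, to move the FTP service to node2 and confirm the relocation (the -m flag names the target member):

clusvcadm -r srv_ftp -m node2
clustat | grep srv_ftp
# service:srv_ftp    node2    started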

3. Start and Stop Cluster Software


3.1 Start Cluster Software
service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Tuning DLM kernel config... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]
service clvmd start
Starting clvmd:
Activating VG(s): 7 logical volume(s) in volume group "vg_vcr-bgw02" now active
  1 logical volume(s) in volume group "vg_gfs2_1" now active
[ OK ]
service gfs2 start
Mounting GFS2 filesystem (/u02): [ OK ]
Mounting GFS2 filesystem (/ftpconfig): [ OK ]
service rgmanager start
Starting Cluster Service Manager: [ OK ]

3.2 Stop Cluster Software


service rgmanager stop
Stopping Cluster Service Manager: [ OK ]
service gfs2 stop
Unmounting GFS2 filesystem (/u02): [ OK ]
Unmounting GFS2 filesystem (/ftpconfig): [ OK ]
service clvmd stop
Deactivating clustered VG(s): 0 logical volume(s) in volume group "vg_gfs2_1" now active
[ OK ]
service cman stop
Stopping cluster:
Leaving fence domain... [ OK ]
Stopping gfs_controld... [ OK ]
Stopping dlm_controld... [ OK ]
Stopping fenced... [ OK ]
Stopping cman... [ OK ]
Waiting for corosync to shutdown:[ OK ]
Unloading kernel modules... [ OK ]
Unmounting configfs... [ OK ]

4. Log Checking
Check the following logs to investigate cluster issues:

File Name                            Description                         Note
/var/log/messages                    All cluster and system events
/var/log/cluster/corosync.log        cman / corosync events
/var/log/cluster/dlm_controld.log    DLM events
/var/log/cluster/gfs_controld.log    GFS events
/var/log/cluster/rgmanager.log       rgmanager events
/var/log/cluster/fenced.log          Fencing events
/var/log/cluster/qdiskd.log          Quorum disk daemon events
/var/log/cluster/debug               Debug log for troubleshooting      Disabled by default
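
During a failover test, tailing the resource manager log on both nodes shows the relocation in real time (a suggested habit):

tail -f /var/log/cluster/rgmanager.log
# Or scan all cluster logs for recent errors:
grep -i error /var/log/cluster/*.log | tail -20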
