Red Hat Cluster Configuration and Management

Available resources:
• Three servers, of which two can be used as cluster nodes and one as the shared storage.

• DB1
IP address – 192.168.188.173
NIC cards – 2
RAM – 3 GB
OS – 32-bit RHEL 5.2
Kernel – 2.6.18-92.el5

• DB2
IP address – 192.168.188.153
NIC cards – 2
RAM – 3 GB
OS – 32-bit RHEL 5.2
Kernel – 2.6.18-92.el5

• Openfiler
IP address – 192.168.188.209
NIC cards – 2
Kernel – 2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686 (SMP)
OS – Openfiler 2.3

Setting Up Hardware
Setting up hardware consists of connecting cluster nodes to other hardware required to run a
Red Hat Cluster. The amount and type of hardware varies according to the purpose and
availability requirements of the cluster.

• Cluster nodes — computers that are capable of running Red Hat Enterprise Linux 5 software, with at least 1 GB of RAM. The maximum number of nodes supported in a Red Hat Cluster is 16.

• Ethernet switch or hub for public network — this is required for client access to the cluster.

• Ethernet switch or hub for private network — this is required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.

• Storage (here we used another server) — some type of storage is required for a cluster. The type required depends on the purpose of the cluster.

Configuring Red Hat Cluster Software – Conga


Conga is an integrated set of software components that provides centralized configuration and management of Red Hat clusters and storage. Conga provides the following major features:

• One Web interface for managing cluster and storage

• Automated Deployment of Cluster Data and Supporting Packages

• Easy Integration with Existing Clusters

• No Need to Re-Authenticate

• Integration of Cluster Status and Logs

• Fine-Grained Control over User Permissions

Packages needed: luci and ricci


Considerations for Using Conga

• There are no explicit private interconnects.

• It has a single XML configuration file: /etc/cluster/cluster.conf

• It requires three services:

cman

rgmanager

clvmd (assuming you are using LVM)
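
These correspond to init scripts of the same names on RHEL 5; once the packages described below are installed, a quick sketch for enabling them at boot:

# chkconfig cman on
# chkconfig clvmd on
# chkconfig rgmanager on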

Installing packages
Conga will install packages for you using yum. However, it may be better to pre-install the packages so that you decide on the versions you are using.

If using kickstart, add the following package groups to your ks.cfg:

@clustering
@cluster-storage
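
For example, a minimal %packages stanza in ks.cfg would look like the following (a sketch; @base stands in for whatever base groups you already install):

%packages
@base
@clustering
@cluster-storage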

This will install the following packages.

Clustering:

• cluster-cim
• cluster-snmp
• ipvsadm
• luci
• modcluster
• piranha
• rgmanager
• ricci
• system-config-cluster

Cluster-storage:

• gfs-utils
• gnbd
• kmod-gfs
• kmod-gnbd
• lvm2-cluster
Enabling IP Ports on Cluster Nodes

IP Port Number         Protocol   Component
5404, 5405             UDP        cman (Cluster Manager)
11111                  TCP        ricci (part of Conga remote agent)
14567                  TCP        gnbd (Global Network Block Device)
16851                  TCP        modclusterd (part of Conga remote agent)
21064                  TCP        dlm (Distributed Lock Manager)
50006, 50008, 50009    TCP        ccsd (Cluster Configuration System daemon)
50007                  UDP        ccsd (Cluster Configuration System daemon)
8084                   TCP        luci (Conga user interface server)
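
For example, on a RHEL 5 node these ports could be opened with iptables along the following lines (a sketch; adapt to your existing firewall policy before saving):

# iptables -I INPUT -p udp -m multiport --dports 5404,5405,50007 -j ACCEPT
# iptables -I INPUT -p tcp -m multiport --dports 11111,14567,16851,21064,8084 -j ACCEPT
# iptables -I INPUT -p tcp -m multiport --dports 50006,50008,50009 -j ACCEPT
# service iptables save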

Installing the group packages for RHEL cluster

yum groupinstall -y Clustering "Cluster Storage"

Starting luci and ricci

To administer Red Hat Clusters with Conga, install and run luci and ricci as follows:
1. At each node to be administered by Conga, install the ricci agent. For example:

# yum install ricci

2. At each node to be administered by Conga, start ricci. For example:

# service ricci start

Starting ricci: [ OK ]

3. Select a computer to host luci and install the luci software on that computer. For
example:
# yum install luci

4. At the computer running luci, initialize the luci server using the luci_admin init
command.
For example:

# luci_admin init
Initializing the Luci server
Creating the 'admin' user
Enter password: <Type password and press ENTER.>
Confirm password: <Re-type password and press ENTER.>
Please wait...
The admin password has been successfully set.
Generating SSL certificates...
Luci server has been successfully initialized
Restart the Luci server for changes to take effect
eg. service luci restart

5. Start luci using service luci restart. For example:

# service luci restart


Shutting down luci: [ OK ]
Starting luci: generating https SSL certificates... done
[ OK ]
Please, point your web browser to https://nano-01:8084 to access luci

6. At a Web browser, place the URL of the luci server into the URL address box and
click Go (or the equivalent). The URL syntax for the luci server is
https://luci_server_hostname:8084.

• Change the channel of the server after registering it in RHN.


• Select the system that you want to use as a cluster node, go to Alter Channel Subscriptions, and select:

RHEL Cluster-Storage (v. 5 for 32-bit x86)

RHEL Clustering (v. 5 for 32-bit x86)

Install system-config-cluster
Conga should install the cluster packages for you. However, you may also wish to install system-config-cluster:
# yum install system-config-cluster
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.as29550.net
* updates: mirror.as29550.net
* addons: mirror.as29550.net
* extras: mirror.as29550.net
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package system-config-cluster.noarch 0:1.0.55-1.0 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================
 Package                       Arch       Version         Repository     Size
==============================================================================
Installing:
 system-config-cluster         noarch     1.0.55-1.0      base          287 k

Transaction Summary
==============================================================================
Install      1 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 287 k


Is this ok [y/N]: y
Downloading Packages:
system-config-cluster-1.0.55-1.0.noarch.rpm               | 287 kB     00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : system-config-cluster [1/1]

Installed: system-config-cluster.noarch 0:1.0.55-1.0


Complete!
Creating a Cluster

Hostname resolution

Make sure your Conga server (luci) and your cluster nodes can each look the others up using DNS or /etc/hosts.
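
For example, an /etc/hosts sketch for the machines listed earlier (the short names db1, db2 and openfiler are assumed here, not taken from the actual setup):

192.168.188.173    db1
192.168.188.153    db2
192.168.188.209    openfiler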

Install / set up Conga

Install and set up Conga (luci and ricci) as described above.

Enable cluster-based locking in lvm.conf

This makes sure that LVM is cluster-safe and that any modifications to volumes are performed using the Distributed Lock Manager (DLM).

On each node in the cluster do the following to set locking_type = 3.

# vi /etc/lvm/lvm.conf
locking_type = 3
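
Alternatively, the lvmconf helper shipped in the lvm2-cluster package makes the same change non-interactively; a quick sketch:

# lvmconf --enable-cluster
# grep locking_type /etc/lvm/lvm.conf
locking_type = 3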

When the cluster is created, the clvmd service will be started, which will read this file.

Creating a Cluster

Assuming Conga is already set up, use it to create the cluster. This will create an initial /etc/cluster/cluster.conf file for you. Alternatively, you could create your own cluster.conf file with a text editor, put it on each node, and start the cluster services yourself; you can also use ccs_tool. However, Conga is by far the easiest method.

1. As administrator of luci, select the cluster tab.

2. Click Create a New Cluster.

3. At the Cluster Name text box, enter a cluster name. The cluster name cannot exceed 15 characters. Add the node name and password for each cluster node: enter the node name for each node in the Node Hostname column and the root password for each node in the Root Password column. Check the Enable Shared Storage Support checkbox if clustered storage is required.

4. Click Submit. Clicking Submit causes the following actions:

a. Cluster software packages to be downloaded onto each cluster node.

b. Cluster software to be installed onto each cluster node.

c. Cluster configuration file to be created and propagated to each node in the cluster.

d. The cluster to be started.

A progress page shows the progress of those actions for each node in the cluster.

When the process of creating a new cluster is complete, a page is displayed providing a

configuration interface for the newly created cluster.

After the creation has completed you will have a simple /etc/cluster/cluster.conf file and the
cman service will have been started.

The cluster.conf file

After creating the cluster with Conga, you will have a cluster.conf file similar to the following:

<?xml version="1.0"?>
<cluster alias="linuxcluster2" config_version="1" name="Sierra">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="192.168.188.153" nodeid="1" votes="1"/>
                <clusternode name="192.168.188.173" nodeid="2" votes="1"/>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm/>
</cluster>
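
The two_node="1" and expected_votes="1" attributes tell cman that this two-node cluster remains quorate with a single vote; without them, the cluster would lose quorum as soon as one node went down.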

Check cluster services

Check that the main cluster services are enabled at this point.

# chkconfig --list modclusterd


modclusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
# chkconfig --list cman
cman 0:off 1:off 2:on 3:on 4:on 5:on 6:off
# chkconfig --list clvmd
clvmd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
# chkconfig --list rgmanager
rgmanager 0:off 1:off 2:on 3:on 4:on 5:on 6:off
# service modclusterd status
modclusterd (pid 3264) is running...
# service cman status
cman is running.
# service clvmd status
clvmd (pid 30846) is running...
active volumes: LogVol00 LogVol01
# service rgmanager status
clurgmgrd (pid 30916 30915) is running...

Check the cluster status

At this point we can run clustat to check that the cluster is up.

# clustat
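
With both nodes joined, the output should resemble the following (an illustrative sketch, not captured from this cluster):

Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 192.168.188.153                     1    Online, Local
 192.168.188.173                     2    Online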

Shared Storage – Openfiler Setup (Graphical Installation)

The use of software RAID, or software Logical Volume Management (LVM), is not supported on shared
storage. The Red Hat Cluster Manager requires that all cluster members have simultaneous access to the
shared storage. These products typically do not allow for online repair of a failed member. Only host
RAID adapters listed in the Red Hat Hardware Compatibility List are supported.

Here, we’ve used Openfiler for creating the shared storage.

System Requirements

Openfiler has the following hardware requirements to be successfully installed:

1. x86 or x64 based computer with at least 512MB RAM and 1GB storage for the OS
image.
2. At least one supported network interface card
3. A CDROM or DVD-ROM drive if you are performing a local install
4. A supported disk controller with data drives attached.

Installation

The installation process is described with screenshots for illustrative purposes. If you are unable to proceed at any point, or you make a mistake, use the Back button to return to a previous step of the installation.

Starting the Installation


To begin the installation, insert the Openfiler disk into your CD/DVD-ROM drive and ensure
your system is configured to boot off the CD/DVD-ROM drive. After the system POSTs, the
installer boot prompt will come up. At this point, just hit the Enter key to proceed.

After a few moments, the first screen of the installer will be presented. If at this point your screen
happens to be garbled, it is likely that the installer has been unable to automatically detect your
graphics subsystem hardware. You may restart the installation process in text-mode and
proceed accordingly in that case. The first screen of the installer is depicted below. The next step
is to click on the Next button to proceed with the installation.

Keyboard Selection

This screen deals with keyboard layout selection. Use the scroll bar on the right to scroll up and down and select your desired keyboard layout from the list. Once you are satisfied with your selection, click the Next button to proceed.

Disk Partitioning Setup

Next comes the disk partitioning. You must select manual disk partitioning as it ensures you will end up with a bootable system and with the correct partitioning scheme. Openfiler does not support automatic partitioning and you will be unable to configure data storage disks in the Openfiler graphical user interface if you select automatic partitioning. Click the Next button once you have selected the correct radio button option.
Disk Setup

On the disk setup screen, if you have any existing partitions on the system, please delete them. DO NOT DELETE ANY EXISTING OPENFILER DATA PARTITIONS UNLESS YOU NO LONGER REQUIRE THE DATA ON THEM. To delete a partition, highlight it in the list of partitions and click the Delete button. You should now have a clean disk on which to create your partitions.

You need to create three partitions on the system in order to proceed with the installation:

1. "/boot" - this is where the kernel will reside and the system will boot from

2. "swap" - this is the swap partition for memory swapping to disk

3. "/" - this is the system root partition where all system applications and libraries will be installed

Create /boot Partition

Proceed by creating a boot partition. Click on the New button. You will be presented with a form with several fields and checkboxes. Enter the partition mount path "/boot" and then select the disk on which to create the partition. In the illustrated example, this disk is hda (the first IDE hard disk). Your setup will very likely be different as you may have several disks of different types. You should make sure that only the first disk is checked and no others. If you are installing on a SCSI-only system, this disk will be designated sda. If you are installing on a system that has both IDE and SCSI disks, please select hda if you intend to use the IDE disk as your boot drive.

The following is a list of all entries required to create the boot partition:

1. Mount Point: /boot

2. Filesystem Type: ext3

3. Allowable Drives: select one disk only. This should be the first IDE (hda) or first SCSI (sda) disk.

4. Size (MB): 100 (this is the size in megabytes; allocate 100 MB by entering "100")

5. Additional Size Options: select the Fixed Size radio button from the options.

6. Force to be a primary partition: checked (select this checkbox to force the partition to be created as a primary partition)

After configuration, your settings should resemble the following illustration. Once you are satisfied with your entries, click the OK button to create the partition.

Create / (root) Partition

Proceed by creating a root partition. Click on the New button. You will be presented with the same form as previously when creating the boot partition. The details are identical to what was entered for the /boot partition except this time the Mount Point: should be "/" and the Size (MB): should be 2048 MB or, at a minimum, 1024 MB.

Once you are satisfied with your entries, click the OK button to proceed.

Create Swap Partition

Proceed by creating a swap partition. Click on the New button. You will be presented with the same form as previously when creating the boot and root partitions. The details are identical to what was entered for the boot partition except this time the Mount Point: should be swap. Use the drop-down list to select a swap partition type. The Size (MB): of the partition should be at least 1024 MB and need not exceed 2048 MB.

Once you are satisfied with your entries, proceed by clicking the OK button to create the partition. You should now have a set of partitions ready for the Openfiler operating system image to install to. Your disk partition scheme should resemble the following illustration.

You have now completed the partitioning tasks of the installation process and should click Next to proceed to the next step.

Network Configuration

In this section you will configure network devices, the system hostname and DNS parameters. You will need to configure at least one network interface card in order to access the Openfiler web interface and to serve data to clients on a network. In the unlikely event that you will be using DHCP to configure the network address, you can simply click Next and proceed to the next stage of the installation process.

If on the other hand you wish to define a specific IP address and hostname, click the Edit button at the top right corner of the screen in the Network Devices section. Network interface devices are designated ethX where X is a number starting at 0. The first network interface device is therefore eth0. If you have more than one network interface device, they will all be listed in the Network Devices section.

When you click the Edit button, a new form will pop up for you to configure the network device in question. As you do not wish to use DHCP for this interface, uncheck the Configure Using DHCP checkbox. This will then allow you to enter a network IP address and Netmask in the appropriate form fields. Enter your desired settings and click OK to proceed.

Once you have configured a network IP address, you may now enter a hostname for the system. The default hostname localhost.localdomain is not suitable, and you will need to enter a proper hostname for the system. This will be used later when you configure the system to participate on your network either as an Active Directory / Windows NT PDC client or as an LDAP domain member server. You will also, at this point, need to configure the gateway IP address and DNS server IP addresses. To complete this task you will need the following information:

1. Desired hostname - this is the name you will call the system. Usually this will be a fully qualified hostname, e.g. homer.the-simpsons.com.

2. Gateway IP address - this is the IP address of your network gateway to allow routing to the Internet.

3. Primary DNS Server - this is the DNS server on your network. Note that if you intend to use Active Directory or LDAP as your authentication mechanism, you will need to assign a functional DNS IP address so that the authentication mechanism is able to resolve the authentication server hostnames.

4. Secondary/Tertiary DNS Server - enter a second and third DNS server if they are available on your network.

The following illustration shows an example where a hostname has been assigned, and gateway IP, primary and secondary DNS information has also been entered. Once you are satisfied with your entries, please proceed by clicking the Next button.

Time Zone Selection

Set the default system time zone. You can achieve this by following the instructions on the left side of the screen. If your system BIOS has been configured to use UTC, check the UTC checkbox at the bottom of the screen and click Next to proceed.

Set Root Password

You need to configure a root password for the system. The root password is the superuser administrator password. With the root account, you can log into the system to perform any administrative tasks that are not offered via the web interface. Select a suitable password and enter it twice in the provided textboxes. When you are satisfied with your entries, click Next to proceed with the installation process.

NB: the root password is meant for logging into the console of the Openfiler server. The default username and password for the Openfiler web management GUI are "openfiler" and "password" respectively.

About To Install

This screen informs you that installation configuration has been completed and the installer is awaiting your input to start the installation process, which will format disks, copy data to the system and configure system parameters such as setting up the boot loader and adding system users. Click Next if you are satisfied with the entries you have made in the previous screens.

Note

You cannot go back to previous screens once you have gone past this point. The installer will erase any data on the partitions you defined in the partitioning section.

Installation

Once you have clicked Next in the preceding section, the installer will begin the installation process. The following screenshots depict what happens at this point.

Installation Complete

Once the installation has completed, you will be presented with a congratulatory message. At this point you simply need to click the Reboot button to finish the installer and boot into the installed Openfiler system.

Note

After you click Reboot, remove the installation CD from the CD/DVD-ROM drive.

Once the system boots up, start configuring Openfiler by pointing your browser at the hostname or IP address of the Openfiler system. The interface is accessible via https on port 446, e.g. https://homer.the-simpsons.com:446.

Management Interface: https://<ip of openfiler host>:446

Administrator Username: openfiler

Administrator Password: password
You need to configure Openfiler as an iSCSI target. What does this mean? Basically, Openfiler will be acting as the storage server, exporting block devices to the cluster nodes (the initiators).

• First create a physical volume. Go to Volumes and click on the link (see image below).

• Then create a partition on this volume. Click on /dev/sdb.


You will get to a page that looks like the one below. Click the Create button.

• Then create a volume group. Give it a name and click the Add Volume Group button.

• Then add a volume: go to Volumes, then Add Volume.


You will then be directed to the page where you can fill in the name of your iSCSI volume and also a description. You then must pull the slider bar to the right to specify how big the volume should be. Don't forget to use the drop-down box to select iSCSI as the volume type.

Go to the General tab and edit the properties of the volume.

In the General tab, click on Network Setup. Fill in the IP address of the Openfiler VM, set the subnet mask, select Share, and then click the Update button.
Enable the iSCSI target

To do that, go to Services and click the link to enable the iSCSI target service.
Configuring shared storage on both nodes

• Install the iscsi-initiator-utils RPM on both nodes:

yum install iscsi-initiator-utils-6.2.0.872-6.el5

After you have installed your iSCSI initiator, you have to configure it. The first step is to enter your authentication information, which is stored in the /etc/iscsi/iscsid.conf file. Open this file in your favorite text editor. Assuming your server requires authentication, there are four lines you need to watch for; they are as follows:

node.session.auth.username = test
node.session.auth.password = test

discovery.sendtargets.auth.username = test
discovery.sendtargets.auth.password = test
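
For these credentials to take effect, the corresponding authmethod lines in iscsid.conf (commented out by default) must also be enabled; a sketch, assuming CHAP for both discovery and session:

node.session.auth.authmethod = CHAP
discovery.sendtargets.auth.authmethod = CHAP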

[root@db2 ~]# /etc/init.d/iscsi start

• Execute the following set of commands on both nodes to discover and log in to the shared storage:

iscsiadm -m discovery -t st -p 192.168.188.209 -I default -P 1

iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.345e82b7ef0a -p 192.168.188.209:3260 --login
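
The discovery command should report the target and its portal, along the lines of the following (an illustrative sketch; the exact layout depends on the iscsiadm version):

Target: iqn.2006-01.com.openfiler:tsn.345e82b7ef0a
    Portal: 192.168.188.209:3260,1
        Iface Name: default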

[root@db1 httpd]# fdisk -l

Disk /dev/sda: 160.0 GB, 160000000000 bytes


255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1434    11518573+  83  Linux
/dev/sda2            1435        6456    40339215   83  Linux
/dev/sda3            6457       10660    33768630   82  Linux swap / Solaris
/dev/sda4           10661       19452    70621740    5  Extended
/dev/sda5           10661       11425     6144831   83  Linux
/dev/sda6           11426       15629    33768598+  83  Linux
/dev/sda7           15630       16139     4096543+  83  Linux
/dev/sda8           16140       18942    22515066   83  Linux
/dev/sda9           18943       19197     2048256   83  Linux
/dev/sda10          19198       19452     2048256   83  Linux

Disk /dev/sdb: 369.8 GB, 369836949504 bytes
255 heads, 63 sectors/track, 44963 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
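
Before the GFS file system can be created, the clustered volume group and logical volume shown in the pvs/vgs/lvs output further below must exist on the new disk; a sketch of that LVM setup (the names vg1 and lvo are taken from that output, the size is assumed):

# pvcreate /dev/sdb
# vgcreate -c y vg1 /dev/sdb
# lvcreate -n lvo -L 78.12G vg1

Then create the GFS file system, using the cluster name (Sierra) as the lock-table prefix (-j sets the number of journals; at least one is needed per node that will mount the file system):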

gfs_mkfs -p lock_dlm -t Sierra:storage1 -j 8 /dev/vg1/lvo

• Now make an entry in /etc/fstab as follows:

/dev/vg1/lvo /var/www/html/ gfs defaults 0 0
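
With the fstab entry in place, the gfs init script will handle the mount; a sketch for mounting now and at every boot (assuming the mount point does not yet exist):

# mkdir -p /var/www/html
# chkconfig gfs on
# service gfs start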

[root@db2 ~]# /etc/init.d/gfs status


Configured GFS mountpoints:
/var/www/html/
Active GFS mountpoints:
/var/www/html

[root@db1 ~]# lvs


LV VG Attr LSize Origin Snap% Move Log Copy% Convert
lvo vg1 -wi-a- 78.12G

[root@db1 ~]# vgs


VG #PV #LV #SN Attr VSize VFree
vg1 1 1 0 wz--nc 344.43G 266.31G

[root@db1 ~]# pvs


PV VG Fmt Attr PSize PFree
/dev/sdb vg1 lvm2 a- 344.43G 266.31G
[root@db1 ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg1" using metadata type lvm2

Note:

Execute the following set of commands on both nodes to remove any previous iSCSI sessions or discovery records.

Log out of all sessions:

iscsiadm -m node --logoutall=all

This command will remove the record:

iscsiadm -m session --sid=1 --op=delete

Run discovery again to re-create the records:

iscsiadm -d 9 -m discovery -t sendtargets -p 192.168.1.5

Adding the fence device in Conga



Installing and Configuring the Apache HTTP Server


The Apache HTTP Server must be installed and configured on all nodes in the assigned failover
domain, if used, or in the cluster. The basic server configuration must be the same on all nodes
on which it runs for the service to fail over correctly. The following example shows a basic
Apache HTTP Server installation that includes no third-party modules or performance tuning.
On all nodes in the cluster (or nodes in the failover domain, if used), install the httpd RPM
package.

yum install httpd-2.2.3-11.el5_1.3


To configure the Apache HTTP Server as a cluster service, perform the following tasks:

1. Edit the /etc/httpd/conf/httpd.conf configuration file and customize the file


according to your configuration. For example:

• Specify the directory that contains the HTML files. Also specify this mount point when adding the service to the cluster configuration. It is only required to change this field if the mount point for the web site's content differs from the default setting of /var/www/html/. For example:

DocumentRoot "/mnt/httpdservice/html"

• Specify a unique IP address on which the service will listen for requests. For example:

Listen 192.168.188.201:80

This IP address then must be configured as a cluster resource for the service using the Cluster
Configuration Tool.

• If the script directory resides in a non-standard location, specify the directory that contains the CGI programs. For example:

ScriptAlias /cgi-bin/ "/mnt/httpdservice/cgi-bin/"

• Specify the path that was used in the previous step, and set the access permissions to default to that directory. For example:

<Directory "/mnt/httpdservice/cgi-bin">
AllowOverride None
Options None
Order allow,deny
Allow from all
</Directory>

Additional changes may need to be made to tune the Apache HTTP Server or add
module functionality.

The standard Apache HTTP Server start script, /etc/rc.d/init.d/httpd, is also


used within the cluster framework to start and stop the Apache HTTP Server on
the active cluster node. Accordingly, when configuring the service, specify this
script by adding it as a Script resource in the Cluster Configuration Tool.

2. Copy the configuration file over to the other nodes of the cluster (or nodes of the failover
domain, if configured).

Before the service is added to the cluster configuration, ensure that the Apache HTTP Server
directories are not mounted. Then, on one node, invoke the Cluster Configuration Tool to add
the service, as follows. This example assumes a failover domain named httpd-domain was
created for this service.

1. Add the init script for the Apache HTTP Server service.

• Select the Resources tab and click Create a Resource. The Resources Configuration properties dialog box is displayed.

• Select Script from the drop-down menu.

• Enter a Name to be associated with the Apache HTTP Server service.

• Specify the path to the Apache HTTP Server init script (for example, /etc/rc.d/init.d/httpd) in the File (with path) field.

• Click OK.

2. Add a device for the Apache HTTP Server content files and/or custom scripts.

• Click Create a Resource.

• In the Resource Configuration dialog, select File System from the drop-down menu.

• Enter the Name for the resource (for example, httpd-content).

• Choose ext3 from the File System Type drop-down menu.

• Enter the mount point in the Mount Point field (for example, /var/www/html/).

• Enter the device special file name in the Device field (for example, /dev/sda3).

3. Add an IP address for the Apache HTTP Server service.


• Click Create a Resource.

• Choose IP Address from the drop-down menu.

• Enter the IP Address to be associated with the Apache HTTP Server service.

• Make sure that the Monitor Link checkbox is left checked.

• Click OK.

4. Click the Services property.


5. Create the Apache HTTP Server service.

• Click Create a Service. Type a Name for the service in the Add a Service dialog.

• In the Service Management dialog, select a Failover Domain from the drop-down menu or leave it as None.

• Click the Add a Shared Resource to this service button. From the available list, choose each resource that you created in the previous steps. Repeat this step until all resources have been added.

• Click OK.

6. Choose File => Save to save your changes.

References
http://www.openfiler.com/learn/how-to/graphical-installation
http://www.everythingvm.com/content/connecting-storage-systems-using-iscsi-
nfs-and-cifs-smb

http://www.vladan.fr/how-to-configure-openfiler-iscsi-storage-for-use-with-
vmware-esx/

http://www.vladan.fr/how-to-connect-esx4-vsphere-to-openfiler-iscsi-nas/

http://sources.redhat.com/cluster/conga/doc/user_manual.htm

http://rhel-cluster.blogspot.com/

https://access.redhat.com/kb/docs/DOC-9826

http://www.itchythinking.com/itchythinking/knowledge/node/81
http://www.vladan.fr/how-to-configure-openfiler-iscsi-storage-for-use-with-vmware-esx/
