Clustered Samba Configuration
Mark Heslin
Principal Software Engineer
Version 1.0
October 2012
1801 Varsity Drive
Raleigh NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park NC 27709 USA
Linux is a registered trademark of Linus Torvalds. Red Hat, Red Hat Enterprise Linux and the Red Hat
"Shadowman" logo are registered trademarks of Red Hat, Inc. in the United States and other
countries.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
UNIX is a registered trademark of The Open Group.
Intel, the Intel logo and Xeon are registered trademarks of Intel Corporation or its subsidiaries in the
United States and other countries.
All other trademarks referenced herein are the property of their respective owners.
© 2012 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set
forth in the Open Publication License, V1.0 or later (the latest version is presently available at
http://www.opencontent.org/openpub/).
The information contained herein is subject to change without notice. Red Hat, Inc. shall not be liable
for technical or editorial errors or omissions contained herein.
Distribution of modified versions of this document is prohibited without the explicit permission of Red
Hat Inc.
Distribution of this work or derivative of this work in any standard (paper) book form for commercial
purposes is prohibited unless prior permission is obtained from Red Hat Inc.
Table of Contents
1 Executive Summary
2 Component Overview
2.1 Red Hat Enterprise Linux 6
2.2 High Availability Add-On
2.2.1 Quorum
2.2.2 Resource Group Manager
2.2.3 Fencing
2.2.3.1 IPMI
2.2.4 CMAN
2.2.5 Conga
2.2.5.1 Luci
2.2.5.2 Ricci
2.2.6 CCS
2.3 Resilient Storage Add-On
2.3.1 GFS2
2.3.2 Cluster Logical Volume Manager (CLVM)
2.3.3 CTDB (Clustered Samba)
2.3.3.1 Lock Volume
2.3.3.2 Data Volume
2.4 DM Multipath
2.5 Samba
2.6 SMB/CIFS
2.7 Winbind
2.8 Solution Stack
2.8.1 Products and Components
2.8.2 Package Details
2.9 Component Log Files
6 Windows Active Directory Integration
6.1 Overview
6.1.1 Configuration Summary
6.1.2 Cluster Configuration with Active Directory Integration
6.1.3 Authentication and ID Components
6.2 Integration Tasks
6.2.1 Synchronize Time Service
6.2.2 Configure DNS
6.2.3 Update Hosts File
6.2.4 Install/Configure Kerberos Client
6.2.5 Install oddjob-mkhomedir
6.2.6 Configure Authentication
6.2.7 Verify/Test Active Directory
6.2.8 Modify Samba Configuration
6.2.9 Verification of Services
6.2.10 Configure CTDB Winbind Management (optional)
7 Conclusion
Appendix A: References
Acknowledgements
1 Executive Summary
This reference architecture details the deployment, configuration and management of highly
available file shares using clustered Samba on Red Hat Enterprise Linux 6. The most
common administration tasks are included - starting/stopping nodes, adding/removing nodes
and file shares. For environments interested in integrating clustered Samba into Windows
Active Directory domains, a separate section is provided. Active Directory integration permits
clients to access Samba cluster file shares using existing Active Directory user accounts and
authentication methods.
Clustered Samba extends the benefits of Samba file sharing by providing clients with
concurrent access to highly available file shares. In the event of a cluster node fault or
failure, client sessions through the remaining nodes maintain access to the highly available
file shares. Client sessions through a faulty node are not maintained and require a reconnect
due to client protocol limitations.
Clustered Samba enhances Samba functionality through the use of two of Red Hat's premier
Add-On products:
• High Availability (HA) Add-On
• Resilient Storage (RS) Add-On
The High Availability Add-On provides reliability, availability and scalability (RAS) to critical
production services by eliminating single points of failure, and providing automatic failover of
those services in the event of a cluster node failure or error condition. The Resilient Storage
Add-On extends these capabilities by providing a cluster logical volume manager (CLVM), a
cluster file system (GFS2) and a cluster implementation of the Samba TDB database (CTDB).
In combination, the High Availability Add-On and Resilient Storage Add-On provide the
underlying framework for configuring clustered Samba and deploying highly-available file
shares.
A three-node cluster is deployed to provide simultaneous (active-active), read-write client
access to file shares. A maximum of four nodes is supported on Red Hat Enterprise Linux 6.
I/O performance is increased and scales out linearly as the number of clustered Samba
nodes is expanded.
The underlying storage for file share data and cluster recovery utilizes clustered LVM (CLVM)
volumes. The CLVM volumes within this reference architecture are created on Fibre Channel
based storage, but other shared storage (e.g. - iSCSI) may be used.
Additional redundancy and performance increases are achieved through the use of separate
public and private (cluster interconnect) networks. Multiple network adapters are used on
these networks with all interfaces bonded together. Similarly, device mapper multipathing is
used to maximize performance and availability to all CLVM volumes.
This document does not require extensive Red Hat Enterprise Linux experience but the
reader is expected to have a working knowledge of Linux administration, clustering, Samba
and client side file sharing concepts.
2 Component Overview
This section provides an overview on the Red Hat Enterprise Linux operating system, Red
Hat's High Availability Add-On and the other components used in this reference architecture.
2.2 High Availability Add-On
The High Availability Add-On for Red Hat Enterprise Linux provides high availability of
services by eliminating single points of failure. By offering failover services between nodes
within a cluster, the High Availability Add-On supports high availability for up to 16 nodes.
(Currently this capability is limited to a single LAN or datacenter located within one physical
site.)
The High Availability Add-On also enables failover for off-the-shelf applications such as
Apache, MySQL, PostgreSQL and Samba, any of which can be coupled with resources like
IP addresses and single-node file systems to form highly available services. The High
Availability Add-On can also be easily extended to any user-specified application that is
controlled by an init script per UNIX System V (SysV) standards.
When using the High Availability Add-On, a highly available service can fail over from one
node to another with no apparent interruption to cluster clients. The High Availability Add-On
ensures data integrity when one cluster node takes over control of a service from another
cluster node. It achieves this by promptly evicting faulty nodes from the cluster using a
method called "fencing", thus preventing data corruption. The High
Availability Add-On supports several types of fencing, including both power and storage area
network (SAN) based fencing.
The following sections describe the various components of the High Availability Add-On in the
context of this reference architecture and the deployment of clustered Samba.
2.2.1 Quorum
Quorum is a voting algorithm used by the cluster manager (CMAN). To maintain quorum, the
nodes in the cluster must agree about their status among themselves. Quorum determines
which nodes in the cluster are dominant. For example, if there are three nodes in a cluster
and one node loses connectivity, the other two nodes communicate with each other and
determine that the third node needs to be fenced. The action of fencing ensures that the node
which lost connectivity does not corrupt data.
By default, each node in the cluster has one quorum vote, although this is configurable. There
are two methods by which the nodes can determine quorum. The first method, quorum via the
network, consists of a simple majority (50% of the nodes + 1). The
second method is by adding a quorum disk. The quorum disk allows for user-specified
conditions that help determine which node(s) should be dominant.
This reference architecture uses network quorum - a dedicated quorum disk is not required.
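Quorum status can be checked at runtime from any cluster node once the cluster is up. As a minimal sketch (the cman_tool utility ships with CMAN; the fields of interest in its output include Nodes, Expected votes, Total votes and Quorum):
# cman_tool status
For a quorate three-node cluster with one vote per node, a quorum value of 2 means two nodes must remain in contact for the cluster to continue operating.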
2.2.2 Resource Group Manager
Resource group manager (rgmanager) provides failover capabilities for collections of cluster
resources known as resource groups or resource trees. Rgmanager allows system
administrators to define, configure, and monitor cluster services such as httpd or mysql.
In the event of a node failure, rgmanager relocates the clustered service to another node to
restore service availability. Services can also be restricted to run on specific cluster nodes.
In the context of this reference architecture, rgmanager is not used as it does not provide
support for clustered Samba file sharing services.
2.2.3 Fencing
Fencing is the disconnection of a node from the cluster's shared storage. Fencing prevents
the affected node from issuing I/O to shared storage, thus ensuring data integrity. The cluster
infrastructure performs fencing through fenced, the fence daemon.
When CMAN determines that a node has failed, it communicates to other cluster-
infrastructure components to inform them that the node has failed. The failed node is fenced
when fenced is notified. Other cluster-infrastructure components determine what actions to
take - that is, they perform any recovery that needs to be done. For example, distributed lock
manager (DLM) and Global File System version 2 (GFS2), when notified of a node failure,
suspend activity until they detect that fenced has completed fencing the failed node. Upon
confirmation that the failed node is fenced, DLM and GFS2 perform recovery. DLM releases
locks of the failed node; GFS2 recovers the journal of the failed node.
The fencing program (fenced) determines from the cluster configuration file which fencing
method to use. Two key elements in the cluster configuration file define a fencing method:
fencing agent and fencing device. The fencing program makes a call to a fencing agent
specified in the cluster configuration file. The fencing agent, in turn, fences the node via a
fencing device. When fencing is complete, the fencing program notifies the cluster manager.
The High Availability Add-On provides a variety of fencing methods:
• Power fencing - A fencing method that uses a power controller to power off an
inoperable node
• Storage fencing - Includes fencing methods that disable the Fibre Channel port that
connects storage to an inoperable node. SCSI-3 persistent reservations are another
commonly used storage fencing method in which access to a common shared storage
device can be revoked to an inoperable node.
• Systems management fencing - Fencing methods that disable I/O or power to an
inoperable node. Examples include IBM® BladeCenter, Dell® DRAC/MC, HP® ILO,
IPMI, and IBM RSA II.
2.2.3.1 IPMI
The Intelligent Platform Management Interface (IPMI) is a standardized computer interface
that allows administrators to remotely manage a system. Centered around a baseboard
management controller (BMC), IPMI supports functions to access the system BIOS, display
event logs, power on, power off and power cycle a system.
This reference architecture uses IPMI to fence faulty cluster nodes across the public network
through the fence_ipmilan agent.
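IPMI connectivity to each node's BMC can be exercised before relying on it for fencing. A minimal sketch using the fence_ipmilan agent directly - the address and credentials below are illustrative:
# fence_ipmilan -a 10.16.143.232 -l root -p password -P -o status
The -P switch enables IPMI LAN+ (lanplus) and -o status queries the power state of the node without affecting it.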
2.2.4 CMAN
CMAN manages cluster membership, fencing, locking and quorum. CMAN runs as a service
on all cluster nodes and simplifies the management of the following HA cluster daemons:
• corosync (manages cluster membership, messaging, quorum)
• fenced (manages cluster node I/O fencing)
• dlm_controld (manages distributed file locking to shared file systems)
• gfs_controld (manages GFS2 file system mounting and recovery)
From a systems management perspective, CMAN is the first service in the component stack
started when bringing up a clustered Samba node.
2.2.5 Conga
Conga is an agent/server architecture for the remote administration of cluster nodes. The
agent component is called ricci and the server component is called luci. One luci
management server can communicate with ricci agents installed on multiple cluster nodes.
When a system is added to a luci management server, authentication is only done the first
time. No authentication is necessary afterwards. The luci management interface allows
administrators to configure and manage cluster nodes. Communication between luci and
ricci is done via XML over SSL.
2.2.5.1 Luci
Luci provides a web-based graphical user interface that helps visually administer the nodes
in a cluster, manage fence devices, failover domains, resources, clustered services and other
cluster attributes.
In the context of clustered Samba, luci is not used.
2.2.5.2 Ricci
Ricci is the cluster management and configuration daemon that runs on the cluster nodes.
When ricci is installed, it creates a user account called ricci; a password must then be set
for the account. Configuring the same ricci password across all cluster nodes simplifies
authentication with management tools such as luci and ccs. The ricci daemon
requires port 11111 to be open for both TCP and UDP traffic.
2.2.6 CCS
The Cluster Configuration System (CCS) was introduced in Red Hat Enterprise Linux 6.1.
CCS provides a powerful way of managing a Red Hat Enterprise Linux cluster from the
command line. CCS allows an administrator to create, modify and view cluster configurations
from a remote node through ricci or on a local file system. CCS has a robust man page
detailing all of the options; the ones most commonly used are described in Table 2.2.6:
Common CCS Switches:
Switches Function
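As a brief example of the ccs command form used throughout this paper, the current cluster configuration can be displayed from any node running ricci (the host name is illustrative):
# ccs --host smb-srv1 --getconf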
2.3 Resilient Storage Add-On
The Resilient Storage Add-On for Red Hat Enterprise Linux provides numerous file system
capabilities for improving resiliency to system failure. The following components are included
with the Resilient Storage Add-On:
• Global File System 2 (GFS2)
• Cluster Logical Volume Manager (CLVM)
• Clustered Samba (CTDB)
The following sections describe each component in further detail.
2.3.1 GFS2
GFS2 is a shared disk clustered file system in which data is shared across all cluster nodes
with concurrent access to the file system name space. Processes on different cluster nodes
work with GFS2 files in the same way that processes on a single node access files on a local
file system.
This reference architecture uses the GFS2 file system on all CLVM volumes.
2.3.2 Cluster Logical Volume Manager (CLVM)
Volume managers create a layer of abstraction between physical storage and applications
and services running on host operating systems. Volume managers present logical volumes
that can be flexibly managed with little to no impact on the applications or services accessing
them. Logical volumes can be increased in size or the underlying storage relocated to another
physical device without the need to unmount the file system.
The architecture of LVM consists of three components:
• Physical Volume (PV)
• Volume Group (VG)
• Logical Volume (LV)
Physical volumes (PV) are the underlying physical storage – i.e. a block device such as a
whole disk or partition. A volume group (VG) is the combination of one or more physical
volumes. Once a volume group has been created, logical volumes (LV) can be created from it
with each logical volume formatted and mounted similar to a physical disk.
Cluster logical volumes (CLVM) expand the use of Logical Volumes (LV) by making them
accessible and shared among cluster nodes. Cluster logical volumes must be formatted with
a cluster file system such as GFS2.
The CLVM volumes within this reference architecture consist of one physical volume (PV) that
is a member of a volume group (VG) from which a single logical volume (LV) is created.
2.3.3 CTDB (Clustered Samba)
CTDB is the cluster implementation of the TDB database used by Samba. To use CTDB, a
clustered file system must be available and mounted on all nodes in the cluster. Under Red
Hat Enterprise Linux 6, the cluster file system is GFS2 (included in the Resilient Storage Add-
On).
CTDB extends state information and inter-process communications across clustered Samba
nodes in order to maintain consistent data and locking. CTDB also provides HA features such
as node monitoring, node failover and IP takeover (IPAT) in the event of a cluster node fault
or failure. When a node in a cluster fails, CTDB will relocate the IP address of the failed node
to a different node to ensure that the IP addresses for the Samba file sharing services are
highly available.
As of Red Hat Enterprise Linux 6.2, CTDB runs as a cluster stack in conjunction with the Red
Hat Enterprise Linux 6 High Availability Add-On clustering. From a cluster management
perspective, this is important and impacts the sequence for starting and stopping of services.
Section 5.1 Starting, Shutting down, Restarting Cluster Nodes discusses this in further detail.
2.4 DM Multipath
Device mapper multipathing (DM Multipath) allows multiple I/O paths to be configured
between a server and the connection paths to SAN storage array volumes. The paths are
aggregated and presented to the server as a single device to maximize performance and
provide high availability. A daemon (multipathd) handles checking for path failures and
status changes.
This reference architecture uses DM Multipath on all CLVM volumes.
2.5 Samba
Samba is an open source suite of programs that can be installed on Red Hat Enterprise Linux
6 systems to provide file and print services to Microsoft Windows clients.
Samba provides two daemons that run on a Red Hat Enterprise Linux 6 system:
• smbd - primary daemon providing file and print services to clients via SMB
• nmbd - NBT (NetBIOS over TCP) name server
When combined with the reliability and simplified management capabilities of Red Hat
Enterprise Linux 6, Samba is the application of choice for providing file and print sharing to
Windows clients. Samba version 3.5 is used in the Samba based configurations detailed
within this reference architecture.
2.6 SMB/CIFS
Server Message Block (SMB), sometimes referred to as the Common Internet File System
(CIFS), is a network protocol developed to facilitate client to server communications for file
and print services. The SMB protocol was originally developed by IBM and later extended by
Microsoft.
Samba supports the SMB protocol (SMB1) as used in all Windows systems from
Windows 2000 through to current implementations.
2.7 Winbind
Winbind is a component of the Samba suite of programs that allows for unified user logon.
Winbind uses an implementation of Microsoft RPC (Remote Procedure Call), PAM (Pluggable
Authentication Modules), and Red Hat Enterprise Linux 6 nsswitch (Name Service Switch) to
allow Windows Active Directory Domain Services users to appear and operate as local users
on a Red Hat Enterprise Linux machine. Winbind minimizes the need for system
administrators to manage separate user accounts on both the Red Hat Enterprise Linux 6 and
Windows Server 2008 R2 environments. Winbind provides three separate functions:
• Authentication of user credentials (via PAM). This makes it possible to log onto a Red
Hat Enterprise Linux 6 system using Active Directory user accounts. Authentication is
responsible for identifying “Who” a user claims to be.
• ID Tracking/Name Resolution via nsswitch (NSS). The nsswitch service allows user
and system information to be obtained from different database services such as LDAP
or NIS. ID Tracking/Name Resolution is responsible for determining “Where” user
identities are found.
• ID Mapping represents the mapping between Red Hat Enterprise Linux 6 user (UID),
group (GID), and Windows Server 2008 R2 security (SID) IDs. ID Mappings are
handled through an idmap “backend” that is responsible for tracking “What” IDs users
are known by in both operating system environments.
Figure 2.7: Winbind Authentication, ID Components and Backends represents the
relationship between Winbind and Active Directory:
Winbind idmap “backends” are one of the most commonly misunderstood components in
Samba. Since Winbind provides a number of different “backends” and each manages ID
Mappings differently, it is useful to classify them as follows (a configuration sketch follows the list):
• Allocating - “Read-Writeable” backends that store ID Mappings in a local database
file on the Red Hat Enterprise Linux 6 system(s).
• Algorithmic - “Read-Only” backends that calculate ID Mappings on demand and
provide consistent ID Mappings across each Red Hat Enterprise Linux 6 system.
• Assigned - “Read-Only” backends that use ID Mappings pre-configured within Active
Directory.
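As a minimal configuration sketch, an allocating backend such as tdb is selected in the [global] section of /etc/samba/smb.conf; the range values below are illustrative and must not overlap local UNIX accounts:
security = ads
idmap backend = tdb
idmap uid = 10000-99999
idmap gid = 10000-99999
Algorithmic and assigned backends are selected the same way through the idmap backend parameter.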
2.8 Solution Stack
The full set of products, components and packages that comprise the clustered Samba
solution stack are outlined in the next two sections.
2.8.1 Products and Components
Figure 2.8.1: Solution Stack - Products, Components, Daemons provides a summary of
the products and components that comprise clustered Samba on Red Hat Enterprise Linux 6:
2.8.2 Package Details
Details on the individual products/groups, packages and versions, can be found in Table 2.8.2: Solution Stack – Product
Package Details:
Product/Group Package Architecture Version Release Installation Requirement
Samba samba x86_64 3.5.10 125.el6 Mandatory
Samba samba-client x86_64 3.5.10 125.el6 Recommended
Samba samba-common x86_64 3.5.10 125.el6 Mandatory
Samba samba-winbind x86_64 3.5.10 125.el6 Mandatory
Samba samba-winbind-clients x86_64 3.5.10 125.el6 Recommended
High Availability cman x86_64 3.0.12.1 32.el6_3.1 Mandatory
High Availability ccs x86_64 0.16.2 55.el6 Default
High Availability omping x86_64 0.0.4 1.el6 Default
High Availability rgmanager x86_64 3.0.12.1 12.el6 Default
High Availability cluster-cim x86_64 0.16.2 18.el6 Optional
High Availability cluster-glue-libs-devel x86_64 1.0.5 6.el6 Optional
High Availability cluster-snmp x86_64 0.16.2 18.el6 Optional
High Availability clusterlib-devel x86_64 3.0.12.1 32.el6_3.1 Optional
High Availability corosynclib-devel x86_64 1.4.1 7.el6_3.1 Optional
High Availability fence-virtd-checkpoint x86_64 0.2.3 9.el6 Optional
High Availability foghorn x86_64 0.1.2 1.el6 Optional
High Availability libesmtp-devel x86_64 1.0.4 15.el6 Optional
High Availability openaislib-devel x86_64 1.1.1 7.el6 Optional
High Availability pacemaker x86_64 1.1.7 6.el6 Optional
High Availability pacemaker-libs-devel x86_64 1.1.7 6.el6 Optional
High Availability python-repoze-what-quickstart noarch 1.0.1 1.el6 Optional
3 Reference Architecture Configuration
This section provides an overview of the hardware components that were used in the
deployment of this reference architecture. The cluster nodes (smb-srv1, smb-srv2, smb-srv3)
were configured on an HP BladeSystem c7000 enclosure using three HP ProLiant BL460c G6
Blade servers. Two 10 Gb/s ethernet networks were configured for use as the public and
cluster interconnect networks. The HP Blade servers share access to the CTDB Lock and
Samba Data (file share) volume located on an HP StorageWorks MSA2324fc fibrechannel
storage array.
All public and cluster node networks are configured with two bonded interfaces for
redundancy. Client access to Samba file share (Data Volume) is over the public network.
Figure 3: Cluster Configuration depicts an overview of the cluster configuration:
3.1 Cluster Server - Node 1
Component Detail
Hostname smb-srv1
Operating System Red Hat Enterprise Linux 6.3 (64-bit), 2.6.32-279.1.1.el6.x86_64 kernel
System Type HP ProLiant BL460c G6
Processor Quad Socket, Quad Core (16 cores), Intel® Xeon® CPU X5550 @ 2.67GHz
Memory 48 GB
Storage 4 x 146 GB SATA internal disk drive (RAID 1), 2 x QLogic QMH2562 8Gb FC HBA
Network 8 x Broadcom NetXtreme II BCM57711E XGb
3.3 Cluster Server - Node 3
Component Detail
Hostname smb-srv3
Operating System Red Hat Enterprise Linux 6.3 (64-bit), 2.6.32-279.1.1.el6.x86_64 kernel
System Type HP ProLiant BL460c G6
Processor Quad Socket, Quad Core (16 cores), Intel® Xeon® CPU X5550 @ 2.67GHz
Memory 48 GB
Storage 4 x 146 GB SATA internal disk drive (RAID 1), 2 x QLogic QMH2562 8Gb FC HBA
Network 8 x Broadcom NetXtreme II BCM57711E XGb
3.5 Fibre Channel Storage Array
Component Detail
Hostname ra-msa20
System Type HP StorageWorks MSA2324fc (1 x HP MSA70 expansion shelf)
Controllers CPU Type: Turion MT32 1800MHz; Cache: 1GB; 2 x Host Ports
Firmware Storage Controller Code Version: M112R14; Memory Controller FPGA Code Version: F300R22; Storage Controller Loader Code Version: 19.009; Management Controller Code Version: W441R35; Management Controller Loader Code Version: 12.015; Expander Controller Code Version: 1112; CPLD Code Version: 8; Hardware Version: 56
Physical Drives 48 x 146GB SAS drives (24 enclosure, 24 expansion shelf)
Logical Drives 4 x 1.3 TB Virtual Disks (12 disk, RAID 6)
4 Clustered Samba Deployment
4.1 Deployment Task Flow
Figure 4.1: Clustered Samba Deployment Task Flow provides an overview of the order in
which the deployment of the cluster nodes and cluster creation tasks are performed:
Appendix H: Deployment Checklists provides a detailed list of steps to follow for deploying
highly available file shares on a Red Hat Enterprise Linux 6 Samba Cluster.
4.2 Deploy Cluster Nodes
Prior to creating the cluster, deploy each of the cluster nodes by performing the following
series of steps:
• Install Red Hat Enterprise Linux 6
• Configure Networks and Bonding
• Configure Firewall
• Configure Time Service (NTP)
• Configure Domain Name System (DNS)
• Install Cluster Node Software (“High Availability” Add-On)
• Configure Storage
The next sections describe how to perform the deployment steps in detail.
4.2.1 Install Red Hat Enterprise Linux 6
The installation of Red Hat Enterprise Linux 6 on each of the three cluster nodes is performed
using a Red Hat Satellite server. Details on how the Satellite server was configured can be
found in the Red Hat Satellite section of Appendix A: References. Local
media can be used in lieu of a Satellite server deployment.
Once Red Hat Enterprise Linux 6 has been installed on each cluster node, perform the
following sequence of steps to register and update each cluster node:
...output abbreviated...
Expires: 01/01/2022
Machine Type: physical
4. Update each node to take in the latest patches and security updates:
# yum update
Follow the steps above and consult the Red Hat Enterprise Linux 6 Installation and
Deployment Guides found in the Red Hat Enterprise Linux 6 section of Appendix A:
References for further details.
4.2.2 Configure Networks and Bonding
The cluster nodes are configured to provide access to all members across both the public and
cluster interconnect (private) networks. The public network (10.16.142.0) is configured on the
eth0 interface and bonded to the eth1 interface for redundancy. The cluster interconnect
(10.0.0.0) is configured on the eth2 interface and bonded to the eth3 interface for
redundancy. Static IP addressing is used throughout the cluster configuration.
1. Disable NetworkManager on startup to prevent conflicts with the High
Availability Add-On cluster services, then verify:
# chkconfig NetworkManager off
# chkconfig NetworkManager --list
NetworkManager 0:off 1:off 2:off 3:off 4:off 5:off 6:off
2. Create bond configuration files for the public and cluster interconnect networks:
# echo "alias bond0 bonding" >> /etc/modprobe.d/bonding.conf
# echo "alias bond1 bonding" >> /etc/modprobe.d/bonding.conf
3. Create the bond interface file for the public network and save the file as
/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
IPADDR=10.16.142.101
NETMASK=255.255.248.0
GATEWAY=10.16.143.254
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=0 miimon=100"
4. Create the bond interface file for the cluster interconnect network and save the file as
/etc/sysconfig/network-scripts/ifcfg-bond1:
DEVICE=bond1
IPADDR=10.0.0.101
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1"
5. Modify the interface file for the first public interface and save the file as
/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
6. Create the interface file for the second public interface and save the file as
/etc/sysconfig/network-scripts/ifcfg-eth1:
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
7. Modify the interface file for the first cluster interconnect and save the file as
/etc/sysconfig/network-scripts/ifcfg-eth2:
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes
USERCTL=no
8. Create the interface file for the second cluster interconnect and save the file as
/etc/sysconfig/network-scripts/ifcfg-eth3:
DEVICE=eth3
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes
USERCTL=no
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:17:a4:77:24:46
Slave queue ID: 0
12. Edit the /etc/hosts file to include the IP addresses and hostnames/aliases of all cluster
node and management server interfaces:
127.0.0.1 localhost localhost.localdomain
#
#----------------#
# Cluster Nodes: #
#----------------#
#
10.16.142.101 smb-srv1 smb-srv1.cloud.lab.eng.bos.redhat.com
10.0.0.101 smb-srv1-ci smb-srv1-ci.cloud.lab.eng.bos.redhat.com
10.16.142.102 smb-srv2 smb-srv2.cloud.lab.eng.bos.redhat.com
10.0.0.102 smb-srv2-ci smb-srv2-ci.cloud.lab.eng.bos.redhat.com
10.16.142.103 smb-srv3 smb-srv3.cloud.lab.eng.bos.redhat.com
10.0.0.103 smb-srv3-ci smb-srv3-ci.cloud.lab.eng.bos.redhat.com
#
13. Distribute the file to the other two cluster nodes. For example, if the file was initially
created on cluster node smb-srv1, copy it to the other nodes as follows:
# scp -p /etc/hosts smb-srv2:/etc/hosts
# scp -p /etc/hosts smb-srv3:/etc/hosts
14. Verify all public and cluster interconnect interfaces are properly configured and
responding:
# ping smb-srv1
# ping smb-srv1-ci
# ping smb-srv2
# ping smb-srv2-ci
# ping smb-srv3
# ping smb-srv3-ci
4.2.3 Configure Firewall
Before the cluster can be created, the firewall ports must be configured to allow access to the
cluster network daemons. The specific ports requiring access are listed in Table 4.2.3
Cluster Node Ports:
Port Number Protocol Component
5404 UDP corosync/cman (Cluster Manager)
5405 UDP corosync/cman (Cluster Manager)
11111 TCP ricci (Cluster Configuration)
11111 UDP ricci (Cluster Configuration)
21064 TCP dlm (Distributed Lock Manager)
16851 TCP modclusterd
445 TCP smb (Samba)
4379 TCP ctdb (CTDB)
4379 UDP ctdb (CTDB)
137 UDP NBT (Name Service)
138 UDP NBT (Datagram Service)
139 TCP NBT (Session Service)
2 7 588 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
3 1 60 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
4 2 120 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0
state NEW tcp dpt:22
5 3762 547K REJECT all -- * * 0.0.0.0/0 0.0.0.0/0
reject-with icmp-host-prohibited
3. Create a new iptables chain called cluster-chain and insert it into the INPUT
chain:
# iptables --new-chain cluster-chain
# iptables --insert INPUT --jump cluster-chain
4. Create a new iptables chain called samba-chain and insert it into the INPUT
chain:
# iptables --new-chain samba-chain
# iptables --insert INPUT --jump samba-chain
5. Create a new iptables chain called netbios-chain and insert it into the INPUT
chain:
# iptables --new-chain netbios-chain
# iptables --insert INPUT --jump netbios-chain
6. Add the rules for the cluster components to the chain cluster-chain:
# iptables --append cluster-chain --proto udp --destination-port 5404 \
--jump ACCEPT
# iptables --append cluster-chain --proto udp --destination-port 5405 \
--jump ACCEPT
# iptables --append cluster-chain --proto tcp --destination-port 11111 \
--jump ACCEPT
# iptables --append cluster-chain --proto udp --destination-port 11111 \
--jump ACCEPT
# iptables --append cluster-chain --proto tcp --destination-port 21064 \
--jump ACCEPT
# iptables --append cluster-chain --proto tcp --destination-port 16851 \
--jump ACCEPT
7. Add the rules for the Samba components to the chain samba-chain:
# iptables --append samba-chain --proto tcp --destination-port 445 \
--jump ACCEPT
# iptables --append samba-chain --proto tcp --destination-port 4379 \
--jump ACCEPT
# iptables --append samba-chain --proto udp --destination-port 4379 \
--jump ACCEPT
8. If NetBIOS is in use by clients, add the rules for the NetBIOS components to the chain
netbios-chain:
# iptables --append netbios-chain --proto udp --destination-port 137 \
--jump ACCEPT
# iptables --append netbios-chain --proto udp --destination-port 138 \
--jump ACCEPT
# iptables --append netbios-chain --proto tcp --destination-port 139 \
--jump ACCEPT
2 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0
tcp dpt:4379
3 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0
udp dpt:4379
10. Save the new rules and verify iptables is activated on system boot:
# service iptables save
# chkconfig iptables on
4.2.4 Configure Time Service (NTP)
Configure the time service on each cluster node as follows:
1. Edit the file /etc/ntp.conf so that the time on each cluster node is synchronized from a
known, reliable time service:
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
server ns1.bos.redhat.com
server 10.5.26.10
2. Activate the change by stopping the ntp daemon, updating the time, then starting the
ntp daemon. Verify the change on each cluster node:
# service ntpd stop
Shutting down ntpd: [ OK ]
# ntpdate 10.16.255.2
22 Mar 20:17:00 ntpdate[14784]: adjust time server 10.16.255.2 offset
-0.002933 sec
# service ntpd start
Starting ntpd: [ OK ]
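Synchronization can be confirmed after ntpd has been running for a few minutes; a quick check (the peer marked with an asterisk is the currently selected time source):
# ntpq -p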
4.2.5 Configure Domain Name System (DNS)
Configure DNS lookups on each cluster node as follows:
1. Edit the file /etc/resolv.conf so that the local domain, search path and DNS name
servers are specified:
domain cloud.lab.eng.bos.redhat.com
search cloud.lab.eng.bos.redhat.com
nameserver 10.nn.nnn.3
nameserver 10.nn.nnn.247
nameserver 10.nn.nnn.2
2. Similarly, the hostname of each cluster node should be set to its Fully Qualified
Domain Name (FQDN). Edit the file /etc/sysconfig/network and set the hostname to
use the FQDN:
HOSTNAME=smb-srv1.cloud.lab.eng.bos.redhat.com
4.2.6 Install Cluster Node Software
Install the High Availability and Resilient Storage Add-On packages. Perform this step on
each of the three cluster nodes:
# yum groupinstall "High Availability"
# yum groupinstall "Resilient Storage"
4.3.1 Create Cluster
Cluster creation is performed from the first cluster node (smb-srv1) and updates are deployed
to the other cluster nodes across the public network interfaces. The process involves creating
a full cluster configuration file (/etc/cluster/cluster.conf) on one node (smb-srv1) then
distributing the configuration and activating the cluster on the remaining nodes. Cluster
interconnects are specified within the configuration file for all node communications.
Configure the appropriate cluster services then create the cluster.
1. Start the ricci service, configure it to start on system boot, and verify. Perform this step
on all cluster nodes:
# service ricci start
Starting oddjobd: [ OK ]
generating SSL certificates... done
Generating NSS database... done
Starting ricci: [ OK ]
# chkconfig ricci on
# chkconfig --list ricci
ricci 0:off 1:off 2:on 3:on 4:on 5:on 6:off
2. Configure a password for the ricci user account on each node. The same password
may be used on all cluster nodes to simplify administration:
# passwd ricci
Changing password for user ricci.
New password: **********
Retype new password: **********
passwd: all authentication tokens updated successfully.
3. Create a cluster named samba-cluster from the first cluster node (smb-srv1):
# ccs --host smb-srv1 --createcluster samba-cluster
smb-srv1 password: **********
Note: when prompted in Step 3, enter the ricci password that was set in Step 2.
4.3.2 Add Nodes
Once the cluster has been created, specify the member nodes in the cluster configuration.
1. Add the three cluster nodes (smb-srv1-ci, smb-srv2-ci, smb-srv3-ci) to the cluster.
Perform this step from the first cluster node (smb-srv1):
# ccs --host smb-srv1 --addnode smb-srv1-ci --nodeid="1"
Node smb-srv1-ci added.
# ccs --host smb-srv1 --addnode smb-srv2-ci --nodeid="2"
Node smb-srv2-ci added.
# ccs --host smb-srv1 --addnode smb-srv3-ci --nodeid="3"
Node smb-srv3-ci added.
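The node list can be verified before continuing - a quick check using the ccs list option:
# ccs --host smb-srv1 --lsnodes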
4.3.3 Add Fence Devices
Add the fence method then add devices and instances for each cluster node to the method.
IPMI LAN fencing is used in this configuration. Other fencing methods and devices can be
used depending on the resources available. Perform all steps from the first cluster node
(smb-srv1).
1. Add a fence method for the Primary fencing devices:
# ccs --host smb-srv1 --addmethod Primary smb-srv1-ci
Method Primary added to smb-srv1-ci.
# ccs --host smb-srv1 --addmethod Primary smb-srv2-ci
Method Primary added to smb-srv2-ci.
# ccs --host smb-srv1 --addmethod Primary smb-srv3-ci
Method Primary added to smb-srv3-ci.
2. Add a fence device for the IPMI LAN device:
# ccs --host smb-srv1 --addfencedev IPMI-smb-srv1-ci \
agent=fence_ipmilan auth=password \
ipaddr=10.16.143.232 lanplus=on \
login=root name=IPMI-smb-srv1-ci passwd=password \
power_wait=5 timeout=20
# ccs --host smb-srv1 --addfencedev IPMI-smb-srv2-ci \
agent=fence_ipmilan auth=password \
ipaddr=10.16.143.233 lanplus=on \
login=root name=IPMI-smb-srv2-ci passwd=password \
power_wait=5 timeout=20
# ccs --host smb-srv1 --addfencedev IPMI-smb-srv3-ci \
agent=fence_ipmilan auth=password \
ipaddr=10.16.143.241 lanplus=on \
login=root name=IPMI-smb-srv3-ci passwd=password \
power_wait=5 timeout=20
3. Add a fence instance for each node to the Primary fence method:
# ccs --host smb-srv1 --addfenceinst IPMI-smb-srv1-ci smb-srv1-ci Primary
# ccs --host smb-srv1 --addfenceinst IPMI-smb-srv2-ci smb-srv2-ci Primary
# ccs --host smb-srv1 --addfenceinst IPMI-smb-srv3-ci smb-srv3-ci Primary
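The fence devices and per-node instances can then be verified with the corresponding ccs list options - a quick check:
# ccs --host smb-srv1 --lsfencedev
# ccs --host smb-srv1 --lsfenceinst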
4.3.4 Activate Cluster
Once the cluster has been created, the configuration needs to be activated and the cluster
started on all nodes.
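A typical sequence when the configuration was built with ccs, as above (a sketch; see the ccs man page for the exact option semantics):
# ccs --host smb-srv1 --sync --activate
# ccs --host smb-srv1 --startall
The --sync --activate options distribute /etc/cluster/cluster.conf to all nodes defined in the configuration and activate it; --startall then starts the cluster services on all nodes.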
4.4 Configure Storage
Two volumes are created – one to maintain the CTDB lock state and another to hold the
contents of the Samba file share. Access to both volumes is shared across the cluster nodes.
In the event of a node failure, access to both volumes is maintained across all remaining
cluster nodes. Since all cluster nodes require simultaneous access, both volumes are
configured as Clustered Logical Volume Manager (CLVM) volumes.
The Logical Unit Numbers (LUNs) for the volumes must be provisioned and accessible to each of
the cluster nodes before continuing. Appendix B: Fibre Channel Storage Provisioning
describes how the LUNs used for this reference architecture were provisioned.
4.4.1 Configure Multipathing
1. Install the DM Multipath Package on each cluster node:
# yum install device-mapper-multipath.x86_64
3. On the first cluster node (smb-srv1), view the multipath devices, paths and World Wide
IDs (WWIDs):
# multipath -ll
3600c0ff000d7e69dd26a325001000000 dm-6 HP,MSA2324fc
size=1.9G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:1 sdb 8:16 active ready running
|- 2:0:0:1 sdd 8:48 active ready running
|- 2:0:1:1 sdf 8:80 active ready running
`- 1:0:1:1 sdh 8:112 active ready running
3600c0ff000d7e69df36a325001000000 dm-7 HP,MSA2324fc
size=186G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 1:0:0:2 sdc 8:32 active ready running
|- 2:0:0:2 sde 8:64 active ready running
|- 2:0:1:2 sdg 8:96 active ready running
`- 1:0:1:2 sdi 8:128 active ready running
# ls /dev/mapper
3600508b1001030374142393845301000
3600508b1001030374142393845301000p1
3600508b1001030374142393845301000p2
3600c0ff000d7e69dd26a325001000000
3600c0ff000d7e69df36a325001000000
control
vg_smbsrv1-lv_home
vg_smbsrv1-lv_root
vg_smbsrv1-lv_swap
4. On the first cluster node (smb-srv1) edit the file /etc/multipath.conf and add aliases
for both the CTDB (smb-srv-ctdb-01) and Data (smb-srv-data-01) volumes
using the WWIDs from the previous step:
multipaths {
multipath {
alias smb-srv-ctdb-01
wwid " 3600c0ff000d7e69dd26a325001000000”
}
multipath {
alias smb-srv-data-01
wwid "3600c0ff000d7e69df36a325001000000"
}
}
5. Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/multipath.conf smb-srv2:/etc/multipath.conf
# scp -p /etc/multipath.conf smb-srv3:/etc/multipath.conf
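For the aliases to take effect, reload the multipath device maps on each node (one possible approach, assuming the devices are not yet in use):
# multipath -r
The aliased device names then appear under /dev/mapper, as shown below.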
# ls /dev/mapper
3600508b1001030374142393845301000
3600508b1001030374142393845301000p1
3600508b1001030374142393845301000p2
control
smb-srv-ctdb-01
smb-srv-data-01
vg_smbsrv1-lv_home
vg_smbsrv1-lv_root
vg_smbsrv1-lv_swap
8. On each cluster node configure multipath to start on system boot:
# chkconfig multipathd on
# chkconfig multipathd --list
multipathd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
4.4.2 Create Cluster Logical Volumes
Create volume groups (VG) and logical volumes (LV) on the previously defined LUNs.
1. Ensure the parameter locking_type is set to the value of 3 (to enable built-in
clustered locking) in the global section of the file /etc/lvm/lvm.conf on all nodes:
# grep "locking_type" /etc/lvm/lvm.conf | grep -v "#"
locking_type = 3
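If locking_type is not already set to 3, the lvmconf helper from the lvm2-cluster package can set it (a sketch; this edits /etc/lvm/lvm.conf in place):
# lvmconf --enable-cluster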
2. Start the cluster manager (CMAN) and clvmd services on each cluster node:
# service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]
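Then start clvmd and, if desired, configure both services to start on system boot (a sketch; output abbreviated):
# service clvmd start
# chkconfig cman on
# chkconfig clvmd on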
3. Configure the physical volumes (PV) using the Multipath devices (/dev/mapper/smb-
srv-ctdb-01, /dev/mapper/smb-srv-data-01) and display the attributes. Perform this
step on the first cluster node (smb-srv1) only:
# pvcreate /dev/mapper/smb-srv-ctdb-01
Writing physical volume data to disk "/dev/mapper/smb-srv-ctdb-01"
Physical volume "/dev/mapper/smb-srv-ctdb-01" successfully created
# pvcreate /dev/mapper/smb-srv-data-01
Writing physical volume data to disk "/dev/mapper/smb-srv-data-01"
Physical volume "/dev/mapper/smb-srv-data-01" successfully created
# pvdisplay /dev/mapper/smb-srv-ctdb-01
"/dev/mapper/smb-srv-ctdb-01" is a new physical volume of "1.86 GiB"
--- NEW Physical volume ---
PV Name /dev/mapper/smb-srv-ctdb-01
VG Name
PV Size 1.86 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID K9eYda-fJa9-kMmX-tOme-nXER-ZYxe-rmhxtj
# pvdisplay /dev/mapper/smb-srv-data-01
"/dev/mapper/smb-srv-data-01" is a new physical volume of "186.26 GiB"
--- NEW Physical volume ---
PV Name /dev/mapper/smb-srv-data-01
VG Name
PV Size 186.26 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID plJUZN-wDLh-F3Fh-5UFK-FaIu-GVle-l1oyId
4. Create volume groups (VG) to contain the logical volumes (LV) and display the
attributes. Perform this step on the first cluster node (smb-srv1) only:
# vgcreate --clustered y SMB-CTDB-VG /dev/mapper/smb-srv-ctdb-01
Clustered volume group "SMB-CTDB-VG" successfully created
# vgdisplay SMB-CTDB-VG
--- Volume group ---
VG Name SMB-CTDB-VG
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
Clustered yes
Shared no
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.86 GiB
PE Size 4.00 MiB
Total PE 476
Alloc PE / Size 0 / 0
Free PE / Size 476 / 1.86 GiB
VG UUID RdFhK1-yKI6-tE65-R60U-rmsz-p7cL-2gVW8S
# vgcreate --clustered y SMB-DATA1-VG /dev/mapper/smb-srv-data-01
Clustered volume group "SMB-DATA1-VG" successfully created
# vgdisplay SMB-DATA1-VG
--- Volume group ---
VG Name SMB-DATA1-VG
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
Clustered yes
Shared no
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 186.26 GiB
PE Size 4.00 MiB
Total PE 47683
Alloc PE / Size 0 / 0
Free PE / Size 47683 / 186.26 GiB
VG UUID NrHlsk-HyZk-AWWf-Gs9J-Hq0Q-tfBx-Sh2ce8
5. Create logical volumes (LV) for the CTDB (smb-ctdb-lvol1) and Data (smb-data-
lvol1) volumes and display the attributes. Perform this step on the first cluster node
(smb-srv1) only:
# lvcreate --size 1.8GB --name smb-ctdb-lvol1 SMB-CTDB-VG
Rounding up size to full physical extent 1.80 GiB
Logical volume "smb-ctdb-lvol1" created
# lvdisplay SMB-CTDB-VG
--- Logical volume ---
LV Path /dev/SMB-CTDB-VG/smb-ctdb-lvol1
LV Name smb-ctdb-lvol1
VG Name SMB-CTDB-VG
LV UUID oUqpSy-Ucpf-zHSg-dtav-5cJi-NZTm-UbQ0j1
LV Write Access read/write
LV Creation host, time smb-srv1, 2012-08-21 14:46:23 -0400
LV Status available
# open 0
LV Size 1.80 GiB
Current LE 461
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8
# lvcreate --size 180GB --name smb-data-lvol1 SMB-DATA1-VG
Logical volume "smb-data-lvol1" created
# lvdisplay SMB-DATA1-VG
--- Logical volume ---
LV Path /dev/SMB-DATA1-VG/smb-data-lvol1
LV Name smb-data-lvol1
VG Name SMB-DATA1-VG
LV UUID 4xQw7X-FrHS-bXvO-BqNh-lBTs-C8kX-917xYi
LV Write Access read/write
LV Creation host, time smb-srv1, 2012-08-21 14:46:53 -0400
LV Status available
# open 0
LV Size 180.00 GiB
Current LE 46080
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:9
4.4.3 Create GFS2 Filesystems
1. Format both volumes with the GFS2 filesystem. For each volume, specify 3 journals
(-j3), one for each cluster node, the cluster locking protocol (-p lock_dlm) and the
lock table name (ClusterName:FSName). Perform this step on the first
cluster node (smb-srv1) only:
# mkfs -t gfs2 -j3 -p lock_dlm -t samba-cluster:ctdb-state \
/dev/SMB-CTDB-VG/smb-ctdb-lvol1
This will destroy any data on /dev/SMB-CTDB-VG/smb-ctdb-lvol1.
It appears to contain: symbolic link to `../dm-8'
Device: /dev/SMB-CTDB-VG/smb-ctdb-lvol1
Blocksize: 4096
Device Size 1.80 GB (472064 blocks)
Filesystem Size: 1.80 GB (472063 blocks)
Journals: 3
Resource Groups: 8
Locking Protocol: "lock_dlm"
Lock Table: "samba-cluster:ctdb-state"
UUID: 816d88d3-3f4b-4198-ab92-a73157216c22
# mkfs -t gfs2 -j3 -p lock_dlm -t samba-cluster:smb-data1 \
/dev/SMB-DATA1-VG/smb-data-lvol1
...output abbreviated...
Device: /dev/SMB-DATA1-VG/smb-data-lvol1
Blocksize: 4096
Device Size 180.00 GB (47185920 blocks)
Filesystem Size: 180.00 GB (47185918 blocks)
Journals: 3
Resource Groups: 720
Locking Protocol: "lock_dlm"
Lock Table: "samba-cluster:smb-data1"
UUID: 9341df53-e6cc-fe6e-6d07-8ac015cd5bd2
2. Create a mount point for both volumes. Perform this step on all cluster nodes:
# mkdir -p /share/ctdb
# mkdir -p /share/data1
4.4.4 Configure SELinux Security Parameters
By default, SELinux is enabled during the Red Hat Enterprise Linux 6 installation process. For
maximum security, Red Hat recommends running Red Hat Enterprise Linux 6 with SELinux
enabled. In this section, verification is done to ensure that SELinux is enabled and the file
context set correctly on the /share/data1 filesystem for use by Samba.
1. Verify whether or not SELinux is enabled using the getenforce utility. Perform this
step on all cluster nodes:
# getenforce
Enforcing
2. Edit the file /etc/selinux/config and set SELinux to be persistent across reboots.
Perform this step on all cluster nodes:
SELINUX=enforcing
3. Add (-a) the file context (fcontext) for type (-t) samba_share_t to the directory
/share/data1 and all contents within it. This makes the changes permanent.
Perform this step on all cluster nodes:
# semanage fcontext -a -t samba_share_t "/share/data1(/.*)?"
Note: If the semanage (/usr/sbin/semanage) utility is not available, install the core
policy utilities kit and then apply the file context on all nodes:
# yum -y install policycoreutils-python
# semanage fcontext -a -t samba_share_t "/share/data1(/.*)?"
4. View the current security policy file context. Perform this step on all cluster nodes:
# ls -ldZ /share/data1
drwxr-xr-x. root root system_u:object_r:file_t:s0 /share/data1
5. Run the restorecon command to apply the changes and view the updated file
context. Perform this step on all cluster nodes:
# restorecon -R -v /share/data1
restorecon reset /share/data1 context system_u:object_r:file_t: \
s0->system_u:object_r:samba_share_t:s0
restorecon reset /share/data1/data.test context unconfined_u:object_r: \
file_t:s0->unconfined_u:object_r:samba_share_t:s0
# ls -ldZ /share/data1
drwxr-xr-x. root root system_u:object_r:samba_share_t:s0 /share/data1
4.5 Configure CTDB
1. Install the CTDB package – perform this step on each cluster node:
# yum -y install ctdb
2. Edit and save the CTDB configuration file (/etc/sysconfig/ctdb) on the first cluster node
(smb-srv1) as follows:
CTDB_DEBUGLEVEL=ERR
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK=/share/ctdb/.ctdb.lock
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes # optional
3. On the first cluster node (smb-srv1), edit and save the CTDB nodes file
(/etc/ctdb/nodes) by adding the CLUSTER INTERCONNECT IP addresses for each
cluster node (smb-srv1-ci, smb-srv2-ci, smb-srv3-ci) :
10.0.0.101
10.0.0.102
10.0.0.103
The IP addresses specified here are used for CTDB cluster node communications and
should match those specified in the cluster configuration file (/etc/cluster/cluster.conf).
Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/ctdb/nodes smb-srv2:/etc/ctdb/nodes
# scp -p /etc/ctdb/nodes smb-srv3:/etc/ctdb/nodes
4. On the first cluster node (smb-srv1), edit and save the CTDB public addresses file
(/etc/ctdb/public_addresses) by adding three new, unique public IP addresses for use
by CTDB on each cluster node (smb-srv1-ctdb, smb-srv2-ctdb, smb-srv3-ctdb):
10.16.142.111/24 bond0
10.16.142.112/24 bond0
10.16.142.113/24 bond0
These addresses co-exist with the existing public addresses defined for bond0. In the
event of a cluster node failover, client access to file shares is maintained through the
use of IP address takeover (IPAT). The addresses specified within this file are relocated
to other cluster nodes to maintain client file share access on the public network. Copy
the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/ctdb/public_addresses smb-srv2:/etc/ctdb/public_addresses
# scp -p /etc/ctdb/public_addresses smb-srv3:/etc/ctdb/public_addresses
5. Edit the /etc/hosts file to include the IP addresses, hostname/aliases of all cluster
node CTDB public address interfaces:
127.0.0.1 localhost localhost.localdomain
#----------------#
# Cluster Nodes: #
#----------------#
#
10.16.142.101 smb-srv1 smb-srv1.cloud.lab.eng.bos.redhat.com
10.16.142.111 smb-srv1-ctdb smb-srv1-ctdb.cloud.lab.eng.bos.redhat.com
10.0.0.101 smb-srv1-ci smb-srv1-ci.cloud.lab.eng.bos.redhat.com
10.16.142.102 smb-srv2 smb-srv2.cloud.lab.eng.bos.redhat.com
10.16.142.112 smb-srv2-ctdb smb-srv2-ctdb.cloud.lab.eng.bos.redhat.com
10.0.0.102 smb-srv2-ci smb-srv2-ci.cloud.lab.eng.bos.redhat.com
10.16.142.103 smb-srv3 smb-srv3.cloud.lab.eng.bos.redhat.com
10.16.142.113 smb-srv3-ctdb smb-srv3-ctdb.cloud.lab.eng.bos.redhat.com
10.0.0.103 smb-srv3-ci smb-srv3-ci.cloud.lab.eng.bos.redhat.com
Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/hosts smb-srv2:/etc/hosts
# scp -p /etc/hosts smb-srv3:/etc/hosts
It is also recommended to register the hostnames and addresses within the local site-specific DNS.
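For round-robin resolution, the CTDB public addresses can be published under a single name.
The following is a sketch of BIND-style A records using the smb-srv round-robin hostname
referenced later in this paper; adapt the records to the local DNS implementation:
smb-srv    IN A 10.16.142.111
smb-srv    IN A 10.16.142.112
smb-srv    IN A 10.16.142.113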
4.6 Configure Samba
Install the Samba packages and configure the clustered Samba file share. Note that CTDB
manages the starting and stopping of the Samba and (optionally) Winbind services – do not
manually start or stop them.
1. Install the Samba server, client and winbind packages – perform this step on each
cluster node:
# yum -y install samba samba-client samba-common samba-winbind \
samba-winbind-clients
• Note that some packages may have been previously installed depending on
which packages were selected during the installation of Red Hat Enterprise
Linux on each cluster node.
2. On the first cluster node (smb-srv1), edit and save the Samba configuration file
(/etc/samba/smb.conf) as follows:
[global]
workgroup = REFARCH-CTDB
server string = Samba Server Version %v
guest ok = yes
clustering = yes
[data1]
comment = Clustered Samba Share 1
public = yes
path = /share/data1
writable = yes
Verify the file using the testparm utility; the processed configuration is displayed:
[global]
workgroup = REFARCH-CTDB
server string = Samba Server Version %v
log file = /var/log/samba/log.%m
max log size = 50
clustering = Yes
[email protected] 41 www.redhat.com
idmap backend = tdb2
guest ok = Yes
[data1]
comment = Clustered Samba Share 1
path = /share/data1
read only = No
Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/samba/smb.conf smb-srv2:/etc/samba/smb.conf
# scp -p /etc/samba/smb.conf smb-srv3:/etc/samba/smb.conf
1. Verify the CTDB and data volumes are mounted and can be written to. Perform this
step on all cluster nodes:
# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_smbsrv1-lv_root
51606140 2940604 46044096 7% /
tmpfs 24708156 35236 24672920 1% /dev/shm
/dev/mapper/3600508b1001030374142393845301000p1
495844 98392 371852 21% /boot
/dev/mapper/vg_smbsrv1-lv_home
64583508 185180 61117640 1% /home
/dev/mapper/SMB--CTDB--VG-smb--ctdb--lvol1
1888032 397164 1490868 22% /share/ctdb
/dev/mapper/SMB--DATA--VG-smb--data--lvol1
188723456 397236 188326220 1% /share/data1
# touch /share/ctdb/ctdb.test
# touch /share/data1/data1.test
# ls -l /share/ctdb/ctdb.test /share/data1/data1.test
-rw-r--r--. 1 root root 0 Sep 5 16:43 /share/ctdb/ctdb.test
-rw-r--r--. 1 root root 0 Sep 5 16:43 /share/data1/data1.test
2. Add mount entries for both volumes to /etc/fstab. Edit and save the file on all cluster
nodes:
#
# CTDB and DATA volumes for Clustered Samba
#
/dev/SMB-CTDB-VG/smb-ctdb-lvol1 /share/ctdb gfs2 \
defaults,noatime,nodiratime,quota=off 0 0
/dev/SMB-DATA1-VG/smb-data-lvol1 /share/data1 gfs2 \
defaults,acl,noatime,nodiratime,quota=off 0 0
3. Start the CTDB service on each cluster node, then check the status. The nodes
initially report an UNHEALTHY status while CTDB initializes:
# service ctdb start
# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 UNHEALTHY (THIS NODE)
pnn:1 10.0.0.102 UNHEALTHY
pnn:2 10.0.0.103 UNHEALTHY
Generation:122968421
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:2
After the nodes have synchronized, the status changes to OK:
# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:1330161966
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:2
[email protected] 43 www.redhat.com
1. Verify the cluster, Samba and ctdb status from any cluster node:
# clustat
Member Name ID Status
------ ---- ---- ------
smb-srv1-ci 1 Online, Local
smb-srv2-ci 2 Online
smb-srv3-ci 3 Online
# smbstatus
No locked files
# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:1330161966
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0
2. Verify the file share is available from a client. For high availability, select one of the
hostnames or IP addresses associated with the transferable (IPAT) public addresses
specified within the /etc/ctdb/public_addresses file:
In the examples below, the round-robin DNS hostname (smb-srv) is specified:
$ smbclient -U root //smb-srv.cloud.lab.eng.bos.redhat.com/data1
Enter root's password: *******
Domain=[REFARCH-CTDB] OS=[Unix] Server=[Samba 3.5.10-125.el6]
smb: \> ls
. D 0 Wed Sep 5 16:43:54 2012
.. D 0 Wed Sep 5 16:10:52 2012
data1.test 0 Wed Sep 5 16:43:54 2012
# ls -la /mnt/data1
total 12
drwxr-xr-x. 2 root root 0 Sep 5 17:49 .
drwxr-xr-x. 3 root root 4096 Sep 5 17:53 ..
-rw-r--r--. 1 root root 0 Sep 5 16:43 data1.test
Verify the client connection to the file share from any cluster node by running smbstatus:
# smbstatus
Samba version 3.5.10-125.el6
PID Username Group Machine
-------------------------------------------------------------------
1:15522 root root bandit (::ffff:10.16.187.21)
No locked files
The PID prefix indicates which cluster node (smb-srv2) is serving the file share (data1) to
the client (bandit). The smbstatus utility can be run from any cluster node.
This completes the deployment and configuration of clustered Samba file shares. The next
section details the common use cases in managing a Red Hat Enterprise Linux 6 Samba
Cluster.
[email protected] 45 www.redhat.com
5 Clustered Samba Management
The previous sections of this reference architecture detailed the configuration tasks for
deploying a highly available clustered Samba file share on Red Hat Enterprise Linux.
The following sections focus on the most common cluster management tasks.
5.1 Cluster Startup and Shutdown
Failure to follow the proper sequence can result in the cluster not forming properly and
the clustered Samba file shares not becoming available. For this reason, it is inadvisable
to restart all cluster nodes at once. In cases where all cluster nodes need to be restarted
(e.g. - recovery from an unexpected power outage), the recommended method is to
reboot each node individually, one at a time. Only after the node has been fully started,
the cluster formed and the clustered Samba resources properly started should the next
node be rebooted.
Table 5.1-1: Cluster Component Startup and Shutdown depicts the proper command
sequences to follow during startup and shutdown:
Startup Sequence                 Shutdown Sequence
# service cman start             # ctdb stop
# clustat                        # ctdb status
# service clvmd start            # smbstatus
# service clvmd status           # umount -a -t gfs2
# mount -a -t gfs2               # mount
# mount                          # service clvmd stop
# ctdb start                     # service clvmd status
# ctdb status                    # service cman stop
# smbstatus                      # clustat
Clustered Samba components (CLVM, GFS2, CTDB) are dependent on the underlying HA
cluster services (CMAN). For this reason, it is essential to allow the cluster to form properly
before the clustered Samba services are started.
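The startup half of the sequence can be captured in a small wrapper script to reduce the
chance of ordering mistakes. The following is a sketch based on the commands in Table 5.1-1
(using the service form to start the CTDB daemon); run it on one node at a time and verify
health before proceeding to the next node:
#!/bin/bash
# Controlled startup of clustered Samba components on a single node.
set -e                  # stop on the first failed command
service cman start      # form/join the HA cluster first
service clvmd start     # activate the clustered LVM volumes
mount -a -t gfs2        # mount the GFS2 filesystems from /etc/fstab
service ctdb start      # CTDB then starts smbd (and winbindd, if managed)
ctdb status             # confirm the node reports OK
clustat                 # confirm HA cluster membership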
Table 5.1-2: CTDB Administrative Commands provides a summary of the most
commonly used ctdb command options:
Command Option        Description
status                Display current CTDB cluster status
stop                  Administratively stop cluster node (IP address is not relocated to another node)
continue              Re-start administratively stopped node
uptime                Display CTDB daemon uptime for node
listnodes             List IP addresses of all cluster nodes
ip                    List public addresses, node servicing the address
ipinfo {IP address}   Provide detail about specified public address
statistics            Display CTDB daemon statistics
disable               Administratively disable cluster node (IP address is relocated to another node)
enable                Administratively re-enable cluster node
shutdown              Stop the CTDB daemon on a node
recover               Trigger a cluster recovery
reloadnodes           Reload the nodes file on all nodes
[email protected] 47 www.redhat.com
5.2 Adding Clustered Samba Nodes
Prior to adding a new node to an existing Samba cluster, the system must be deployed
and configured as a member of an HA cluster as outlined in the following sections:
• 4 Clustered Samba Deployment
• Appendix G: Adding/Removing HA Nodes
The new node must be configured as an HA cluster member and fully configured with
CLVM, GFS2 and CTDB/Samba as detailed in the previous sections. Do not proceed until
these tasks have been completed.
In the steps below, a new node (smb-srv3) is added to an existing two-node Samba cluster.
1. Verify the cluster and CTDB status from an existing Samba cluster node. Ensure
that all nodes are up, running, the cluster status is Online and the CTDB status
is OK. Do not add a node to the cluster unless the cluster is fully formed and in
a healthy state:
# clustat
Cluster Status for samba-cluster @ Tue Oct 16 11:01:38 2012
Member Status: Quorate
# ctdb status
Number of nodes:2
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
Generation:1768794705
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0
2. On the first cluster node (smb-srv1), copy the /etc/sysconfig/ctdb file to the new cluster
node (smb-srv3) being added:
# scp -p /etc/sysconfig/ctdb smb-srv3:/etc/sysconfig/ctdb
3. On the first cluster node (smb-srv1), edit the /etc/ctdb/nodes file and add an entry
(10.0.0.103) for the new node being added (smb-srv3). The node entry must be added
to the end of the file:
10.0.0.101
10.0.0.102
10.0.0.103
Copy the file to the other cluster nodes (smb-srv2, smb-srv3), including the node being
added:
# scp -p /etc/ctdb/nodes smb-srv2:/etc/ctdb/nodes
# scp -p /etc/ctdb/nodes smb-srv3:/etc/ctdb/nodes
4. On the first cluster node (smb-srv1), edit the /etc/ctdb/public_addresses file and add
an entry (10.16.142.113/21 bond0) for the new node being added (smb-srv3):
10.16.142.111/21 bond0
10.16.142.112/21 bond0
10.16.142.113/21 bond0
5. Copy the file to the other cluster nodes (smb-srv2, smb-srv3), including the node
being added:
# scp -p /etc/ctdb/public_addresses smb-srv2:/etc/ctdb/public_addresses
# scp -p /etc/ctdb/public_addresses smb-srv3:/etc/ctdb/public_addresses
6. On the new cluster node being added (smb-srv3), restart the CTDB service:
# service ctdb restart
Shutting down ctdbd service: [ OK ]
Starting ctdbd service: [ OK ]
7. Verify the status of the cluster and the CTDB/Samba cluster from one of the other
cluster nodes (smb-srv1, smb-srv2):
# clustat
Cluster Status for samba-cluster @ Thu Oct 18 11:07:02 2012
Member Status: Quorate
[email protected] 49 www.redhat.com
# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK (THIS NODE)
Generation:822655699
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:1
Note that both the HA cluster and the CTDB/Samba cluster now contain three members.
5.3 Removing Clustered Samba Nodes
Clustered Samba nodes are removed by modifying the CTDB and Samba configuration
files. Two methods are available: on-line and off-line. On-line removal allows member
nodes to remain available and to continue providing file sharing during the removal of
Samba cluster nodes. By design, entries for a removed node remain in the internal CTDB
database. Off-line removal requires a full shutdown of the cluster and effectively rebuilds
the cluster from the CTDB level up without including the removed node. Using this method,
removed nodes are no longer stored in the internal CTDB database.
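The node entries currently known to CTDB can be listed at any time with the ctdb listnodes
command (see Table 5.1-2). For the three-node configuration in this paper, the output
matches the contents of the /etc/ctdb/nodes file:
# ctdb listnodes
10.0.0.101
10.0.0.102
10.0.0.103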
5.3.1 Online Node Removal (Method 1)
1. Verify the cluster and CTDB status from any node. Ensure that all nodes are up,
running, the HA cluster status is Online and the CTDB status is OK. Do not remove
a node from the cluster unless the cluster is fully formed and in a healthy state:
# clustat
Cluster Status for samba-cluster @ Fri Oct 5 18:19:40 2012
Member Status: Quorate
# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:169828440
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0
2. Verify whether any clients have active sessions to the file share being removed by
running smbstatus:
# smbstatus
Samba version 3.5.10-125.el6
PID Username Group Machine
-------------------------------------------------------------------
No locked files
[email protected] 51 www.redhat.com
If any active sessions are attached to the node being removed, notify the clients to detach
from the file share before proceeding. The smbstatus utility can be run from any cluster
node.
3. On the first cluster node (smb-srv1), edit the /etc/ctdb/nodes file and comment out the
entry (10.0.0.103) for the node being removed (smb-srv3):
10.0.0.101
10.0.0.102
#10.0.0.103
Copy the file to the other cluster nodes (smb-srv2, smb-srv3), including the node being
removed:
# scp -p /etc/ctdb/nodes smb-srv2:/etc/ctdb/nodes
# scp -p /etc/ctdb/nodes smb-srv3:/etc/ctdb/nodes
4. Run 'ctdb reloadnodes' to force all nodes to reload the /etc/ctdb/nodes file. Run this
from a cluster node that is not being removed:
# ctdb reloadnodes
2012/10/05 18:31:50.213041 [15865]: Reloading nodes file on node 1
2012/10/05 18:31:50.213359 [15865]: Reloading nodes file on node 0
5. Verify the status of the cluster and the CTDB/Samba cluster from one of the other
cluster nodes (smb-srv1, smb-srv2):
# clustat
Cluster Status for samba-cluster @ Fri Oct 5 18:34:16 2012
Member Status: Quorate
# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
Generation:82327558
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:1
Note that the HA cluster still contains three members. The internal CTDB database of the
Samba cluster continues to report three nodes (Number of nodes: 3) but has successfully
removed node smb-srv3 as confirmed by the cluster size of two (Size: 2).
6. On the node to be removed (smb-srv3), the CTDB service is automatically stopped.
Unmount the CLVM volumes and stop the CMAN service:
# umount -a -t gfs2
# mount -t gfs2
# clustat
Cluster Status for samba-cluster @ Fri Oct 5 18:37:49 2012
Member Status: Quorate
# service cman stop
# clustat
Could not connect to CMAN: No such file or directory
# shutdown -h now
This completes the removal of a clustered Samba node using the on-line method.
If round-robin DNS was deployed, the IP address of the decommissioned node should be
removed from the DNS zone file and the /etc/ctdb/public_addresses file on the remaining
cluster nodes (smb-srv1, smb-srv2). The node can now be removed from the HA cluster
as outlined in Appendix G: Adding/Removing HA Nodes.
[email protected] 53 www.redhat.com
5.3.2 Offline Node Removal (Method 2)
1. Verify the cluster and CTDB status from any node. Ensure that all nodes are up,
running, the HA cluster status is Online and the CTDB status is OK. Do not
remove a node from the cluster unless the cluster is fully formed and in a
healthy state:
# clustat
Cluster Status for samba-cluster @ Fri Oct 5 18:19:40 2012
Member Status: Quorate
# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:169828440
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0
2. Verify whether any clients have active sessions to the file share being removed by
running smbstatus:
# smbstatus
Samba version 3.5.10-125.el6
PID Username Group Machine
-------------------------------------------------------------------
No locked files
If any active sessions are attached to the node being removed, notify the clients to detach
from the file share before proceeding. The smbstatus utility can be run from any cluster
node.
3. On the first cluster node (smb-srv1) create backup copies of the ctdb
(/etc/sysconfig/ctdb), nodes (/etc/ctdb/nodes) and public_addresses
(/etc/ctdb/public_addresses) files:
# mkdir -p /var/tmp/ctdb-backups
# cp -p /etc/sysconfig/ctdb /var/tmp/ctdb-backups/ctdb
# cp -p /etc/ctdb/nodes /var/tmp/ctdb-backups/nodes
# cp -p /etc/ctdb/public_addresses /var/tmp/ctdb-backups/public_addresses
4. On all cluster nodes (smb-srv1, smb-srv2, smb-srv3) stop the CTDB service,
unmount the CLVM volumes and stop the CLVMD service:
# service ctdb stop
Shutting down ctdbd service: [ OK ]
# umount -a -t gfs2
# service clvmd stop
5. On all cluster nodes (smb-srv1, smb-srv2, smb-srv3) remove the CTDB package:
# yum -y remove ctdb
...output abbreviated...
On the remaining cluster nodes (smb-srv1, smb-srv2), re-install the CTDB package:
# yum -y install ctdb
...output abbreviated...
6. On the first cluster node (smb-srv1), restore the saved ctdb (/etc/sysconfig/ctdb),
nodes (/etc/ctdb/nodes) and public_addresses (/etc/ctdb/public_addresses) files:
# cp -p /var/tmp/ctdb-backups/ctdb /etc/sysconfig/ctdb
# cp -p /var/tmp/ctdb-backups/nodes /etc/ctdb/nodes
# cp -p /var/tmp/ctdb-backups/public_addresses /etc/ctdb/public_addresses
/etc/sysconfig/ctdb
CTDB_DEBUGLEVEL=ERR
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK=/share/ctdb/.ctdb.lock
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes
[email protected] 55 www.redhat.com
/etc/ctdb/nodes
10.0.0.101
10.0.0.102
/etc/ctdb/public_addresses
10.16.142.111/21 bond0
10.16.142.112/21 bond0
7. On the remaining cluster nodes (smb-srv1, smb-srv2), start the CLVMD service and
refresh the device cache, then mount the CLVM volumes. Start the CTDB service on
each node after the volumes are mounted:
# service clvmd start
Starting clvmd:
Activating VG(s): 1 logical volume(s) in volume group "SMB-DATA2-VG" now
active
clvmd not running on node smb-srv3-ci
1 logical volume(s) in volume group "SMB-DATA1-VG" now active
clvmd not running on node smb-srv3-ci
1 logical volume(s) in volume group "SMB-CTDB-VG" now active
clvmd not running on node smb-srv3-ci
3 logical volume(s) in volume group "vg_smbsrv2" now active
clvmd not running on node smb-srv3-ci
[ OK ]
# /usr/sbin/clvmd -R
clvmd not running on node smb-srv3-ci
# mount -a -t gfs2
# service ctdb start
8. Verify the status of the cluster and the CTDB/Samba cluster from one of the other
cluster nodes (smb-srv1, smb-srv2):
# clustat
Cluster Status for samba-cluster @ Thu Oct 18 15:41:04 2012
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
smb-srv1-ci 1 Online, Local
smb-srv2-ci 2 Online
smb-srv3-ci 3 Online
# ctdb status
Number of nodes:2
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
Generation:1113017468
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0
Note that the HA cluster still contains three members but the CTDB/Samba cluster no
longer has entries for the removed node in the internal CTDB database.
This completes the removal of a clustered Samba node using the off-line method.
If round-robin DNS was deployed, the IP address of the decommissioned node should be
removed from the DNS zone file and the /etc/ctdb/public_addresses file on the remaining
cluster nodes (smb-srv1, smb-srv2). The node can now be removed from the HA cluster
as outlined in Appendix G: Adding/Removing HA Nodes.
[email protected] 57 www.redhat.com
5.4 Adding File Shares
In the steps below, a new file share (data2) is defined on a previously created and mounted
CLVM volume. Prior to adding the file share, a Fibre Channel volume (smb-srv-data-02) is
provisioned and configured following the same procedures used for the original data volume
(see Appendix B: Fibre Channel Storage Provisioning).
Do not proceed until the previous tasks have been completed and the CLVM volume
configured.
1. On the first cluster node (smb-srv1), edit the Samba configuration file
(/etc/samba/smb.conf) and append the new share definition:
[data2]
comment = Clustered Samba Share 2
public = yes
path = /share/data2
writable = yes
Test the file using the testparm utility:
# testparm
Load smb config files from /etc/samba/smb.conf
Processing section "[data1]"
Processing section "[data2]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions
[global]
workgroup = REFARCH-CTDB
server string = Samba Server Version %v
log file = /var/log/samba/log.%m
max log size = 50
clustering = Yes
idmap backend = tdb2
guest ok = Yes
[data1]
comment = Clustered Samba Share 1
path = /share/data1
read only = No
[data2]
comment = Clustered Samba Share 2
path = /share/data2
read only = No
Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/samba/smb.conf smb-srv2:/etc/samba/smb.conf
# scp -p /etc/samba/smb.conf smb-srv3:/etc/samba/smb.conf
2. Mount the new Data2 volume and verify it can be written to. Perform this step on all
cluster nodes:
# mount -t gfs2 /dev/SMB-DATA2-VG/smb-data-lvol1 /share/data2
# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_smbsrv1-lv_root
51606140 2943380 46041320 7% /
tmpfs 24708156 41444 24666712 1% /dev/shm
/dev/mapper/3600508b1001030374142393845301000p1
495844 98392 371852 21% /boot
/dev/mapper/vg_smbsrv1-lv_home
64583508 185180 61117640 1% /home
/dev/mapper/SMB--CTDB--VG-smb--ctdb--lvol1
1888032 397164 1490868 22% /share/ctdb
/dev/mapper/SMB--DATA--VG-smb--data--lvol1
188723456 397228 188326228 1% /share/data1
/dev/mapper/SMB--DATA2--VG-smb--data--lvol1
[email protected] 59 www.redhat.com
188723456 397224 188326232 1% /share/data2
# touch /share/data2/data2.test
# ls -la /share/data2/data2.test
-rw-r--r--. 1 root root 0 Oct 1 13:41 /share/data2/data2.test
3. Add a mount entry for the new volume to /etc/fstab - the new /share/data2 entry is
shown below. Edit and save the file on all cluster nodes:
#
# CTDB and DATA volumes for Clustered Samba
#
/dev/SMB-CTDB-VG/smb-ctdb-lvol1 /share/ctdb gfs2 \
defaults,noatime,nodiratime,quota=off 0 0
/dev/SMB-DATA1-VG/smb-data-lvol1 /share/data1 gfs2 \
defaults,acl,noatime,nodiratime,quota=off 0 0
/dev/SMB-DATA2-VG/smb-data-lvol1 /share/data2 gfs2 \
defaults,acl,noatime,nodiratime,quota=off 0 0
4. Restart CTDB/Samba on all cluster nodes, one node at a time. The
daemons can take up to a minute to synchronize across all cluster nodes:
# ctdb stop
# ctdb continue
5. Verify the cluster, Samba and ctdb status from any cluster node:
# clustat
Cluster Status for samba-cluster @ Mon Oct 1 17:24:09 2012
Member Status: Quorate
# smbstatus
No locked files
# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:1457203730
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0
6. Verify the file share is available from a client. If round-robin DNS has been
configured then specify that hostname to automatically cycle through the
transferable IP addresses. In the examples below, the round-robin DNS
hostname (smb-srv) is used:
$ smbclient -U root //smb-srv.cloud.lab.eng.bos.redhat.com/data2
Enter root's password: *******
Domain=[REFARCH-CTDB] OS=[Unix] Server=[Samba 3.5.10-125.el6]
smb: \> ls
. D 0 Mon Oct 1 13:41:34 2012
.. D 0 Mon Oct 1 13:40:26 2012
data2.test 0 Mon Oct 1 13:41:34 2012
# ls -la /mnt/data2
total 12
drwxr-xr-x. 2 root root 0 Oct 1 13:41 .
drwxr-xr-x. 4 root root 4096 Oct 1 17:52 ..
-rw-r--r--. 1 root root 0 Oct 1 13:41 data2.test
This completes the deployment and configuration of a new clustered Samba file share.
[email protected] 61 www.redhat.com
5.5 Removing File Shares
In the steps below, an existing file share (data2) is removed from a previously created,
mounted CLVM volume. After the file share is unmounted and the Samba configuration
changes propagated across all cluster nodes, the CLVM and Fibre Channel volumes
(smb-srv-data-02) can be removed.
Do not proceed until the filesystem contents have been archived or migrated to new target
locations.
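For example, the share contents can be archived from any cluster node that still has the
volume mounted. The following is a sketch; the archive location
(/var/tmp/data2-archive.tar.gz) is illustrative only:
# tar -czf /var/tmp/data2-archive.tar.gz -C /share data2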
1. Verify whether any clients have active sessions to the file share being removed by
running smbstatus:
# smbstatus
Samba version 3.5.10-125.el6
PID Username Group Machine
-------------------------------------------------------------------
No locked files
If any active sessions are attached to the file share being removed, notify the clients to
detach from the file share before proceeding. The smbstatus utility can be run from any
cluster node.
2. Unmount the file share. Perform this step on each cluster node:
# mount -t gfs2
/dev/mapper/SMB--CTDB--VG-smb--ctdb--lvol1 on /share/ctdb type gfs2
(rw,seclabel,noatime,nodiratime,hostdata=jid=0)
/dev/mapper/SMB--DATA2--VG-smb--data--lvol1 on /share/data2 type gfs2
(rw,seclabel,noatime,nodiratime,hostdata=jid=0,acl)
/dev/mapper/SMB--DATA1--VG-smb--data--lvol1 on /share/data1 type gfs2
(rw,seclabel,noatime,nodiratime,hostdata=jid=0,acl)
# umount /share/data2
# mount -t gfs2
/dev/mapper/SMB--CTDB--VG-smb--ctdb--lvol1 on /share/ctdb type gfs2
(rw,seclabel,noatime,nodiratime,hostdata=jid=0)
/dev/mapper/SMB--DATA1--VG-smb--data--lvol1 on /share/data1 type gfs2
(rw,seclabel,noatime,nodiratime,hostdata=jid=0,acl)
3. Remove or comment out the mount entry for the file share from /etc/fstab - the
commented entry is shown below. Edit and save the change on all cluster nodes:
#
# CTDB and DATA volumes for Clustered Samba
#
/dev/SMB-CTDB-VG/smb-ctdb-lvol1 /share/ctdb gfs2 \
defaults,noatime,nodiratime,quota=off 0 0
/dev/SMB-DATA1-VG/smb-data-lvol1 /share/data1 gfs2 \
defaults,acl,noatime,nodiratime,quota=off 0 0
#
# Removed from service - 2012-10-04
#
#/dev/SMB-DATA2-VG/smb-data-lvol1 /share/data2 gfs2 \
# defaults,acl,noatime,nodiratime,quota=off 0 0
4. On the first cluster node (smb-srv1), edit the Samba configuration file
(/etc/samba/smb.conf) and comment out or remove the existing file share entry:
#
# Removed from service – 2012-10-04
#
#[data2]
# comment = Clustered Samba Share 2
# public = yes
# path = /share/data2
# writable = yes
[email protected] 63 www.redhat.com
Test the updated file using the testparm utility; the processed configuration is displayed:
[global]
workgroup = REFARCH-CTDB
server string = Samba Server Version %v
log file = /var/log/samba/log.%m
max log size = 50
clustering = Yes
idmap backend = tdb2
guest ok = Yes
[data1]
comment = Clustered Samba Share 1
path = /share/data1
read only = No
Copy the file from the first cluster node (smb-srv1) to the other cluster nodes:
# scp -p /etc/samba/smb.conf smb-srv2:/etc/samba/smb.conf
# scp -p /etc/samba/smb.conf smb-srv3:/etc/samba/smb.conf
5. Verify the cluster, Samba and ctdb status from any cluster node:
# clustat
Cluster Status for samba-cluster @ Thu Oct 4 17:06:03 2012
Member Status: Quorate
# smbstatus
No locked files
# ctdb status
Number of nodes:3
pnn:0 10.0.0.101 OK (THIS NODE)
pnn:1 10.0.0.102 OK
pnn:2 10.0.0.103 OK
Generation:169828440
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0
Attempts to connect to the removed file share return the following error:
$ smbclient -U root //smb-srv.cloud.lab.eng.bos.redhat.com/data2
Enter root's password:
Domain=[REFARCH-CTDB] OS=[Unix] Server=[Samba 3.5.10-125.el6]
tree connect failed: NT_STATUS_BAD_NETWORK_NAME
This completes the removal of an existing clustered Samba file share. The CLVM and
Fibre Channel volumes can now be removed.
[email protected] 65 www.redhat.com
6 Windows Active Directory Integration
In this section, the tasks necessary for integrating Clustered Samba nodes into an existing
Windows Active Directory domain are detailed. Prior to proceeding, each of the following
components must first be configured:
• Windows Server 2008 R2 with Active Directory Domain Services
• Red Hat Enterprise Linux 6 servers clustered with CTDB/Samba
6.1 Overview
This configuration is for environments looking to integrate one or more Red Hat Enterprise
Linux 6 systems into an Active Directory domain or forest with the capability to customize
user configurations. Login access and file sharing services are provided.
6.1.1 Configuration Summary
Configuration Summary: Samba/Winbind – idmap_ad

Components                RHEL 6:                  • Samba/Winbind
                          Windows Server 2008 R2:  • Active Directory
                                                   • Identity Management for UNIX (IMU)
Authentication (pam)      • Winbind (pam_winbind)
ID Tracking/
Name Resolution (nss)     • Winbind (nss_winbind)
ID Mapping ("back-end")   • Winbind (idmap_ad)
Configuration Files       • /etc/krb5.conf         • /etc/pam.d/password-auth
                          • /etc/samba/smb.conf    • /etc/pam.d/system-auth
Advantages                • SID mappings homogeneous across multiple RHEL servers
                          • Customizable user configurations (shell, home directory)
                            (configured within AD)
                          • Centralized user account management
                          • SFU, RFC2307 compatible mappings
Disadvantages             • Requires additional configuration work to support a forest
                            of AD domains or multiple domain trees
                          • Requires additional user management tasks – user/group ID
                            attributes must be set within AD
Notes                     • Requires the ability to modify user attributes within AD (via IMU)
6.1.2 Cluster Configuration with Active Directory Integration
Figure 6.1.2: Clustered Samba with Active Directory Integration provides an overview
of the clustered Samba systems in relation to Windows Active Directory.
[email protected] 67 www.redhat.com
6.1.3 Authentication and ID Components
Figure 6.1.3 depicts the Authentication, ID Tracking and ID Mapping components.
The Winbind idmap_ad backend maintains consistent user ID mappings across all clustered
Samba nodes. Users can log in and access file shares through any clustered Samba node
using existing Active Directory user accounts and authentication. Customization of user shell
and home directories within Windows Active Directory is also supported. Winbind idmap_ad
requires the Identity Management for UNIX (IMU) role to be enabled on the Windows Active
Directory domain.
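Once the integration tasks in the next section are complete, a quick check that the Active
Directory attributes are honored on a node is to resolve a domain user through the name
service switch (ad-user101 is one of the example accounts used in the verification steps):
$ getent passwd ad-user101
The returned entry should contain the UID, GID, home directory and shell stored within
Active Directory.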
6.2 Integration Tasks
Integrating Red Hat Enterprise Linux 6 Samba cluster nodes into an Active Directory domain
involves the following series of steps:
1. Synchronize Time Service
2. Configure DNS
3. Update Hosts File
4. Install/Configure Kerberos Client
5. Install oddjob-mkhomedir
6. Configure Authentication
7. Verify/Test Active Directory
8. Modify Samba Configuration
9. Verification of Services
10. Configure CTDB to Manage Winbind (optional)
The following provides a step-by-step guide to the integration process.
6.2.1 Synchronize Time Service
It is essential that the time services on each clustered Samba node and the Windows Active
Directory server are synchronized, otherwise Kerberos authentication may fail due to clock
skew. In environments where time services are not reliable, best practice is to configure the
clustered Samba nodes to synchronize time from the Windows Server 2008 R2 server.
1. On each clustered Samba node, edit the file /etc/ntp.conf so the time is synchronized
from a known, reliable time service:
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
server ns1.bos.redhat.com
server 10.5.26.10
2. Activate the change on each clustered Samba node by stopping the ntp daemon,
updating the time, then starting the ntp daemon. Verify the change on both servers:
Clustered Samba node:
# service ntpd stop
Shutting down ntpd: [ OK ]
# ntpdate 10.16.255.2
22 Mar 20:17:00 ntpdate[14784]: adjust time server 10.16.255.2 offset
-0.002933 sec
# service ntpd start
Starting ntpd: [ OK ]
[email protected] 69 www.redhat.com
3. Configure the ntpd daemon to start on server boot:
# chkconfig ntpd on
# chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
6.2.2 Configure DNS
Proper resolution of DNS hostnames from each clustered Samba node and the Windows
Active Directory server are essential. Improperly resolved hostnames are one of the leading
causes for integration failures. In environments where DNS lookups are not reliable, best
practice is to configure the clustered Samba nodes to perform DNS lookups from the
Windows Server 2008 R2 Active Directory server.
1. Edit the file /etc/resolv.conf on each clustered Samba node so that the domain name
and search list are specified using the fully qualified domain name (FQDN). The
nameserver IP addresses should be listed in preferred lookup order:
domain cloud.lab.eng.bos.redhat.com
search cloud.lab.eng.bos.redhat.com
nameserver 10.nn.nnn.100 # Windows server specified here
nameserver 10.nn.nnn.247 # Alternate server 1
nameserver 10.nn.nnn.2 # Alternate server 2
2. Similarly, the hostname on each clustered Samba node should be set to the FQDN.
Edit the file /etc/sysconfig/network and set the hostname to use the FQDN:
NETWORKING=yes
HOSTNAME=smb-srv1.cloud.lab.eng.bos.redhat.com
GATEWAY=10.16.255.2
Verify on each clustered Samba node by running the hostname utility:
# hostname
smb-srv1.cloud.lab.eng.bos.redhat.com
Best practice is to create both forward and reverse lookup zones on the Windows Active
Directory server. For further detail, consult either the Windows Active Directory server
documentation or Appendix D: Active Directory Domain Configuration Summary in the Red
Hat Reference Architecture Integrating Red Hat Enterprise Linux 6 with Active Directory.
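Forward and reverse resolution can be spot-checked from any clustered Samba node with the
dig utility. The following is a sketch using the node name and address from this
configuration:
# dig +short smb-srv1.cloud.lab.eng.bos.redhat.com
# dig +short -x 10.16.142.101
The first query should return the node IP address and the second the fully qualified
hostname.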
6.2.3 Update Hosts File
On each clustered Samba node, edit /etc/hosts and add an entry for the Windows Active
Directory server:
#
#----------------------------------#
# Windows Active Directory Server: #
#----------------------------------#
#
10.16.142.100 win-srv1 win-srv1.cloud.lab.eng.bos.redhat.com
6.2.4 Install/Configure Kerberos Client
Best practice is to install and configure the Kerberos client (krb5-workstation) to ensure
Kerberos is able to properly authenticate to Active Directory on the Windows Server 2008 R2
server. This step is optional but highly recommended as it is useful for troubleshooting
Kerberos authentication issues. Perform the steps below on each clustered Samba node.
1. Install the krb5-workstation package:
# yum install krb5-workstation
...output abbreviated...
Installed:
krb5-workstation.x86_64 1.9-33.el6_3.2
Complete!
2. If Kerberos has not been previously configured, modify the Kerberos configuration file
(/etc/krb5.conf) by adding entries for the new Kerberos and Active Directory realms. Note the
differences between the Kerberos [realms] and Active Directory [domain_realm] entries:
[libdefaults]
default_realm = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM = {
kdc = WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
admin_server = WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
}
[email protected] 71 www.redhat.com
[domain_realm]
.refarch-ad.cloud.lab.eng.bos.redhat.com = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
refarch-ad.cloud.lab.eng.bos.redhat.com = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
Under Kerberos, [realms] is set to the Kerberos server definitions and [domain_realm]
defines the Active Directory server. Both are in the Active Directory REFARCH-AD domain.
3. Verify the Kerberos configuration. First, clear out any existing tickets:
# kdestroy
# klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_0)
At this point Kerberos is fully functional and the client utilities (kinit, klist, kdestroy)
can be used for testing and verifying Kerberos functionality.
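For example, a ticket can be requested for a known domain account to confirm the realm
configuration. The following is a sketch using the Administrator account referenced later in
this section:
# kinit [email protected]
Password for [email protected]: ********
# klist
A successful kinit followed by a klist showing a krbtgt ticket confirms the [realms] entries
are correct.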
6.2.5 Install oddjob-mkhomedir
Install the oddjob-mkhomedir package to ensure that user home directories are created
with the proper SELinux file and directory contexts. Perform this step on each clustered
Samba node:
# yum install oddjob-mkhomedir.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security,
subscription-manager
Updating certificate-based repositories.
Running Transaction
Installing : oddjob-mkhomedir-0.30-5.el6.x86_64
1/1
Installed products updated.
Installed:
oddjob-mkhomedir.x86_64 0:0.30-5.el6
...output abbreviated...
Complete!
6.2.6 Configure Authentication
The system-config-authentication tool simplifies configuring the Samba,
Kerberos, security and authentication files for Active Directory integration. Invoke
the tool as follows:
# system-config-authentication
On the Identity & Authentication tab, select the User Account Database drop-down
then select Winbind.
[email protected] 73 www.redhat.com
A new set of fields is displayed. Selecting the Winbind option configures the system to
connect to a Windows Active Directory domain. User information from the domain can then
be accessed and the server authentication options configured.
Populate the Winbind fields (domain, security model, ADS realm, domain controllers) to match the environment.
[email protected] 75 www.redhat.com
Under Other Authentication Options, select Create home directories on the first login.
On the first successful login to Active Directory, the oddjobd daemon calls a method
to create a new home directory for a user.
Return to the Identity & Authentication tab, select Join Domain. An alert indicates the
need to save the configuration changes to disk before continuing:
Select Save. A new window prompts for the Domain administrator password:
Select OK. The terminal window displays the status of the domain join:
[/usr/bin/net join -w REFARCH-AD -S WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM -U Administrator]
Enter Administrator's password:<...>
Select Apply. The terminal window indicates that Winbind and the oddjobd were started:
Starting Winbind services: [ OK ]
Starting oddjobd: [ OK ]
Perform the previous authentication configuration tasks on each of the clustered Samba
nodes before proceeding to the next section.
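The same settings can also be applied non-interactively with the authconfig utility, which is
convenient for the remaining cluster nodes. The following is a sketch; verify the option
names against the installed authconfig version:
# authconfig --enablewinbind --enablewinbindauth \
  --smbsecurity=ads --smbworkgroup=REFARCH-AD \
  --smbrealm=REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM \
  --smbservers=WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM \
  --winbindjoin=Administrator --enablemkhomedir --update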
[email protected] 77 www.redhat.com
6.2.7 Verify/Test Active Directory
The join to the Active Directory domain is complete. Verify access by performing each of
the following tasks.
Query the domain users and groups with the wbinfo utility:
# wbinfo -u
...output abbreviated...
REFARCH-AD\ad-user101
REFARCH-AD\ad-user102
REFARCH-AD\ad-user103
...output abbreviated...
# wbinfo -g
...output abbreviated...
REFARCH-AD\dnsadmins
REFARCH-AD\dnsupdateproxy
REFARCH-AD\rhel-users
Note: If either of these fails to return all users or groups in the domain, the idmap UID and
GID upper boundaries in the Samba configuration file need to be increased and the winbind
and smb daemons restarted. These tasks are discussed in the next section.
6.2.8 Modify Samba Configuration
The previous sections configured Winbind by using the default backend to verify Active
Directory domain access. Next, the Samba configuration file is modified to use the
idmap_ad back-end and several other parameters are configured for convenience.
Table 6.2.8: Summary of Changes provides a summary of the configuration file
parameter changes:
Samba Configuration File Parameters

Parameter                                            Description
idmap uid = 10000-19999                              Set user id range for default backend (tdb)
idmap gid = 10000-19999                              Set group id range for default backend (tdb)
idmap config REFARCH-AD:backend = ad                 Configure winbind to use idmap_ad backend
idmap config REFARCH-AD:default = yes                Configure REFARCH-AD as default domain
idmap config REFARCH-AD:range = 10000000-19999999    Set range for idmap_ad backend
idmap config REFARCH-AD:schema_mode = rfc2307        Enable support for rfc2307 UNIX attributes
winbind nss info = rfc2307                           Obtain user home directory and shell from AD
winbind enum users = no                              Disable enumeration of users
winbind enum groups = no                             Disable enumeration of groups
winbind separator = +                                Change default separator from '\' to '+'
winbind use default domain = yes                     Remove need to specify domain in commands
winbind nested groups = yes                          Enable nesting of groups in Active Directory
Edit and save the Samba configuration file, adding the new parameters to the [global] section:
[global]
workgroup = REFARCH-AD
password server = WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
realm = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
security = ads
idmap uid = 10000-19999
idmap gid = 10000-19999
idmap config REFARCH-AD:backend = ad
idmap config REFARCH-AD:default = yes
idmap config REFARCH-AD:range = 10000000-19999999
idmap config REFARCH-AD:schema_mode = rfc2307
winbind nss info = rfc2307
[email protected] 79 www.redhat.com
winbind enum users = no
winbind enum groups = no
winbind separator = +
winbind use default domain = yes
winbind nested groups = yes
Verify the updated file using the testparm utility; the processed configuration is displayed:
[global]
workgroup = REFARCH-AD
realm = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
server string = Samba Server Version %v
security = ADS
password server = WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
log file = /var/log/samba/log.%m
max log size = 50
clustering = Yes
idmap backend = tdb2
idmap uid = 10000-19999
idmap gid = 10000-19999
winbind separator = +
winbind use default domain = Yes
winbind nss info = rfc2307
idmap config REFARCH-AD:schema_mode = rfc2307
idmap config REFARCH-AD:range = 10000000-19999999
idmap config REFARCH-AD:default = yes
idmap config REFARCH-AD:backend = ad
guest ok = Yes
[data1]
comment = Clustered Samba Share 1
path = /share/data1
read only = No
...output abbreviated...
Back up and clear out the existing Samba cache files - this requires the services to be stopped:
# service smb stop
Shutting down SMB services: [ OK ]
# service winbind stop
Shutting down Winbind services: [ OK ]
# tar -cvPf /var/tmp/samba-cache-backup.tar /var/lib/samba
/var/lib/samba/smb_krb5/
/var/lib/samba/smb_krb5/krb5.conf.REFARCH-AD
...output abbreviated...
/var/lib/samba/registry.tdb
/var/lib/samba/perfmon/
/var/lib/samba/winbindd_idmap.tdb
# ls -la /var/tmp/samba-cache-backup.tar
-rw-r--r--. 1 root root 512000 Oct 10 17:06 /var/tmp/samba-cache-backup.tar
# rm -f /var/lib/samba/*
[email protected] 81 www.redhat.com
# service smb start
Starting SMB services: [ OK ]
# service smb status
smbd (pid 24482) is running...
# ps -aef | grep smbd
root 24482 1 0 17:12 ? 00:00:00 smbd -D
root 24495 24482 0 17:12 ? 00:00:00 smbd -D
# service winbind start
# wbinfo -u
...output abbreviated...
ad-user101
ad-user102
ad-user103
...output abbreviated...
# wbinfo -g
...output abbreviated...
dnsadmins
dnsupdateproxy
rhel-users
6.2.9 Verification of Services
Verify the services provided by performing the tasks outlined in the following sections:
1. Login Access
$ ssh ad-user101@smb-srv1
ad-user101@smb-srv1's password: **********
Creating home directory for ad-user101.
$ hostname
smb-srv1.cloud.lab.eng.bos.redhat.com
$ id
uid=10000101(ad-user101) gid=10000002(rhel-users) groups=10000002(rhel-users),10001(BUILTIN+users) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ pwd
/home/REFARCH-AD/ad-user101
$ ls -ld
drwxr-xr-x. 4 ad-user101 rhel-users 4096 Oct 10 17:23 .
$ echo $SHELL
/bin/bash
Verify access from another Red Hat Enterprise Linux 6 system, using a different Active
Directory user account:
$ hostname
rhel-srv11.cloud.lab.eng.bos.redhat.com
$ ssh ad-user102@smb-srv1
ad-user102@smb-srv1's password:
Creating home directory for ad-user102.
$ hostname
smb-srv1.cloud.lab.eng.bos.redhat.com
$ id
uid=10000102(ad-user102) gid=10000002(rhel-users) groups=10000002(rhel-users),10001(BUILTIN+users) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ pwd
/home/REFARCH-AD/ad-user102
$ ls -ld
drwxr-xr-x. 4 ad-user102 rhel-users 4096 Oct 10 17:27 .
$ echo $SHELL
/bin/bash
[email protected] 83 www.redhat.com
2. File Share
Use the smbclient utility to determine what file shares are available on win-srv1:
$ hostname
smb-srv1.cloud.lab.eng.bos.redhat.com
$ id
uid=10000101(ad-user101) gid=10000002(rhel-users) groups=10000002(rhel-users),10001(BUILTIN+users) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ kinit
Password for [email protected]:**********
$ klist
Ticket cache: FILE:/tmp/krb5cc_10000101
Default principal: [email protected]
$ smbclient -L win-srv1 -k
...output abbreviated...
Server Comment
--------- -------
Workgroup Master
--------- -------
Use the smbclient utility to view what files are available on the Win-Data file share:
$ smbclient //win-srv1/Win-Data -k
OS=[Windows Server 2008 R2 Enterprise 7601 Service Pack 1] Server=[Windows
Server 2008 R2 Enterprise 6.1]
smb: \> showconnect
//win-srv1/Win-Data
smb: \> listconnect
0: server=win-srv1, share=Win-Data
smb: \> ls
. D 0 Wed Oct 10 18:35:44 2012
.. D 0 Wed Oct 10 18:35:44 2012
Win-Srv1.txt A 301 Wed Oct 10 18:38:07 2012
51097 blocks of size 1048576. 26294 blocks available
smb: \> quit
Note that new Kerberos tickets have been granted for use by the smbclient utility:
$ klist
Ticket cache: FILE:/tmp/krb5cc_10000101
Default principal: [email protected]
Create a mount point, mount the file share locally on a cluster node and access a file:
# hostname
smb-srv1.cloud.lab.eng.bos.redhat.com
# mkdir /mnt/Win-Data
# mount -t cifs //win-srv1/Win-Data /mnt/Win-Data -o username=ad-user101
Password:
# df -k -t cifs
Filesystem 1K-blocks Used Available Use% Mounted on
//win-srv1/Win-Data 52324348 25399260 26925088 49% /mnt/Win-Data
# mount -t cifs
//win-srv1/Win-Data on /mnt/Win-Data type cifs (rw)
# su - ad-user101
$ id
uid=10000101(ad-user101) gid=10000002(rhel-users) groups=10000002(rhel-users),10001(BUILTIN+users) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ ls -la /mnt/Win-Data
total 5
drwxr-xr-x. 1 root root 0 Oct 10 19:02 .
drwxr-xr-x. 3 root root 4096 Oct 10 18:57 ..
-rwxr-xr-x. 1 root root 302 Oct 10 19:03 Win-Srv1.txt
$ cat /mnt/Win-Data/Win-Srv1.txt
[email protected] 85 www.redhat.com
+-------------------------------------------------------+
+ This file is located on the Windows Server 2008 R2 +
+ server named 'win-srv1.cloud.lab.eng.bos.redhat.com' +
+ located in the Active Directory domain 'REFARCH-AD' +
+-------------------------------------------------------+
6.2.10 Configure CTDB Winbind Management (optional)
CTDB can be configured to manage the startup and stopping of Winbind. This step is optional
but highly recommended for environments where clustered Samba nodes are integrated with
Active Directory domains.
1. Edit and save the CTDB configuration file (/etc/sysconfig/ctdb) on the first cluster node
(smb-srv1), enabling the CTDB_MANAGES_WINBIND parameter as shown below:
CTDB_DEBUGLEVEL=ERR
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK=/share/ctdb/.ctdb.lock
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes
This change simplifies the management of Samba and Winbind by automatically starting
and stopping the smbd and winbindd daemons when the ctdb service is started or
stopped.
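Because CTDB now controls both daemons, it is also reasonable to confirm that the
standalone init scripts are not started at boot and to restart CTDB to activate the change.
This is a suggested verification, not part of the original procedure:
# chkconfig smb off
# chkconfig winbind off
# service ctdb restart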
This completes the process of integrating Red Hat Enterprise Linux 6 Samba cluster nodes
into an Active Directory domain. If there are multiple clustered Samba nodes to be
integrated, repeat the integration tasks for each system and verify the services provided.
7 Conclusion
This reference architecture details the deployment, configuration and management of highly
available file shares using clustered Samba on Red Hat Enterprise Linux 6. The most
common administration tasks are included - starting/stopping nodes, adding/removing nodes
and file shares. For environments interested in integrating Samba clusters into Windows
Active Directory domains, a separate section is provided.
The clustered Samba configuration detailed within can be deployed as presented here,
or customized to meet the specific requirements of individual environments.
[email protected] 87 www.redhat.com
Appendix A: References
Red Hat Enterprise Linux 6
1. Red Hat Enterprise Linux 6 Installation Guide
Installing Red Hat Enterprise Linux 6 for all architectures
Edition 1.0
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/pdf/
Installation_Guide/Red_Hat_Enterprise_Linux-6-Installation_Guide-en-US.pdf
Microsoft Windows Server 2008 R2
10. Install and Deploy Windows Server
August 6, 2009
http://technet.microsoft.com/en-us/library/dd283085.aspx
Active Directory
11. Active Directory Domain Services
April 18, 2008
http://technet.microsoft.com/en-us/library/cc770946.aspx
16. “How do I set up winbind on our Samba server to create users and groups from our
domain controller?”
Red Hat Knowledge Article - 4821
http://access.redhat.com/knowledge/articles/DOC-4821
17. “How do I configure Kerberos for Active Directory (AD) integration on Linux?”
Red Hat Knowledge Solution - 4734
http://access.redhat.com/knowledge/solutions/DOC-4734
[email protected] 89 www.redhat.com
Appendix B: Fibre Channel Storage
Provisioning
Two CLVM volumes are configured for use by the cluster nodes. Both volumes are created on
an HP StorageWorks MSA2324fc Fibre Channel storage array. The array contains a single
controller (Ports A1, A2) and an MSA70 expansion shelf providing a total of 48 physical
drives. The steps below describe how to provision a 2 GB volume (smb-srv-ctdb-01) and a
200 GB volume (smb-srv-data-01) within a new virtual disk (VD1) from the command line.
Step 2. View the available virtual disks (vdisk), physical disks and volumes
# show vdisk
# show disks
# show volumes
Step 4. Create volumes within the virtual disk for CTDB and Samba data
# create volume vdisk VD1 size 2GB access no-access lun 1 smb-srv-ctdb-01
# create volume vdisk VD1 size 200GB access no-access lun 2 smb-srv-data-01
# show volumes vdisk VD1
Vdisk Name Size Serial Number WR Policy
Cache Opt Read Ahead Size Type Class
Volume Description
----------------------------------------------------------------------------
VD1 smb-srv-ctdb-01 1999.9MB 00c0ffd7e69d0000d26a325001000000 write-back
standard Default standard standard
VD1 smb-srv-data-01 199.9GB 00c0ffd7e69d0000f36a325001000000 write-back
standard Default standard standard
<...output truncated...>
50060B0000C28634 Yes No Standard
<...output truncated...>
smb-srv1:
# cat /sys/class/fc_host/host1/port_name
0x50060b0000c2862C
# cat /sys/class/fc_host/host2/port_name
0x50060b0000c2862E
smb-srv2:
# cat /sys/class/fc_host/host1/port_name
0x50060b0000c28634
# cat /sys/class/fc_host/host2/port_name
0x50060b0000c28636
smb-srv3:
# cat /sys/class/fc_host/host1/port_name
0x50060b0000c2863c
# cat /sys/class/fc_host/host2/port_name
0x50060b0000c2863e
Note: If the Fibre Channel storage array has two controllers attached to the SAN fabric then
each host has four port connections instead of the two shown here.
<...output truncated...>
[email protected] 91 www.redhat.com
Vdisk Name Size Serial Number WR Policy
Cache Opt Read Ahead Size Type Class
Volume Description
----------------------------------------------------------------------------
VD1 smb-srv-ctdb-01 1999.9MB 00c0ffd7e69d0000d26a325001000000 write-back
standard Default standard standard
VD1 smb-srv-data-01 199.9GB 00c0ffd7e69d0000f36a325001000000 write-back
standard Default standard standard
<...output truncated...>
Step 10. Verify volume and host mappings
# show volume-map smb-srv-ctdb-01
Info: Retrieving data...
Volume View [Serial Number (00c0ffd7e69d0000d78a095001000000) Name (smb-srv-
ctdb-01) ] Mapping:
Ports LUN Access Host-Port-Identifier Nickname Profile
----------------------------------------------------------------------
A1,A2 1 read-write 50060B0000C2862C smb-srv1-host1 Standard
A1,A2 1 read-write 50060B0000C2862E smb-srv1-host2 Standard
A1,A2 1 read-write 50060B0000C28634 smb-srv2-host1 Standard
A1,A2 1 read-write 50060B0000C28636 smb-srv2-host2 Standard
A1,A2 1 read-write 50060B0000C2863C smb-srv3-host1 Standard
A1,A2 1 read-write 50060B0000C2863E smb-srv3-host2 Standard
not-mapped all other hosts Standard
# show volume-map smb-srv-data-01
Info: Retrieving data...
Volume View [Serial Number (00c0ffd7e69d0000fa8a095001000000) Name (smb-srv-
data-01) ] Mapping:
Ports LUN Access Host-Port-Identifier Nickname Profile
----------------------------------------------------------------------
A1,A2 2 read-write 50060B0000C2862C smb-srv1-host1 Standard
A1,A2 2 read-write 50060B0000C2862E smb-srv1-host2 Standard
A1,A2 2 read-write 50060B0000C28634 smb-srv2-host1 Standard
A1,A2 2 read-write 50060B0000C28636 smb-srv2-host2 Standard
A1,A2 2 read-write 50060B0000C2863C smb-srv3-host1 Standard
A1,A2 2 read-write 50060B0000C2863E smb-srv3-host2 Standard
not-mapped all other hosts Standard
<...output truncated...>
# show host-map
Host View [ID (50060B0000C2863C) Name (smb-srv3-host1) Profile (Standard) ]
Mapping:
Name Serial Number LUN Access Ports
-------------------------------------------------------------------------
smb-srv-ctdb-01 00c0ffd7e69d0000d26a325001000000 1 read-write A1,A2
smb-srv-data-01 00c0ffd7e69d0000f36a325001000000 2 read-write A1,A2
[email protected] 93 www.redhat.com
Name Serial Number LUN Access Ports
-------------------------------------------------------------------------
smb-srv-ctdb-01 00c0ffd7e69d0000d26a325001000000 1 read-write A1,A2
smb-srv-data-01 00c0ffd7e69d0000f36a325001000000 2 read-write A1,A2
Step 11. From each of the cluster nodes, determine which device files are configured for the
2 GB (/dev/sdb, /dev/sdd, /dev/sdf, /dev/sdh) and 200 GB (/dev/sdc, /dev/sde, /dev/sdg,
/dev/sdi) Fibre Channel disks
# fdisk -l 2>/dev/null | grep "^Disk /dev/sd"
Disk /dev/sda: 146.8 GB, 146778685440 bytes
Disk /dev/sdb: 999 MB, 999997440 bytes
Disk /dev/sdc: 100.0 GB, 99999989760 bytes
Disk /dev/sdd: 999 MB, 999997440 bytes
Disk /dev/sde: 100.0 GB, 99999989760 bytes
Disk /dev/sdf: 999 MB, 999997440 bytes
Disk /dev/sdg: 100.0 GB, 99999989760 bytes
Disk /dev/sdh: 999 MB, 999997440 bytes
Disk /dev/sdi: 100.0 GB, 99999989760 bytes
Step 12. Verify the World Wide ID's (WWID) match for each device. The WWID's must be
the same across each cluster node
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3600c0ff000d7e69dd78a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdd
3600c0ff000d7e69dd78a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdf
3600c0ff000d7e69dd78a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdh
3600c0ff000d7e69dd78a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
3600c0ff000d7e69dfa8a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sde
3600c0ff000d7e69dfa8a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdg
3600c0ff000d7e69dfa8a095001000000
# /lib/udev/scsi_id --whitelisted --device=/dev/sdi
3600c0ff000d7e69dfa8a095001000000
Appendix C: Cluster Configuration File
(cluster.conf)
<?xml version="1.0"?>
<cluster config_version="19" name="samba-cluster">
<fence_daemon post_join_delay="60"/>
<clusternodes>
<clusternode name="smb-srv1-ci" nodeid="1">
<fence>
<method name="Primary">
<device name="IPMI-smb-srv1-ci"/>
</method>
</fence>
</clusternode>
<clusternode name="smb-srv2-ci" nodeid="2">
<fence>
<method name="Primary">
<device name="IPMI-smb-srv2-ci"/>
</method>
</fence>
</clusternode>
<clusternode name="smb-srv3-ci" nodeid="3">
<fence>
<method name="Primary">
<device name="IPMI-smb-srv3-ci"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman/>
<fencedevices>
<fencedevice agent="fence_ipmilan" auth="password" \
ipaddr="10.16.143.232" lanplus="on" login="root" \
name="IPMI-smb-srv1-ci" passwd="*******" \
power_wait="5" timeout="20"/>
<fencedevice agent="fence_ipmilan" auth="password" \
ipaddr="10.16.143.233" lanplus="on" login="root" \
name="IPMI-smb-srv2-ci" passwd="*******" \
power_wait="5" timeout="20"/>
<fencedevice agent="fence_ipmilan" auth="password" \
ipaddr="10.16.143.241" lanplus="on" login="root" \
name="IPMI-smb-srv3-ci" passwd="*******" \
power_wait="5" timeout="20"/>
</fencedevices>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>
[email protected] 95 www.redhat.com
Appendix D: CTDB Configuration Files
/etc/sysconfig/ctdb (Base Configuration)
CTDB_DEBUGLEVEL=ERR
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK=/share/ctdb/.ctdb.lock
CTDB_MANAGES_SAMBA=yes
/etc/ctdb/public_addresses
10.16.142.111/21 bond0
10.16.142.112/21 bond0
10.16.142.113/21 bond0
/etc/ctdb/nodes
10.0.0.101
10.0.0.102
10.0.0.103
Appendix E: Samba Configuration File
(smb.conf)
Base Configuration
[global]
workgroup = REFARCH-CTDB
server string = Samba Server Version %v
guest ok = yes
clustering = yes
idmap backend = tdb2
passdb backend = tdbsam
[data1]
comment = Clustered Samba Share 1
public = yes
path = /share/data1
writable = yes
[data2]
comment = Clustered Samba Share 2
public = yes
path = /share/data2
writable = yes
[email protected] 97 www.redhat.com
Advanced Configuration (Active Directory Integration)
[global]
guest ok = yes
clustering = yes
workgroup = REFARCH-AD
password server = WIN-SRV1.REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
realm = REFARCH-AD.CLOUD.LAB.ENG.BOS.REDHAT.COM
security = ads
idmap uid = 20000-29999
idmap gid = 20000-29999
idmap config REFARCH-AD:backend = ad
idmap config REFARCH-AD:default = yes
idmap config REFARCH-AD:range = 10000000-29999999
idmap config REFARCH-AD:schema_mode = rfc2307
winbind nss info = rfc2307
winbind enum users = no
winbind enum groups = no
winbind separator = +
winbind use default domain = yes
winbind nested groups = yes
[data1]
comment = Clustered Samba Share 1
public = yes
path = /share/data1
writable = yes
[data2]
comment = Clustered Samba Share 2
public = yes
path = /share/data2
writable = yes
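After the advanced configuration is distributed, the cluster is joined to the domain once and winbind connectivity verified; a sketch assuming the REFARCH-AD domain's Administrator account:
# net ads join -U Administrator
# wbinfo -t
# wbinfo -u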
Appendix F: Cluster Configuration Matrix
                           smb-srv1                    smb-srv2                    smb-srv3
Nodes
  Node Name                smb-srv1-ci                 smb-srv2-ci                 smb-srv3-ci
  IP Address               10.0.0.101                  10.0.0.102                  10.0.0.103
    (cluster interconnect)
  Hostname                 smb-srv1                    smb-srv2                    smb-srv3
  IP Address               10.16.142.101               10.16.142.102               10.16.142.103
    (public interface)
Fencing
  Fence Type               IPMI Lan                    IPMI Lan                    IPMI Lan
  Fence Device Name        IPMI-smb-srv1-ci            IPMI-smb-srv2-ci            IPMI-smb-srv3-ci
  Fence Device IP Address  10.16.143.232               10.16.143.233               10.16.143.241
  Fence Method Name        Primary                     Primary                     Primary
  Fence Instance           IPMI-smb-srv1-ci            IPMI-smb-srv2-ci            IPMI-smb-srv3-ci
Storage
  CTDB Volume
    Type                   Fibre Channel
    Physical Disk          smb-srv-ctdb-01
    Physical Volume        /dev/mapper/smb-srv-ctdb-01
    Volume Group           SMB-CTDB-VG
    Logical Volume         smb-ctdb-lvol1
  Filesystem
    Volume                 CLVM
    Type                   GFS2
    Mount point            /share/ctdb
    Device                 /dev/SMB-CTDB-VG/smb-ctdb-lvol1
  Data Volume
    Type                   Fibre Channel
    Physical Disk          smb-srv-data-01
    Physical Volume        /dev/mapper/smb-srv-data-01
    Volume Group           SMB-DATA1-VG
    Logical Volume         smb-data-lvol1
  Filesystem
    Volume                 CLVM
    Type                   GFS2
    Mount point            /share/data1
    Device                 /dev/SMB-DATA1-VG/smb-data-lvol1
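The storage rows of this matrix can be cross-checked on any running node with the standard multipath and LVM reporting tools; a brief sketch:
# multipath -ll
# vgs SMB-CTDB-VG SMB-DATA1-VG
# lvs SMB-CTDB-VG SMB-DATA1-VG
# mount | grep gfs2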
[email protected] 99 www.redhat.com
Appendix G: Adding/Removing HA Nodes
• Two-node HA clusters are a special case: adding a node (2 -> 3) or removing one
(3 -> 2) requires a restart of the cluster (CMAN) services and a brief service
outage to activate the change in membership (see the cman snippet below).
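For reference, the cman element of a two-node cluster.conf typically carries both
flags (a sketch; compare with the bare <cman/> element in Appendix C):
<cman expected_votes="1" two_node="1"/>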
Adding HA Cluster Node
1. Verify the cluster status from any node. Ensure that all nodes are up and running and
that the cluster status is Online. Do not add a node to the cluster unless the
cluster is fully formed and in a healthy state:
# clustat
2. Add the new member to the cluster configuration, specifying its nodeid. When
expanding from two nodes to three (or more), the two_node flag must first be cleared;
running --setcman with no attributes resets the cman element to its defaults,
removing the two_node and expected_votes flags. These commands can be run from any
cluster node or the management server:
# ccs --host smb-srv1 --setcman
# ccs --host smb-srv1 --addnode smb-srv3-ci --nodeid="3"
Node smb-srv3-ci added.
3. Add the fence method (Primary) to the new node, define the node's fence device
(IPMI-smb-srv3-ci), and add an instance of that device to the method. Run these
from any cluster node:
# ccs --host smb-srv1 --addmethod Primary smb-srv3-ci
Method Primary added to smb-srv3-ci.
# ccs --host smb-srv1 --addfencedev IPMI-smb-srv3-ci agent=fence_ipmilan \
auth=password ipaddr=10.16.143.241 lanplus=on login=root \
name=IPMI-smb-srv3-ci passwd=password power_wait=5 timeout=20
# ccs --host smb-srv1 --addfenceinst IPMI-smb-srv3-ci smb-srv3-ci Primary
4. Propagate the change to all cluster members, then restart the cluster services. A brief
downtime is required to allow the cluster nodes to synchronize and activate the
change. This can be run from any cluster node:
# ccs --host smb-srv1 --stopall
# ccs --host smb-srv1 --sync --activate
# ccs --host smb-srv1 --startall
# ccs --host smb-srv1 --checkconf
All nodes in sync.
Removing HA Cluster Node
1. Verify the cluster status from any node. Ensure that all nodes are up and running and
that the cluster status is Online. Do not remove a node from the cluster unless the
cluster is fully formed and in a healthy state:
# clustat
2. Remove the node from the cluster configuration and propagate the change to the
remaining cluster members. Two-node clusters are a special case that requires the
two_node and expected_votes flags to be enabled; for all other configurations
these flags are not needed. A brief downtime is required to allow the cluster
nodes to synchronize and activate the change. This can be run from any cluster node
as follows:
# ccs --host smb-srv1 --rmnode smb-srv3-ci
# ccs --host smb-srv1 --setcman two_node=1 expected_votes=1
# ccs --host smb-srv1 --stopall
# ccs --host smb-srv1 --sync --activate
# ccs --host smb-srv1 --checkconf
All nodes in sync.
3. Activate the new (two-node) cluster configuration. The cluster services must be
restarted when downsizing to a two-node cluster configuration. This can be run from
any cluster node:
# ccs --host smb-srv1 --startall
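To confirm that the downsized membership is active, recheck the member list and vote
counts from any node; a brief sketch:
# clustat
# cman_tool status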