GFS CTDB HowTo
Revision as of 09:36, 9 December 2010
- 1 Introduction
- 2 Platform
- 3 Configuration
- 4 Bringing up the cluster, clvmd (optional), gfs2, ctdb and samba
- 5 Using the Clustered Samba server
Introduction

This is a step-by-step HowTo for setting up clustered Samba with CTDB on top of GFS2.
Platform

This HowTo covers Fedora Core 13 (or later) and Red Hat Enterprise Linux 6 (or later) on x86_64 platforms. The Red Hat cluster manager supports a minimum of two machines in a clustered environment. All nodes are required to run exactly the same software versions unless otherwise mentioned.
All of the cluster nodes over which you plan to use GFS2 must be able to mount the same shared storage (Fibre Channel, iSCSI, etc.).
Fence device: Any installation must use some sort of fencing mechanism; GFS2 requires fencing to prevent data corruption. See http://sources.redhat.com/cluster/wiki/FAQ/Fencing for more information about fencing. A fence device is not required for test clusters.
Configuration

/* This list of required and optional packages for clustered samba is not complete. Please help me in filling this in. I usually configure a yum repo and do a 'yum -y install cman lvm2-cluster gfs2-utils' for my cluster and it pulls everything in automatically. */
- cman (Red Hat cluster manager)
- gfs2-utils (Utilities to manage and administer a gfs2 filesystem.)
- lvm2-cluster (Clustered Logical Volume Manager (clvm) to manage shared logical volumes.)
- mrxvt (Multi-tabbed terminal emulator for simultaneous administration of cluster nodes.)
Depending on what method you use to access shared storage, additional packages might be needed. For example, you'll need aoetools if you plan to use ATAoE.
It is recommended that you use the yum package management tool to install these packages so that all the necessary dependencies are resolved automatically, e.g. "yum install cman lvm2-cluster gfs2-utils ctdb samba".
All configuration steps in this section are to be performed on all of the cluster nodes unless otherwise specified. Setting up one machine and copying the configs over to the other nodes is one way to do this if you're not using a cluster shell (like cssh or mrxvt) that can broadcast keystrokes to all the tabs/windows, each of which is connected to a different cluster node.
Cluster Configuration

To set up a cluster, all you need is a cluster configuration XML file located at /etc/cluster/cluster.conf. A simple configuration for a 3-node cluster is shown below.
<?xml version="1.0"?>
<cluster name="csmb" config_version="1">
  <clusternodes>
    <clusternode name="clusmb-01" nodeid="1"></clusternode>
    <clusternode name="clusmb-02" nodeid="2"></clusternode>
    <clusternode name="clusmb-03" nodeid="3"></clusternode>
  </clusternodes>
</cluster>
The "name" attribute of the <cluster> tag is your name for the cluster. This needs to be unique among all the clusters you may have on the same network. As we will see later when configuring GFS2, this cluster name is also used to associate the filesystem with the cluster. For each <clusternode>, you need to specify a "name", which is also the hostname of the corresponding machine, and a "nodeid", which is numeric and unique to each node.

Note: For a two-node cluster, a special attribute needs to be part of cluster.conf: add <cman two_node="1" expected_votes="1"/> within the <cluster></cluster> tags.

The example below adds a power fencing (apc) device to the configuration:
<?xml version="1.0"?>
<cluster name="csmb" config_version="1">
  <clusternodes>
    <clusternode name="clusmb-01" nodeid="1">
      <fence>
        <method name="single">
          <device name="clusmb-apc" switch="1" port="1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="clusmb-02" nodeid="2">
      <fence>
        <method name="single">
          <device name="clusmb-apc" switch="1" port="2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="clusmb-03" nodeid="3">
      <fence>
        <method name="single">
          <device name="clusmb-apc" switch="1" port="3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="clusmb-apc" agent="fence_apc" ipaddr="192.168.1.102" login="admin" passwd="password"/>
  </fencedevices>
</cluster>
Each <clusternode> has an associated <fence> section, and a separate <fencedevices> section within the root <cluster> tag defines the fencing agent; the "name" used in each <device> must match the "name" of a <fencedevice>. http://sources.redhat.com/cluster/wiki/FAQ/Fencing has more information on configuring different fencing agents, and the cluster.conf manual page describes the various options available.
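For reference, a minimal cluster.conf for a two-node cluster using the two_node attribute described above might look like the following sketch (no fencing shown; the cluster and host names are illustrative):

```xml
<?xml version="1.0"?>
<cluster name="csmb2" config_version="1">
  <!-- Required for two-node clusters: quorum with a single vote -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="clusmb-01" nodeid="1"></clusternode>
    <clusternode name="clusmb-02" nodeid="2"></clusternode>
  </clusternodes>
</cluster>
```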
GFS2 Configuration

Before using lvm2-cluster (clvmd) to manage your shared storage, edit the LVM configuration file at /etc/lvm/lvm.conf and change the locking_type attribute to "locking_type = 3". Type 3 uses built-in clustered locking.
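The locking_type change can also be scripted. The sketch below runs against a scratch copy so it is safe to try anywhere; on a real node you would edit /etc/lvm/lvm.conf itself (the /tmp path and the sample line are illustrative):

```shell
# Create a scratch copy with the default locking line for illustration.
printf '    locking_type = 1\n' > /tmp/lvm.conf.example
# Switch to built-in clustered locking (type 3).
sed -i 's/locking_type = 1/locking_type = 3/' /tmp/lvm.conf.example
grep 'locking_type' /tmp/lvm.conf.example
```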
When using clvm, the cluster must be up (so that clvmd is running) before you can create logical volumes and gfs2 filesystems. For a first-time setup, skip this step for now and move on to the next step, CTDB Configuration; we will come back to this step later.
Use the lvm tools like pvcreate, vgcreate, lvcreate to create two logical volumes ctdb_lv and csmb_lv on your shared storage. ctdb_lv will store the shared ctdb state and needs to be 1GB in size. csmb_lv will hold the user data that will be exported via a samba share so size it accordingly. Note that creation of clustered volume groups and logical volumes is to be done on only one of the cluster nodes. After creation, a "service clvmd restart" on all the nodes will refresh this new information and all the nodes will be able to see the logical volumes you just created.
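Assuming the shared storage appears as /dev/sdb on every node (a placeholder; substitute your actual device), the volume creation described above might look like this on one node:

```shell
pvcreate /dev/sdb                     # initialize the shared device for LVM
vgcreate -cy csmb_vg /dev/sdb         # -cy marks the volume group as clustered
lvcreate -n ctdb_lv -L 1G csmb_vg     # 1GB volume for the shared ctdb state
lvcreate -n csmb_lv -L 100G csmb_vg   # volume that will hold the samba share data
```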
You will be configuring gfs2 filesystems on both these volumes. The only thing you need to do for gfs2 is to run mkfs.gfs2 to make the filesystem. Note: mkfs.gfs2 is to be run on one cluster node only!
In this example, we've used a 100GB (/dev/csmb_vg/csmb_lv) gfs2 filesystem for the samba share and the 1GB (/dev/csmb_vg/ctdb_lv) gfs2 filesystem for ctdb state.
First, we will create the filesystem to host the samba share.
mkfs.gfs2 -j3 -p lock_dlm -t csmb:gfs2 /dev/csmb_vg/csmb_lv
-j : Specifies the number of journals to create in the filesystem. One journal per node, and we have 3 nodes.
-p : lock_dlm is the locking protocol gfs2 uses for inter-node communication.
-t : Specifies the lock table name, which is of the format cluster_name:fs_name. Recall that our cluster name is "csmb" (from cluster.conf), and we use "gfs2" as the name for our filesystem.
The output of this command looks something like this:
This will destroy any data on /dev/csmb_vg/csmb_lv.
It appears to contain a gfs2 filesystem.
Are you sure you want to proceed? [y/n] y
Device:                    /dev/csmb_vg/csmb_lv
Blocksize:                 4096
Device Size                100.00 GB (26214400 blocks)
Filesystem Size:           100.00 GB (26214398 blocks)
Journals:                  3
Resource Groups:           400
Locking Protocol:          "lock_dlm"
Lock Table:                "csmb:gfs2"
UUID:                      94297529-ABG3-7285-4B19-182F4F2DF2D7
Next, we will create the filesystem to host ctdb state information.
mkfs.gfs2 -j3 -p lock_dlm -t csmb:ctdb_state /dev/csmb_vg/ctdb_lv
Note the different lock table name to distinguish this filesystem from the one created above and obviously the different device used for this filesystem. The output will look something like this:
This will destroy any data on /dev/csmb_vg/ctdb_lv.
It appears to contain a gfs2 filesystem.
Are you sure you want to proceed? [y/n] y
Device:                    /dev/csmb_vg/ctdb_lv
Blocksize:                 4096
Device Size                1.00 GB (262144 blocks)
Filesystem Size:           1.00 GB (262142 blocks)
Journals:                  3
Resource Groups:           4
Locking Protocol:          "lock_dlm"
Lock Table:                "csmb:ctdb_state"
UUID:                      BCDA8025-CAF3-85BB-B062-CC0AB8849A03
CTDB Configuration

The CTDB config file is located at /etc/sysconfig/ctdb. The fields that need to be configured for ctdb operation are:
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK="/mnt/ctdb/.ctdb.lock"
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes
- CTDB_NODES specifies the location of the file which contains the list of ip addresses of the cluster nodes.
- CTDB_PUBLIC_ADDRESSES specifies the location of the file that lists the public IP addresses over which the samba shares on this cluster are exported.
- CTDB_RECOVERY_LOCK specifies a lock file that ctdb uses internally for recovery and this file must reside on shared storage such that all the cluster nodes have access to it. In this example, we've used the gfs2 filesystem that will be mounted at /mnt/ctdb on all nodes. This is different from the gfs2 filesystem that will host the samba share that we plan to export. This reclock file is used to prevent split-brain scenarios. With newer versions of CTDB (>= 1.0.112), specifying this file is optional as long as it is substituted with another split-brain prevention mechanism.
- CTDB_MANAGES_SAMBA=yes. Enabling this allows ctdb to start and stop the samba service as it deems necessary to provide service migration/failover etc.
- CTDB_MANAGES_WINBIND=yes. If running on a member server, you will need to set this too.
The contents of the /etc/ctdb/nodes file:
192.168.1.151
192.168.1.152
192.168.1.153
This simply lists the cluster nodes' ip addresses. In this example, we assume that there is only one interface/IP on each node that is used for both cluster/ctdb communication and serving clients. If you have two interfaces on each node and wish to dedicate one set of interfaces for cluster/ctdb communication, use those ip addresses here and make sure the hostnames/ip addresses used in the cluster.conf file are the same.
It is critical that this file is identical on all nodes because the ordering is important and ctdb will fail if it finds different information on different nodes.
The contents of the /etc/ctdb/public_addresses file:
192.168.1.201/24 eth0
192.168.1.202/24 eth0
192.168.1.203/24 eth0
We're using three addresses in our example above which are currently unused on the network. Please choose addresses that can be accessed by the intended clients.
These are the IP addresses that you should configure in DNS for the name of the clustered samba server, and the addresses that CIFS clients will connect to. By using different public_addresses files on different nodes, it is possible to partition the cluster into subsets of nodes.
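As an illustration of such partitioning (the second subnet is hypothetical), two nodes could serve one set of public addresses while the third serves another:

```
# /etc/ctdb/public_addresses on clusmb-01 and clusmb-02
192.168.1.201/24 eth0
192.168.1.202/24 eth0

# /etc/ctdb/public_addresses on clusmb-03
192.168.2.201/24 eth0
```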
For more information on ctdb configuration, look at: http://ctdb.samba.org/configuring.html
Samba Configuration

The smb.conf, located at /etc/samba/smb.conf, looks like this in our example:
[global]
	guest ok = yes
	clustering = yes
	netbios name = csmb-server
[csmb]
	comment = Clustered Samba
	public = yes
	path = /mnt/gfs2/share
	writeable = yes
We export a share with name "csmb" located at /mnt/gfs2/share. Recall that this is different from the GFS2 shared filesystem that we used earlier for the ctdb lock file at /mnt/ctdb/.ctdb.lock. We will create the "share" directory in /mnt/gfs2 when we mount it for the first time. clustering = yes instructs samba to use CTDB. netbios name = csmb-server explicitly sets all the nodes to have a common NetBIOS name.
smb.conf should be identical on all the cluster nodes.
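Since both /etc/ctdb/nodes and smb.conf must be identical everywhere, a quick consistency check can help. The sketch below assumes the node hostnames from cluster.conf and passwordless ssh between nodes:

```shell
# Print a checksum of each cluster-critical config file from every node.
for node in clusmb-01 clusmb-02 clusmb-03; do
    ssh "$node" md5sum /etc/ctdb/nodes /etc/samba/smb.conf
done | sort | uniq -c
# If the files match, each checksum/path pair appears exactly 3 times.
```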
Bringing up the cluster, clvmd (optional), gfs2, ctdb and samba
Bringing up the Cluster
The cman init script is the recommended way to bring up the cluster.
"service cman start" on all the nodes will bring up the various components of the cluster.
Note for Fedora installs: NetworkManager is enabled and run by default. This must be disabled in order to run cman.
# service NetworkManager stop
# chkconfig NetworkManager off
You might have to configure network interfaces manually using system-config-network. Also see man ifconfig for more information.
[root@clusmb-01 ~]# service cman start
Starting cluster:
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
Once this starts up successfully, you can verify that the cluster is up by issuing the "cman_tool nodes" command.
[root@clusmb-01 ~]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M     88   2010-10-07 15:44:27  clusmb-01
   2   M     92   2010-10-07 15:44:27  clusmb-02
   3   M     88   2010-10-07 15:44:27  clusmb-03
The output should be similar on all the nodes. The status field "Sts" value of 'M' denotes that the particular node is a member of the cluster.
You can have a cluster node automatically join the cluster on reboot by having init start it up.
"chkconfig cman on"
Bringing up clvmd
"service clvmd start" on all the nodes should start the clvmd daemon and activate any clustered logical volumes that you might have. Since this is the first time you're bringing up this cluster, you still need to create the logical volumes required for its operation. At this point, you can go back to the previous skipped section GFS2 Configuration to create logical volumes and gfs2 filesystems as described there.
You can have a cluster node automatically start clvmd at system boot time by having init start it up. Make sure you have "cman" configured to start on boot as well.
"chkconfig clvmd on"
Bringing up GFS2
"mount -t gfs2 /dev/csmb_vg/csmb_lv /mnt/gfs2" and "mount -t gfs2 /dev/csmb_vg/ctdb_lv /mnt/ctdb" on all nodes will mount the gfs2 filesystems we created on /dev/csmb_vg/csmb_lv and /dev/csmb_vg/ctdb_lv at the mount points /mnt/gfs2 and /mnt/ctdb respectively. Recall that the "/mnt/gfs2" mountpoint was configured in /etc/samba/smb.conf and "/mnt/ctdb" was configured in /etc/sysconfig/ctbd Once mounted, from a single node, create the share directory /mnt/gfs2/share, set it's permissions according to your requirements and copy on all the data that you wish to export. Since gfs2 is a cluster filesystem, you'll notice that all the other nodes in the cluster also have the same contents in the /mnt/gfs2/ directory. It is possible to make gfs2 mount automatically during boot using "chkconfig gfs2 on". Make sure you have "cman" and "clvmd" also set to start up on boot. Like all other filesystems that can be automounted, you will need to add these lines to /etc/fstab before gfs2 can mount on its own:
/dev/csmb_vg/csmb_lv	/mnt/gfs2	gfs2	defaults	0 0
/dev/csmb_vg/ctdb_lv	/mnt/ctdb	gfs2	defaults	0 0
Bringing up CTDB/Samba
"service ctdb start" on all the nodes will bring up the ctdbd daemon and startup ctdb. Currently, it can take upto a minute for ctdb to stabilize. "ctdb status" will show you how ctdb is doing:
[root@clusmb-01 ~]# ctdb status
Number of nodes:3
pnn:0 192.168.1.151     OK (THIS NODE)
pnn:1 192.168.1.152     OK
pnn:2 192.168.1.153     OK
Generation:1410259202
Size:3
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
Recovery mode:NORMAL (0)
Recovery master:0
Since we configured ctdb with "CTDB_MANAGES_SAMBA=yes", ctdb will also start up the samba service on all nodes and export all configured samba shares.
When you see that all nodes are "OK", it's safe to move on to the next step.
Using the Clustered Samba server
Clients can connect to the samba share that we just exported by connecting to one of the IP addresses specified in /etc/ctdb/public_addresses, for example:
mount -t cifs //192.168.1.201/csmb /mnt/sambashare -o user=testmonkey
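Besides mounting with cifs as shown above, smbclient can be used to verify the clustered share from any client; "testmonkey" is the same illustrative user as in the mount example:

```shell
# List the shares exported by the clustered server.
smbclient -L //192.168.1.201 -U testmonkey
# Open an interactive session on the csmb share.
smbclient //192.168.1.201/csmb -U testmonkey
```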
For Cluster/GFS2 related questions please email email@example.com and for Samba/CTDB related questions please email firstname.lastname@example.org