Configuring clustered Samba

From SambaWiki

Setting up a simple CTDB Samba cluster

As of April 2007 you can set up a simple Samba3 or Samba4 CTDB cluster, running either on loopback (with simulated nodes) or on a real cluster over TCP. This page tells you how to get started.

Clustering Model

The setup instructions on this page are modelled on setting up a cluster of N nodes that functions in nearly all respects as a single multi-homed node. The cluster exports N IP interfaces, each of which is equivalent (same shares) and offers coherent CIFS file access across all nodes.

The clustering model utilizes IP takeover techniques to ensure that the full set of public IP addresses assigned to services on the cluster will always be available to the clients even when some nodes have failed and become unavailable.

Getting the code

You need two source trees: one is a copy of Samba3 with clustering patches, and the other is the ctdb code itself. Both source trees are stored in bzr repositories. See the bzr documentation for more information on bzr.

The fastest way to checkout an initial copy of the Samba3 tree with clustering patches is:

  rsync -avz .

To update this tree when improvements are made in the upstream code do this:

   cd samba_3_0_ctdb
   bzr merge

If you don't have bzr and can't easily install it, then you can instead use the following command to update your tree to the latest version:

   cd samba_3_0_ctdb
   rsync -avz .

To get an initial checkout of the ctdb code do this:

  rsync -avz .

To update this tree when improvements are made in the upstream code do this:

   cd ctdb
   bzr merge

If you don't have bzr and can't easily install it, then you can instead use the following command to update your tree to the latest version:

   cd ctdb
   rsync -avz .

Building the CTDB tree

To build a copy of the CTDB code you should do this:

  cd ctdb
  ./autogen.sh
  ./configure
  make
  make install

You need to install ctdb on all nodes of your cluster.

Building the Samba3 tree

To build a copy of Samba3 with clustering and ctdb support you should do this:

   cd samba_3_0_ctdb/source
   ./autogen.sh
   ./configure --with-ctdb=/usr/src/ctdb --with-cluster-support --enable-pie=no
   make proto
   make

Once compiled, you should install Samba on all cluster nodes.

The /usr/src/ctdb path should be replaced with the path to the ctdb sources that you downloaded above.

Samba Configuration

Next you need to initialise the Samba password database, e.g.

 smbpasswd -a root

Samba with clustering must use the tdbsam or ldap SAM passdb backends (it must not use the default smbpasswd backend), or must be configured as a member of a domain. The rest of the Samba configuration is done exactly as on a normal system. See the Samba documentation for details.
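For example, to use the tdbsam backend, smb.conf would carry a line like the following (a minimal illustration; the ldap backend or domain membership are equally valid choices):

```ini
[global]
    ; use tdbsam instead of the default smbpasswd backend
    passdb backend = tdbsam
```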

Critical smb.conf parameters

A clustered Samba install must set some specific configuration parameters:

* clustering = yes
* idmap backend = tdb2
* private dir = /a/directory/on/your/cluster/filesystem

It is vital that the private directory is on shared storage.
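Putting these together, the relevant part of smb.conf might look like this (the path under /gpfs0 is a placeholder for a directory on your actual cluster filesystem):

```ini
[global]
    clustering = yes
    idmap backend = tdb2
    ; must be on shared storage, visible to all nodes
    private dir = /gpfs0/samba/private
```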

CTDB Cluster Configuration

These are the primary configuration files for CTDB. When CTDB is installed, it installs template versions of these files, which you need to edit to suit your system. The current set of config files also ships with the CTDB source tree.


/etc/sysconfig/ctdb

This file contains the startup parameters for ctdb. When you installed ctdb, a template config file should have been installed in /etc/sysconfig/ctdb. Edit that file, following the instructions in the template.

The most important options are the node list and the public address takeover settings (for example the PUBLIC_INTERFACE variable described below). Please check those carefully.


/etc/ctdb/nodes

This file needs to be created as /etc/ctdb/nodes and contains a list of the private IP addresses that the CTDB daemons will use in your cluster. This should be a private non-routable subnet which is only used for internal cluster traffic. This file must be the same on all nodes in the cluster.

Example :
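A four-node cluster using a private 10.1.1.0/24 network might have a nodes file like this (the addresses are illustrative):

```
10.1.1.1
10.1.1.2
10.1.1.3
10.1.1.4
```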


/etc/ctdb/public_addresses

This file is only required if you plan to use IP takeover. In order to use IP takeover you must specify which interface to use in /etc/sysconfig/ctdb by setting the PUBLIC_INTERFACE variable. You must also specify the list of public IP addresses to use in this file.

This file contains a list (one entry per node) of public cluster addresses. These are the addresses that the smbd daemons will bind to. The file must contain one address for each node, i.e. it must have the same number of entries as the nodes file.
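For instance, a four-node cluster whose public addresses are split across two subnets, as in the scenario discussed below, might use a file like this (addresses are illustrative):

```
192.168.1.1
192.168.1.2
192.168.2.1
192.168.2.2
```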


These are the IP addresses that you should configure in DNS for the name of the clustered Samba server, and are the addresses that CIFS clients will connect to. The CTDB cluster utilizes IP takeover techniques to ensure that, as long as at least one node in the cluster is available, all the public IP addresses will always be available to clients.

A CTDB node will only take over IP addresses that are inside the same subnet as its own public IP address. In the example above, nodes 0 and 1 would be able to take over each other's public IP, and likewise nodes 2 and 3, but nodes 0 and 1 would NOT be able to take over the IP addresses of nodes 2 or 3, since they are on a different subnet.

Do not assign these addresses to any of the interfaces on the host. CTDB will add and remove these addresses automatically at runtime.


This is a script that CTDB calls out to when certain events occur, allowing site-specific tasks to be performed.

The events currently implemented and called out for are:

1. when the node takes over an IP address
2. when the node releases an IP address
3. when recovery has completed and the cluster is reconfigured
4. when the cluster performs a clean shutdown

Please see the service scripts installed by ctdb in /etc/ctdb/events.d for examples of how to make other services aware of the HA features of CTDB.
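As a sketch, such a script typically dispatches on the event name passed as its first argument. The event names and arguments below are illustrative assumptions, not the exact CTDB interface; consult the installed scripts in /etc/ctdb/events.d for the real conventions:

```shell
#!/bin/sh
# Hypothetical sketch of a site-specific event handler.
# Assumption: the event name arrives as $1, with event-specific
# arguments (such as interface and address) following it.
handle_event() {
    case "$1" in
        takeip)    echo "taking over IP address $3 on interface $2" ;;
        releaseip) echo "releasing IP address $3 from interface $2" ;;
        recovered) echo "recovery complete, cluster reconfigured" ;;
        shutdown)  echo "performing clean shutdown tasks" ;;
        *)         echo "ignoring unhandled event: $1" ;;
    esac
}

handle_event "$@"
```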


CTDB defaults to using TCP port 9001 for its traffic. To configure a different port for CTDB traffic, add a ctdb entry to the /etc/services file.

Example: to change CTDB to use port 9999, add the following line to /etc/services

ctdb  9999/tcp

Note: all nodes in the cluster MUST use the same port or else CTDB will not start correctly.

Starting the cluster

Just start the ctdb service on all nodes.

Testing your cluster

Once your cluster is up and running, you may wish to check that it is functioning correctly. The following tests may help with that.

Using ctdb

The ctdb package comes with a utility called ctdb that can be used to view the behaviour of the ctdb cluster. If you run it with no options it will provide some terse usage information. The most commonly used commands are:

- ctdb ping
- ctdb status

Using smbcontrol

You can check for connectivity to the smbd daemons on each node using smbcontrol:

- smbcontrol smbd ping

Using Samba4 smbtorture

The Samba4 version of smbtorture has several tests that can be used to benchmark a CIFS cluster. You can download Samba4 like this:

 svn co svn://

Then configure and compile it as usual. The tests that are particularly helpful for cluster benchmarking are RAW-BENCH-OPEN, RAW-BENCH-LOCK and BENCH-NBENCH. These tests accept an unclist, which allows you to spread the workload over more than one node. For example:

 smbtorture //localhost/data -Uuser%password  RAW-BENCH-LOCK --unclist=unclist.txt --num-progs=32 -t60

A suitable unclist.txt is generated in your $PREFIX/lib directory when you run

For NBENCH testing you need a client.txt file. A suitable file can be found in the dbench distribution.

Setting up CTDB for clustered NFS

Configure CTDB as described above and set it up to use public IP addresses. Verify that the CTDB cluster works.


Make sure you have the sm-notify tool installed in /usr/sbin. This tool is required so that CTDB can successfully trigger lock recovery after an IP address failover/failback.


/etc/exports

Export the same directory from all nodes. Also make sure to specify the fsid export option so that all nodes present the same fsid to clients. Clients can get "upset" if the fsid on a mount suddenly changes.

 /gpfs0/data *(rw,fsid=1235)


/etc/sysconfig/nfs

This file must be edited so that statd keeps its state directory on shared storage instead of in a local directory. Statd must also listen on a fixed port that is the same on all nodes in the cluster; if a fixed port is not specified, the statd port will change during failover, which causes problems on some clients.

This file should look something like:

 CTDB_MANAGES_NFS=yes
 STATD_SHARED_DIRECTORY=/gpfs0/nfs-state
 STATD_HOSTNAME="ctdb -P $STATD_SHARED_DIRECTORY/ -H /etc/ctdb/statd-callout -p 97"

The CTDB_MANAGES_NFS line tells the events scripts that CTDB is to manage startup and shutdown of the NFS and NFSLOCK services. With this set to yes, CTDB will start/stop/restart these services as required.

STATD_SHARED_DIRECTORY is the shared directory where statd and the statd-callout script expect to find the state variables and the lists of clients to notify.


Since CTDB will manage and start/stop/restart the nfs and the nfslock services, you must disable them in chkconfig.

 chkconfig nfs off
 chkconfig nfslock off

Statd state directories

For each node, create a state directory on shared storage where each local statd daemon can keep its state information. This needs to be on shared storage because, if a node takes over an IP address, it needs to find the list of monitored clients to notify. If you have four nodes with the public addresses listed above, the following directories need to be created on shared storage, one per public address (the <public-address-N> placeholders stand for the actual public IP addresses of the cluster):

 mkdir /gpfs0/nfs-state
 mkdir /gpfs0/nfs-state/<public-address-1>
 mkdir /gpfs0/nfs-state/<public-address-2>
 mkdir /gpfs0/nfs-state/<public-address-3>
 mkdir /gpfs0/nfs-state/<public-address-4>
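Since exactly one directory is needed per public address, the directory creation can be scripted. The helper below is a hypothetical sketch, not part of CTDB: it reads a public-addresses file (one address per line, an optional /mask suffix is ignored) and creates the matching state directories:

```shell
#!/bin/sh
# make_statd_dirs FILE PARENT  (hypothetical helper)
#   FILE   - list of public addresses, one per line, optional /mask suffix
#   PARENT - state directory on the shared cluster filesystem
make_statd_dirs() {
    mkdir -p "$2"
    while read -r addr _; do
        ip=${addr%%/*}                 # drop any /netmask suffix
        [ -n "$ip" ] && mkdir -p "$2/$ip"
    done < "$1"
}

# Example, using the paths from this page:
# make_statd_dirs /etc/ctdb/public_addresses /gpfs0/nfs-state
```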

Event scripts

CTDB clustering for NFS relies on two event scripts /etc/ctdb/events.d/nfs and /etc/ctdb/events.d/nfslock. These two scripts are provided by the RPM package and there should not be any need to change them.


Never ever mount the same NFS share on a client from two different nodes in the cluster at the same time. The client-side caching in NFS is very fragile and relies on a single object only being accessed through one mount at a time.