Setting up pCIFS using Samba and CTDB
As of April 2007 you can set up a simple Samba3 or Samba4 CTDB cluster, running either on loopback (with simulated nodes) or on a real cluster over TCP. This page will tell you how to get started.
The setup instructions on this page are modelled on setting up a cluster of N nodes that functions in nearly all respects as a single multi-homed node. The cluster will export N IP interfaces, each of which is equivalent (same shares) and offers coherent CIFS file access across all nodes.
The clustering model utilizes IP takeover techniques to ensure that the full set of public IP addresses assigned to services on the cluster will always be available to the clients even when some nodes have failed and become unavailable.
CTDB is primarily developed and tested on Linux. A subset of features may work on other platforms.
Samba and CTDB must be installed on all nodes. CTDB is part of Samba (>= 4.2.0), so to get CTDB you need to get Samba.
If using Samba >= 4.2.0
If you download source code for Samba >= 4.2.0 then CTDB is included in the source tree, in the top-level ctdb/ directory.
If using Samba < 4.2.0
- Your operating system may include pre-built packages for Samba and CTDB
- If you use pre-built packages then you need to ensure that Samba was built with cluster support
- CTDB will usually be in a separate package from other Samba components, typically called ctdb
Building from source code
If using Samba >= 4.2.0
If using Samba >= 4.2.0 then build CTDB with Samba. Add the following options to the ./configure command:
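(A sketch: the idmap module list below is only illustrative; --with-cluster-support is the option that matters here.)
--with-cluster-support
--with-shared-modules=idmap_rid,idmap_tdb2,idmap_ad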
- This enables clustering support in Samba and includes CTDB in the build
- Clustered Samba needs an IDMAP facility, so it is worth building a few of the idmap modules as shared modules
Other than that, just follow instructions for building Samba.
If using Samba < 4.2.0
Samba and CTDB will need to be built separately.
In the top level of the CTDB source directory, run the following commands:
./autogen.sh
./configure
make
make install
If building an older version of Samba for use with a standalone CTDB release then the above ./configure options should still be added. The --with-ctdb option is also required, to point to the installed CTDB include files.
Next you need to initialise the Samba password database, e.g.
smbpasswd -a root
Samba with clustering must use the tdbsam or ldap SAM passdb backends (it must not use the default smbpasswd backend), or must be configured to be a member of a domain. The rest of the configuration of Samba is exactly as it is done on a normal system. See the docs on http://samba.org/ for details.
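For example, a minimal sketch selecting the tdbsam backend in smb.conf (backend name taken from the list above; the default tdb location is used):
passdb backend = tdbsam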
Critical smb.conf parameters
A clustered Samba install must set some specific configuration parameters
netbios name = something
clustering = yes
idmap config * : backend = autorid
idmap config * : range = 1000000-1999999
- See idmap(8) for more information about the idmap configuration
- netbios name should be the same on all nodes
If storing the Samba configuration in the registry, these parameters must still be set in smb.conf.
CTDB Cluster Configuration
These are the primary configuration files for CTDB. When CTDB is installed, it will install template versions of these files, which you need to edit to suit your system. The current set of config files for CTDB is also available in the /usr/src/ctdb/config directory.
CTDB configuration file
The preferred file for CTDB configuration is /etc/ctdb/ctdbd.conf. Linux distribution-specific configuration files such as /etc/default/ctdb are also supported. Depending on how you install CTDB, a template configuration file may be installed.
The most important options are:
- CTDB_NODES
- CTDB_RECOVERY_LOCK
- CTDB_PUBLIC_ADDRESSES
- CTDB_MANAGES_SAMBA
Please see ctdbd.conf(5) for more details.
The recovery lock, configured via CTDB_RECOVERY_LOCK, provides important split-brain prevention and is usually configured to point to a lock file in the cluster filesystem. See the RECOVERY LOCK section in ctdb(7) for more details.
CTDB_NODES points to /etc/ctdb/nodes, which contains a list of the private IP addresses that the CTDB daemons will use in your cluster. These should be on a private, non-routable subnet that is only used for internal cluster traffic. This file must be the same on all nodes in the cluster. For example:
10.1.1.1
10.1.1.2
10.1.1.3
10.1.1.4
The public addresses file (normally /etc/ctdb/public_addresses) is only required if you plan to use IP takeover. The CTDB_PUBLIC_ADDRESSES configuration variable must be set to point to this file, otherwise it will be ignored. The file contains a list of public IP addresses, one per line, each with an optional (comma-separated) list of network interfaces that can have that address assigned. These are the addresses that the smbd daemons will bind to. For example:
192.168.1.1/24 eth1
192.168.1.2/24 eth1
192.168.2.1/24 eth2
192.168.2.2/24 eth2
If network interfaces are not specified on all lines in the public addresses file then the CTDB_PUBLIC_INTERFACE configuration variable must be used to specify a default interface.
These are the IP addresses that you should configure in DNS for the name of the clustered samba server and are the addresses that CIFS clients will connect to. The CTDB cluster utilizes IP takeover techniques to ensure that as long as at least one node in the cluster is available, all the public IP addresses will always be available to clients.
Do not manually assign these addresses to any of the interfaces on the host. CTDB will add and remove these addresses automatically at runtime.
There is no built-in restriction on the number of IP addresses or network interfaces that can be used. However, performance limitations (e.g. time taken to calculate IP address distribution, time taken to break TCP connections and delete IPs from interfaces, ...) introduce practical limits. With a small number of nodes it is sensible to plan the IP addresses so that they can be evenly redistributed across subsets of nodes. For example, a 4 node cluster will always be able to evenly distribute 12 public IP addresses (across 4, 3, 2, 1 nodes). Having IP addresses evenly balanced is not a hard requirement but evenly balancing IP addresses is the only method of load balancing used by CTDB.
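You can inspect the current assignment of public addresses to nodes at any time with the ctdb tool:
ctdb ip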
The public addresses file can differ between nodes, allowing subsets of nodes to host particular public IP addresses. Note that pathological configurations can result in undesirable IP address distribution.
The /etc/ctdb/events.d directory contains event scripts that CTDB calls out to when certain events occur. Event scripts support health monitoring, service management, IP failover, internal CTDB operations and features. They handle events such as startup, shutdown, monitor, takeip and releaseip.
Please see the service scripts installed by CTDB in /etc/ctdb/events.d for examples of how to configure other services to be aware of the HA features of CTDB.
Also see /etc/ctdb/events.d/README for additional documentation on how to write and modify event scripts.
CTDB defaults to the IANA-assigned TCP port 4379 for its traffic. To configure a different port for CTDB traffic, add a ctdb entry to the /etc/services file.
Example: to change CTDB to use port 9999, add the following line to /etc/services:
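ctdb 9999/tcp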
Note: all nodes in the cluster MUST use the same port or else CTDB will not start correctly.
You need to set up some method for your Windows and NFS clients to find the nodes of the cluster and automatically balance the load between the nodes. We recommend that you set up a round-robin DNS entry for your cluster, listing all the public IP addresses that CTDB will be managing under a single DNS name (one A record per address).
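A minimal sketch of such a round-robin entry in a BIND-style zone file, assuming a cluster name of ctdb and the example public addresses shown earlier:
ctdb IN A 192.168.1.1
ctdb IN A 192.168.1.2
ctdb IN A 192.168.2.1
ctdb IN A 192.168.2.2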
You may also wish to set up a static WINS server entry listing all of your cluster nodes' IP addresses.
Managing Network Interfaces
The default install of CTDB is able to add/remove IP addresses from your network interfaces using the CTDB_PUBLIC_ADDRESSES option shown above.
For more sophisticated interface management you will need to add a new event script in /etc/ctdb/events.d/.
For example, say you wanted CTDB to add a default route when it takes over a public IP address. You could have an event script called /etc/ctdb/events.d/11.route that looks like this:
#!/bin/sh
. /etc/ctdb/functions
loadconfig ctdb
cmd="$1"
shift
case $cmd in
    takeip)
        # we ignore errors from this, as the route might be up already when we're grabbing
        # a 2nd IP on this interface
        /sbin/ip route add $CTDB_PUBLIC_NETWORK via $CTDB_PUBLIC_GATEWAY dev $1 2> /dev/null
        ;;
esac
exit 0
Then you would put CTDB_PUBLIC_NETWORK and CTDB_PUBLIC_GATEWAY in /etc/sysconfig/ctdb like this:
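# example values only; substitute the network and gateway of your own public subnet
CTDB_PUBLIC_NETWORK="192.168.2.0/24"
CTDB_PUBLIC_GATEWAY="192.168.2.254"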
Filesystem specific configuration
The cluster filesystem you use with CTDB plays a critical role in ensuring that CTDB works seamlessly. Here are some filesystem-specific tips.
If you are interested in testing a new cluster filesystem with CTDB then we strongly recommend looking at the page on testing filesystems using ping_pong to ensure that the cluster filesystem supports correct POSIX locking semantics.
IBM's GPFS filesystem
The GPFS filesystem (see http://www-03.ibm.com/systems/clusters/software/gpfs.html) is a proprietary cluster filesystem that has been extensively tested with CTDB/Samba. When using GPFS, the following smb.conf settings are recommended
clustering = yes
idmap backend = tdb2
fileid:mapping = fsname
vfs objects = gpfs fileid
gpfs:sharemodes = No
force unknown acl user = yes
nfs4: mode = special
nfs4: chown = yes
nfs4: acedup = merge
The ACL-related options should only be enabled if you have NFSv4 ACLs enabled on your filesystem.
The most important of these options is "fileid:mapping". You risk data corruption if you use a different mapping backend with Samba and GPFS, because locking will break across nodes. NOTE: You must also load "fileid" as a vfs object in order for this to take effect.
A guide to configuring Samba with CTDB and GPFS can be found at Samba CTDB GPFS Cluster HowTo
RedHat GFS filesystem
Red Hat GFS is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer).
The gfs_controld daemon manages mounting, unmounting, recovery and POSIX locks. Edit /etc/init.d/cman (if using Red Hat Cluster Suite) to start gfs_controld with the '-l 0 -o 1' flags to optimize POSIX locking performance. You'll notice the difference this makes by running the ping_pong test with and without these options.
A complete HowTo document to setup clustered samba with CTDB and GFS2 is here: GFS CTDB HowTo
Lustre® is a scalable, secure, robust, highly-available cluster file system. It is designed, developed and maintained by a number of companies (Intel, Seagate) and OpenSFS, a not-for-profit organisation.
Tests have been done with CTDB/Samba on Lustre releases 1.4.x and 1.6.x; the current Lustre release is 2.5.2. When mounting Lustre, the "-o flock" option should be specified to enable cluster-wide byte-range locking among all Lustre clients.
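A minimal sketch of such a mount (the MGS node name mgsnode@tcp0, the filesystem name lustrefs and the mount point are placeholders for your own installation):
mount -t lustre -o flock mgsnode@tcp0:/lustrefs /mnt/lustre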
These two versions have different mechanisms of configuration and startup. More information is available at http://wiki.lustre.org.
Although the two versions are configured differently, setting up CTDB/Samba is done the same way on both. The following settings are recommended:
clustering = yes
idmap backend = tdb2
fileid:mapping = fsname
use mmap = no
nt acl support = yes
ea support = yes
The "fileid:mapping" and "use mmap" options must be specified to avoid possible data corruption. The "nt acl support" option maps POSIX ACLs to Windows NT's format. At the moment, Lustre only supports POSIX ACLs.
GlusterFS is a cluster file-system capable of scaling to several petabytes that is easy to configure. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. GlusterFS is based on a stackable user-space design without compromising performance. It uses the Linux Filesystem in Userspace (FUSE) facility to achieve all this.
NOTE: GlusterFS has not yet had extensive testing but this is currently underway.
Currently, GlusterFS versions 2.0 to 2.0.4 must be patched with:
This is to ensure GlusterFS passes the ping_pong test. This issue is being tracked at:
Update: As of GlusterFS 2.0.6 this has been fixed.
- OCFS2 - see http://oss.oracle.com/projects/ocfs2/
fileid:mapping = fsid
vfs objects = fileid
OCFS2 1.4 offers cluster-wide byte-range locking.
Starting the cluster
Just start the ctdb service on all nodes. A sample init script (works for RedHat) is located in /usr/src/ctdb/config/ctdb.init
If you have taken advantage of the ability of CTDB to start other services, then you should disable those other services with chkconfig, or your system's service configuration tool. Those services will instead be started by ctdb using the /etc/ctdb/events.d service scripts.
If you wish to cope with software faults in ctdb, or want ctdb to automatically restart when an administrator kills it, then you may wish to add a cron entry for root like this:
* * * * * /etc/init.d/ctdb cron > /dev/null 2>&1
Testing your cluster
Once your cluster is up and running, you may wish to know how to test that it is functioning correctly. The following tests may help with that
The ctdb package comes with a utility called ctdb that can be used to view the behaviour of the ctdb cluster. If you run it with no options it will provide some terse usage information. The most commonly used commands are:
- ctdb ping
- ctdb status
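For example, to check that the CTDB daemon on every node responds (a sketch, using the -n all node specifier accepted by the ctdb tool) and to view the overall cluster state:
ctdb ping -n all
ctdb status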
You can check for connectivity to the smbd daemons on each node using smbcontrol
- smbcontrol smbd ping
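To run this against every node at once you can use the onnode utility shipped with CTDB, e.g. (a sketch):
onnode all smbcontrol smbd ping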
Using Samba4 smbtorture
The Samba4 version of smbtorture has several tests that can be used to benchmark a CIFS cluster. You can download Samba4 like this:
git clone git://git.samba.org/samba.git
cd samba/source4
Then configure and compile it as usual. The particular tests that are helpful for cluster benchmarking are the RAW-BENCH-OPEN, RAW-BENCH-LOCK and BENCH-NBENCH tests. These tests take an unclist option that allows you to spread the workload out over more than one node. For example:
smbtorture //localhost/data -Uuser%password RAW-BENCH-LOCK --unclist=unclist.txt --num-progs=32 -t60
The file unclist.txt should contain a list of shares in your cluster (UNC format: //server/share). For example:
//node1/data
//node2/data
//node3/data
//node4/data
For NBENCH testing you need a client.txt file. A suitable file can be found in the dbench distribution at http://samba.org/ftp/tridge/dbench/
Setting up CTDB for clustered NFS
Configure CTDB as above and set it up to use public IP addresses. Verify that the CTDB cluster works.
Export the same directory from all nodes. Also make sure to specify the fsid export option so that all nodes present the same fsid to clients; clients can get "upset" if the fsid on a mount suddenly changes.
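A minimal sketch of an /etc/exports entry (the exported path and the fsid value are illustrative; the entry must be identical on every node):
/gpfs0/data *(rw,sync,fsid=1234)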
The /etc/sysconfig/nfs file must be edited so that statd keeps its state directory on shared storage instead of in a local directory. We must also make statd listen on a fixed port that is the same for all nodes in the cluster. If we don't specify a fixed port, the statd port will change during failover, which causes problems on some clients.
This file should look something like:
CTDB_MANAGES_NFS=yes
NFS_TICKLE_SHARED_DIRECTORY=/gpfs0/nfs-tickles
STATD_PORT=595
STATD_OUTGOING_PORT=596
MOUNTD_PORT=597
RQUOTAD_PORT=598
LOCKD_UDPPORT=599
LOCKD_TCPPORT=599
STATD_SHARED_DIRECTORY=/gpfs0/nfs-state
NFS_HOSTNAME="ctdb"
STATD_HOSTNAME="$NFS_HOSTNAME -P "$STATD_SHARED_DIRECTORY/$PUBLIC_IP" -H /etc/ctdb/statd-callout -p 97"
RPCNFSDARGS="-N 4"
The CTDB_MANAGES_NFS line tells the events scripts that CTDB is to manage startup and shutdown of the NFS and NFSLOCK services. With this set to yes, CTDB will start/stop/restart these services as required.
STATD_SHARED_DIRECTORY is the shared directory where statd and the statd-callout script expect to find the state variables and the lists of clients to notify.
The IP address specified should be the public address of this node.
The ports used by the lock manager are fixed so that the port used by a public address does not change during address failover/failback, since a changing port can confuse some clients.
NFS_TICKLE_SHARED_DIRECTORY is where CTDB will store information about which clients have established TCP connections to the cluster. This information is used during failover of IP addresses: it allows the node that takes over an IP address to very quickly 'tickle' and reset any TCP connections for the address it took over. This speeds up how quickly a client detects that the TCP connection for NFS needs to be re-established, and so speeds up recovery in the client.
NFS_HOSTNAME is the name that the NFS server will use for the public addresses. This should be the same as the name Samba uses. This name must be resolvable into the IP addresses used for public addresses.
The RPCNFSDARGS line is used to disable support for NFSv4 which is not yet supported by CTDB.
Since CTDB will manage and start/stop/restart the nfs and the nfslock services, you must disable them in chkconfig.
chkconfig nfs off
chkconfig nfslock off
Statd state directories
For each node, create a state directory on shared storage where the local statd daemon can keep its state information. This needs to be on shared storage because a node that takes over an IP address must be able to find the list of monitored clients to notify.
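For example, matching the STATD_SHARED_DIRECTORY setting shown above (the path is illustrative):
mkdir -p /gpfs0/nfs-state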
CTDB clustering for NFS relies on two event scripts /etc/ctdb/events.d/60.nfs and /etc/ctdb/events.d/61.nfstickle. These two scripts are provided by the RPM package and there should not be any need to change them.
Never mount the same NFS share on a client from two different nodes in the cluster at the same time. The client-side caching in NFS is very fragile and relies on an object only being accessible through one single path at a time.