Configuring clustered Samba
Goal
Configure clustered Samba using a CTDB cluster.
Note
This page still contains some details not directly relevant to clustering Samba. The documentation is being cleaned up and restructured.
Prerequisites
- Basic CTDB configuration
- Setting up a cluster filesystem
- Configuring the CTDB recovery lock (recommended)
- Adding public IP addresses (or some other failover/load balancing scheme)
Samba Configuration
Next you need to initialise the Samba password database, e.g.
smbpasswd -a root
Samba with clustering must use the tdbsam or ldap SAM passdb backends (it must not use the default smbpasswd backend), or must be configured to be a member of a domain. The rest of the configuration of Samba is exactly as it is done on a normal system. See the docs on http://samba.org/ for details.
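For example, a minimal non-domain-member setup might use the tdbsam backend (the default in modern Samba releases) by setting this in the [global] section of smb.conf:

passdb backend = tdbsam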
Critical smb.conf parameters
A clustered Samba install must set some specific configuration parameters:

netbios name = something
clustering = yes
idmap config * : backend = autorid
idmap config * : range = 1000000-1999999
- See idmap(8) for more information about the idmap configuration
- netbios name should be the same on all nodes
bind interfaces only = yes should not be used when configuring clustered Samba with CTDB public IP addresses. CTDB starts smbd before public IP addresses are hosted, so smbd will not listen on any of the public IP addresses, and when the public IP addresses are eventually hosted, smbd will not bind to the new addresses.
Using the Samba registry
A recommended way of ensuring that all Samba nodes have the same configuration is to put most configuration into the registry.
This means that smb.conf can be as simple as:

[global]
clustering = yes
ctdb:registry.tdb = yes
include = registry
The initial contents of the registry can then be placed into a file (say tmp.conf):

[global]
security = ADS
logging = syslog
log level = 1
netbios name = test
workgroup = SAMBA
realm = samba.example.com
idmap config * : backend = autorid
idmap config * : range = 1000000-1999999
and loaded from one of the nodes:
net conf import tmp.conf
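The imported configuration can be checked by dumping it back out of the registry:

net conf list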
net conf commands such as net conf addshare can then be used to continue configuration.
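For example, a share for a directory on the cluster filesystem might be added like this (the share name and path are illustrative):

net conf addshare data /clusterfs/data writeable=y guest_ok=n "Clustered data"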
Configure CTDB to manage Samba
For CTDB to manage Samba, the 50.samba event script must be enabled:

ctdb event script enable legacy 50.samba
This causes CTDB to start and stop Samba at startup and shutdown. It also tells CTDB to monitor Samba.
Similarly, if using winbind, CTDB should also be configured to manage it:

ctdb event script enable legacy 49.winbind
Please see the event command in ctdb(1) for more details.
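You can confirm which legacy event scripts are enabled (a sketch, assuming a CTDB version with the event script subcommands shown above):

ctdb event script list legacy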
CTDB will manage and start/stop/restart the Samba services, so the operating system should be configured not to start or stop them automatically.
Red Hat Linux variants
If using a Red Hat Linux variant, the Samba services are smb and winbind. Starting them at boot time is not recommended and this can be disabled using:

chkconfig smb off
chkconfig winbind off
The service names and mechanism for disabling them varies across operating systems.
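For example, on systemd-based distributions the equivalent is typically something like the following (exact unit names vary, e.g. smb versus smbd):

systemctl disable smb winbind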
Event scripts
CTDB clustering for Samba involves the 50.samba and 49.winbind event scripts. These are provided as part of CTDB and do not usually need to be changed.
There are several configuration variables that affect the operation of these scripts. Please see ctdb-script.options(5) for details.
Filesystem specific configuration
The cluster filesystem you use with CTDB plays a critical role in ensuring that CTDB works seamlessly. Here are some filesystem-specific tips.
If you are interested in testing a new cluster filesystem with CTDB then we strongly recommend looking at the page on testing filesystems using ping_pong to ensure that the cluster filesystem supports correct POSIX locking semantics.
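A typical invocation looks like this (a sketch: the test file must live on the cluster filesystem, and the lock count is usually the number of nodes plus one, e.g. 4 for a three-node cluster):

ping_pong /clusterfs/test.dat 4

Run this on each node; the locking rate will drop as more nodes join in, but the test should keep making progress rather than stalling.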
IBM GPFS filesystem
The GPFS filesystem (now known as Spectrum Scale) is a proprietary cluster filesystem that has been extensively tested with CTDB/Samba. When using GPFS, the following smb.conf settings are recommended:

vfs objects = gpfs fileid
gpfs:sharemodes = yes
fileid:algorithm = fsname
force unknown acl user = yes
nfs4: mode = special
nfs4: chown = yes
nfs4: acedup = merge
The ACL-related options should only be enabled if you have NFSv4 ACLs enabled on your filesystem.

The most important of these options is "fileid:algorithm". You risk data corruption if you use a different mapping backend with Samba and GPFS, because locking will break across nodes. NOTE: You must also load "fileid" as a vfs object in order for this to take effect.
A guide to configuring Samba with CTDB and GPFS can be found at Samba CTDB GPFS Cluster HowTo
Red Hat GFS filesystem
Red Hat GFS is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer).
The gfs_controld daemon manages mounting, unmounting, recovery and POSIX locks. Edit /etc/init.d/cman (if using Red Hat Cluster Suite) to start gfs_controld with the '-l 0 -o 1' flags to optimize POSIX locking performance. You'll notice the difference this makes by running the ping_pong test with and without these options.
A complete HowTo document for setting up clustered Samba with CTDB and GFS2 is here: GFS CTDB HowTo
Lustre filesystem
Lustre® is a scalable, secure, robust, highly-available cluster file system. It is designed, developed and maintained by a number of companies (Intel, Seagate) and OpenSFS, a not-for-profit organisation.
Tests have been done on Lustre releases 1.4.x and 1.6.x with CTDB/Samba; the current Lustre release is 2.5.2. When mounting Lustre, the "-o flock" option should be specified to enable cluster-wide byte-range locking among all Lustre clients.
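A hypothetical mount command (the MGS NID, filesystem name and mount point are placeholders):

mount -t lustre -o flock mgsnode@tcp0:/lustre /mnt/lustre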
These two versions have different mechanisms for configuration and startup. More information is available at http://wiki.lustre.org.

Although the two versions are configured differently, setting up CTDB/Samba works the same way on both. The following settings are recommended:
vfs objects = fileid
fileid:algorithm = fsname
The "fileid:algorithm" option must be specified to avoid possible data corruption.
GlusterFS filesystem
GlusterFS is a cluster file system capable of scaling to several petabytes that is easy to configure. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. GlusterFS is based on a stackable user-space design without compromising performance. It uses Linux Filesystem in Userspace (FUSE) to achieve all this.
NOTE: GlusterFS has not yet had extensive testing but this is currently underway.
Versions 2.0 to 2.0.4 of GlusterFS needed to be patched to ensure that GlusterFS passes the ping_pong test. Update: As of GlusterFS 2.0.6 this has been fixed.
OCFS2 filesystem
For OCFS2, see http://oss.oracle.com/projects/ocfs2/ for more information. The following smb.conf settings are recommended:

vfs objects = fileid
fileid:algorithm = fsid
OCFS2 1.4 offers cluster-wide byte-range locking.
Other cluster filesystems
If you can't find documentation about your choice of cluster filesystem and clustered Samba then you might need to work around some limitations.
Inconsistent device numbers
Locking will not work if a cluster filesystem does not provide uniform device numbers across nodes. If testing shows locking problems then you should check the device number uniformity of your cluster filesystem.
To work around a lack of device number uniformity, the following settings should be used in the global section of the Samba configuration:
vfs objects = fileid
fileid:algorithm = fsname
See vfs_fileid(8) for more information.
Testing clustered Samba
Once your cluster is up and running, you may wish to know how to test that it is functioning correctly. The following tests may help with that.

You can check for connectivity to the smbd daemons on each node using smbcontrol:

smbcontrol smbd ping
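CTDB's onnode utility can be used to run the same check against every node from one host:

onnode all smbcontrol smbd ping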
Using Samba4 smbtorture
The Samba4 version of smbtorture has several tests that can be used to benchmark a CIFS cluster. You can download Samba4 like this:

git clone git://git.samba.org/samba.git
cd samba/source4
Then configure and compile it as usual. The particular tests that are helpful for cluster benchmarking are the RAW-BENCH-OPEN, RAW-BENCH-LOCK and BENCH-NBENCH tests. These tests take an unclist option that allows you to spread the workload out over more than one node. For example:
smbtorture //localhost/data -Uuser%password RAW-BENCH-LOCK --unclist=unclist.txt --num-progs=32 -t60
The file unclist.txt should contain a list of shares in your cluster (UNC format: //server/share). For example:
//node1/data
//node2/data
//node3/data
//node4/data
For NBENCH testing you need a client.txt file. A suitable file can be found in the dbench distribution at http://samba.org/ftp/tridge/dbench/
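For example, an NBENCH run combining the unclist with such a loadfile might look like this (the --loadfile option name is an assumption; check smbtorture --help for your build):

smbtorture //localhost/data -Uuser%password BENCH-NBENCH --unclist=unclist.txt --num-progs=32 --loadfile=client.txt -t60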