Configuring clustered Samba

Goal

Configure clustered Samba using a CTDB cluster

Note

This page still contains some details not directly relevant to clustering Samba. The documentation is being cleaned up and restructured.

Prerequisites

  • Basic CTDB configuration
  • Setting up a cluster filesystem
  • Configuring the CTDB recovery lock (recommended)
  • Adding public IP addresses (or some other failover/load balancing scheme)

Samba Configuration

Next you need to initialise the Samba password database, e.g.

 smbpasswd -a root

Samba with clustering must use the tdbsam or ldapsam passdb backends (it must not use the legacy smbpasswd backend), or must be configured to be a member of a domain. The rest of the Samba configuration is exactly as it is done on a normal system. See the docs on http://samba.org/ for details.
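
For example, to select the tdbsam backend explicitly, the following can be set in the [global] section of smb.conf (shown purely as an illustration; with clustering enabled the account database is handled as a CTDB persistent database):

 passdb backend = tdbsam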

Critical smb.conf parameters

A clustered Samba install must set some specific configuration parameters:

 netbios name = something
 clustering = yes
 idmap config * : backend = autorid
 idmap config * : range = 1000000-1999999

NB:

  • See idmap_autorid(8) for more information about the idmap configuration
  • netbios name should be the same on all nodes

Note that bind interfaces only = yes should not be used when configuring clustered Samba with CTDB public IP addresses. CTDB will start smbd before public IP addresses are hosted, so smbd will not listen on any of the public IP addresses. When public IP addresses are eventually hosted, smbd will not bind to the new addresses.
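
To verify which addresses smbd is actually listening on once the public IP addresses are hosted, a diagnostic along these lines can be used (not part of the configuration itself):

 ss -tlnp | grep smbd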

Using the Samba registry

A recommended way of ensuring that all Samba nodes have the same configuration is to put most configuration into the registry.

This means that smb.conf can be as simple as:

 [global]
       clustering = yes
       ctdb:registry.tdb = yes
       include = registry

The initial contents of the registry can then be placed into a file (say tmp.conf):

 [global]
       security = ADS
 
       logging = syslog
       log level = 1
 
       netbios name = test
       workgroup = SAMBA
       realm = samba.example.com
 
       idmap config * : backend = autorid
       idmap config * : range = 1000000-1999999

and loaded from one of the nodes:

 net conf import tmp.conf
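
The imported configuration can then be verified on any node with:

 net conf list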

Further net conf commands such as net conf addshare can then be used to continue configuration.
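
For example, to add a share (the share name and path here are illustrative; see net(8) for the full syntax):

 net conf addshare data /clusterfs/data writeable=y guest_ok=n "Clustered data share"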

Configure CTDB to manage Samba

For CTDB to manage Samba, the 50.samba event script must be enabled:

 ctdb event script enable legacy 50.samba

This causes CTDB to start and stop Samba at startup and shutdown. It also tells CTDB to monitor Samba.

Similarly, if using winbind, CTDB should also be configured to manage it:

 ctdb event script enable legacy 49.winbind

Please see the event command in ctdb(1) for more details.
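
To check which event scripts are currently enabled, the following can be used:

 ctdb event script list legacy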

CTDB will manage and start/stop/restart the Samba services, so the operating system should be configured so these are not started/stopped automatically.

Red Hat Linux variants

If using a Red Hat Linux variant, the Samba services are smb and winbind. Starting them at boot time is not recommended and this can be disabled using chkconfig.

 chkconfig smb off
 chkconfig winbind off

The service names and the mechanism for disabling them vary across operating systems.
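
On systemd-based distributions, for example, the equivalent would typically be (service names may vary):

 systemctl disable smb winbind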

Event scripts

CTDB clustering for Samba involves the 50.samba and 49.winbind event scripts. These are provided as part of CTDB and do not usually need to be changed.

There are several configuration variables that affect the operation of these scripts. Please see ctdb-script.options(5) for details.
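
As an illustration, the 50.samba monitor event's check that all share directories exist can be disabled by setting the following in /etc/ctdb/script.options (an option documented in ctdb-script.options(5)):

 CTDB_SAMBA_SKIP_SHARE_CHECK=yes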

Filesystem specific configuration

The cluster filesystem you use with ctdb plays a critical role in ensuring that CTDB works seamlessly. Here are some filesystem-specific tips.

If you are interested in testing a new cluster filesystem with CTDB then we strongly recommend looking at the page on testing filesystems using ping_pong to ensure that the cluster filesystem supports correct POSIX locking semantics.
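
As a sketch, on a 3-node cluster you might run the following simultaneously on every node against the same file on the cluster filesystem (the path is illustrative; the second argument should be the number of nodes plus one):

 ping_pong /clusterfs/ping_pong.dat 4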

IBM GPFS filesystem

The GPFS filesystem (now known as Spectrum Scale) is a proprietary cluster filesystem that has been extensively tested with CTDB/Samba. When using GPFS, the following smb.conf settings are recommended:

 vfs objects = gpfs fileid
 gpfs:sharemodes = yes
 fileid:algorithm = fsname
 force unknown acl user = yes
 nfs4: mode = special
 nfs4: chown = yes
 nfs4: acedup = merge

The ACL-related options should only be enabled if you have NFSv4 ACLs enabled on your filesystem.
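
Whether NFSv4 ACLs are enabled on a GPFS filesystem can be checked with something like the following (the device name is illustrative):

 mmlsfs gpfs0 -k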

The most important of these options is "fileid:algorithm". You risk data corruption if you use a different mapping backend with Samba and GPFS, because locking will break across nodes. NOTE: you must also load "fileid" as a vfs object in order for this to take effect.

A guide to configuring Samba with CTDB and GPFS can be found at Samba CTDB GPFS Cluster HowTo

RedHat GFS filesystem

Red Hat GFS is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer).

The gfs_controld daemon manages mounting, unmounting, recovery and POSIX locks. Edit /etc/init.d/cman (if using the Red Hat Cluster Suite) to start gfs_controld with the '-l 0 -o 1' flags to optimize POSIX locking performance. You'll notice the difference this makes by running the ping_pong test with and without these options.

A complete HowTo document to setup clustered samba with CTDB and GFS2 is here: GFS CTDB HowTo

Lustre filesystem

Lustre® is a scalable, secure, robust, highly available cluster file system. It is designed, developed and maintained by a number of companies (Intel, Seagate) and OpenSFS, a not-for-profit organisation.

Tests have been done with CTDB/Samba on Lustre releases 1.4.x and 1.6.x; the current Lustre release is 2.5.2. When mounting Lustre, the "-o flock" option should be specified to enable cluster-wide byte-range locking among all Lustre clients.
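
A typical mount command might look like the following (the MGS node address and filesystem name are illustrative):

 mount -t lustre -o flock 10.0.0.1@tcp0:/lustre /mnt/lustre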

These two versions have different mechanisms for configuration and startup. More information is available at http://wiki.lustre.org.

While Lustre configuration differs between the two versions, setting up CTDB/Samba is the same on both. The following settings are recommended:

 vfs objects = fileid
 fileid:algorithm = fsname

The "fileid:algorithm" option must be specified to avoid possible data corruption.

GlusterFS filesystem

GlusterFS is a cluster filesystem capable of scaling to several petabytes that is easy to configure. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnects into one large parallel network filesystem. GlusterFS is based on a stackable userspace design without compromising performance. It uses Linux Filesystem in Userspace (FUSE) to achieve all this.

NOTE: GlusterFS has not yet had extensive testing but this is currently underway.

GlusterFS versions 2.0 to 2.0.4 must be patched with:

http://patches.gluster.com/patch/813/

This is to ensure GlusterFS passes the ping_pong test. This issue is being tracked at:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=159

Update: As of GlusterFS 2.0.6 this has been fixed.

OCFS2

Recommended settings:

 vfs objects = fileid
 fileid:algorithm = fsid

OCFS2 1.4 offers cluster-wide byte-range locking.

Other cluster filesystems

If you can't find documentation about your choice of cluster filesystem and clustered Samba then you might need to work around some limitations.

Inconsistent device numbers

Locking will not work if a cluster filesystem does not provide uniform device numbers across nodes. If testing shows locking problems then you should test device number uniformity of your cluster filesystem.
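
A quick check is to run the following on every node against the same file on the cluster filesystem and compare the output (the path is illustrative):

 stat -c 'device=%d inode=%i' /clusterfs/testfile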

To work around a lack of device number uniformity, the following settings should be used in the [global] section of the Samba configuration:

 vfs objects = fileid
 fileid:algorithm = fsname

See vfs_fileid(8) for more information.

Testing clustered Samba

Once your cluster is up and running, you may wish to know how to test that it is functioning correctly. The following tests may help with that.

Using smbcontrol

You can check for connectivity to the smbd daemons on each node using smbcontrol:

 smbcontrol smbd ping

Using Samba4 smbtorture

The Samba4 version of smbtorture has several tests that can be used to benchmark a CIFS cluster. You can download Samba4 like this:

 git clone git://git.samba.org/samba.git
 cd samba/source4

Then configure and compile it as usual. The particular tests that are helpful for cluster benchmarking are the RAW-BENCH-OPEN, RAW-BENCH-LOCK and BENCH-NBENCH tests. These tests take an unclist that allows you to spread the workload out over more than one node. For example:

 smbtorture //localhost/data -Uuser%password  RAW-BENCH-LOCK --unclist=unclist.txt --num-progs=32 -t60

The file unclist.txt should contain a list of shares in your cluster (UNC format: //server/share). For example:

 //node1/data
 //node2/data
 //node3/data
 //node4/data

For NBENCH testing you need a client.txt file. A suitable file can be found in the dbench distribution at http://samba.org/ftp/tridge/dbench/