Configuring clustered Samba

Goal

Configure clustered Samba using a CTDB cluster

Note

This page still contains some details not directly relevant to clustering Samba. The documentation is being cleaned up and restructured.

Prerequisites

Samba Configuration

Next you need to initialise the Samba password database, e.g.

 smbpasswd -a root

Samba with clustering must use the tdbsam or ldap SAM passdb backends (it must not use the default smbpasswd backend), or must be configured as a member of a domain. The rest of the Samba configuration is exactly as on a normal system. See the docs at http://samba.org/ for details.
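
For example, a minimal sketch of the relevant smb.conf setting, assuming the tdbsam backend is chosen:

# in the [global] section of smb.conf
passdb backend = tdbsam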

Critical smb.conf parameters

A clustered Samba install must set some specific configuration parameters:

netbios name = something
clustering = yes
idmap config * : backend = autorid
idmap config * : range = 1000000-1999999

NB:

  • See idmap(8) for more information about the idmap configuration
  • netbios name should be the same on all nodes

Note that bind interfaces only = yes should not be used when configuring clustered Samba with CTDB public IP addresses. CTDB will start smbd before public IP addresses are hosted, so smbd will not listen on any of the public IP addresses. When public IP addresses are eventually hosted, smbd will not bind to the new addresses.

Configure CTDB to manage Samba

For CTDB to manage Samba, the CTDB_MANAGES_SAMBA configuration variable must be set to yes in the ctdbd configuration file.

For example:

 CTDB_MANAGES_SAMBA=yes

This causes CTDB to start and stop Samba at startup and shutdown. It also tells CTDB to monitor Samba.

Similarly, if using winbind, CTDB should also be configured to manage it:

 CTDB_MANAGES_WINBIND=yes

CTDB will manage and start/stop/restart the Samba services, so the operating system should be configured so these are not started/stopped automatically.

Red Hat Linux variants

If using a Red Hat Linux variant, the Samba services are smb and winbind. Starting them at boot time is not recommended and this can be disabled using chkconfig.

 chkconfig smb off
 chkconfig winbind off

The service names and the mechanism for disabling them vary across operating systems.
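
On systemd-based systems, for example, the equivalent would be something like the following (assuming the units are also named smb and winbind; actual unit names vary by distribution):

 systemctl disable smb winbind
 systemctl stop smb winbind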

Event scripts

CTDB clustering for Samba involves the 50.samba and 49.winbind event scripts. These are provided as part of CTDB and do not usually need to be changed.

There are several configuration variables that affect the operation of these scripts. Please see ctdbd.conf(5) for details.
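
For example, one such variable (shown here as a sketch; consult ctdbd.conf(5) for the authoritative list) tells the 50.samba script not to verify that each share's directory exists:

 CTDB_SAMBA_SKIP_SHARE_CHECK=yes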

Filesystem specific configuration

The cluster filesystem you use with CTDB plays a critical role in ensuring that CTDB works seamlessly. Here are some filesystem-specific tips.

If you are interested in testing a new cluster filesystem with CTDB then we strongly recommend looking at the page on testing filesystems using ping_pong to ensure that the cluster filesystem supports correct POSIX locking semantics.
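
As a quick sketch of that test, assuming a 3-node cluster and a scratch file on the shared filesystem, ping_pong is run on all nodes at once with one more lock than there are nodes:

 # run simultaneously on each of the 3 nodes (locks = nodes + 1)
 ping_pong /clusterfs/ping_pong.dat 4

A healthy cluster filesystem shows a consistent, non-zero locking rate on every node; hangs or wildly varying rates suggest broken POSIX locking semantics.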

IBM GPFS filesystem

The GPFS filesystem (now known as Spectrum Scale) is a proprietary cluster filesystem that has been extensively tested with CTDB/Samba. When using GPFS, the following smb.conf settings are recommended:

clustering = yes
idmap backend = tdb2
fileid:algorithm = fsname
vfs objects = gpfs fileid
gpfs:sharemodes = No
force unknown acl user = yes
nfs4:mode = special
nfs4:chown = yes
nfs4:acedup = merge

The ACL-related options should only be enabled if you have NFSv4 ACLs enabled on your filesystem.

The most important of these options is "fileid:algorithm". You risk data corruption if you use a different mapping backend with Samba and GPFS, because locking will break across nodes. NOTE: You must also load "fileid" as a VFS object for this to take effect.

A guide to configuring Samba with CTDB and GPFS can be found at Samba CTDB GPFS Cluster HowTo.

RedHat GFS filesystem

Red Hat GFS is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer).

The gfs_controld daemon manages mounting, unmounting, recovery and POSIX locks. Edit /etc/init.d/cman (if using Red Hat Cluster Suite) to start gfs_controld with the '-l 0 -o 1' flags to optimize POSIX locking performance. You will notice the difference this makes by running the ping_pong test with and without these options.
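
For reference, a hypothetical sketch of the resulting daemon invocation with those flags:

 # start gfs_controld with the locking optimization flags described above (see gfs_controld(8))
 gfs_controld -l 0 -o 1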

A complete HowTo document describing how to set up clustered Samba with CTDB and GFS2 is here: GFS CTDB HowTo.

Lustre filesystem

Lustre® is a scalable, secure, robust, highly available cluster file system. It is designed, developed and maintained by a number of companies (Intel, Seagate) and OpenSFS, a not-for-profit organisation.

Tests with CTDB/Samba have been done on Lustre releases 1.4.x and 1.6.x; the current Lustre release is 2.5.2. When mounting Lustre, the "-o flock" option should be specified to enable cluster-wide byte-range locking among all Lustre clients.
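
As an illustrative sketch, mounting a Lustre client with that option might look like this (the MGS node, filesystem name and mount point are placeholders):

 mount -t lustre -o flock mgsnode@tcp:/lustrefs /mnt/lustre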

These two versions have different mechanisms of configuration and startup. More information is available at http://wiki.lustre.org.

Unlike Lustre's own configuration, setting up CTDB/Samba is done the same way on both versions. The following settings are recommended:

clustering = yes
idmap backend = tdb2
fileid:mapping = fsname
use mmap = no
nt acl support = yes
ea support = yes

The "fileid:mapping" and "use mmap" options must be specified to avoid possible data corruption. The "nt acl support" option maps POSIX ACLs to Windows NT's format; at the moment, Lustre only supports POSIX ACLs.

GlusterFS filesystem

GlusterFS is a cluster file-system capable of scaling to several petabytes that is easy to configure. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. GlusterFS is based on a stackable user-space design without compromising performance. It uses the Linux File System in Userspace (FUSE) to achieve this.
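
As a sketch, each cluster node would typically mount the GlusterFS volume through the FUSE client (the server and volume names here are placeholders):

 mount -t glusterfs gfs-server:/clustervol /clusterfs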

NOTE: GlusterFS has not yet had extensive testing but this is currently underway.

GlusterFS versions 2.0 to 2.0.4 must be patched with:

http://patches.gluster.com/patch/813/

This is to ensure GlusterFS passes the ping_pong test. This issue is being tracked at:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=159

Update: As of GlusterFS 2.0.6 this has been fixed.

OCFS2

Recommended settings:

fileid:mapping = fsid
vfs objects = fileid

OCFS2 1.4 offers cluster-wide byte-range locking.

Other cluster filesystems

If you can't find documentation about your choice of cluster filesystem and clustered Samba then you might need to work around some limitations.

Inconsistent device numbers

Note: This section probably wants to be in a future page about cluster filesystems and Samba configuration. It can be moved later...

Locking will not work if a cluster filesystem does not provide unique device numbers across nodes.

Consider the following example:

# onnode all stat /clusterfs/testfile

>> NODE: 10.1.1.1 <<
  File: `/clusterfs/testfile'
  Size: 1286700       Blocks: 2514       IO Block: 65536  regular file
Device: 29h/41d    Inode: 35820037    Links: 1
Access: (0774/-rwxrwxr--)  Uid: ( 3535/     foo)   Gid: (  513/Domain Users)
Access: 2016-11-03 19:51:46.000000000 +0000
Modify: 2016-11-01 13:06:04.000000000 +0000
Change: 2016-11-01 13:06:04.000000000 +0000

>> NODE: 10.1.1.2 <<
  File: `/clusterfs/testfile'
  Size: 1286700       Blocks: 2514       IO Block: 65536  regular file
Device: 29h/41d    Inode: 35820037    Links: 1
Access: (0774/-rwxrwxr--)  Uid: ( 3535/     foo)   Gid: (  513/Domain Users)
Access: 2016-11-03 19:51:46.000000000 +0000
Modify: 2016-11-01 13:06:04.000000000 +0000
Change: 2016-11-01 13:06:04.000000000 +0000

>> NODE: 10.1.1.3 <<
  File: `/clusterfs/testfile'
  Size: 1286700       Blocks: 2514       IO Block: 65536  regular file
Device: 26h/38d    Inode: 35820037    Links: 1
Access: (0774/-rwxrwxr--)  Uid: ( 3535/     foo)   Gid: (  513/Domain Users)
Access: 2016-11-03 19:51:46.000000000 +0000
Modify: 2016-11-01 13:06:04.000000000 +0000
Change: 2016-11-01 13:06:04.000000000 +0000

Note that the device numbers are not consistent across nodes. Locks set for the file on the first two nodes will not affect the third node.

To work around this, the following settings should be in the global section of the Samba configuration:

vfs objects = fileid
fileid:algorithm = fsname

See vfs_fileid(8) for more information.

Testing clustered Samba

Once your cluster is up and running, you may wish to know how to test that it is functioning correctly. The following tests may help with that.

Using smbcontrol

You can check for connectivity to the smbd daemons on each node using smbcontrol:

 smbcontrol smbd ping
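
To check all nodes at once, this can be combined with CTDB's onnode utility, e.g.

 onnode all smbcontrol smbd ping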

Using Samba4 smbtorture

The Samba4 version of smbtorture has several tests that can be used to benchmark a CIFS cluster. You can download Samba4 like this:

 git clone git://git.samba.org/samba.git
 cd samba/source4

Then configure and compile it as usual. The particular tests that are helpful for cluster benchmarking are the RAW-BENCH-OPEN, RAW-BENCH-LOCK and BENCH-NBENCH tests. These tests take an unclist that allows you to spread the workload out over more than one node. For example:

 smbtorture //localhost/data -Uuser%password  RAW-BENCH-LOCK --unclist=unclist.txt --num-progs=32 -t60

The file unclist.txt should contain a list of shares in your cluster (UNC format: //server/share). For example:

//node1/data
//node2/data
//node3/data
//node4/data

For NBENCH testing you need a client.txt file. A suitable file can be found in the dbench distribution at http://samba.org/ftp/tridge/dbench/