Setting up a clustered file system
Set up a clustered file system to be used with CTDB for providing clustered file services.
- How to test whether POSIX locking is supported on the file system
- Limitations when using a clustered file system
Setting up a clustered file system is independent of CTDB. This information is provided for completeness. Users should be aware of the limitations of their particular clustered file system.
Cluster file systems
Any cluster file system will have some or all of the following components:
- Shared or distributed storage
- Kernel or user space file system driver
- User space file system daemon(s)
- User space distributed lock manager
- User space tools for management
Every clustered file system has its own quirks and limitations, some of which will affect the configuration of file services (Samba or NFS). Consider the following questions:
- Does the file system provide a consistent view across all nodes (for example, uniform device and inode numbering)?
- Does the file system provide POSIX locking semantics (cluster-aware locking)?
- Does the file system have specific quorum requirements?
Checking uniformity of device and inode numbering
File services (e.g. Samba or NFS) often generate file identifiers or handles from device and inode numbers. These services may not work correctly if these numbers are not uniform across nodes.
This can be tested using the stat(1) command as follows:
# onnode all stat -c '%d:%i' /clusterfs/testfile

>> NODE: 10.1.1.1 <<
41:35820037

>> NODE: 10.1.1.2 <<
41:35820037

>> NODE: 10.1.1.3 <<
38:35820037
In this example the device numbers are not uniform across nodes (38 on node 10.1.1.3 versus 41 on the others). File services sometimes provide a way of working around this (e.g. Samba).
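As a sketch of one such workaround in Samba, the vfs_fileid module (see vfs_fileid(8)) can derive file ids from the file system name instead of the raw device number, so non-uniform device numbers across nodes stop mattering. The exact placement and algorithm choice depend on the deployment; this fragment is illustrative, not a recommended configuration:

```ini
[global]
    # Load the fileid VFS module so Samba synthesizes device ids
    # instead of using the kernel-reported device numbers.
    vfs objects = fileid
    # Derive the device id from the file system name, which is the
    # same on every node (alternatives include "hostname" and "fsid").
    fileid:algorithm = fsname
```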
Some cluster file systems (especially some FUSE-based ones) do not provide consistent inode numbers across nodes. There is often no workaround for this.
Checking lock coherence
Clustered Samba has a couple of dependencies on the cluster filesystem:
- If the CTDB recovery lock is used, then lock coherence of the cluster file system must be confirmed
- Samba, with POSIX locking enabled, requires I/O coherence
Both of these can be checked using the ping_pong tool.
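The following is a sketch of how ping_pong might be run; the file path and node count are examples (a 3-node cluster with the cluster file system mounted at /clusterfs), and the commands are printed rather than executed because they only make sense when run on each node of a real cluster:

```shell
# Lock coherence test with the ping_pong tool shipped with CTDB.
FILE=/clusterfs/test.dat   # example path on the cluster file system
NODES=3                    # example cluster size
NUM_LOCKS=$((NODES + 1))   # ping_pong should be given (nodes + 1) locks

# Lock coherence: start ping_pong on one node and note the locking rate,
# then start it on the remaining nodes. The rate should drop as nodes
# join, but locking must continue to work.
echo "on each node run: ping_pong $FILE $NUM_LOCKS"

# I/O coherence (required by Samba with POSIX locking enabled): with -rw,
# the reported data increment should equal the number of nodes running
# the test.
echo "on each node run: ping_pong -rw $FILE $NUM_LOCKS"
```

If the locking rate collapses to zero, or the data increment exceeds the number of participating nodes, the file system's lock or I/O coherence is suspect.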
Each clustered file system example will describe how to set up a clustered file system for a 3-node cluster. The implementation can be scaled down to 2 nodes or scaled up to more nodes.
GPFS is a proprietary cluster file system from IBM.
GFS2 is a clustered file system supported by Red Hat.
Lustre is an open-source, parallel file system that supports many requirements of leadership-class HPC simulation environments.
GlusterFS is a scalable network file system.
OCFS2 is a general-purpose shared-disk cluster file system for Linux capable of providing both high performance and high availability.