Basic CTDB configuration
Set up a cluster of nodes, each running CTDB.
On its own, this cluster will not be very useful. In particular:
- Samba will not be managed by CTDB
- There will be no CTDB public IP addresses configured
- There will be no CTDB recovery lock configured
However, all useful configurations are extensions of a default configuration, so it is important to test this first.
CTDB Cluster Configuration
CTDB configuration directory
CTDB's configuration files are stored in /etc/ctdb/. They may be stored somewhere else, such as /usr/local/etc/ctdb/, depending on how CTDB was installed. However, for simplicity and brevity, this guide will use /etc/ctdb/.
ctdbd configuration file
The CTDB daemon configuration file is ctdb.conf (in the CTDB configuration directory).
Please see ctdb.conf(5) for details of daemon configuration options.
Depending on how CTDB was installed, a template configuration file may be installed. However, for the most basic configuration, the configuration file should be empty or have all lines commented out. This results in a default configuration.
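One way to confirm that a configuration file is effectively empty is to look for uncommented, non-blank lines. The following is a minimal sketch; /tmp/ctdb.conf.sample is a stand-in path used so the check can be demonstrated standalone, not the real configuration file location.

```shell
# Create a sample ctdb.conf in which every line is commented out.
# /tmp/ctdb.conf.sample is a stand-in path for illustration only.
cat > /tmp/ctdb.conf.sample <<'EOF'
# [logging]
#       log level = NOTICE
EOF
# Print any uncommented, non-blank lines; no output means the daemon
# will start with its default configuration.
if grep -Ev '^[[:space:]]*(#|$)' /tmp/ctdb.conf.sample; then
    echo "active settings found"
else
    echo "no active settings"
fi
```

Run against the real ctdb.conf, a result of "no active settings" indicates the daemon will use its defaults.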
The configuration variable used to specify the nodes file is CTDB_NODES. When this variable is not set, the nodes file (in the CTDB configuration directory) is used.
This file contains a list of the private IP addresses that the CTDB daemons will use in your cluster. This should be a private, non-routable subnet which is only used for internal cluster traffic. For example:

10.1.1.1
10.1.1.2
10.1.1.3
10.1.1.4
- The nodes file must be identical on all nodes in the cluster
- The order of addresses in this file is significant
- Lines commented with '#' are also significant (and indicate deleted nodes)
- The last line must end with a newline character or the node on that line will fail to initialize
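The rules above can be checked mechanically. This is a sketch under the assumption that /tmp/nodes stands in for the real nodes file in the CTDB configuration directory.

```shell
# Sketch: create a nodes file with one private address per line.
# /tmp/nodes is a stand-in for the real nodes file location.
printf '%s\n' 10.1.1.1 10.1.1.2 10.1.1.3 10.1.1.4 > /tmp/nodes
# tail -c1 of a file that ends in a newline is just '\n', which command
# substitution strips, so an empty result means the file is terminated
# correctly.
[ -z "$(tail -c1 /tmp/nodes)" ] && echo "nodes file ends with a newline"
```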
Starting the cluster
This will depend on how CTDB was installed. If installing from source, consider installing the provided init script (ctdb/config/ctdb.init) or systemd service file (ctdb/config/ctdb.service) in the appropriate place. A binary package should already contain the correct method of starting CTDB.
The onnode command is a useful way of running a command on all configured nodes. Depending on your installation, you may be able to start CTDB on all nodes by running something like:
onnode -p all service ctdb start
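If onnode is not available, the same effect can be approximated by looping over the nodes file with ssh. This is a sketch shown as a dry run (echo instead of ssh) against a sample file at /tmp/nodes.example, so it can run standalone; it assumes passwordless root ssh in a real cluster.

```shell
# Sample nodes file; /tmp/nodes.example is a stand-in path.
cat > /tmp/nodes.example <<'EOF'
10.1.1.1
#10.1.1.2
10.1.1.3
EOF
while IFS= read -r addr; do
    case "$addr" in '#'*|'') continue ;; esac   # skip deleted/blank entries
    # In a real cluster, replace 'echo' with something like:
    #   ssh -n "root@$addr" service ctdb start
    echo "would start ctdb on $addr"
done < /tmp/nodes.example
```

Note that commented-out (deleted) nodes are skipped, matching the nodes-file semantics described above.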
Checking cluster status
The ctdb command is used for interacting with ctdbd. Its uses include control and status. See ctdb(1) for details.
The ctdb status command provides basic information about the cluster and the status of the nodes. Output looks like:
Number of nodes:4
vnn:0 10.1.1.1       OK (THIS NODE)
vnn:1 10.1.1.2       OK
vnn:2 10.1.1.3       OK
vnn:3 10.1.1.4       OK
Generation:1362079228
Size:4
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
hash:3 lmaster:3
Recovery mode:NORMAL (0)
Recovery master:0
The important things to note are:
- All 4 nodes are in a healthy state
- Recovery mode is NORMAL, which means that the cluster has completed a recovery and is running in a normal, fully operational state
The recovery mode will briefly change to RECOVERY when there has been a node failure or something is wrong with the cluster. If the cluster stays in RECOVERY state for very long (many seconds), there might be a configuration problem. Check the logs for details.
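One way to check the recovery mode from a script is to parse the `ctdb status` output shown above. This is a sketch that uses a sample status line so the parsing can be demonstrated standalone; on a live cluster the line would come from `ctdb status` itself.

```shell
# Sample status line; on a live cluster, obtain it with:
#   status_line=$(ctdb status | grep '^Recovery mode')
status_line='Recovery mode:NORMAL (0)'
# Extract the mode word (NORMAL or RECOVERY) after "Recovery mode:".
mode=$(printf '%s\n' "$status_line" | sed -n 's/^Recovery mode:\([A-Z]*\).*/\1/p')
echo "recovery mode is $mode"
```

A monitoring script could poll this value and raise an alert if it stays at RECOVERY for more than a few seconds.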
The ctdb ping command checks that the local CTDB daemon is running and shows how many clients are connected.
# onnode -q all ctdb ping
response from 0 time=0.000050 sec  (2 clients)
response from 1 time=0.000154 sec  (2 clients)
response from 2 time=0.000114 sec  (2 clients)
response from 3 time=0.000115 sec  (2 clients)
The 2 clients in question here are the
ctdb command and CTDB's recovery daemon. In more complex configurations, where Samba is running, there may be many more clients.
The most common reasons for nodes not connecting to each other are:
- A firewall is blocking TCP port 4379, so CTDB daemons on different nodes are unable to communicate
- The nodes file is not identical on all nodes
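A nodes-file mismatch can be spotted by comparing copies fetched from each node byte for byte. This is a standalone sketch; the /tmp paths stand in for copies that would be pulled from node 0 and node 1 with scp.

```shell
# Stand-in copies of the nodes file from two different nodes; in a real
# cluster these would be fetched, e.g. with scp, before comparing.
printf '10.1.1.1\n10.1.1.2\n' > /tmp/nodes.node0
printf '10.1.1.1\n10.1.1.3\n' > /tmp/nodes.node1
# cmp -s exits non-zero if the files differ in any byte, including
# ordering and trailing-newline differences, which both matter here.
if cmp -s /tmp/nodes.node0 /tmp/nodes.node1; then
    echo "nodes files match"
else
    echo "nodes files differ"
fi
```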
If the cluster status looks good then CTDB can be configured to do useful things.
If not, check the logs (by default something like /var/log/log.ctdb).