Samba CTDB GlusterFS Cluster HowTo
Revision as of 18:38, 22 February 2020
Introduction
CTDB is a clustered database component in clustered Samba that provides a high-availability load-sharing CIFS server cluster.
The main functions of CTDB are:
- Provide a clustered version of the TDB database with automatic rebuild/recovery of the databases upon node failures.
- Monitor nodes in the cluster and services running on each node.
- Manage a pool of public IP addresses that are used to provide services to clients. Alternatively, CTDB can be used with LVS.
Combined with a cluster filesystem, CTDB provides a full high-availability (HA) environment for services such as clustered Samba and NFS.
Setting up CTDB
After setting up the cluster filesystem you can set up a CTDB-cluster.
To use CTDB you have to install the ctdb package for your distribution. After installing the package with all its dependencies, you will find the directory /etc/ctdb. Inside this directory you need some configuration files for CTDB.
Let's take a look at the files needed for configuring CTDB.
| File | Content |
|---|---|
| /etc/ctdb/ctdb.conf | Basic configuration |
| /etc/ctdb/script.options | Setting options for event scripts |
| /etc/ctdb/nodes | All IP addresses of all nodes |
| /etc/ctdb/public_addresses | Dynamic IP addresses for all nodes |
The ctdb.conf file
The ctdb.conf file has changed a lot from the old configuration style (< Samba 4.9). This file is no longer used to configure the different services managed by CTDB. At the moment the only setting you have to make in this file is the recovery lock file. This file is used by all nodes to check whether it's possible to lock files in the cluster for exclusive use. If you don't use a recovery lock file, your cluster can run into a split-brain situation. By default the recovery lock is NOT set. You should not use CTDB without a recovery lock unless you know what you are doing. The setting must point to a file inside your mounted Gluster volume. To use the recovery lock, enter the following line into /etc/ctdb/ctdb.conf on both nodes:
recovery lock = /glusterfs/ctdb.lock
Note: The recovery lock setting needs to be in the [cluster] section.
Note: You don't have to create the recovery lock file; it will be created by CTDB on the first start of the CTDB daemon.
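Putting this together, a minimal /etc/ctdb/ctdb.conf for this setup would look like the following sketch (the lock path matches the Gluster mount point used in this HowTo; ctdb.conf uses an INI-style format with a [cluster] section):

```ini
# /etc/ctdb/ctdb.conf -- identical on both nodes
[cluster]
	recovery lock = /glusterfs/ctdb.lock
```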
The file script.options
All the services CTDB provides are started via special event scripts. In this file you can set options for those scripts. For example, the event script 50.samba reads the option CTDB_SAMBA_SKIP_SHARE_CHECK, which is set to no by default. This means that as part of monitoring, CTDB checks whether the path of every share exists; if a path does not exist, CTDB will stop. But if you use the VFS module glusterfs you will have no local path in the share configuration: the share points to a directory on your Gluster volume, so CTDB can't check the path. So if you are going to use glusterfs, you must set this option to yes so the check is skipped.
Because you can set options for all service scripts in this file, you don't have to change any of the service scripts themselves. You will find more information on all options in the manpage man ctdb-script.options.
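For the glusterfs setup described here, the entry in /etc/ctdb/script.options would look like this sketch (the file uses shell-style assignments; per ctdb-script.options(5), yes means the per-share path check is skipped):

```ini
# /etc/ctdb/script.options
# Skip the local share-path check: the shares live on the Gluster
# volume (vfs_glusterfs) and have no local path CTDB could test.
CTDB_SAMBA_SKIP_SHARE_CHECK=yes
```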
The file nodes
CTDB must know all hosts belonging to its cluster: in this file you put the IP addresses of all nodes from the heartbeat network. The file must have the same content on all nodes. Just put the two IPs of the two nodes into the file. Here you see the content of the file:
192.168.57.42
192.168.57.43
In most distributions the file doesn't exist, so you have to create it.
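Creating the file can be sketched in shell (a staging directory is used here so the sketch runs without root; on a real node the file is /etc/ctdb/nodes and you would copy it into place, identically, on every node):

```shell
# Create the nodes file with the heartbeat IPs of both nodes,
# one address per line. Real location: /etc/ctdb/nodes.
mkdir -p ctdb-staging
printf '%s\n' 192.168.57.42 192.168.57.43 > ctdb-staging/nodes
cat ctdb-staging/nodes
```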
The file public_addresses
Every time CTDB starts, it assigns an IP address to each node in the CTDB cluster; these must be IP addresses from the production network.
After starting the cluster, CTDB takes care of those IP addresses and gives one address from this list to every CTDB node. If a CTDB node crashes, CTDB reassigns the crashed node's IP address to another CTDB node. So every IP address from this file is always assigned to one of the nodes.
CTDB does the failover for the services: if one node fails, its IP address switches to one of the remaining nodes, and all clients then reconnect to that node. That's possible because all nodes have the session information of all clients.
For each node you need a public_addresses file. The files can differ between the nodes, depending on which subnet you would like to assign each node to. The example uses just one subnet, so both nodes have identical public_addresses files. Here you see the content of the file:
192.168.56.101/24 enp0s8
192.168.56.102/24 enp0s8
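As a shell sketch (again using a staging directory so it runs without root; the real path is /etc/ctdb/public_addresses, and with more than one subnet each node would get its own version of the file):

```shell
# Create the public_addresses file: one "address/prefix interface"
# pair per line. Both nodes use the same file in this single-subnet
# example; enp0s8 is the interface name from this HowTo's hosts.
# Real location: /etc/ctdb/public_addresses.
mkdir -p ctdb-staging
{
  echo '192.168.56.101/24 enp0s8'
  echo '192.168.56.102/24 enp0s8'
} > ctdb-staging/public_addresses
cat ctdb-staging/public_addresses
```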