Setting up CTDB for Clustered NFS

Assumptions

This guide is aimed at the Linux kernel NFS server.

CTDB can be made to manage another NFS server by using the CTDB_NFS_CALLOUT configuration variable to specify an NFS server-specific call-out.
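For example, switching to a different NFS server implementation is a matter of pointing CTDB at the matching call-out script. A minimal sketch, assuming the call-out has been installed as /etc/ctdb/nfs-ganesha-callout (the path and script name here are illustrative; check what your CTDB version ships):

 # Illustrative: use a server-specific call-out instead of the kernel NFS handling
 CTDB_NFS_CALLOUT="/etc/ctdb/nfs-ganesha-callout"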

First steps

Configure CTDB and set it up to use public IP addresses. Verify that the CTDB cluster works.
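As a sketch of what that involves (addresses and interface name are only examples, not taken from this page), public IP addresses are typically listed one per line in CTDB's public addresses file, and the cluster can then be checked with the ctdb tool:

 # Example /etc/ctdb/public_addresses
 10.1.1.1/24 eth0
 10.1.1.2/24 eth0

 # Verify cluster health and which node currently hosts each public address
 ctdb status
 ctdb ip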

NFS configuration

Exports

Requirements:

  • NFS exports must be the same on all nodes.
  • For each export, the fsid option must be set to the same value on all nodes, so that the fsid a client sees does not change when an IP address moves to a different node.

For the Linux kernel NFS server, this is usually in /etc/exports.

Example:

 /clusterfs0/data *(rw,fsid=1235)
 /clusterfs0/misc *(rw,fsid=1237)
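After editing /etc/exports on a node, the export table can be re-read without restarting the NFS server, for example:

 exportfs -ra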

Daemon configuration

Clustering NFS has some extra requirements compared to running a regular NFS server, so some extra configuration is needed.

  • All NFS daemons should run on fixed ports, which should be the same on all cluster nodes. Some clients can become confused if ports change during fail-over.
  • NFSv4 should be disabled.
  • statd should be configured to use CTDB's high-availability call-out.
  • statd's hostname (STATD_HOSTNAME) should be set from NFS_HOSTNAME, since that value is used by CTDB's high-availability call-out. The name should be resolvable into the CTDB public IP addresses and should be the same name used by Samba.

Red Hat Linux variants

The configuration file will be /etc/sysconfig/nfs and it should look something like:

 NFS_HOSTNAME="ctdb"
 RPCNFSDARGS="-N 4"
 RPCNFSDCOUNT=32
 STATD_PORT=595
 STATD_OUTGOING_PORT=596
 MOUNTD_PORT=597
 RQUOTAD_PORT=598
 LOCKD_UDPPORT=599
 LOCKD_TCPPORT=599
 STATD_HOSTNAME="$NFS_HOSTNAME"
 STATD_HA_CALLOUT="/etc/ctdb/statd-callout"
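One way to confirm that the daemons really are listening on the fixed ports after a restart is to query the RPC portmapper on each node (rpcinfo reports the services as status, nlockmgr, mountd, rquotad and nfs):

 rpcinfo -p localhost | egrep 'status|nlockmgr|mountd|rquotad|nfs'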

Configure CTDB to manage NFS

In the CTDB configuration, tell CTDB that you want it to manage NFS:

 CTDB_MANAGES_NFS=yes

CTDB will manage and start/stop/restart the NFS services, so the operating system should be configured so these are not started/stopped automatically.
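Once CTDB is managing NFS, its NFS event script starts, stops and monitors the services. As a rough check, something like the following can be used; the exact commands depend on the CTDB version and operating system:

 # Event script status on the local node (older CTDB releases)
 ctdb scriptstatus
 # Run a check on every node using CTDB's onnode helper
 onnode all service nfs status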

Red Hat variants

If using a Red Hat variant, the NFS services are nfs and nfslock. Starting them at boot time is not recommended, and they can be disabled using chkconfig:

 chkconfig nfs off
 chkconfig nfslock off

The service names and the mechanism for disabling them vary across operating systems.
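For example, on systemd-based distributions the equivalent is systemctl disable; the unit name below is only indicative (nfs-server is common on Red Hat based systems) and should be checked against your distribution:

 systemctl disable nfs-server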

Client configuration

IP addresses, rather than a DNS/host name, should be used when configuring client mounts. NFSv3 locking is heavily tied to IP addresses and can break if a client uses round-robin DNS. This means load balancing for NFS is achieved by hand-distributing public IP addresses across clients.
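For example, a client would mount one of the exports shown earlier directly by a public IP address (the address here is illustrative), and different clients can be pointed at different public addresses by hand:

 mount -t nfs -o vers=3 10.1.1.1:/clusterfs0/data /mnt/data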

IMPORTANT

Never ever mount the same NFS share on a client from two different nodes in the cluster at the same time. The client-side caching in NFS is very fragile and assumes that an object can only be accessed through a single path at a time.

Event scripts

CTDB clustering for NFS relies on two event scripts 06.nfs and 60.nfs. These are provided as part of CTDB and do not usually need to be changed. The NFS server being managed can be changed by providing a call-out and setting the CTDB_NFS_CALLOUT configuration variable to point to it.