- 1 CTDB Project
- 2 Project Tasks
- 2.1 Hardware acceleration
- 2.2 Flesh out CTDB API
- 2.3 Code s3/s4 opendb and brlock on top of ctdb api
- 2.4 Code CTDB api on top of dumb tdb
- 2.5 Prototype CTDB library on top of UDP/TCP
- 2.6 Setup standalone test environment
- 2.7 Code CTDB test suite
- 2.8 Flesh out recovery part of ctdb protocol
- 2.9 Work out details for persistent tdbs
- 2.10 Wireshark dissector
This project aims to produce an implementation of the CTDB protocol described in the Samba & Clustering page.
People involved: Andrew Tridgell, Alexander Bokovoy, Aleksey Fedoseev, Jim McDonough, Peter Somogyi, Volker Lendecke, Ronnie Sahlberg, Sven Oehme
Setting Up CTDB
See the CTDB Setup page for instructions on setting up the current CTDB code, using either loopback networking or a TCP/IP network.
27th April 2007: First usable version available
The initial work will focus on an implementation as part of tdb itself. Integration with the Samba source tree will happen at a later date. Work will probably happen in a bzr tree, but the details have not been worked out yet. Check back here for updates.
Project Tasks
Hardware acceleration
(note: Peter is looking at this one)
We want CTDB to be very fast on hardware that supports fast messaging. In particular we are interested in good use of infiniband adapters, where we expect to get messaging latencies of the order of 3 to 5 microseconds.
From discussions so far it looks like the 'verbs' API, perhaps with a modification to allow us to hook it into epoll(), will be the right choice. Basic information on this API is available at https://openib.org/tiki/tiki-index.php
The basic features we want from a messaging API are:
- low latency. We would like to get it down to just a few microseconds per message. Messages will vary in size, but will typically be small (say between 64 and 512 bytes).
- non-blocking. We would really like an API that hooks into poll, so we can use epoll(), poll() or select().
- If we can't have an API that hooks into poll() or epoll(), then a callback- or signal-based API would do if the overheads are small enough. In the same code we will also be working on a unix domain (datagram) socket, so we'd like the overhead of dealing with both the infiniband messages and the local datagrams to be low.
- What we definitely don't want to use is an API that chews a lot of CPU. So we don't want to be spinning in userspace on a set of mapped registers in the hope that a message might come along. The CPU will be needed for other tasks. Using mapped registers for send would probably be fine, but we'd probably need some kernel mediated mechanism for receive unless you can suggest a way to avoid it.
- ideally we'd have reliable delivery, or at least be told when delivery has failed on a send, but if that is too expensive then we'll do our own reliable delivery mechanism.
- we need to be able to add/remove nodes from the cluster. The Samba clustering code will have its own recovery protocol.
- a 'message'-like API would suit us better than a 'remote DMA' style API, unless the remote DMA API is significantly more efficient. Ring buffers would be fine.
An abstract interface can be found here: CTDB_Project_ibwrapper. Please note that this interface should be able to cover other possible implementations as well.
TODOs regarding this interface:
- verify implementability
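As a rough illustration of the properties listed above, here is a minimal sketch in C of what an abstract messaging transport could look like. All names here (ctdb_msg_transport, msg_recv_fn and so on) are hypothetical and are not taken from the actual ibwrapper interface:

 /* Hypothetical sketch of an abstract cluster messaging transport.
  * None of these names come from the real ibwrapper interface; they
  * only illustrate the properties listed above: non-blocking sends,
  * an fd that can be handed to epoll()/poll()/select(), and a
  * receive callback with low per-message overhead. */
 #include <stddef.h>
 #include <stdint.h>

 /* called when a complete message has arrived from another node */
 typedef void (*msg_recv_fn)(uint32_t src_node,
                             const uint8_t *data, size_t length,
                             void *private_data);

 struct ctdb_msg_transport {
     /* fd that becomes readable when messages are pending */
     int (*get_fd)(struct ctdb_msg_transport *t);

     /* non-blocking send; returns 0 on success, -1 if delivery
      * is known to have failed */
     int (*send)(struct ctdb_msg_transport *t, uint32_t dest_node,
                 const uint8_t *data, size_t length);

     /* drain pending messages, invoking the receive callback */
     int (*process)(struct ctdb_msg_transport *t,
                    msg_recv_fn recv_cb, void *private_data);
 };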
Flesh out CTDB API
(note: Alexander and Aleksey are looking at this)
By this I mean the C API in the "Clustered TDB API" section of the wiki page. The API as given there is missing some pieces, and I think it can be greatly improved.
This is likely to feed back into the CTDB protocol description as well. Ideally we'd get rid of these calls:
CTDB_REQ_FETCH_LOCKED, CTDB_REPLY_FETCH_LOCKED, CTDB_REQ_UNLOCK and CTDB_REPLY_UNLOCK
assuming we can demonstrate they aren't needed. I also think we can combine the CTDB_REQ_CONDITIONAL_APPEND and CTDB_REQ_FETCH calls into a single CTDB_REQ_REQUEST call, which takes a key, a blob of data and a condition ID as input, and returns a blob of data and a status code as output. For a fetch call the input blob of data would be zero length.
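As a concrete illustration of that suggestion, the combined call could look roughly like the sketch below. The field names and layout are only an illustration of the idea, not an agreed wire format:

 /* Hypothetical layout for a combined CTDB_REQ_REQUEST, replacing
  * CTDB_REQ_CONDITIONAL_APPEND and CTDB_REQ_FETCH. Field names and
  * layout are illustrative only, not an agreed wire format. */
 #include <stdint.h>

 struct ctdb_req_request {
     uint32_t condition_id;  /* which condition to apply to the record */
     uint32_t keylen;        /* length of the key that follows */
     uint32_t datalen;       /* length of the data blob; 0 for a pure fetch */
     uint8_t  blob[1];       /* key bytes followed by data bytes */
 };

 struct ctdb_reply_request {
     uint32_t status;        /* result of applying the condition */
     uint32_t datalen;       /* length of the returned data blob */
     uint8_t  data[1];       /* current record contents */
 };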
Code s3/s4 opendb and brlock on top of ctdb api
The opendb and brlock databases have now been converted to use CTDB and demonstrated working on top of it.
Code CTDB api on top of dumb tdb
The ctdb branch for samba3 now implements a simple API on top of CTDB. The record header has been expanded to contain a "dmaster" field, which allows the samba daemon to determine whether the current version of a record is held in the local tdb; if so, the samba daemon accesses the record immediately without any involvement of the ctdb daemon.
If the record is not held locally, samba asks the ctdb daemon to locate the most current version of the record in the cluster and transfer it to the local tdb, after which the samba daemon accesses it in the normal way from the locally held tdb.
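The fast-path decision described above amounts to something like the following sketch. Only the idea of a "dmaster" field (and the RSN used later during recovery) comes from the actual code; the structure and function names here are illustrative:

 /* Sketch of the local fast path: if our node is the dmaster for a
  * record, the copy in the local tdb is current and can be used
  * without involving the ctdb daemon. Names are illustrative. */
 #include <stdbool.h>
 #include <stdint.h>

 struct ctdb_record_header {
     uint64_t rsn;      /* record sequence number, used when merging */
     uint32_t dmaster;  /* vnn of the node currently holding the data master role */
 };

 static bool record_is_local(const struct ctdb_record_header *h, uint32_t our_vnn)
 {
     return h->dmaster == our_vnn;
 }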
Prototype CTDB library on top of UDP/TCP
(note: tridge is looking at this task)
The initial implementation of the CTDB protocol will be on top of UDP/TCP.
Status: prototype work on this has begun. You can watch progress at http://build.samba.org/?tree=ctdb&function=Recent+Checkins
Setup standalone test environment
This test environment is meant for non-clustered usage, instead emulating a cluster using IP on loopback. It will need to run multiple instances talking over 127.0.0.X interfaces. This will involve some shell scripting, plus some work on adding/removing nodes from the cluster. It might be easiest to add a CTDB protocol request asking a node to 'go quiet', then asking it to become active again later to simulate a node dying and coming back.
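Purely as an illustration of the 'go quiet' idea, such a request could be a pair of control opcodes along these lines; neither opcode exists in the protocol yet:

 /* Hypothetical control requests for the standalone test environment.
  * These opcodes are not part of the current protocol; they only
  * illustrate the 'go quiet'/'become active' idea described above. */
 enum ctdb_test_control {
     CTDB_REQ_GO_QUIET  = 0x1000, /* stop answering cluster traffic */
     CTDB_REQ_GO_ACTIVE = 0x1001  /* resume normal operation */
 };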
Code CTDB test suite
(note: jim is looking at this one)
This reflects the fact that I want this project to concentrate on building ctdb on tdb + messaging, and not concentrate on the "whole problem" involving Samba until later. We'll do a basic s3/s4 backend implementation to make sure the ideas can work, but I want the major testing effort to involve simple tests directly against the ctdb API. It will be so much easier to simulate exotic error conditions that way.
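A typical test in such a suite would drive the ctdb API directly, roughly along the lines of the sketch below. The function names (ctdb_store, ctdb_fetch) are placeholders for whatever the final API provides, declared here only so the sketch is self-contained:

 /* Sketch of a direct test against the ctdb API. The function names
  * ctdb_store() and ctdb_fetch() are placeholders, not the final API. */
 #include <stdint.h>
 #include <string.h>
 #include <tdb.h>

 struct ctdb_context;
 int ctdb_store(struct ctdb_context *ctdb, TDB_DATA key, TDB_DATA data);
 int ctdb_fetch(struct ctdb_context *ctdb, TDB_DATA key, TDB_DATA *out);

 static int test_store_fetch(struct ctdb_context *ctdb)
 {
     TDB_DATA key  = { .dptr = (uint8_t *)"testkey", .dsize = 7 };
     TDB_DATA data = { .dptr = (uint8_t *)"testval", .dsize = 7 };
     TDB_DATA out;

     if (ctdb_store(ctdb, key, data) != 0) {
         return -1;
     }
     if (ctdb_fetch(ctdb, key, &out) != 0) {
         return -1;
     }
     /* verify that what we read back is what we stored */
     return (out.dsize == data.dsize &&
             memcmp(out.dptr, data.dptr, out.dsize) == 0) ? 0 : -1;
 }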
Flesh out recovery part of ctdb protocol
An initial recovery mechanism is implemented for CTDB using a separate recovery daemon that connects to each CTDB daemon as a client. To use a recovery daemon you need to specify '--recovery-daemon' as a command line option when starting the ctdb daemon.
One of the nodes becomes the recovery master through an election process; this is the node that monitors the cluster and drives the recovery process when required. The election is currently based on the VNN number of the node: the node with the lowest VNN becomes the recovery master.
The recovery daemon that is designated the recovery master will continuously monitor the cluster and verify that the cluster information is consistent.
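The election rule itself is simple enough to sketch directly; given the VNNs of the currently active nodes, the lowest one wins:

 /* Sketch of the current election rule: the active node with the
  * lowest VNN becomes the recovery master. Assumes count >= 1; the
  * real code also has to cope with nodes joining and leaving. */
 #include <stddef.h>
 #include <stdint.h>

 static uint32_t elect_recovery_master(const uint32_t *active_vnns, size_t count)
 {
     uint32_t best = active_vnns[0];
     for (size_t i = 1; i < count; i++) {
         if (active_vnns[i] < best) {
             best = active_vnns[i];
         }
     }
     return best;
 }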
The recovery process consists of:
- Freezing the cluster.
This includes locking all local tdb databases to prevent any clients from accessing the databases while recovery is in progress.
- Verifying that all active nodes have all databases created, and creating any missing databases if required.
- Pull all records from all databases on all remote nodes and merge these records into the local tdb databases on the node that is the recovery master.
Merging of records is based on RSN numbers (see the sketch after this list).
- After merging, push all records out to all remote nodes.
- Build and distribute a new mapping for the lmaster role for all records (the vnn map).
- Create a new generation number for the cluster and distribute to all nodes.
- Update all local and remote nodes to mark the recovery master as the current dmaster for all records.
- Thawing the cluster.
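As a sketch of the merge rule used in the pull step above: between two copies of the same record, the copy carrying the higher RSN is kept. The structure below is illustrative, not the actual record layout:

 /* Sketch of the per-record merge during the pull phase: keep the
  * copy of a record with the higher record sequence number (RSN).
  * The structure is illustrative, not the actual record layout. */
 #include <stddef.h>
 #include <stdint.h>

 struct recovered_record {
     uint64_t rsn;      /* record sequence number from the record header */
     uint8_t *data;
     size_t   length;
 };

 static void merge_record(struct recovered_record *kept,
                          const struct recovered_record *incoming)
 {
     if (incoming->rsn > kept->rsn) {
         *kept = *incoming; /* the incoming copy is newer, take it */
     }
 }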
At a later stage this recovery mechanism should be merged into the CTDB daemon itself.
Work out details for persistent tdbs
This will need some more thought. It's not our top priority, but eventually the long-lived databases will matter.
Wireshark dissector
There is a basic dissector for CTDB in current wireshark SVN. The dissector follows the development and changes of the protocol.