Adding public IP addresses

From SambaWiki

Introduction

CTDB can manage a pool of public IP addresses that are distributed across the nodes of a cluster. This allows CTDB to perform connection failover and load balancing between nodes. Public IP addresses allow a cluster of nodes to function, in nearly all respects, as a single multi-homed node. This allows the cluster to offer coherent services (e.g. SMB, NFS) across all nodes.

The clustering model utilises IP takeover techniques to ensure that the full set of public IP addresses assigned to services on the cluster will always be available to the clients even when some nodes have failed and become unavailable.

Alternatively, CTDB can be configured to use LVS for failover and load balancing. However, this is not as well tested as the approach described here.

Prerequisites

CTDB configuration file

The CTDB_PUBLIC_ADDRESSES configuration variable must be set in the ctdbd configuration file to point to a public addresses file containing the public IP address configuration. This file is usually called public_addresses and should reside in the CTDB configuration directory.

For example:

 CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses

Please see ctdbd.conf(5) for more details.

Public addresses file

The file contains a list of public IP addresses, one per line, each with an optional (comma-separated) list of network interfaces that can have that address assigned. These are the addresses that clients should be configured to connect to.

For example:

 192.168.1.1/24 eth1
 192.168.1.2/24 eth1
 192.168.2.1/24 eth2
 192.168.2.2/24 eth2

If network interfaces are not specified on all lines in the public addresses file then the CTDB_PUBLIC_INTERFACE configuration variable must be used to specify a default interface.
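For example (interface name illustrative):

 CTDB_PUBLIC_INTERFACE=eth0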

  • The CTDB cluster utilises IP takeover techniques to ensure that all the public IP addresses will always be available to clients, as long as at least one node in the cluster is available.
  • Do not manually assign the public IP addresses to interfaces on any node. CTDB will add and remove these addresses automatically at runtime.
  • There is no built-in restriction on the number of IP addresses or network interfaces that can be used. However, performance limitations (e.g. time taken to calculate IP address distribution, time taken to break TCP connections and delete IPs from interfaces, ...) introduce practical limits.
  • It is sensible to plan the public IP addresses so that they can be evenly redistributed across subsets of nodes. For example, a 4 node cluster will always be able to evenly distribute 12 public IP addresses (across 4, 3, 2, 1 nodes). Even balancing of IP addresses is not a hard requirement, but it is the only method of load balancing that CTDB uses.
  • The public addresses file can differ between nodes, allowing subsets of nodes to host particular public IP addresses. Note that pathological configurations can result in undesirable IP address distribution.
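As a sketch of the last point, a 2-node cluster might use different files on each node (addresses and interfaces are illustrative). On node 0, /etc/ctdb/public_addresses might contain:

 192.168.1.1/24 eth1
 192.168.1.2/24 eth1

while on node 1 it contains:

 192.168.1.2/24 eth1
 192.168.1.3/24 eth1

Here 192.168.1.2 can be hosted by either node, but 192.168.1.1 can only be hosted by node 0 and 192.168.1.3 only by node 1.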

Testing

ctdb ip lists the public IP addresses and the node that is currently hosting each of them.

For example:

 # ctdb ip
 Public IPs on node 0
 192.168.1.1 0
 192.168.1.2 1
 192.168.2.1 2
 192.168.2.2 3
  • A value of -1 for a node number indicates that an address is not currently hosted.
  • This command only shows public IP addresses defined on the current node. If different groups of public IP addresses are defined on different nodes then use ctdb ip all to show addresses defined on all nodes.
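The output above is machine-parseable. As a hypothetical illustration, the following sketch filters on the -1 node number to find addresses that are not currently hosted; the sample output is hard-coded here for illustration, whereas on a real cluster it would come from running ctdb ip.

```shell
# Sample "ctdb ip" output (illustrative); on a real cluster this would
# come from: ctdb ip | tail -n +2
sample_output='192.168.1.1 0
192.168.1.2 -1
192.168.2.1 2'

# Print any public IP address whose node number is -1 (not hosted)
unhosted=$(printf '%s\n' "$sample_output" | awk '$2 == -1 { print $1 }')
echo "$unhosted"
```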

Name resolution

Round-robin DNS

The public IP addresses can all be registered against a single DNS name, allowing clients to be configured to connect to that name. Clients will then connect to different public IP addresses in a round-robin manner.
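For example, a BIND-style zone file might map a single name to all of the public IP addresses from the earlier example (the name is illustrative):

 fileserver  IN  A  192.168.1.1
 fileserver  IN  A  192.168.1.2
 fileserver  IN  A  192.168.2.1
 fileserver  IN  A  192.168.2.2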

WINS

It is also possible to set up a static WINS server entry listing the cluster's public IP addresses.

Connectivity and routing

CTDB's management of public IP addresses can affect connectivity to infrastructure (e.g. DNS, DC, LDAP, NIS) that is accessed via the public/client network.

  • When a node is not hosting any public IP addresses (e.g. when unhealthy due to monitoring failure or at startup) then services on the node may not be able to reach infrastructure required for the node to pass monitoring or for services to start successfully.
  • For complex network topologies, it may not be possible to correctly route replies to packets sent to public IP addresses.

There are several mechanisms for avoiding and working around connectivity and routing issues.

Static IP addresses

One simple way of ensuring that a node can route to public/client network infrastructure is to assign a static IP address on relevant interfaces, using the configuration mechanism provided by the operating system. Such static addresses must not overlap with the public IP addresses that are dynamically assigned by CTDB.
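For example, on a Debian-style system, /etc/network/interfaces might assign a static address on the public interface from a range outside the CTDB public address pool (interface and addresses illustrative):

 auto eth1
 iface eth1 inet static
     address 192.168.1.101
     netmask 255.255.255.0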

Static routes

If routes are required that depend on public IP addresses then they may need to be re-added every time CTDB moves public IP addresses. CTDB provides an 11.routing event script that processes a static-routes file in the CTDB configuration directory when required. The format of this file is:

 INTERFACE NETWORK/MASK GATEWAY

This adds a route to NETWORK/MASK using GATEWAY via INTERFACE.

For example:

 bond1 10.3.3.25/32 10.5.0.1
 bond1 10.3.3.0/24 10.5.0.254
 bond2 0.0.0.0/0 10.254.0.1

Will cause:

  • a host route to be added to IP address 10.3.3.25 via 10.5.0.1 on bond1
  • a network route to be added for prefix 10.3.3.0/24 via 10.5.0.254 on bond1
  • a default route to be added via 10.254.0.1 on bond2

Adding these routes may silently fail if no suitable local address is available on the given interface.
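As a hypothetical illustration of the translation the 11.routing event script performs, the following sketch reads a static-routes file and prints the corresponding ip route commands; it only prints them, it does not run them, and the file contents are illustrative.

```shell
# Hypothetical sketch: translate a static-routes file into "ip route add"
# commands. Field order matches the file format: INTERFACE NETWORK/MASK GATEWAY.
routes_file=$(mktemp)
cat > "$routes_file" <<'EOF'
bond1 10.3.3.25/32 10.5.0.1
bond2 0.0.0.0/0 10.254.0.1
EOF

# Print, but do not execute, one route command per input line
cmds=$(awk '{ printf "ip route add %s via %s dev %s\n", $2, $3, $1 }' "$routes_file")
echo "$cmds"
rm -f "$routes_file"
```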

NAT gateway

If static IP addresses are not being used to guarantee connectivity to the public/client network then CTDB's NAT gateway feature can be used to assign a single extra NAT gateway public IP address. This IP address will be dynamically hosted on a NAT gateway master node selected by CTDB, depending on node states. The NAT gateway master node will be able to communicate directly via the NAT gateway public IP address. Other nodes will communicate via the NAT gateway master node.

This is implemented using the 11.natgw event script.

See the NAT GATEWAY section in ctdb(7) for details.
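A minimal configuration sketch in the ctdbd configuration file might look like the following (all values are illustrative; see ctdb(7) for the authoritative variable list):

 CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
 CTDB_NATGW_PUBLIC_IFACE=eth0
 CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
 CTDB_NATGW_PRIVATE_NETWORK=192.168.100.0/24
 CTDB_NATGW_NODES=/etc/ctdb/natgw_nodes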

Policy routing

Public IP addresses may be spread across several different networks (or VLANs) and it may not be possible to route packets from these public addresses via the system's default route. Therefore, CTDB has support for policy routing via the 13.per_ip_routing event script. This allows routing to be specified for packets sourced from each public address. The routes are added and removed as CTDB moves public addresses between nodes.

See the POLICY ROUTING section in ctdb(7) for details.
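A configuration sketch in the ctdbd configuration file might look like the following (all values are illustrative; see ctdb(7) for the authoritative variable list):

 CTDB_PER_IP_ROUTING_CONF=/etc/ctdb/policy_routing
 CTDB_PER_IP_ROUTING_RULE_PREF=10000
 CTDB_PER_IP_ROUTING_TABLE_ID_LOW=10
 CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=1000

The referenced policy routing file then contains one line per route, of the form public_ip network [gateway], for example:

 192.168.1.1 192.168.1.0/24
 192.168.1.1 0.0.0.0/0 192.168.1.254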