GlusterFS


Fundamentals

GlusterFS is a free, open source and scalable filesystem. It can be used for cloud storage or to store data in a local network, and it can be set up as an active-active filesystem cluster with failover and load balancing via DNS round-robin. Together with CTDB it is possible to build a fileserver for a network with the following advantages:

  • Expandable without downtime
  • Mount Gluster volumes via the network
  • POSIX ACL support
  • Different configurations possible (depending on your needs)
  • Self-healing
  • Support for snapshots if LVM2 thin provisioning is used for the bricks (see the sketch after this list)
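Snapshots are taken per volume once a volume exists. As a small illustration of the snapshot commands (the volume name glustervol1 and the snapshot name snap1 are only examples; the volume itself is created later in this howto):

# glustervol1 and snap1 are example names only
root@cluster-01:~# gluster snapshot create snap1 glustervol1
root@cluster-01:~# gluster snapshot list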

The different configurations available are (a short creation example follows the list):

  • Replicated Volume
  • Distributed Volume
  • Striped Volume
  • Replicated-Distributed Volume
  • Dispersed Volume
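The volume type is chosen when the volume is created. As a rough sketch, a two-node replicated volume could be created like this, assuming the brick directories set up later in this howto (the volume name glustervol1 is only an example); leaving out the replica option would create a distributed volume instead:

# glustervol1 is an example name; the bricks are the ones created later in this howto
root@cluster-01:~# gluster volume create glustervol1 replica 2 c-01:/gluster/brick c-02:/gluster/brick
root@cluster-01:~# gluster volume start glustervol1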

To read more about the different configurations, see the GlusterFS documentation.

What you need

  • Two hosts with two network cards
  • An empty 2 GB partition on each host to create the volume on
  • Two IP addresses from your production network
  • Two IP addresses for the heartbeat network
  • The GlusterFS packages, version 7.x

Hostnames and IPs

The following two tables show the IP addresses used on both hosts.

Production network

Clients connect to the Gluster cluster via an IP address from the production network.

Hostname     IP address
cluster-01   192.168.56.101
cluster-02   192.168.56.102

Heartbeat network

The heartbeat network is only used for communication between the Gluster nodes.

Hostname     IP address
c-01         192.168.57.101
c-02         192.168.57.102
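Both nodes must be able to resolve the heartbeat names. If you do not use DNS for the heartbeat network, a minimal /etc/hosts sketch on both nodes could look like this (an assumption; any working name resolution will do):

# /etc/hosts entries for the heartbeat network (example)
192.168.57.101   c-01 c-01.heartbeat.net
192.168.57.102   c-02 c-02.heartbeat.net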

The mountpoints

You need two mountpoints: one for the physical brick and one for the volume.

Mountpoint   What to mount
/gluster     The brick on each node
/glusterfs   The volume on each node

Setting up the LVM-partition

The first step is to set up the replicated Gluster volume with two nodes. As an example, a 2 GB partition is used.

root@cluster-01:~# fdisk /dev/sdc
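fdisk is interactive; create one primary partition spanning the whole disk (/dev/sdc is only the example device used here). A non-interactive alternative would be parted, for example:

# example only: /dev/sdc as in this howto
root@cluster-01:~# parted -s /dev/sdc mklabel msdos mkpart primary xfs 1MiB 100%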

root@cluster-01:~# apt install lvm2 thin-provisioning-tools

root@cluster-01:~# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created.

root@cluster-01:~# vgcreate glustergroup /dev/sdc1
  Volume group "glustergroup" successfully created

root@cluster-01:~# lvcreate -L 1950M -T glustergroup/glusterpool
  Using default stripesize 64,00 KiB
  Rounding up size to full physical extent 1,91 GiB
  Logical volume "glusterpool" created.

root@cluster-01:~# lvcreate -V 1900M -T glustergroup/glusterpool -n glusterv1
  Using default stripesize 64,00 KiB.
  Logical volume "glusterv1" created.
 
root@cluster-01:~# mkfs.xfs /dev/glustergroup/glusterv1

root@cluster-01:~# mkdir /gluster

root@cluster-01:~# mount /dev/glustergroup/glusterv1 /gluster

root@cluster-01:~# echo "/dev/glustergroup/glusterv1 /gluster xfs defaults 0 0" >> /etc/fstab

root@cluster-01:~# mkdir /gluster/brick
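Before continuing, the mount and the fstab entry can be checked with something like the following (just a sanity check, not strictly required):

root@cluster-01:~# df -h /gluster
root@cluster-01:~# grep gluster /etc/fstab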

Do all the steps on both nodes.

Creating the peer pool

Before you can create the volume, you have to set up a peer pool by adding the two hosts as peers to the pool. The next listing shows the command to add the second Gluster node to the pool. Run it on the first Gluster host:

root@cluster-01:~# gluster peer probe c-02
peer probe: success. 

If you try to add the peer, you may get one of the following error messages:

root@cluster-01:~# gluster peer probe c-02
Connection failed. Please check if gluster daemon is operational.

root@cluster-01:~# gluster peer probe c-02
peer probe: failed: Probe returned with Transport endpoint is not connected

The first error message indicates that glusterd is not running on the host on which you are trying to add the peer. Restart the daemon:

systemctl restart glusterd

The second error message indicates that the daemon is not running on the peer you are trying to add to the pool. Restart glusterd on the other node.
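To see whether glusterd is running on a node, and to make sure it is started at boot, something like this can be used (a general systemd sketch):

root@cluster-01:~# systemctl status glusterd
root@cluster-01:~# systemctl enable --now glusterd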

Once you have added the node c-02 on the node c-01, add the host c-01 to the trusted pool on node c-02:

root@cluster-02:~# gluster peer probe c-01
peer probe: success. 

Now you can check the status of each node and take a look at the list of all nodes with the gluster command:

root@cluster-01:~# gluster peer status
Number of Peers: 1

Hostname: c-02
Uuid: aca7d361-51df-4d1f-9b0f-4cf494029f21
State: Peer in Cluster (Connected)

root@cluster-02:~# gluster peer status
Number of Peers: 1

Hostname: c-01.heartbeat.net
Uuid: adafbf93-e716-4d99-bf89-e8044d57e3aa
State: Peer in Cluster (Connected)
Other names:
c-01

root@cluster-02:~# gluster pool list
UUID                                    Hostname                State
adafbf93-e716-4d99-bf89-e8044d57e3aa    c-01.heartbeat.net      Connected
aca7d361-51df-4d1f-9b0f-4cf494029f21    localhost               Connected

On each host you will find the information about the peer in /var/lib/glusterd/peers/<UUID>.
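Such a peer file is a small key/value file; on cluster-01 it could look roughly like this (the exact fields may vary between GlusterFS versions):

root@cluster-01:~# cat /var/lib/glusterd/peers/aca7d361-51df-4d1f-9b0f-4cf494029f21
# example content; fields may differ between versions
uuid=aca7d361-51df-4d1f-9b0f-4cf494029f21
state=3
hostname1=c-02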

Now all the peers that you will need for the Gluster volume have been added to the pool.