GlusterFS
Fundamentals
GlusterFS is a free and open source scalable filesystem. It can be used for cloud storage or to store data in a local network, and it makes it possible to set up an active-active filesystem cluster with failover and load balancing via DNS round-robin. Together with CTDB it is possible to build a file server for a network with the following advantages:
- Expandable without downtime
- Mount Gluster volumes via the network
- POSIX ACL support
- Different configurations possible (depending on your needs)
- Self-healing
- Support for snapshots if thinly provisioned LVM2 is used for the bricks
The different configurations available are:
- Replicated Volume
- Distributed Volume
- Striped Volume
- Replicated-Distributed Volume
- Dispersed Volume
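The volume type is chosen when the volume is created. As a rough illustration only, the type is selected by the options passed to gluster volume create; the volume name testvol is a placeholder for this sketch, and the rest of this article only uses a replicated volume:

# Replicated volume: every file is stored on all bricks
root@cluster-01:~# gluster volume create testvol replica 2 c-01:/gluster/brick c-02:/gluster/brick
# Distributed volume (the default when no type is given): files are spread over the bricks without redundancy
root@cluster-01:~# gluster volume create testvol c-01:/gluster/brick c-02:/gluster/brick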
To read more about the different configurations see:
Note: This article is part of the CTDB setup, so it only shows how to set up a replicated volume to be used with CTDB. The setup will be a two-node replicated volume with 2GB of disk space, so it will be easy to reproduce.
What you need
- Two hosts with two network cards
- An empty partition of 2GB to create the volume on each host
- Two IP addresses from your production network
- Two IP addresses for the heartbeat network
- The GlusterFS packages version 7.x
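As a sketch, on a Debian-based system the GlusterFS server package could be installed and the daemon enabled like this (package names differ between distributions, and version 7.x may only be available from the upstream Gluster repositories):

root@cluster-01:~# apt install glusterfs-server
root@cluster-01:~# systemctl enable --now glusterd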
Hostnames and IPs
Here you see two tables with the IP addresses used on both hosts.
Production network
When a client connects to the Gluster cluster, an IP address from the production network is used.
Hostname | IP-address | Network name |
---|---|---|
cluster-01 | 192.168.56.101 | example.net |
cluster-02 | 192.168.56.102 | example.net |
Heartbeat network
The heartbeat network is used only for the communication between the Gluster nodes.
Hostname | IP-address |
---|---|
c-01 | 192.168.57.101 |
c-02 | 192.168.57.102 |
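The heartbeat names have to be resolvable on both nodes. If you do not manage them via DNS, a minimal sketch of the corresponding /etc/hosts entries (using the addresses from the table above) could look like this:

# /etc/hosts on both nodes -- heartbeat names
192.168.57.101   c-01
192.168.57.102   c-02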
The mountpoints
You need two mount points: one for the physical brick and one for the volume.
Mountpoint | What to mount |
---|---|
/gluster | The brick on each node |
/glusterfs | For the volume on each node |
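A minimal sketch of creating both mount points on each node (/gluster is also created during the LVM setup below, /glusterfs will be used later to mount the volume):

root@cluster-01:~# mkdir /gluster /glusterfs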
Setting up the LVM-partition
Warning: Be sure that you are working with the right partition; you will lose all data if you choose the wrong partition.
The first step will be setting up the replicated Gluster volume with two nodes. As an example, a partition of 2GB is used.
root@cluster-01:~# fdisk /dev/sdc
root@cluster-01:~# apt install lvm2 thin-provisioning-tools
root@cluster-01:~# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created.
root@cluster-01:~# vgcreate glustergroup /dev/sdc1
  Volume group "glustergroup" successfully created
root@cluster-01:~# lvcreate -L 1950M -T glustergroup/glusterpool
  Using default stripesize 64,00 KiB
  Rounding up size to full physical extent 1,91 GiB
  Logical volume "glusterpool" created.
root@cluster-01:~# lvcreate -V 1900M -T glustergroup/glusterpool -n glusterv1
  Using default stripesize 64,00 KiB.
  Logical volume "glusterv1" created.
root@cluster-01:~# mkfs.xfs /dev/glustergroup/glusterv1
root@cluster-01:~# mkdir /gluster
root@cluster-01:~# mount /dev/glustergroup/glusterv1 /gluster
root@cluster-01:~# echo /dev/glustergroup/glusterv1 /gluster xfs defaults 0 0 >> /etc/fstab
root@cluster-01:~# mkdir /gluster/brick
Do all the steps on both nodes.
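To check the result on each node you can, for example, look at the volume group, the thin pool and logical volume, and the mounted filesystem (the exact output will differ; this is only a quick sanity check):

root@cluster-01:~# vgs glustergroup
root@cluster-01:~# lvs glustergroup
root@cluster-01:~# df -h /gluster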
Creating the peer pool
Note: Make sure that you are using the hostnames from the heartbeat network, so that the communication between the nodes uses the heartbeat network.
Before you can create the volume, you have to set up a peer pool by adding the two hosts as peers to the pool. The next listing shows the command to add the second Gluster node to the pool. You have to run this on the first Gluster host:
root@cluster-01:~# gluster peer probe c-02
peer probe: success.
If you try to add the peer, you may get one of the following error messages:
root@cluster-01:~# gluster peer probe c-02
Connection failed. Please check if gluster daemon is operational.

root@cluster-01:~# gluster peer probe c-02
peer probe: failed: Probe returned with Transport endpoint is not connected
The first error message indicates that glusterd is not running on the host from which you are trying to add the peer. Restart the daemon:
systemctl restart glusterd
The second error message indicates that glusterd is not running on the peer you are trying to add to the pool. Restart glusterd on the other node.
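Whether glusterd is running can be checked with systemd on both nodes, for example:

root@cluster-01:~# systemctl status glusterd
root@cluster-02:~# systemctl status glusterd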
After you have successfully added the node c-02 on node c-01, add the host c-01 to the trusted pool on node c-02:
root@cluster-02:~# gluster peer probe c-01
peer probe: success.
Now you can check the status of each node and list all nodes in the pool with the gluster command:
root@cluster-01:~# gluster peer status
Number of Peers: 1

Hostname: c-02
Uuid: aca7d361-51df-4d1f-9b0f-4cf494029f21
State: Peer in Cluster (Connected)
root@cluster-02:~# gluster peer status
Number of Peers: 1

Hostname: c-01.heartbeat.net
Uuid: adafbf93-e716-4d99-bf89-e8044d57e3aa
State: Peer in Cluster (Connected)
Other names:
c-01
root@cluster-02:~# gluster pool list
UUID                                    Hostname                State
adafbf93-e716-4d99-bf89-e8044d57e3aa    c-01.heartbeat.net      Connected
aca7d361-51df-4d1f-9b0f-4cf494029f21    localhost               Connected
On each host you will find the information about the peers in /var/lib/glusterd/peers/<UUID>.
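For example, on cluster-01 you can list the peer directory and look at the file for c-02 (its UUID is shown in the peer status above); depending on the version, the file contains, among other things, the peer's UUID, state, and hostnames:

root@cluster-01:~# ls /var/lib/glusterd/peers/
aca7d361-51df-4d1f-9b0f-4cf494029f21
root@cluster-01:~# cat /var/lib/glusterd/peers/aca7d361-51df-4d1f-9b0f-4cf494029f21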
Now all the peers you need for the Gluster volume have been added to the pool.
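Creating and mounting the replicated volume is the next step and is only sketched here; the volume name gv0 is a placeholder, and gluster will warn about the split-brain risk of a plain two-way replica:

root@cluster-01:~# gluster volume create gv0 replica 2 c-01:/gluster/brick c-02:/gluster/brick
root@cluster-01:~# gluster volume start gv0
root@cluster-01:~# mount -t glusterfs c-01:/gv0 /glusterfs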