GlusterFS

Revision as of 17:58, 21 February 2020

Fundamentals

GlusterFS is a free and open-source scalable filesystem that can be used for cloud storage or to store data on a local network. It can be used to set up an active-active filesystem cluster with failover and load balancing via DNS round-robin. Together with CTDB it is possible to build a file server for a network with the following advantages:

  • Expandable without downtime
  • Mount Gluster volumes over the network
  • POSIX ACL support
  • Different configurations possible (depending on your needs)
  • Self-healing
  • Snapshot support if thinly provisioned LVM2 volumes are used for the bricks

The different configurations available are:

  • Replicated Volume
  • Distributed Volume
  • Striped Volume
  • Replicated-Distributed Volume
  • Dispersed Volume

To read more about the different configurations see:



What you need

  • Two hosts with two network cards each
  • An empty 2 GB partition on each host to create the volume on
  • Two IP addresses from your production network
  • Two IP addresses for the heartbeat network
  • The GlusterFS packages, version 7.x

Hostnames and IPs

The following two tables list the IP addresses used on both hosts.

production network

Clients connect to the Gluster cluster via an IP address from the production network.

Hostname IP-address
cluster-01 192.168.56.101
cluster-02 192.168.56.102

Heartbeat network

The heartbeat network is used only for communication between the Gluster nodes.

Hostname IP-address
c-01 192.168.57.101
c-02 192.168.57.102
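
To make the names from both tables resolvable on each node, the addresses can be entered in /etc/hosts. This is a sketch assuming no internal DNS is used; if your network has DNS for these names, skip it.

```
# /etc/hosts entries on each node (sketch)
192.168.56.101  cluster-01
192.168.56.102  cluster-02
192.168.57.101  c-01
192.168.57.102  c-02
```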

The mountpoints

You need two mountpoints: one for the physical brick and one for the volume.

Mountpoint What to mount
/gluster The brick on each node
/glusterfs For the volume on each node

Setting up the LVM-partition

The first step is to set up the replicated Gluster volume with two nodes. As an example, a 2 GB partition is used.

root@cluster-01:~# fdisk /dev/sdc

root@cluster-01:~# apt install lvm2 thin-provisioning-tools

root@cluster-01:~# pvcreate /dev/sdc1

 Physical volume "/dev/sdc1" successfully created.

root@cluster-01:~# vgcreate glustergroup /dev/sdc1

 Volume group "glustergroup" successfully created

root@cluster-01:~# lvcreate -L 1950M -T glustergroup/glusterpool

 Using default stripesize 64,00 KiB
 Rounding up size to full physical extent 1,91 GiB
 Logical volume "glusterpool" created.
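
The rounding message comes from LVM2 allocating space in whole physical extents; vgcreate defaults to 4 MiB extents. The reported 1,91 GiB can be reproduced with a little shell arithmetic:

```shell
# LVM2 rounds sizes up to a whole number of physical extents (4 MiB by default).
extent_mib=4
requested_mib=1950
extents=$(( (requested_mib + extent_mib - 1) / extent_mib ))
rounded_mib=$(( extents * extent_mib ))
echo "${rounded_mib} MiB"    # prints "1952 MiB", i.e. 1.91 GiB as lvcreate reports
```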

root@cluster-01:~# lvcreate -V 1900M -T glustergroup/glusterpool -n glusterv1

 Using default stripesize 64,00 KiB.
 Logical volume "glusterv1" created.
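
glusterv1 is a thin volume of 1900 MiB inside the 1952 MiB pool: it only consumes pool space as data is actually written, and a thin pool may even be overcommitted past 100%. A quick worst-case calculation:

```shell
# Worst case: a fully written glusterv1 (1900 MiB) would use this share
# of the 1952 MiB thin pool.
pool_mib=1952
vol_mib=1900
usage_pct=$(( vol_mib * 100 / pool_mib ))
echo "${usage_pct}%"
```

On the node itself, `lvs glustergroup` shows the actual Data% of the pool and the thin volume.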
 

root@cluster-01:~# mkfs.xfs /dev/glustergroup/glusterv1

root@cluster-01:~# mkdir /gluster2

root@cluster-01:~# mount /dev/glustergroup/glusterv1 /gluster2

root@cluster-01:~# echo /dev/glustergroup/glusterv1 /gluster2 xfs defaults 0 0 >> /etc/fstab
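
The echo-append above works, but running it a second time duplicates the entry in /etc/fstab. A guarded variant only appends when the line is missing; the sketch below demonstrates it on a scratch file, while on the node FSTAB would be /etc/fstab.

```shell
# Guarded append: add the mount entry only if it is not already present.
# Demonstrated on a scratch file; on the node, FSTAB would be /etc/fstab.
FSTAB=$(mktemp)
ENTRY='/dev/glustergroup/glusterv1 /gluster2 xfs defaults 0 0'
grep -qxF "$ENTRY" "$FSTAB" || echo "$ENTRY" >> "$FSTAB"
grep -qxF "$ENTRY" "$FSTAB" || echo "$ENTRY" >> "$FSTAB"   # second run is a no-op
COUNT=$(grep -cxF "$ENTRY" "$FSTAB")
echo "${COUNT} entry"
rm -f "$FSTAB"
```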

root@cluster-01:~# mkdir /gluster2/brick