6.0. DRBD

1.0. Configuring Samba

2.0. Configuring LDAP

3.0. Initialization LDAP Database

4.0. User Management

5.0. Heartbeat HA Configuration

6.0. DRBD

7.0. BIND DNS



6.1. Requirements

[Diagram: DRBD configuration, primary and secondary node]

DRBD is a kernel module which has the ability to network two machines together to provide RAID 1 over the LAN. It is assumed that each machine has an identical dedicated drive; all data on these drives will be destroyed.

If you are updating your kernel or your version of DRBD, make sure DRBD is stopped on both machines. Never attempt to run different versions of DRBD; this also means both machines need the same kernel.
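If you do need to upgrade, a minimal sketch of shutting DRBD down first might look like the following; it assumes the init script installed by the DRBD RPMs, as used elsewhere in this guide.

[root@node1 ~]# service drbd stop
[root@node2 ~]# service drbd stop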

You will need to install the DRBD kernel module. We will build our own RPM kernel modules so they are optimized for our architecture.

I have tested many different kernels with DRBD and some are not stable, so you will need to check Google to make sure your kernel is compatible with the particular DRBD release; most of the time this isn't an issue.

Both of the following kernels are recommended for Fedora Core 4; I have used them with DRBD versions up to drbd-0.7.23.

kernel-smp-2.6.14-1.1656_FC4
kernel-smp-2.6.11-1.1369_FC4


Please browse the list at http://www.linbit.com/support/drbd-current/ and look for the available packages.
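Before building anything it can help to confirm exactly which kernel each node is running, so the module is built against and installed on the same version; a quick check, assuming the SMP kernel packages used in this guide (the version reported by uname should match the version embedded in the drbd-km RPM name later on).

[root@node1 ~]# uname -r
[root@node1 ~]# rpm -q kernel-smp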


Step1.

Get a serial cable and connect it to each node's COM1 port. Execute the following; you may see a lot of garbage on the screen.

[root@node1 ~]# cat </dev/ttyS0 

Step2.

You may have to repeat the command below a couple of times in rapid succession to see the output on node1.

[root@node2 ~]# echo hello >/dev/ttyS0


6.2. Installation

Step1.

Extract the latest stable version of DRBD.

[root@node1 stable]# tar zxvf drbd-0.7.20.tar.gz
[root@node1 stable]# cd drbd-0.7.20
[root@node1 drbd-0.7.20]#


Step2.

It is nice to make your own RPM for your distribution; it makes upgrades seamless.

This will give us an RPM built specifically for our kernel; it may take some time.

[root@node1 drbd-0.7.20]# make
[root@node1 drbd-0.7.20]# make rpm


Step3.

Change into the directory where the RPMs were built and list them.

[root@node1 drbd-0.7.20]# cd dist/RPMS/i386/

[root@node1 i386]# ls
drbd-0.7.20-1.i386.rpm
drbd-debuginfo-0.7.20-1.i386.rpm
drbd-km-2.6.14_1.1656_FC4smp-0.7.20-1.i386.rpm


Step4.

We will now install DRBD and our Kernel module which we built earlier.

[root@node1 i386]# rpm -Uvh drbd-0.7.20-1.i386.rpm drbd-debuginfo-0.7.20-1.i386.rpm \
 drbd-km-2.6.14_1.1656_FC4smp-0.7.20-1.i386.rpm


Step5.

Log in to node2, the backup domain controller, and do the same.
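A condensed sketch of that same sequence on node2, assuming you have copied the drbd-0.7.20.tar.gz tarball there and that node2 runs the same kernel as node1:

[root@node2 stable]# tar zxvf drbd-0.7.20.tar.gz
[root@node2 stable]# cd drbd-0.7.20
[root@node2 drbd-0.7.20]# make
[root@node2 drbd-0.7.20]# make rpm
[root@node2 drbd-0.7.20]# cd dist/RPMS/i386/
[root@node2 i386]# rpm -Uvh drbd-0.7.20-1.i386.rpm drbd-debuginfo-0.7.20-1.i386.rpm \
 drbd-km-2.6.14_1.1656_FC4smp-0.7.20-1.i386.rpm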


6.3. Configuration

In the examples throughout this document we have linked /dev/hdd1 to /dev/drbd0; yours, however, may be a different device, for example a SCSI disk.

All data on the device /dev/hdd will be destroyed.


Step1.

We are going to create a partition, /dev/hdd1, on /dev/hdd using fdisk.

[root@node1]# fdisk /dev/hdd

Command (m for help): m
Command action

  a   toggle a bootable flag
  b   edit bsd disklabel
  c   toggle the dos compatibility flag
  d   delete a partition
  l   list known partition types
  m   print this menu
  n   add a new partition
  o   create a new empty DOS partition table
  p   print the partition table
  q   quit without saving changes
  s   create a new empty Sun disklabel
  t   change a partition's system id
  u   change display/entry units
  v   verify the partition table
  w   write table to disk and exit
  x   extra functionality (experts only)

Command (m for help): d

No partition is defined yet!

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-8677, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-8677, default 8677):
Using default value 8677
Command (m for help): w

Step2.

Now log in to node2, the backup domain controller, and partition /dev/hdd (or your chosen device) with fdisk exactly as above.
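Before moving on it is worth confirming that the partitions on the two nodes really are the same size; a quick check, assuming the /dev/hdd example device used above:

[root@node1 ~]# fdisk -l /dev/hdd
[root@node2 ~]# fdisk -l /dev/hdd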


6.3.1. drbd.conf

Create this file on both your master and slave servers. It should be identical on both, although that is not a strict requirement; as long as the partition size is the same, any device and mount point can be used.


Step1.

The file below is fairly self-explanatory; you can see the real disk (/dev/hdd1) linked to the DRBD kernel module device (/dev/drbd0).

[root@node1]# vi /etc/drbd.conf

# Datadrive (/data) /dev/hdd1 80GB

resource drbd1 {
 protocol C;
 disk {
   on-io-error panic;
 }
 net {
   max-buffers 2048;
   ko-count 4;
   on-disconnect reconnect;
 }
 syncer {
   rate 700000;
 }
 on node1 {
   device    /dev/drbd0;
   disk      /dev/hdd1;
   address   10.0.0.1:7789;
   meta-disk internal;
 }
 on node2 {
   device    /dev/drbd0;
   disk      /dev/hdd1;
   address   10.0.0.2:7789;
   meta-disk internal;
 }
}

Step2.

Copy the configuration file to node2.

[root@node1]# scp /etc/drbd.conf root@node2:/etc/
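Although the two copies are not strictly required to be identical, a simple way to confirm that the copy succeeded and that both nodes hold the same file is to compare checksums; a minimal sketch:

[root@node1]# md5sum /etc/drbd.conf
[root@node2]# md5sum /etc/drbd.conf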


6.3.2. Initialization

In the following steps we will configure the disks to synchronize and choose a master node.

Step1.

On the Primary Domain Controller

[root@node1]# service drbd start

On the Backup Domain Controller

[root@node2]# service drbd start


Step2.

[root@node1]# service drbd status
drbd driver loaded OK; device status:
version: 0.7.17 (api:77/proto:74)
SVN Revision: 2093 build by root@node1, 2006-04-23 14:40:20
0: cs:Connected st:Secondary/Secondary ld:Inconsistent
   ns:25127936 nr:3416 dw:23988760 dr:4936449 al:19624 bm:1038 lo:0 pe:0 ua:0 ap:0

You can see both devices are ready and waiting for a primary to be promoted, which will trigger an initial synchronization to the secondary device.
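The init script's status output is essentially the contents of /proc/drbd, so you can also read the same information directly if you prefer:

[root@node1]# cat /proc/drbd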


Step3.

Stop the heartbeat service on both nodes.
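Using the same init scripts as elsewhere in this guide, that is simply:

[root@node1]# service heartbeat stop
[root@node2]# service heartbeat stop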


Step4.

We are now telling DRBD to make node1 the primary drive.

[root@node1]#  drbdadm -- --do-what-I-say primary all
[root@node1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 0.7.23 (api:79/proto:74)
SVN Revision: 2686 build by root@node1, 2007-01-23 20:26:13
0: cs:SyncSource st:Primary/Secondary ld:Consistent
   ns:67080 nr:85492 dw:91804 dr:72139 al:9 bm:268 lo:0 pe:30 ua:2019 ap:0
       [==>.................] sync'ed: 12.5% (458848/520196)K
       finish: 0:01:44 speed: 4,356 (4,088) K/sec

Step5.

Create a filesystem on our DRBD device.

[root@node1]# mkfs.ext3 /dev/drbd0


6.4. Testing

We have a two-node cluster replicating data; it's time to test a failover.


Step1.

Start the heartbeat service on both nodes.
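As with stopping it in the previous section, this is just the init script on each node:

[root@node1]# service heartbeat start
[root@node2]# service heartbeat start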


Step2.

On node1 we can see the status of DRBD.

[root@node1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 0.7.23 (api:79/proto:74)
0: cs:Connected st:Primary/Secondary ld:Consistent
   ns:1536 nr:0 dw:1372 dr:801 al:4 bm:6 lo:0 pe:0 ua:0 ap:0
[root@node1 ~]#

On node2 we can see the status of DRBD.

[root@node2 ~]# service drbd status
drbd driver loaded OK; device status:
version: 0.7.23 (api:79/proto:74)
SVN Revision: 2686 build by root@node2, 2007-01-23 20:26:03
0: cs:Connected st:Secondary/Primary ld:Consistent
   ns:0 nr:1484 dw:1484 dr:0 al:0 bm:6 lo:0 pe:0 ua:0 ap:0
[root@node2 ~]#

That all looks good; we can see the devices are consistent and ready for use.


Step3.

Now let’s check the mount point we created in the heartbeat haresources file.

We can see heartbeat has successfully mounted /dev/drbd0 on the /data directory; of course, your device will not have any data on it yet.

[root@node1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      35G   14G   20G  41% /
/dev/hdc1              99M   21M   74M  22% /boot
/dev/shm              506M     0  506M   0% /dev/shm
/dev/drbd0             74G   37G   33G  53% /data
[root@node1 ~]#


Step4.

Login to node1 and execute the following command; once heartbeat is stopped it should only take a few seconds to migrate the services to node2.


[root@node1 ~]# service heartbeat stop
Stopping High-Availability services:
                                         [  OK  ]

[root@node1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 0.7.23 (api:79/proto:74)
SVN Revision: 2686 build by root@node1, 2007-01-23 20:26:13
0: cs:Connected st:Secondary/Primary ld:Consistent
   ns:5616 nr:85492 dw:90944 dr:2162 al:9 bm:260 lo:0 pe:0 ua:0 ap:0

We can see DRBD has changed state to Secondary on node1.


Step5.

Now let's check the status of DRBD on node2; we can see it has changed state and become the primary.

[root@node2 ~]# service drbd status
drbd driver loaded OK; device status:
version: 0.7.23 (api:79/proto:74)
 SVN Revision: 2686 build by root@node2, 2007-01-23 20:26:03
0: cs:Connected st:Primary/Secondary ld:Consistent
   ns:4 nr:518132 dw:518136 dr:17 al:0 bm:220 lo:0 pe:0 ua:0 ap:0
1: cs:Connected st:Primary/Secondary ld:Consistent
   ns:28 nr:520252 dw:520280 dr:85 al:0 bm:199 lo:0 pe:0 ua:0 ap:0

Check that node2 has mounted the device.

[root@node2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      35G   12G   22G  35% /
/dev/hdc1              99M   17M   78M  18% /boot
/dev/shm              506M     0  506M   0% /dev/shm
/dev/hdh1             111G   97G  7.6G  93% /storage
/dev/drbd0             74G   37G   33G  53% /data
[root@node2 ~]#


Step6.

Finally, start the heartbeat service on node1 and make sure that all services migrate back.
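A short sketch of that final check, reusing the commands from the earlier steps; the exact output will depend on your own devices and data:

[root@node1 ~]# service heartbeat start
[root@node1 ~]# service drbd status   # node1 should return to Primary/Secondary
[root@node1 ~]# df -h                 # /dev/drbd0 should be mounted on /data again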