This article explains how to configure data replication/synchronization using DRBD. According to http://drbd.linbit.com/:
DRBD® refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network-based RAID-1.
To configure DRBD, we need at least 2 machines with 2 hard drives in each machine. One hard drive holds the OS, and the other hard drive is configured for replication with the matching hard drive on the other machine.
In this guide, I am building 2 systems for replication. The systems use CentOS 6 64-bit. For easy understanding, here is my system information:
# Server 1
Hostname : node1
Domain : imanudin.net
IP Address : 192.168.80.91

# Server 2
Hostname : node2
Domain : imanudin.net
IP Address : 192.168.80.92

# Second hard drive: 1 GB on each machine for testing purposes
# Configure Network
First, we must configure the network on CentOS. Assuming the name of your network interface is eth0, do the following configuration on all nodes (node1 and node2) and adjust the values on node2.
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.80.91
NETMASK=255.255.255.0
DNS1=192.168.80.91
GATEWAY=192.168.80.11
DNS2=192.168.80.11
DNS3=8.8.8.8
USERCTL=no
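On node2 the file is the same except for the node's own address; a minimal sketch of only the lines that differ (assuming you keep the author's pattern of DNS1 pointing at the node itself):

IPADDR=192.168.80.92
DNS1=192.168.80.92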
Restart the network service and enable it at boot on all nodes (node1 and node2):
service network restart
chkconfig network on
# Disable SELinux & firewall on all nodes (node1 and node2)
Open the file /etc/sysconfig/selinux and change SELINUX=enforcing to SELINUX=disabled. Also disable the iptables and ip6tables services.
setenforce 0
service iptables stop
service ip6tables stop
chkconfig iptables off
chkconfig ip6tables off
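To verify, getenforce should now report Permissive (and Disabled after a reboot, once the /etc/sysconfig/selinux change takes effect), and the firewall services should be stopped:

getenforce
service iptables status
service ip6tables status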
# Configure /etc/hosts and hostname on all nodes (node1 and node2)
Open the file /etc/hosts and configure it as follows on all nodes (node1 and node2):
127.0.0.1 localhost
192.168.80.91 node1.imanudin.net node1
192.168.80.92 node2.imanudin.net node2
Run the following command as root and open the file /etc/sysconfig/network to change the hostname.
– On node1
hostname node1.imanudin.net
vi /etc/sysconfig/network
Change HOSTNAME so that it looks like below:
NETWORKING=yes HOSTNAME=node1.imanudin.net
– On node2
hostname node2.imanudin.net
vi /etc/sysconfig/network
Change HOSTNAME so that it looks like below:
NETWORKING=yes HOSTNAME=node2.imanudin.net
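To verify the hostname and name resolution, the following checks from node1 should succeed (and the mirror image from node2):

hostname -f
ping -c 2 node2.imanudin.net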
# Update repos and install DRBD packages on all nodes (node1 and node2)
wget http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
rpm -ivh elrepo-release-6-6.el6.elrepo.noarch.rpm
yum update
yum install kmod-drbd83 drbd83-utils
# Configure DRBD
– Configure the file /etc/drbd.conf (node1 only is enough)
vi /etc/drbd.conf
Fill it with the following lines:
global {
    dialog-refresh 1;
    usage-count yes;
    minor-count 5;
}
common {
    syncer {
        rate 10M;
    }
}
resource r0 {
    startup {
        wfc-timeout 30;
        outdated-wfc-timeout 20;
        degr-wfc-timeout 120;
    }
    protocol C;
    disk {
        on-io-error detach;
    }
    syncer {
        rate 10M;
        al-extents 257;
    }
    on node1.imanudin.net {
        device /dev/drbd0;
        address 192.168.80.91:7788;
        meta-disk internal;
        disk /dev/sdb;
    }
    on node2.imanudin.net {
        device /dev/drbd0;
        address 192.168.80.92:7788;
        meta-disk internal;
        disk /dev/sdb;
    }
}
Notes:
– r0 is the resource name for DRBD.
– /dev/sdb is the second drive on each machine that will be configured for DRBD. Please check with the fdisk -l command to make sure of the name of the second drive (see the quick check after these notes). It is recommended that the hard drive capacities on both machines be the same; if they differ, DRBD will size the replicated device to the smaller drive.
– It is recommended to use 2 NICs on each machine: 1 NIC for communication with clients and 1 NIC for communication between the servers over a crossover cable (DRBD replication traffic).
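A quick way to double-check the backing disk before handing it to DRBD (the device name /dev/sdb comes from the setup above; blkid printing nothing means no existing filesystem signature):

fdisk -l /dev/sdb
blkid /dev/sdb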
# Copy drbd.conf from node1 to node2 (run the following command on node1)
scp /etc/drbd.conf root@node2:/etc/drbd.conf
# Create metadata on all nodes (node1 and node2)
Run the following command
drbdadm create-md r0
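If create-md refuses to run because it finds an existing filesystem signature on the disk, a common workaround is to zero the first megabyte and retry. Warning: this destroys whatever is on /dev/sdb, so only do it on a blank test disk:

dd if=/dev/zero of=/dev/sdb bs=1M count=1
drbdadm create-md r0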
# Start DRBD services on all nodes (node1 and node2)
Run the following command
service drbd start
chkconfig drbd on
# Check DRBD status on all nodes (node1 and node2)
service drbd status
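You can also read the replication state directly from the kernel:

cat /proc/drbd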
# Configure primary DRBD on node1
Run the following command only on node1:
drbdsetup /dev/drbd0 primary --overwrite-data-of-peer
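Note: if your installed drbd-utils rejects this drbdsetup form (the syntax changed across versions; the status output below shows 8.4.5), the equivalent drbdadm command on 8.4 should be:

drbdadm primary --force r0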
Please wait until the devices on node1 and node2 are 100% synchronized. You can also check the progress with the following commands:
service drbd status
watch service drbd status
After synchronization finishes, you will see Connected and UpToDate on node1 and node2:
drbd driver loaded OK; device status:
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by phil@Build64R6, 2014-10-28 10:32:53
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C
# Format the DRBD device
In this section, I am using ext3. You can use another filesystem such as XFS, ext4, etc.:
mkfs.ext3 /dev/drbd0
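As a quick test on the primary node, you can mount the new filesystem (the mount point /data is just an example name; remember that only the primary node may mount /dev/drbd0):

mkdir -p /data
mount /dev/drbd0 /data
df -h /data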
Hooray, DRBD has finally been configured and the nodes are connected to each other.
Good luck, and hopefully this is useful 😀
Let's watch the video on YouTube
hi iman,
can you explain what resource and block device are in DRBD?
@Rom : Hi Rom,
Resource is just a name for the resource. You can change it to another name such as data-drbd or something else.
Block device is the hard disk that is a member of the resource.
Thanks Iman.. It is wonderful and very useful.
Hi Iman, how can I make multiple resource groups on the same nodes?
Please help
Hi,
You only need to add another resource section at the bottom of the configuration /etc/drbd.conf.
Note: please change r0 to another name, for example r1, and adjust the hard disk that will be configured for DRBD.
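A minimal sketch of a second resource, assuming a spare third disk /dev/sdc on both nodes and a different TCP port (7789) so it does not clash with r0:

resource r1 {
    protocol C;
    on node1.imanudin.net {
        device /dev/drbd1;
        disk /dev/sdc;
        address 192.168.80.91:7789;
        meta-disk internal;
    }
    on node2.imanudin.net {
        device /dev/drbd1;
        disk /dev/sdc;
        address 192.168.80.92:7789;
        meta-disk internal;
    }
}

Then create its metadata and bring it up with drbdadm create-md r1 and drbdadm up r1.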
Getting this error, please help:
drbd.d/zimbramysql.res:21: Parse error: 'disk | device | address | meta-disk | flexible-meta-disk' expected,
but got 'volume'
Hi,
Did you change any configuration beforehand?
Hi,
I have run all the commands on both servers. After that I mounted the primary disk to my data folder successfully, but when I mounted the secondary disk to my data folder I got this error:
mount: /dev/drbd0 is write-protected, mounting read-only
mount: mount /dev/drbd0 on /vol failed: Wrong medium type
I am not able to mount the secondary disk.
I am using CentOS 7.
Regards,
Vipin
Hi Vipin,
The secondary device cannot be mounted. Only the primary device can be mounted.
Please try the next guide here: https://imanudin.net/2015/03/23/testing-data-replicationsynchronize-on-drbd/
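If you want node2 to take over manually, swap the roles first; a minimal sketch (first two commands on node1, the rest on node2, using the /vol mount point from your error message):

umount /dev/drbd0          # on node1
drbdadm secondary r0       # on node1
drbdadm primary r0         # on node2
mount /dev/drbd0 /vol      # on node2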
Hi Iman,
I want my devices to stay in sync at all times. Is there another way to do that?
Or can we make both devices primary and sync them?
Regards,
Vipin
Hi Vipin,
Please try GlusterFS to do that: https://www.gluster.org/
hi iman, can I create this in an OpenVZ container?
I'm using a virtual disk.
thanks
Hi Iman,
I created a DRBD setup with one node as primary and the other node as secondary. I tested manually whether data is replicated, and it was successful.
What should be done to ensure there is an automatic failover of the replicated data to node2 if node1 is powered off?
Hi Sachin,
You can see the example at this link : https://imanudin.net/2015/03/24/how-to-install-configure-zimbra-high-availability-ha/
Thank you very much, brother…!
Master to Master DRBD Sample:
global {
dialog-refresh 1;
usage-count no;
minor-count 5;
}
common {
syncer {
rate 1000M;
}
}
resource r0 {
startup {
wfc-timeout 30;
outdated-wfc-timeout 20;
degr-wfc-timeout 120;
become-primary-on both;
}
protocol C;
disk {
on-io-error detach;
}
net {
allow-two-primaries;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
}
syncer {
rate 1000M;
al-extents 257;
}
on nfs-active.example.com {
device /dev/drbd0;
address 192.168.0.31:7788;
meta-disk internal;
disk /dev/volg1/lv_data;
}
on nfs-passive.example.com {
device /dev/drbd0;
address 192.168.0.32:7788;
meta-disk internal;
disk /dev/volg1/lv_data;
}
}
#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";
Hi Chanaka,
Thanks for sharing 😉
hi iman,
I have created two nodes. On node1 I am not able to start the DRBD service; I am getting the following error.
[root@node1 ~]# systemctl start drbd
Job for drbd.service failed because the control process exited with error code. See “systemctl status drbd.service” and “journalctl -xe” for details.
When I tried to check the status of the DRBD service, I got this error:
[root@node1 ~]# systemctl status drbd.service
● drbd.service – DRBD — please disable. Unless you are NOT using a cluster manager.
Loaded: loaded (/usr/lib/systemd/system/drbd.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2017-04-21 17:13:54 IST; 7min ago
Process: 6971 ExecStart=/lib/drbd/drbd start (code=exited, status=5)
Main PID: 6971 (code=exited, status=5)
Apr 21 17:13:54 node1.server.com systemd[1]: Starting DRBD — please disable….
Apr 21 17:13:54 node1.server.com drbd[6971]: Can not load the drbd module.
Apr 21 17:13:54 node1.server.com systemd[1]: drbd.service: main process exit…D
Apr 21 17:13:54 node1.server.com systemd[1]: Failed to start DRBD — please ….
Apr 21 17:13:54 node1.server.com systemd[1]: Unit drbd.service entered faile….
Apr 21 17:13:54 node1.server.com systemd[1]: drbd.service failed.
regards
amit halder
Hi Amit,
The DRBD module is not loaded.
Please check and make sure the DRBD module has been loaded.
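A quick check (modprobe and lsmod are standard commands; the module name drbd comes from your error message):

modprobe drbd
lsmod | grep drbd

If modprobe fails on CentOS 7, the module is probably not installed; the ELRepo kmod package (e.g. kmod-drbd84) should provide it.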
Hi Amit,
Does DRBD work on CentOS 7?
And another question:
Can I work with LVM volumes on DRBD?
Regards
Hi Andres,
Yes, DRBD should work on CentOS 7, and you can use LVM volumes as DRBD devices.
Hello,
when I restart the server, /dev/drbd0 does not start and I need to mount it manually. These are my configuration files; I would appreciate your advice:
/etc/drbd.conf
global { usage-count no; }
common { syncer { rate 35M; } }
resource r0 {
protocol C;
startup {
wfc-timeout 15;
become-primary-on both;
}
net {
cram-hmac-alg sha1;
shared-secret "secreto";
allow-two-primaries;
}
on zimbra1 {
device /dev/drbd0;
disk /dev/sda4;
address 172.16.0.3:7788;
meta-disk internal;
}
on zimbra2 {
device /dev/drbd0;
disk /dev/sda4;
address 172.16.0.4:7788;
meta-disk internal;
}
}
/etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/sda2 during installation
UUID=f995174f-9231-487f-8d62-7acc23654f3a / ext4 errors=remount-ro 0 1
# swap was on /dev/sda3 during installation
UUID=716c65c0-2a8d-4b01-bacf-897272362db1 none swap sw 0 0
/dev/drbd0 /opt ocfs2 noauto,noatime,nodiratime,_netdev 0 0
Best Regards;
Hi Giovanni,
Please try to use my drbd configuration from here : https://imanudin.net/2015/03/22/how-to-configure-data-replicationsynchronize-on-centos-6-using-drbd/
Hello,
I got this error. I am installing on CentOS 7.
[root@node1 ~]# drbdadm create-md r0
/etc/drbd.conf:12: Parse error: 'a syncer option keyword' expected,
but got 'rate'
The conf file is:
#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";
global {
dialog-refresh 1;
usage-count yes;
minor-count 5;
}
common {
syncer {
rate 10M;
}
}
resource r0 {
startup {
wfc-timeout 30;
outdated-wfc-timeout 20;
degr-wfc-timeout 120;
}
protocol C;
disk {
on-io-error detach;
}
syncer {
rate 10M;
al-extents 257;
}
on node1.imanudin.net {
device /dev/drbd0;
address 10.10.3.111:7788;
meta-disk internal;
disk /dev/sdb;
}
on node2.imanudin.net {
device /dev/drbd0;
address 10.10.3.7:7788;
meta-disk internal;
disk /dev/sdb;
}
}
Hi Ashok,
I have not tried it on CentOS 7, so I cannot give you the solution. But can you test on CentOS 6 to see whether the problem still occurs?
Good article, Iman.
"Faith (iman) cannot be bought or sold…
It is not found at the edge of the beach…"
Good article with clear steps.
With existing data on a device, how can one do the setup with DRBD so that the data on the device stays intact?
Hi Sandeep,
You should use a second hard disk to sync with the hard disk on the other server. Then you can rsync all the existing data onto the DRBD device.
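A minimal sketch, assuming the existing data lives in /opt/data and the DRBD device is mounted at /data on the primary node (both paths are just examples):

rsync -avh /opt/data/ /data/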
hi
I am going to deploy as per your steps on CentOS 7. I got this error; can you please tell me how to get out of it?
[root@cloud1 ~]# drbdadm create-md r0
drbd.d/global_common.conf:5: conflicting use of global section 'global' …
/etc/drbd.conf:3: global section 'global' first used here.
[root@cloud1 ~]#
Hi,
Your problem is the conflicting global section: on CentOS 7, /etc/drbd.conf includes drbd.d/global_common.conf, which already contains a global section. Keep the global section in only one of the two files (or remove the include lines) so that it is defined once.