How To Configure Data Replication/Synchronization on CentOS 6 Using DRBD


This article will explain how to configure data replication/synchronization using DRBD. According to the DRBD site (http://drbd.linbit.com/):

DRBD® refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network based raid-1.

To configure DRBD, we need at least 2 machines with 2 hard drives each. One hard drive holds the OS, and the other is used for replication with the corresponding drive on the other machine.

[Figure: DRBD topology — two nodes mirroring a block device over the network]

In this guide, I am building 2 systems for replication, both running CentOS 6 64-bit. For clarity, here is my system information:

# Server 1
Hostname   : node1
Domain     : imanudin.net
IP Address : 192.168.80.91

# Server 2
Hostname   : node2
Domain     : imanudin.net
IP Address : 192.168.80.92

# Second hard drive of 1 GB on each machine, for testing purposes

# Configure Network

First, we must configure the network on CentOS. Assuming your network interface is named eth0, apply the following configuration on both nodes (node1 and node2), adjusting the IP address on node2:

vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.80.91
NETMASK=255.255.255.0
DNS1=192.168.80.91
GATEWAY=192.168.80.11
DNS2=192.168.80.11
DNS3=8.8.8.8
USERCTL=no

Restart the network service and enable it at boot on both nodes (node1 and node2):

service network restart
chkconfig network on
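
Before continuing, it is worth confirming the address, route, and node-to-node reachability. A quick check, assuming eth0 and the addresses above (run the ping from node1):

```shell
# Confirm the address actually assigned to eth0
ip addr show eth0 | grep "inet "

# Confirm the default route points at the gateway
ip route | grep "^default"

# Confirm the other node is reachable (from node1)
ping -c 3 192.168.80.92
```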

# Disable SELinux & firewall on all nodes (node1 and node2)

Open the file /etc/sysconfig/selinux and change SELINUX=enforcing to SELINUX=disabled. Also disable the iptables and ip6tables services:

setenforce 0
service iptables stop
service ip6tables stop
chkconfig iptables off
chkconfig ip6tables off
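
The edit to /etc/sysconfig/selinux can also be scripted. A one-line sketch (sed keeps a backup copy with the .bak suffix):

```shell
# Change SELINUX=enforcing to SELINUX=disabled permanently
sed -i.bak 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux

# Confirm the change
grep "^SELINUX=" /etc/sysconfig/selinux
```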

# Configure /etc/hosts and hostname on all nodes (node1 and node2)

Open the file /etc/hosts and configure it as follows on both nodes (node1 and node2):

127.0.0.1     localhost
192.168.80.91 node1.imanudin.net node1
192.168.80.92 node2.imanudin.net node2

Run the following command as root, then open the file /etc/sysconfig/network to change the hostname.

On node1

hostname node1.imanudin.net
vi /etc/sysconfig/network

Change HOSTNAME so it looks like this:

NETWORKING=yes
HOSTNAME=node1.imanudin.net

On node2

hostname node2.imanudin.net
vi /etc/sysconfig/network

Change HOSTNAME so it looks like this:

NETWORKING=yes
HOSTNAME=node2.imanudin.net
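
After setting the hostname and /etc/hosts on both nodes, confirm that each node resolves the other by name, since DRBD will match these names later. For example, on node1:

```shell
# The local hostname should match the value set above
hostname

# node2 should resolve via /etc/hosts
getent hosts node2.imanudin.net

# And it should answer pings by name
ping -c 2 node2
```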

# Update repositories and install the DRBD packages on all nodes (node1 and node2)

wget http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
rpm -ivh elrepo-release-6-6.el6.elrepo.noarch.rpm
yum update
yum install kmod-drbd83 drbd83-utils
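
kmod-drbd83 ships a kernel module, so before configuring anything it is worth confirming that the module actually loads against the running kernel:

```shell
# Load the DRBD kernel module and confirm it registered
modprobe drbd
lsmod | grep -i drbd

# Once the module is loaded, /proc/drbd reports the version
cat /proc/drbd
```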

# Configure DRBD

– Configure the file /etc/drbd.conf (on node1 only; it will be copied to node2 later)

vi /etc/drbd.conf

Fill it with the following content:

global {
    dialog-refresh 1;
    usage-count yes;
    minor-count 5;
}
common {
    syncer {
        rate 10M;
    }
}
resource r0 {
    startup {
        wfc-timeout 30;
        outdated-wfc-timeout 20;
        degr-wfc-timeout 120;
    }
    protocol C;
    disk {
        on-io-error detach;
    }
    syncer {
        rate 10M;
        al-extents 257;
    }
    on node1.imanudin.net {
        device /dev/drbd0;
        address 192.168.80.91:7788;
        meta-disk internal;
        disk /dev/sdb;
    }
    on node2.imanudin.net {
        device /dev/drbd0;
        address 192.168.80.92:7788;
        meta-disk internal;
        disk /dev/sdb;
    }
}

Notes:

r0 is the resource name for DRBD.

/dev/sdb is the second drive on each machine that will be configured for DRBD. Check with the fdisk -l command to make sure of the name of the second drive. It is recommended that the drives on both machines have the same capacity; if they differ, the usable size is limited to the smaller drive.

It is also recommended to use 2 NICs per machine: one for communication with clients, and one for DRBD replication traffic between the servers over a crossover cable.
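
Before creating metadata, it helps to confirm the name and the exact size of the backing disk on both nodes; a quick sketch:

```shell
# Confirm the second disk exists and check its geometry
fdisk -l /dev/sdb

# Size in bytes, handy for comparing the two nodes exactly
blockdev --getsize64 /dev/sdb
```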

# Copy drbd.conf from node1 to node2 (run the following command on node1)

scp /etc/drbd.conf root@node2:/etc/drbd.conf

# Create metadata on all nodes (node1 and node2)

Run the following command

drbdadm create-md r0
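
If the second disk previously held a filesystem, create-md may refuse to write its metadata. A common workaround (destructive; it assumes /dev/sdb holds nothing you need) is to zero the start of the device and retry:

```shell
# WARNING: this destroys any existing data on /dev/sdb
dd if=/dev/zero of=/dev/sdb bs=1M count=128

# Retry metadata creation for resource r0
drbdadm create-md r0
```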

# Start DRBD services on all nodes (node1 and node2)
Run the following command

service drbd start
chkconfig drbd on

# Check DRBD status all nodes (node1 and node2)

service drbd status

# Configure primary DRBD on node1
Run the following command on node1 only:

drbdsetup /dev/drbd0 primary --overwrite-data-of-peer

Please wait until the devices are 100% synchronized between node1 and node2. You can also check the progress with the following commands:

service drbd status
watch service drbd status

After synchronization finishes, you will see Connected and UpToDate on both node1 and node2:

drbd driver loaded OK; device status:
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by phil@Build64R6, 2014-10-28 10:32:53
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C
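
The same information is available from /proc/drbd, which also shows a progress bar while the initial sync is running:

```shell
# Connection state (cs), roles (ro) and disk states (ds)
cat /proc/drbd

# Refresh the view every 2 seconds until the sync completes
watch cat /proc/drbd
```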

# Format device DRBD

In this section I am using ext3; you can use another filesystem such as XFS or ext4:

mkfs.ext3 /dev/drbd0
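
The device can now be mounted, but with DRBD 8.3 only on the node that is currently Primary. A short replication test, assuming a mount point /mnt/drbd (a hypothetical path for this example):

```shell
# On node1 (Primary): mount the device and write a test file
mkdir -p /mnt/drbd
mount /dev/drbd0 /mnt/drbd
echo "hello from node1" > /mnt/drbd/test.txt

# Manual failover: unmount and demote on node1 ...
umount /mnt/drbd
drbdadm secondary r0

# ... then promote and mount on node2:
#   drbdadm primary r0
#   mkdir -p /mnt/drbd && mount /dev/drbd0 /mnt/drbd
#   cat /mnt/drbd/test.txt   # the file written on node1
```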

Hooray, DRBD is now configured and the nodes are connected to each other.

Good luck and hopefully this is useful 😀


19 thoughts on - How To Configure Data Replication/Synchronization on CentOS 6 Using DRBD

  • @Rom : Hi Rom,

    The resource name is only a label; you can change it to another name such as data-drbd or something else.

    The block device is the hard disk that is a member of the resource

    • Hi,

      You only need to add the following at the bottom of /etc/drbd.conf:

      resource r1 {
          startup {
              wfc-timeout 30;
              outdated-wfc-timeout 20;
              degr-wfc-timeout 120;
          }
          protocol C;
          disk {
              on-io-error detach;
          }
          syncer {
              rate 10M;
              al-extents 257;
          }
          on node1.imanudin.net {
              device /dev/drbd1;
              address 192.168.80.91:7789;
              meta-disk internal;
              disk /dev/sdc;
          }
          on node2.imanudin.net {
              device /dev/drbd1;
              address 192.168.80.92:7789;
              meta-disk internal;
              disk /dev/sdc;
          }
      }
      

      Note : Each additional resource needs its own name (e.g. r1), its own device (e.g. /dev/drbd1), its own port (e.g. 7789), and the disk that will be used for it (here /dev/sdc as an example).

  • Getting This Error:- Please help

    drbd.d/zimbramysql.res:21: Parse error: 'disk | device | address | meta-disk | flexible-meta-disk' expected,
    but got 'volume'

  • Hi,

    I have run all the commands on both servers. After that, mounting the primary disk to my data folder was successful, but when I mount the secondary disk to my data folder I get this error:

    mount: /dev/drbd0 is write-protected, mounting read-only
    mount: mount /dev/drbd0 on /vol failed: Wrong medium type

    I am not able to mount the secondary disk.
    I am using CentOS 7

    Regards,
    Vipin

  • Hi Imran,

    I want to sync my devices at all times. Is there another way to do this?

    Or can we make both devices primary and sync them?

    Regards,
    Vipin

  • Hi Iman,

    I created a DRBD setup with one node as primary and the other node as secondary, and tested manually that data is replicated; it was successful.

    What should be done to ensure there is an automatic failover of the replicated data to node2, if node1 is powered off?

  • Thank you very much brother…! .

    Master to Master DRBD Sample:

    global {
    dialog-refresh 1;
    usage-count no;
    minor-count 5;
    }
    common {
    syncer {
    rate 1000M;
    }
    }
    resource r0 {
    startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 120;
    become-primary-on both;
    }
    protocol C;
    disk {
    on-io-error detach;
    }
    net {
    allow-two-primaries;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    }
    syncer {
    rate 1000M;
    al-extents 257;
    }
    on nfs-active.example.com {
    device /dev/drbd0;
    address 192.168.0.31:7788;
    meta-disk internal;
    disk /dev/volg1/lv_data;
    }
    on nfs-passive.example.com {
    device /dev/drbd0;
    address 192.168.0.32:7788;
    meta-disk internal;
    disk /dev/volg1/lv_data;
    }
    }

    #include "drbd.d/global_common.conf";
    #include "drbd.d/*.res";

  • hi iman,
    I have created two nodes. On node1 I am not able to start the DRBD service; I am getting the following error.

    [root@node1 ~]# systemctl start drbd
    Job for drbd.service failed because the control process exited with error code. See “systemctl status drbd.service” and “journalctl -xe” for details.

    When I tried to check the status of the DRBD service, I got this error:

    [root@node1 ~]# systemctl status drbd.service
    ● drbd.service – DRBD — please disable. Unless you are NOT using a cluster manager.
    Loaded: loaded (/usr/lib/systemd/system/drbd.service; disabled; vendor preset: disabled)
    Active: failed (Result: exit-code) since Fri 2017-04-21 17:13:54 IST; 7min ago
    Process: 6971 ExecStart=/lib/drbd/drbd start (code=exited, status=5)
    Main PID: 6971 (code=exited, status=5)

    Apr 21 17:13:54 node1.server.com systemd[1]: Starting DRBD — please disable….
    Apr 21 17:13:54 node1.server.com drbd[6971]: Can not load the drbd module.
    Apr 21 17:13:54 node1.server.com systemd[1]: drbd.service: main process exit…D
    Apr 21 17:13:54 node1.server.com systemd[1]: Failed to start DRBD — please ….
    Apr 21 17:13:54 node1.server.com systemd[1]: Unit drbd.service entered faile….
    Apr 21 17:13:54 node1.server.com systemd[1]: drbd.service failed.

    regards
    amit halder

    • Hi Amit,

      The DRBD module is not loaded:

      Apr 21 17:13:54 node1.server.com drbd[6971]: Can not load the drbd module.
      

      Please check and make sure the DRBD module has been loaded:

      lsmod | grep -i drbd
      modprobe drbd
      
