Zimbra

How To Install & Configure Zimbra High Availability (HA)

In the previous articles, I explained how to install and configure Zimbra on CentOS 6 or CentOS 7, how to configure online failover/failback on CentOS 6 using Heartbeat, and how to configure data replication on CentOS 6 using DRBD. All of those guides can be combined to build Zimbra High Availability: Heartbeat provides online failover/failback, DRBD provides data replication, and together they give you High Availability (HA). The following is a guide to configuring Zimbra HA.

Step-by-step guide to configuring Zimbra HA

For the Linux systems, I am using CentOS 6 64-bit. For easier understanding, here is my system information:

# Server 1
Hostname   : node1
Domain     : imanudin.net
IP Address : 192.168.80.91

# Server 2
Hostname   : node2
Domain     : imanudin.net
IP Address : 192.168.80.92

# Alias IP
Hostname   : mail
Domain     : imanudin.net
IP Address : 192.168.80.93

The alias IP will be used for client/user access. This alias IP is the one that will be configured for online failover.

# Install Zimbra on CentOS 6 on all nodes (node1 and node2) as described at this link : How To Install Zimbra 8.6 on CentOS 6. Please note the information below

– Set the hostname of each node to mail.imanudin.net when installing Zimbra

– Point mail.imanudin.net to each node's own IP address during installation, both in DNS and in /etc/hosts (see the example below)
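For example, during installation on node1, /etc/hosts might look like the sketch below (adjust the IP to each node's own address, e.g. 192.168.80.92 on node2):

127.0.0.1       localhost
192.168.80.91   mail.imanudin.net    mail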

# Stop Zimbra and DNS services on all nodes (node1 and node2)

su - zimbra -c "zmcontrol stop"
service named stop
chkconfig zimbra off
chkconfig named off
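As an optional sanity check, you can confirm that both services are stopped and disabled:

su - zimbra -c "zmcontrol status"
chkconfig --list named
chkconfig --list zimbra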

# After installing Zimbra, install and configure Heartbeat on all nodes (node1 and node2) as described at this link : How To Configure Online Failover/Failback on CentOS 6 Using Heartbeat

# After Heartbeat is installed and online failover/failback is working fine, install DRBD for data replication on all nodes (node1 and node2) as described at this link : How To Configure Data Replication/Synchronize on CentOS 6 Using DRBD
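For reference, the r0 resource definition from the DRBD guide usually looks something like the sketch below; the backing disk (/dev/sdb1 here) and the port are assumptions, so keep whatever values you used when following that guide:

# /etc/drbd.d/r0.res (illustrative only)
resource r0 {
        protocol C;
        on node1.imanudin.net {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   192.168.80.91:7788;
                meta-disk internal;
        }
        on node2.imanudin.net {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   192.168.80.92:7788;
                meta-disk internal;
        }
}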

# Test that DRBD data replication is working : Testing Data Replication/Synchronize on DRBD
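A quick way to confirm replication is healthy is to check /proc/drbd on either node; you should see cs:Connected and ds:UpToDate/UpToDate:

cat /proc/drbd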

# After DRBD is working, copy the /opt/zimbra files/folders onto the DRBD device.

Run the following commands only on node1

– Rsync Zimbra

drbdadm primary r0
mkdir -p /mnt/tmp
mount /dev/drbd0 /mnt/tmp
rsync -avP --exclude=data.mdb /opt/ /mnt/tmp

data.mdb will be copied as a huge file by rsync, so it takes a long time. As a trick, use cp to copy data.mdb to the DRBD device instead 😀

– Copy data.mdb

cp /opt/zimbra/data/ldap/mdb/db/data.mdb /mnt/tmp/zimbra/data/ldap/mdb/db/data.mdb
chown zimbra.zimbra /mnt/tmp/zimbra/data/ldap/mdb/db/data.mdb
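Optionally, you can verify the copy; this check assumes data.mdb is a sparse file (which is why cp is much faster than rsync here), so the apparent sizes (ls) should match on both sides while the actual disk usage (du) stays small:

ls -lh /opt/zimbra/data/ldap/mdb/db/data.mdb /mnt/tmp/zimbra/data/ldap/mdb/db/data.mdb
du -h /opt/zimbra/data/ldap/mdb/db/data.mdb /mnt/tmp/zimbra/data/ldap/mdb/db/data.mdb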

# Unmount the DRBD device after syncing the Zimbra files/folders on node1

umount /dev/drbd0
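Depending on how your DRBD guide left things, you may also want to demote the resource back to secondary on node1 so that Heartbeat is free to decide which node gets promoted later:

drbdadm secondary r0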

# Move the existing /opt folder aside; run the following commands on all nodes (node1 and node2)

mv /opt /backup-opt
mkdir /opt

# Configure /etc/hosts and DNS records on all nodes (node1 and node2)

vi /etc/hosts

so that it looks like below:

127.0.0.1       localhost
192.168.80.91   node1.imanudin.net   node1
192.168.80.92   node2.imanudin.net   node2
192.168.80.93   mail.imanudin.net    mail

vi /var/named/db.imanudin.net

change the IP address of mail so that it refers to 192.168.80.93. See the following example:

$TTL 1D
@       IN SOA  ns1.imanudin.net. root.imanudin.net. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
@       IN      NS      ns1.imanudin.net.
@       IN      MX      0 mail.imanudin.net.
ns1     IN      A       192.168.80.91
mail    IN      A       192.168.80.93
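If you want to sanity-check the zone file before handing named over to Heartbeat (optional, requires the bind utilities):

named-checkzone imanudin.net /var/named/db.imanudin.net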

# Configure file /etc/ha.d/haresources on all nodes (node1 and node2)

vi /etc/ha.d/haresources

so that it looks like below:

node1.imanudin.net IPaddr::192.168.80.93/24/eth0:0 drbddisk::r0 Filesystem::/dev/drbd0::/opt::ext3 named zimbra
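For reference, the matching /etc/ha.d/ha.cf from the Heartbeat guide would look roughly like the sketch below; the interface name, timers and peer IP are assumptions, so keep whatever you configured when following that guide (on node2, ucast should point to 192.168.80.91 instead):

logfile /var/log/ha-log
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth0 192.168.80.92
auto_failback on
node node1.imanudin.net
node node2.imanudin.net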

# Stop the Heartbeat service on node2 and then on node1

service heartbeat stop

# Start the Heartbeat service on node1 and then on node2

service heartbeat start
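Once Heartbeat is up, it is worth verifying on the active node that the alias IP, the DRBD mount and Zimbra are all in place (the eth0:0 alias name comes from the haresources line above):

ifconfig eth0:0
df -h /opt
su - zimbra -c "zmcontrol status"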

TESTING HA

– Failover

Once Zimbra is running well on node1, stop the Heartbeat service on node1 or forcibly power off the machine

service heartbeat stop

All services managed by Heartbeat will be stopped automatically and taken over by node2. How long it takes node2 to bring everything back up depends on how long the services (named and zimbra) take to start.
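While testing, you can watch the takeover happen in real time on node2:

tail -f /var/log/ha-log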

– Failback

Start the Heartbeat service again on node1 or power the machine back on

service heartbeat start

All services running on node2 will be stopped automatically and taken over again by node1

Hooray, you have finally built Zimbra HA with DRBD + Heartbeat

For log information about the HA process, see /var/log/ha-log

Good luck and hopefully useful 😀

121 comments

  1. Hello mas Imanudin, my name is Dekri, I would like to ask:
    – If the servers are in different data centers and of course have different IP addresses, how would that work?
    – Besides Heartbeat, Pacemaker and DRBD, is there any other HA software?
    Thank you

    1. Hi mas Dekri,
      – They can communicate directly using public IPs, both for DRBD and for Heartbeat
      – In the Heartbeat configuration, change ucast to point to the peer's public IP
      – As for other HA software, I don't know of any others yet

  2. Hello, I'm Alvian. When I run mount /dev/drbd0 /opt, the Zimbra directory disappears and I can't access Zimbra anymore. Where did I go wrong? Thanks…

    1. Hi mas Alvian,
      The folder is not gone; it is just hidden behind the mounted DRBD device. Try mounting DRBD on another folder first and copy the entire contents of /opt/zimbra to the DRBD mount point. See the "– Rsync Zimbra" part above

  3. Hello mas Iman, during the Zimbra installation, should the second node be installed normally or as a dummy installation (./install.sh -s)? And during installation, does /opt need to be mounted on DRBD first?

    Thanks…
    Need help for my homework :))

  4. Hi mas, I would like to ask. The cluster and installation are finished, but after a few minutes the cluster goes down with a log like this:
    ResourceManager(default)[93104]: 2019/09/06_03:18:00 ERROR: Return code 1 from /etc/init.d/zimbra
    ResourceManager(default)[93104]: 2019/09/06_03:18:00 CRIT: Giving up resources due to failure of zimbra
    ResourceManager(default)[93104]: 2019/09/06_03:18:00 info: Releasing resource group: vn-zmb-cltr-01-uph IPaddr::10.12.1.161/24/ens160:0 drbddisk::r0 Filesystem::/dev/drbd0::/opt::ext3 zimbra
    ResourceManager(default)[93104]: 2019/09/06_03:18:00 info: Running /etc/init.d/zimbra stop
    Could you help me with this?
