How to Set Up a Multi-Node Ceph Cluster Using the Dashboard GUI

If you have limited resources, you can follow this guide: “How to Set Up a Ceph Cluster on a Single Node”

If you want to install Ceph on a single Proxmox VE node, you can follow this guide: “Proxmox + Ceph: Single Node Installation for Test Lab or Home Setup”

If you want to set up a Ceph cluster using the CLI, you can follow this guide: “How to Set Up a Multi-Node Ceph Cluster Using the Command Line”

In this guide, I will create a multi-node Ceph cluster on Rocky Linux 9 using the Ceph Dashboard GUI.

System requirements:
– Minimum 2 disks per node (1 disk for the OS and 1 disk for the OSD)
– OS: Rocky Linux 9
– Three nodes using IP addresses 192.168.1.51 to 192.168.1.53

# Preparation

Adjust the /etc/hosts file on each node

– Configure /etc/hosts

127.0.0.1   localhost 
192.168.1.51	ceph1
192.168.1.52	ceph2
192.168.1.53	ceph3
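
After saving the file, you can optionally check that the names resolve (this assumes the other nodes are already up and reachable):

ping -c 1 ceph2
ping -c 1 ceph3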

– Configure Hostname

Run on node1

hostnamectl set-hostname ceph1

Run on node2

hostnamectl set-hostname ceph2

Run on node3

hostnamectl set-hostname ceph3

– Disable SELinux and Firewall

Run the following commands on all nodes (firewalld is the default firewall on Rocky Linux 9)

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0
systemctl disable --now firewalld
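
To confirm the changes took effect, you can optionally check that SELinux is permissive (it becomes disabled after a reboot) and that firewalld is no longer running:

getenforce
systemctl is-active firewalld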

# Install Dependencies

Run the following command on all nodes

dnf install podman python3 lvm2
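
These packages provide the container engine, Python runtime, and LVM tooling that cephadm relies on. You can optionally confirm they are installed with:

podman --version
python3 --version
lvm version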

# Install cephadm

Run the following commands on all nodes

CEPH_RELEASE=19.2.1
curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
chmod +x cephadm

Note: For the latest version (CEPH_RELEASE), please check here: https://docs.ceph.com/en/latest/releases/#active-releases
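
To confirm the download succeeded, you can optionally check that the binary runs and reports a version:

./cephadm version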

# Add ceph squid repo

Run the following commands on all nodes

./cephadm add-repo --release squid
./cephadm install
cephadm install ceph-common
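
At this point the packaged cephadm and the ceph CLI should be installed; you can optionally confirm with:

cephadm version
ceph --version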

# Bootstrap Ceph

Run the following command on node1 only

cephadm bootstrap \
--cluster-network 192.168.1.0/24 \
--mon-ip 192.168.1.51 \
--dashboard-password-noupdate \
--initial-dashboard-user admin \
--initial-dashboard-password ceph

Below is example output once the bootstrap process has completed

Fetching dashboard port number...
Ceph Dashboard is now available at:

             URL: https://ceph1:8443/
            User: admin
        Password: ceph

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/e8e018b8-0fd2-11f0-bcca-bc2411e54572/config directory
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /usr/sbin/cephadm shell --fsid e8e018b8-0fd2-11f0-bcca-bc2411e54572 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /usr/sbin/cephadm shell 

Bootstrap complete.
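
Once the bootstrap has finished, you can optionally check the initial cluster status from node1; at this stage it should report one monitor and one manager on ceph1 (and a health warning because there are no OSDs yet):

ceph -s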

# Copy ceph public key

Run these commands on node1

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3
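
If the key was copied correctly, passwordless SSH from node1 to the other nodes should now work:

ssh root@ceph2 hostname
ssh root@ceph3 hostname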

# Add Host/Node to the cluster

Log in to the Ceph Dashboard with username admin and password ceph (https://192.168.1.51:8443)

Select menu Cluster | Hosts | Add. Fill in the Hostname (ceph2) and Network address (192.168.1.52), and select _admin in the Labels section. Then click Add Host. Do the same for the other node (ceph3)
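
If you prefer to double-check this step from the shell on node1, a rough CLI equivalent of the dashboard form would be:

ceph orch host add ceph2 192.168.1.52 --labels _admin
ceph orch host add ceph3 192.168.1.53 --labels _admin
ceph orch host ls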

# Add Ceph Monitor

Select menu Administration | Services | mon | Edit. In the Hosts section, select ceph1, ceph2, and ceph3. In the Count section, change the value to 3. Then click Edit Service
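
The same monitor placement can also be applied or verified from the shell on node1, roughly as follows:

ceph orch apply mon --placement="ceph1,ceph2,ceph3"
ceph orch ps --daemon-type mon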

# Add Ceph Manager

Select menu Administration | Services | mgr | Edit. In the Hosts section, select ceph1, ceph2, and ceph3. In the Count section, change the value to 3. Then click Edit Service
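
Likewise, a rough shell equivalent for the manager placement is:

ceph orch apply mgr --placement="ceph1,ceph2,ceph3"
ceph orch ps --daemon-type mgr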

# Add OSD

Select menu Cluster | OSDs | Create. By default, Ceph will automatically choose all available disks. You can click Advanced Mode to view all available disks. Then click Create OSDs

The OSDs are displayed in the list after creation
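
From the shell on node1, a rough equivalent is to review the devices the orchestrator sees, let it consume all eligible disks, and then confirm the OSDs came up:

ceph orch device ls
ceph orch apply osd --all-available-devices
ceph osd tree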

# Test by creating a pool

Select menu Cluster | Pools | Create. In the Name section, fill in the desired pool name, for example rbdpool. In the Applications section, select rbd. Then click Create Pool
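
A rough shell equivalent for creating and checking the pool would be:

ceph osd pool create rbdpool
ceph osd pool application enable rbdpool rbd
ceph osd lspools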

# Test by creating a RADOS Block Device

Select menu Block | Images | Create. In the Name section, fill in the desired block image name, for example vm-100. In the Pool section, select rbdpool, the pool that was created previously. In the Size section, enter, for example, 30 GiB. Then click Create RBD
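
The same image could be created and inspected from the shell with roughly:

rbd create rbdpool/vm-100 --size 30G
rbd ls rbdpool
rbd info rbdpool/vm-100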

Congratulations, you have successfully created a multi-node Ceph cluster via the GUI. This Ceph cluster can be used for testing integration with Proxmox VE or other platforms.

Good Luck 🙂
