If you have limited resources, you can follow this guide: “How to Set Up a Ceph Cluster on a Single Node”
If you want to install on Proxmox VE single node, you can follow this guide: “Proxmox + Ceph: Single Node Installation for Test Lab or Home Setup”
In this guide, I will create a multi-node Ceph cluster on Rocky Linux 9, using three nodes (ceph1, ceph2, and ceph3).
System requirements:
– Minimum 2 disks per node (1 disk for the OS and 1 disk for the OSD)
– OS: Rocky Linux 9
– IP addresses that will be used: 192.168.1.41 – 192.168.1.43
# Preparation
Adjust the /etc/hosts file on all nodes
– Configure /etc/hosts
127.0.0.1 localhost
192.168.1.41 ceph1
192.168.1.42 ceph2
192.168.1.43 ceph3
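Optionally, verify that the names resolve correctly on each node before continuing, for example:
getent hosts ceph1 ceph2 ceph3
ping -c 1 ceph2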
– Configure Hostname
Run on node1
hostnamectl set-hostname ceph1
Run on node2
hostnamectl set-hostname ceph2
Run on node3
hostnamectl set-hostname ceph3
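You can confirm the new hostname took effect on each node, for example:
hostnamectl status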
– Disable SELinux and Firewall
Run the following commands on all nodes
sed -i s/'SELINUX='/'#SELINUX='/g /etc/selinux/config
echo 'SELINUX=disabled' >> /etc/selinux/config
setenforce 0
service firewalld stop
service iptables stop
service ip6tables stop
systemctl disable firewalld
systemctl disable iptables
systemctl disable ip6tables
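As a quick sanity check, you can confirm SELinux is no longer enforcing (expect Permissive or Disabled) and firewalld is stopped (expect inactive):
getenforce
systemctl is-active firewalld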
# Install Dependencies
Run the following command on all nodes
dnf install podman python3 lvm2
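Optionally, confirm the dependencies are installed before continuing:
podman --version
python3 --version
lvm version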
# Install cephadm
Run the following commands on all nodes
CEPH_RELEASE=19.2.1
curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
chmod +x cephadm
Note: For the latest version (CEPH_RELEASE), please check this link: https://docs.ceph.com/en/latest/releases/#active-releases
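To make sure the downloaded cephadm binary works, you can run its version subcommand:
./cephadm version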
# Add ceph squid repo
Run the following commands on all nodes
./cephadm add-repo --release squid
./cephadm install
cephadm install ceph-common
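After this step, cephadm and the Ceph CLI should be installed as regular packages on every node. A quick check:
which cephadm
ceph --version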
# Bootstrap Ceph
Run on node1
cephadm bootstrap \
  --cluster-network 192.168.1.0/24 \
  --mon-ip 192.168.1.41 \
  --dashboard-password-noupdate \
  --initial-dashboard-user admin \
  --initial-dashboard-password ceph
The following is an example output after the bootstrap process has completed.
Fetching dashboard port number...
Ceph Dashboard is now available at:
URL: https://ceph1:8443/
User: admin
Password: ceph
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/e8e018b8-0fd2-11f0-bcca-bc2411e54572/config directory
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
sudo /usr/sbin/cephadm shell --fsid e8e018b8-0fd2-11f0-bcca-bc2411e54572 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
sudo /usr/sbin/cephadm shell
Bootstrap complete.
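At this point a single-node cluster is running on ceph1. You can check its status directly from node1 (ceph-common and the admin keyring are already in place there), for example:
ceph -s
ceph orch ps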
# Copy ceph public key
Run on node1
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3
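You can verify that node1 can now reach the other nodes over SSH with the Ceph key, for example:
ssh root@ceph2 hostname
ssh root@ceph3 hostname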
# Add Host/Node to the cluster
Add node2 and node3 to the Ceph cluster. Run the following commands on node1
ceph orch host add ceph2 192.168.1.42 --labels _admin
ceph orch host add ceph3 192.168.1.43 --labels _admin
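Confirm that both hosts were added and labeled as expected:
ceph orch host ls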
# Add Ceph Monitor
Run the following commands on node1
ceph orch apply mon --unmanaged
ceph orch daemon add mon ceph2:192.168.1.42
ceph orch daemon add mon ceph3:192.168.1.43
ceph orch apply mon --placement="ceph1,ceph2,ceph3"
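You can check that three monitors are running and in quorum, for example:
ceph orch ps --daemon_type mon
ceph mon stat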
# Add Ceph Manager
Run the following commands on node1
ceph orch apply mgr --placement="ceph1,ceph2,ceph3"
ceph orch apply mgr 3
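Similarly, verify the manager daemons (one active, two standby):
ceph orch ps --daemon_type mgr
ceph mgr stat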
# Add OSD
Turn the available disk (the second disk) on each node into an OSD. Run the following commands on node1
ceph orch daemon add osd ceph1:/dev/sdb
ceph orch daemon add osd ceph2:/dev/sdb
ceph orch daemon add osd ceph3:/dev/sdb
Run ceph -s or ceph -w to check the cluster status, and make sure the health status shows HEALTH_OK.
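You can also list the devices the orchestrator sees and review the resulting OSD layout, for example:
ceph orch device ls
ceph osd tree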
# Test access to the ceph dashboard
Log in to the Ceph dashboard at https://ceph1:8443/ with username admin and password ceph
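If you are not sure which URL the dashboard is listening on, the active manager reports it:
ceph mgr services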
# Test creating a pool
ceph osd pool create rbdpool
ceph osd pool application enable rbdpool rbd
The commands above create a pool named rbdpool and enable the rbd application type on it.
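Optionally, verify the pool and its application tag:
ceph osd pool ls detail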
# Test creating a RADOS Block Device
rbd create -p rbdpool vm-100 --size 30G
rbd create -p rbdpool vm-200 --size 50G
rbd list -p rbdpool
rbd info -p rbdpool vm-100
[root@ceph1 ~]# rbd info -p rbdpool vm-100
rbd image 'vm-100':
size 30 GiB in 7680 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 37e1c5dad18a
block_name_prefix: rbd_data.37e1c5dad18a
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Wed Apr 2 22:13:28 2025
access_timestamp: Wed Apr 2 22:13:28 2025
modify_timestamp: Wed Apr 2 22:13:28 2025
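As an optional extra test, you can map the image with the kernel RBD client on one of the nodes (this assumes the rbd kernel module is available; if your kernel does not support some of the listed image features, disable them first with rbd feature disable):
rbd map rbdpool/vm-100
rbd showmapped
rbd unmap rbdpool/vm-100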
Good Luck 🙂

