For lab or testing purposes, it’s possible to deploy a Ceph cluster on a single machine. The resulting cluster can then be integrated with Proxmox VE as a storage backend.
System requirements:
– 4 disks (1 for the operating system and 3 for OSDs)
– Operating system: Rocky Linux 9
– The IP address used in this article is 192.168.1.31
# System Preparation
– Configure /etc/hosts
127.0.0.1 localhost
192.168.1.31 ceph-singlenode.imanudin.web.id ceph-singlenode
– Configure Hostname
hostnamectl set-hostname ceph-singlenode.imanudin.web.id
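To confirm that the hostname and the /etc/hosts entry resolve correctly, run a quick check; hostname -f should print the FQDN configured above, and getent should resolve the short name to 192.168.1.31:
hostname -f
getent hosts ceph-singlenode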
– Disable SELinux and Firewall
sed -i s/'SELINUX='/'#SELINUX='/g /etc/selinux/config
echo 'SELINUX=disabled' >> /etc/selinux/config
setenforce 0
systemctl disable --now firewalld
systemctl disable --now iptables
systemctl disable --now ip6tables
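Before continuing, verify that SELinux is no longer enforcing and the firewall is down:
getenforce
systemctl is-active firewalld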
# Install Dependencies
dnf install podman python3 lvm2
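cephadm runs every Ceph daemon in a container, so make sure podman works before continuing:
podman --version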
# Install cephadm
CEPH_RELEASE=19.2.1
curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
chmod +x cephadm
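As a quick sanity check, the downloaded script should be able to report its own version, which should match the CEPH_RELEASE set above:
./cephadm version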
Note: For the latest version (CEPH_RELEASE), you can check here: https://docs.ceph.com/en/latest/releases/#active-releases
# Add the Ceph Squid repo
./cephadm add-repo --release squid
./cephadm install
cephadm install ceph-common
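After this step the ceph CLI is available directly on the host; you can verify with:
ceph --version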
# Bootstrap Ceph
cephadm bootstrap \
--cluster-network 192.168.1.0/24 \
--mon-ip 192.168.1.31 \
--dashboard-password-noupdate \
--initial-dashboard-user admin \
--initial-dashboard-password ceph \
--allow-fqdn-hostname \
--single-host-defaults
Below is an example of the output from the bootstrap process.
Fetching dashboard port number...
Ceph Dashboard is now available at:
URL: https://ceph-singlenode.imanudin.web.id:8443/
User: admin
Password: ceph
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/e8e018b8-0fd2-11f0-bcca-bc2411e54572/config directory
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
sudo /usr/sbin/cephadm shell --fsid e8e018b8-0fd2-11f0-bcca-bc2411e54572 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
sudo /usr/sbin/cephadm shell
Bootstrap complete.
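All cluster services now run as podman containers. You can list them through the orchestrator, or inspect the containers directly:
ceph orch ps
podman ps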
# Setup OSDs
Turn the three available disks into OSDs.
ceph orch apply osd --all-available-devices
Check the status using the ceph -s or ceph -w command, and ensure the health is HEALTH_OK.
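You can also confirm that the three disks were picked up: ceph orch device ls shows the devices cephadm detected, and ceph osd tree should list three OSDs in the up state:
ceph orch device ls
ceph osd tree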
# Test dashboard login
Use admin as the username and ceph as the password to log in to the Ceph dashboard, based on the bootstrap process above.
# Test create pool
ceph osd pool create rbdpool
ceph osd pool application enable rbdpool rbd
The above command will create a pool named rbdpool, with the application type set to rbd.
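To double-check the pool and its application tag:
ceph osd pool ls detail
The rbdpool entry should show application rbd.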
# Test create RADOS Block Device
rbd create -p rbdpool vm-100 --size 30G
rbd create -p rbdpool vm-200 --size 50G
rbd list -p rbdpool
rbd info -p rbdpool vm-100
[root@ceph-singlenode ~]# rbd info -p rbdpool vm-100
rbd image 'vm-100':
    size 30 GiB in 7680 objects
    order 22 (4 MiB objects)
    snapshot_count: 0
    id: 37e1c5dad18a
    block_name_prefix: rbd_data.37e1c5dad18a
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    op_features:
    flags:
    create_timestamp: Wed Apr 2 22:13:28 2025
    access_timestamp: Wed Apr 2 22:13:28 2025
    modify_timestamp: Wed Apr 2 22:13:28 2025
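Optionally, map one of the images to a local block device to test I/O. This is a minimal sketch assuming the kernel rbd module is available; the map command prints the actual device name (typically /dev/rbd0). If mapping fails because of unsupported image features, disable them first with rbd feature disable (for example: object-map fast-diff deep-flatten).
rbd device map rbdpool/vm-100
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt
umount /mnt
rbd device unmap /dev/rbd0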
Congratulations, you have successfully created a Ceph cluster with a single node (single machine). This Ceph cluster can be used for testing integration with Proxmox VE or other platforms.
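As a rough sketch of the Proxmox VE side (the storage ID ceph-lab below is just an example name), add an RBD entry to /etc/pve/storage.cfg on the Proxmox node pointing at this machine's monitor:
rbd: ceph-lab
    monhost 192.168.1.31
    pool rbdpool
    content images,rootdir
    username admin
Proxmox reads the keyring from /etc/pve/priv/ceph/ceph-lab.keyring (the file name must match the storage ID), so copy /etc/ceph/ceph.client.admin.keyring from this node to that path.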
Good Luck 🙂


When installing ceph-common, didn’t you hit any errors?
When I tried, it always conflicted on libcrypto.so.3; the dependency asks for libcrypto.so.3(OPENSSL_3.4.0)(64bit), but the version installed on my system isn’t that recent. Any suggestions? I already tried building the latest OpenSSL, but I still get the error nothing provides libcrypto.so.3(OPENSSL_3.4.0)(64bit) needed by ceph-common-
Hi Tantio,
So far I haven’t run into that error.