Run Ceph on One Node in Proxmox – Is It Possible? Here’s How
Let's say you have a single Proxmox VE machine with several hard disks: four in total, one for the OS and three for Ceph OSDs.
This guide is aimed at newcomers and explains how to install and configure a Ceph cluster directly on a single Proxmox VE machine. With this approach, you only need two machines if you want to try Ceph RBD mirroring.
If you want to set up a Ceph cluster on a single machine using a different OS, check the guide "How to Set Up a Ceph Cluster on a Single Node" instead.
# Install and Configure Ceph
Go to Datacenter | PVE Host | Ceph and start the Install Ceph wizard. You can pick either Reef or Squid (the latest) as the Ceph version, then click Next and Finish.
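If you prefer the command line, a minimal sketch of the same step looks like this (the network CIDR 192.168.1.0/24 is an assumption; use your own cluster network):
pveceph install
pveceph init --network 192.168.1.0/24
pveceph mon create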
# Add OSD
Go to Datacenter | PVE Host | Ceph | OSD and click Create: OSD.
Select the available disks one by one to create an OSD on each of them.
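The same step can be done from the shell with pveceph; the device names /dev/sdb through /dev/sdd are assumptions, so check yours with lsblk first:
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    pveceph osd create "$disk"
done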
# Monitoring Ceph
Go to Datacenter | PVE Host | Ceph. You will see a Ceph status like the one below: HEALTH_WARN. This is expected on a single node, because the default CRUSH rule tries to place replicas on different hosts, and there is only one.
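You can inspect the warning from the CLI as well; on a fresh single-node setup it typically points at placement groups that cannot be replicated across hosts:
ceph health detail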
# Reconfigure the CRUSH Map
From the CLI, perform the following steps.
Extract the CRUSH map:
ceph osd getcrushmap -o crushmap.cm
Decompile the CRUSH map so it can be edited:
crushtool --decompile crushmap.cm -o crushmap.txt
Edit crushmap.txt and, in the rules section, change the failure domain from host to osd (see the sed one-liner after the rule below):
# rules
rule replicated_rule {
    id 0
    type replicated
    step take default
    step chooseleaf firstn 0 type osd # <- change this line
    step emit
}
# end crush map
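If you prefer a non-interactive edit, the same change can be applied with sed (this assumes the rule text matches the default shown above):
sed -i 's/step chooseleaf firstn 0 type host/step chooseleaf firstn 0 type osd/' crushmap.txt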
Recompile the CRUSH map:
crushtool --compile crushmap.txt -o new_crushmap.cm
Load the new CRUSH map into the cluster:
ceph osd setcrushmap -i new_crushmap.cm
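As an alternative sketch, you can leave the default rule untouched and instead create a dedicated replicated rule with osd as the failure domain, then assign it to each pool you create later (the rule name replicated_osd is an arbitrary choice):
ceph osd crush rule create-replicated replicated_osd default osd
ceph osd pool set <pool-name> crush_rule replicated_osd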
Check the Ceph status via the dashboard or the CLI:
root@pve1:~# ceph -s
  cluster:
    id:     c638916b-9ad8-4739-bebe-c9037f31eb26
    health: HEALTH_OK
The Ceph status is now HEALTH_OK.
# Add Pool
Go to Datacenter | PVE Host | Ceph | Pools and click Create.
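From the CLI, pveceph can create the pool and the matching Proxmox storage entry in one step (the pool name cephpool is an assumption):
pveceph pool create cephpool --add_storages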
# Place VM on RBD Pool
When creating a new VM, select the RBD pool you created as the storage for its disk.
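From the CLI, a minimal VM backed by the pool could be created like this (VM ID 100, the storage name cephpool, and the 32 GB disk size are all assumptions):
qm create 100 --name ceph-test --memory 2048 --net0 virtio,bridge=vmbr0 --scsi0 cephpool:32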
Good Luck 🙂
Source: https://www.redhat.com/en/blog/ceph-cluster-single-machine