Proxmox & Ceph

What's your preferred method to install and run Ceph storage in your environment?


  • Total voters: 17
  • Poll closed.

gb00s

Well-Known Member
Jul 25, 2018
1,190
602
113
Poland
Is the context of the poll limited to PVE? Would rook/k8s not be an option?
Would it be okay for you to just vote for "Separate Ceph ..."? Or do you want me to change the poll? I also noticed some typos, and unfortunately I cannot fix them myself, @Patrick. Is that something mods can do?
 

gb00s

Well-Known Member
Jul 25, 2018
1,190
602
113
Poland
... I have some nodes that have better reliability and power backup than others... so if a compute node crashes, I'm not worried about it, unlike a Ceph node crashing... (That's just me and my setup).
That's a valid point.

I'm very interested in optimizing Ceph performance in terms of how you divide Ceph services across nodes depending on the size and overall estimated workload. Starting with CPUs: I've read that high-clock CPUs are the ones to favor today versus pure core count. On the other hand, I've also read you still need a minimum of 8-12 cores for 'acceptable' performance if you are running several services combined on a node. The more OSD services you run, the more cores you need. I'm also interested in how Ceph reacts to traditional caching.
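To put rough numbers on that rule of thumb, here's a quick sketch of the core budgeting; the per-service core counts below are just illustrative assumptions, not official Ceph guidance, so tune them against your own benchmarks:

Code:
# Rough per-node CPU budget for colocated Ceph services.
# Per-service core counts are illustrative assumptions only.

CORES_PER_SERVICE = {
    "osd_nvme": 4,   # NVMe OSDs are CPU-hungry; HDD OSDs need far less
    "mon": 1,
    "mgr": 1,
    "mds": 2,
}

def node_core_budget(n_osds: int, extra_services: list[str]) -> int:
    """Estimate cores for one node running n_osds NVMe OSDs plus extras."""
    cores = n_osds * CORES_PER_SERVICE["osd_nvme"]
    cores += sum(CORES_PER_SERVICE[s] for s in extra_services)
    return cores

# A node with 3 NVMe OSDs plus a colocated monitor and manager:
print(node_core_budget(3, ["mon", "mgr"]))  # -> 14 cores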
 

Terry Wallace

PsyOps SysOp
Aug 13, 2018
200
125
43
Central Time Zone
Ceph can be tuned for either throughput or capacity. I assume with the VMs you favor throughput, in which case the WAL disks are the thing you want to hit first. That's what I use the Optanes for: insanely high write endurance and I/O response. I put one Optane (280 GB) in a box and size the WAL at 10 GB per TB of storage, so that covers 28 TB of storage per node. 6 nodes × 28 TB = 168 TB, divided by 3 for the CRUSH replication layout = 56 TB of VM volumes.
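Putting that sizing math in a quick script (the 10 GB-per-TB WAL ratio and 3x replication are from the post above; the function names are just illustrative):

Code:
# WAL sizing and usable-capacity math from the post above.
# Assumes 10 GB of WAL per 1 TB of raw storage and 3x replication.

def raw_capacity_covered_tb(wal_device_gb: float, gb_per_tb: float = 10.0) -> float:
    """Raw storage (TB) one WAL device can cover at the given ratio."""
    return wal_device_gb / gb_per_tb

def usable_capacity_tb(raw_per_node_tb: float, nodes: int, replicas: int = 3) -> float:
    """Usable capacity after replication across the cluster."""
    return raw_per_node_tb * nodes / replicas

per_node = raw_capacity_covered_tb(280)        # 280 GB Optane -> 28 TB per node
print(usable_capacity_tb(per_node, nodes=6))   # 28 * 6 / 3 = 56 TB of VM volumes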
 

oneplane

Well-Known Member
Jul 23, 2021
845
484
63
We've also started to take Ceph out of Proxmox, either using Rook with Kubernetes, or using Harvester with Longhorn, which skips Ceph altogether.
 
  • Reactions: tsteine (Like)

geonap

Member
Mar 2, 2021
76
75
18
I am limited in the number of servers I can use; there isn't enough power to bring up storage-only nodes, so I can only take advantage of the N4 backplanes to use the P4610s, and the P4800X goes into the chassis in a funny way.

I'm just hoping that using the Ceph nodes as compute nodes as well doesn't affect performance too much.
 

tsteine

Active Member
May 15, 2019
171
83
28
I recently migrated away from ESXi and vSAN to KubeVirt and Rook-orchestrated Ceph running on Kubernetes.

I wasn't particularly happy with SUSE Harvester's opinionated approach of forcing you to use Longhorn for storage, so I rolled my own cluster on bog-standard Ubuntu and RKE2, then installed KubeVirt on it and deployed Rook Ceph on the cluster, with each host supplying NVMe drives.

I find it works quite well: it has live migration with SR-IOV network devices, QEMU/KVM VMs, Kubernetes scheduling for workloads, and HCI Ceph. That being said, I would absolutely not recommend doing this for any production workload if you're completely green with Kubernetes. If you are green, Harvester is for you.