Different disk sets for Proxmox Ceph pools?

Discussion in 'Linux Admins, Storage and Virtualization' started by chilipepperz, May 29, 2019.

  1. chilipepperz

    chilipepperz Active Member

    Mar 17, 2016
    So I'm using Proxmox VE 5.4. I have a Ceph cluster made up of hard drives with some SSDs for caching. Great!

    I've also got a bunch of NVMe SSDs across the nodes. There are too many to use simply as cache devices, so I want to use them for an all-NVMe Ceph pool.

    If I add them to Ceph as OSDs via the CLI or GUI, they just get added to the main pool.

    Is there a way to add the NVMe devices as a separate set in a different pool, so they aren't combined with the slower disks?
  2. MikeWebb

    MikeWebb Member

    Jan 28, 2018
    I see that Ceph now has nvme, ssd, and hdd device classes. A quick Google search turned up a few hits showing how to create CRUSH maps and rules for device-class-based pools. Sorry I can't help more; I'm trying not to go tooooo far down the Ceph rabbit hole. I'm focusing on OSD nodes with mixed SSD and HDD, but it sounds like a similar problem needing a similar solution.
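    For reference, the device-class route can be done entirely from the Ceph CLI. A rough sketch — the rule name, pool name, OSD number, and PG count below are made-up examples, so adjust them for your cluster:

    ```shell
    # See which device class Ceph auto-detected for each OSD
    ceph osd crush tree --show-shadow

    # If an NVMe OSD was detected as plain "ssd", reassign its class
    # (osd.12 is a hypothetical example)
    ceph osd crush rm-device-class osd.12
    ceph osd crush set-device-class nvme osd.12

    # CRUSH rule that only places data on OSDs with the "nvme" class
    ceph osd crush rule create-replicated nvme-only default host nvme

    # New replicated pool bound to that rule (128 PGs is just an example)
    ceph osd pool create nvme-pool 128 128 replicated nvme-only
    ```

    With that rule in place, the existing HDD pools keep their own rules and the two sets of disks never mix.
    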
  3. Patrick

    Patrick Administrator
    Staff Member

    Dec 21, 2010
    I think @MikeWebb has the right idea for now.

    I agree @chilipepperz that Ceph admin in Proxmox having a feature like this would be useful. Ceph handles tiering, but if you want to explicitly play with this, you need to go outside the normal offering.
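    If you do build a separate pool outside the GUI, attaching it to Proxmox is still straightforward from the CLI. A sketch, assuming the hypothetical storage ID `nvme-rbd` and pool name `nvme-pool`:

    ```shell
    # Register the all-NVMe RBD pool as a Proxmox storage backend
    pvesm add rbd nvme-rbd --pool nvme-pool --content images,rootdir

    # Confirm the new storage shows up alongside the existing ones
    pvesm status
    ```

    After that the NVMe-backed storage appears as a normal target when creating VM disks or containers.
    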