Ceph low performance

Discussion in 'Linux Admins, Storage and Virtualization' started by sander9, Sep 17, 2018.

  1. sander9

    sander9 New Member

    Joined:
    Nov 20, 2017
    Messages:
    4
    Likes Received:
    0
    Hello,

    We have a separate Ceph cluster and a separate Proxmox cluster (separate server nodes). I want to know whether the performance we are getting is normal or not; my impression was that it could be much better with the hardware we are using.

    So is there any way we can improve it with configuration changes?

    The performance we get from inside the virtual machines is roughly:
    Sequential: Read 642.6 MB/s, Write 459.8 MB/s
    4K single thread: Read 4.342 MB/s, Write 15.45 MB/s
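
    For reference, a roughly equivalent fio run inside the VM would look like the sketch below; the test file path, sizes and runtimes are placeholders and may not match exactly how we measured the numbers above.

    Code:
    # Sequential throughput, large blocks, deep queue:
    fio --name=seq-read --filename=/root/fio-test --size=4G --bs=4M --rw=read \
        --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based
    fio --name=seq-write --filename=/root/fio-test --size=4G --bs=4M --rw=write \
        --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based

    # 4K random I/O, single thread, queue depth 1:
    fio --name=4k-read --filename=/root/fio-test --size=1G --bs=4k --rw=randread \
        --ioengine=libaio --direct=1 --iodepth=1 --runtime=60 --time_based
    fio --name=4k-write --filename=/root/fio-test --size=1G --bs=4k --rw=randwrite \
        --ioengine=libaio --direct=1 --iodepth=1 --runtime=60 --time_based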

    The hardware we are using:
    4 x OSD nodes, per node:
    - 96 GB RAM
    - 2 x 6-core CPUs (with HT), 2.6 GHz
    - 6 x SM863 960 GB (one BlueStore OSD per SSD)
    - 2 x 10 Gbit SFP+ (1 x 10 Gbit for the public/storage network, 1 x 10 Gbit for replication)

    3 x monitor nodes, per node:
    - 4 GB RAM
    - Dual-core CPU (with HT)
    - Single 120 GB Intel enterprise SSD
    - 2 x 1 Gbit network (active/backup)

    Replication (pool size): 2
    Ceph version: 12.2.8
    Jumbo frames enabled
    Ceph debug/logging options disabled in ceph.conf (this improved things a little)
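
    The logging change is essentially setting the debug levels of the chattier subsystems to 0/0; the runtime equivalent is roughly the commands below. The list of subsystems is illustrative, not the exact set from our ceph.conf.

    Code:
    # Same effect applied at runtime (Luminous syntax); subsystem list is illustrative.
    ceph tell osd.* injectargs '--debug_ms 0/0 --debug_osd 0/0 --debug_bluestore 0/0 --debug_rocksdb 0/0'
    ceph tell mon.* injectargs '--debug_ms 0/0 --debug_mon 0/0'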

    All Proxmox nodes are connected with 1 x 10 Gbit SFP+.

    Is there any configuration or setting we can change to improve performance, or is this the maximum we can get out of this hardware? The 4K reads/writes in particular seem slow.
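
    To rule out the VM and RBD layers I can also benchmark the pool directly from a client node; I assume something like the following is the way to do that, with "rbd" standing in for our pool name.

    Code:
    # Raw pool benchmark; "rbd" is a placeholder pool name.
    rados bench -p rbd 30 write --no-cleanup    # 4 MB writes for 30 s
    rados bench -p rbd 30 seq                   # sequential reads of the objects just written
    rados bench -p rbd 30 rand                  # random reads
    rados -p rbd cleanup                        # remove the benchmark objects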

    I have also been wondering whether it would help to add two OSD nodes, each with a fast NVMe SSD, and use them as a cache pool in front of the normal SSD pool. Or would that make it even slower?
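
    If a cache tier is worth trying, I assume the setup would be roughly the commands below, with "ssd-pool" and "nvme-cache" as placeholder pool names; I am not sure whether it would actually help for small random I/O.

    Code:
    # Hypothetical cache-tier setup; pool names are placeholders.
    ceph osd tier add ssd-pool nvme-cache
    ceph osd tier cache-mode nvme-cache writeback
    ceph osd tier set-overlay ssd-pool nvme-cache
    ceph osd pool set nvme-cache hit_set_type bloom
    ceph osd pool set nvme-cache target_max_bytes 500000000000    # ~500 GB cache limit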

    Thanks in advance,

    Kind regards,

    Sander
     
    #1
  2. sander9

    sander9 New Member

    Joined:
    Nov 20, 2017
    Messages:
    4
    Likes Received:
    0
    At 4K I only get 406 IOPS write and 835 IOPS read.

    I would think this configuration should be capable of much more than that?
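
    At queue depth 1 those numbers work out to roughly 1000 / 406 ≈ 2.5 ms per 4K write and 1000 / 835 ≈ 1.2 ms per 4K read. A latency-focused fio job like the sketch below should confirm that directly; the test file path and runtime are placeholders.

    Code:
    # 4K random write at queue depth 1; completion latency should come out
    # around 2.5 ms if the 406 IOPS figure holds.
    fio --name=4k-qd1-lat --filename=/root/fio-test --size=1G --bs=4k --rw=randwrite \
        --ioengine=libaio --direct=1 --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based --group_reporting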
     
    #2