New ceph cluster - recommendations?

Discussion in 'Linux Admins, Storage and Virtualization' started by mbosma, Sep 5, 2019.

  1. mbosma

    I'm about to build my first production Ceph cluster after goofing around in the lab for a while.
    I'd like some recommendations on hardware and setup before I rush to buy anything and make the wrong decisions.

    My goal is to start small with 3 nodes, use Ceph for daily tasks, and expand as I need more performance and storage.

    The hw setup (per node):
    1x Intel Xeon Silver 4109T
    2x 32GB DDR4 2666MHz
    1x Supermicro CSE-116AC2-R706WB
    1x Supermicro X11SPW-TF-B
    1x LSI 9300-8i
    1x Intel XL710-QDA2
    8x Samsung SSD PM883 960GB
    1x Samsung PM983 960GB
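
    On memory: with 8 OSDs per node, a rough budget against the BlueStore default osd_memory_target of ~4GB per OSD comes to ~32GB, so 64GB leaves headroom for the OS, MON/MGR daemons and page cache. A minimal ceph.conf sketch in case you ever want to tune it down (the value shown is just the default):

        [osd]
        # ~4GB memory target per OSD daemon (the BlueStore default);
        # 8 OSDs x 4GB = ~32GB out of 64GB per node
        osd_memory_target = 4294967296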

    Networking:
    2x Juniper QFX5100-24Q-3AFO

    Server Setup:
    - OS on the PM983 (Ubuntu or Debian)
    - PM883s as OSDs
    - 2x 40Gbit bond with VLANs for the Ceph public and cluster networks
    - 2x 1Gbit bond for management
    - Replication set to size=3 and min_size=2 (see the sketch after this list)
    - Failure domain = host
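
    To make the replication and failure-domain plan concrete, here's a minimal sketch, assuming a pool named "rbd" and the stock replicated_rule (which already uses host as the failure domain); the PG count is a placeholder you'd size with pgcalc:

        ceph osd pool create rbd 256 256 replicated
        ceph osd pool set rbd size 3       # three copies
        ceph osd pool set rbd min_size 2   # keep serving I/O with two copies left
        ceph osd crush rule dump replicated_rule   # confirm the host failure domain

    The public/cluster split over the VLANs would then be declared in ceph.conf, with example subnets:

        [global]
        public_network  = 10.0.10.0/24    # client-facing VLAN (example subnet)
        cluster_network = 10.0.20.0/24    # replication VLAN (example subnet)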

    My first concern is a lack of CPU power when putting all the Ceph services on these 3 nodes.
    Would this 8-core CPU suffice, or should I look into getting a CPU with more cores?

    I've read mixed opinions about using 3-node clusters; would anyone mind sharing their advice on this?

    I hope you guys can give me some advice on this setup or Ceph in general.
    If you need any further information, I'll be happy to provide it.

    Michael
     