Ceph BlueStore over RDMA performance gain

Discussion in 'Linux Admins, Storage and Virtualization' started by EluRex, Jun 2, 2018.

    I want to share the following test results with you.

    4-node PVE cluster with 3 Ceph BlueStore nodes, 36 OSDs in total:
    1. OSD data device: Seagate ST6000NM0034
    2. block.db & block.wal device: Samsung SM961 512GB
    3. NIC: Mellanox ConnectX-3 VPI, dual-port 40 Gbps
    4. Switch: Mellanox SX6036T
    5. Network: IPoIB, with separate public and cluster networks (ceph.conf sketch below)
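    For reference, this is roughly what the relevant ceph.conf settings look like; the RDMA device name and the subnets are placeholders, so adjust them to your own HCA and networks (the Ceph daemons also need an unlimited memlock limit for RDMA):

        [global]
        # switch the async messenger from the default TCP (async+posix) to RDMA
        ms_type = async+rdma
        # RDMA device as reported by ibv_devices (placeholder value)
        ms_async_rdma_device_name = mlx4_0
        # separate public and cluster networks, both over IPoIB (placeholder subnets)
        public_network = 10.10.10.0/24
        cluster_network = 10.10.20.0/24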
    This screenshot shows that Ceph over RDMA is successfully enabled:
    [screenshot]

    Ceph over RDMA - rados bench -p rbd 60 write -b 4M -t 16 (60-second run, 4 MB objects, 16 concurrent writers)
    2454.72 MB/s

    Standard TCP/IP - rados bench -p rbd 60 write -b 4M -t 16
    2053.9 MB/s
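    Both runs used the identical rados bench command; for the TCP baseline the messenger is switched back and the daemons restarted, roughly like this (a sketch, assuming the usual PVE config path and systemd target):

        # in /etc/pve/ceph.conf set:  ms_type = async+posix   (the default TCP messenger)
        # then restart the Ceph daemons on every node so the change takes effect
        systemctl restart ceph.target
        # and repeat the identical benchmark
        rados bench -p rbd 60 write -b 4M -t 16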

    Total performance gain is about 20% (2454.72 MB/s vs 2053.9 MB/s).

    Total pool performance with 4 rados bench instances running in parallel - rados bench -p rbd 60 write -b 4M -t 16 (launch sketch below)
    4856.72 MB/s
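    The 4856.72 MB/s is the combined throughput of the four instances started at the same time; a minimal single-host sketch of the idea (the --run-name values are hypothetical and only keep the parallel instances from clashing on their benchmark metadata objects):

        # four parallel write benchmarks against the same pool
        for i in 1 2 3 4; do
            rados bench -p rbd 60 write -b 4M -t 16 --run-name "bench-$i" &
        done
        wait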
     
