Hi there,
I already almost hijacked boe's thread, so I thought I'd start a new one here. I want to learn, experiment, and try building a Ceph cluster as a storage network (SAN) for my ESXi compute cluster lab. Since I already have 24x 8TB WD HDDs lying around (I'd buy one more so they split evenly across the 5 servers), my idea is to get 5x DL380p Gen8 (they are quite inexpensive these days) and equip each of them with:
5x WD RED 8TB (Ceph Storage Disk)
1x Samsung 512GB 850 Pro SSD or Samsung 500GB 960 Evo (Ceph Cache Disk)
1x HP InfiniBand FDR/Ethernet 10/40Gb 2-port 544FLR-QSFP
1x HPE Dual 120GB Value Endurance Solid State Drives M.2 Enablement Kit 777894-B21 (RHEL or CentOS System Disk)
and to connect everything:
2x Mellanox SX6036 36ports QSFP 56Gb/s managed
After setting up the Ceph cluster I would use iSCSI with multipathing to connect it to my ESXi compute cluster. The compute side isn't on 40Gbps yet, but with iSCSI multipathing I would at least be able to use the full dual 10Gbps I have.
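Here is a quick back-of-the-envelope sizing check I put together (a rough Python sketch; the replication factor, full ratio, and per-disk throughput are assumed values, nothing is measured yet):

```python
# Back-of-envelope sizing for this plan (assumed numbers, nothing measured yet).

nodes = 5
osds_per_node = 5
drive_size_tb = 8
replication = 3          # Ceph default replicated pool size
full_ratio = 0.85        # keep headroom, Ceph blocks writes when OSDs get near full

raw_tb = nodes * osds_per_node * drive_size_tb
usable_tb = raw_tb / replication * full_ratio
print(f"raw capacity       : {raw_tb} TB")
print(f"usable (3x, 85%)   : {usable_tb:.0f} TB")

# Client side: dual 10 Gbps with iSCSI multipathing (round-robin over both paths).
paths, gbps_per_path = 2, 10
client_mb_s = paths * gbps_per_path * 1000 / 8
print(f"client link rate   : ~{client_mb_s:.0f} MB/s theoretical")

# Backend: 25 spinning disks at a rough ~150 MB/s sequential each,
# divided by 3 because every client write lands on three OSDs.
hdd_mb_s = nodes * osds_per_node * 150
print(f"HDD aggregate      : ~{hdd_mb_s} MB/s raw, ~{hdd_mb_s // 3} MB/s for 3x writes")
```

So roughly 200TB raw and somewhere around 55-60TB usable with 3x replication, and the dual 10Gbps links shouldn't be the first bottleneck for spinning disks.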
Possible gotcha / pitfalls:
- Where do I put the 2.5" drive if I go with that option instead of the NVMe?
- Does the HP 544FLR, which seems to be a ConnectX-3, really work with that switch?
- The HP controller connects to the backplane with only 2 cables, which makes me think it's 2x4 SAS lanes. So what happens if I attach 12 SATA disks, will it even work, or will it underperform? (See the rough numbers after this list.)
- The HP P420i controller has an HBA mode, which I hope will accept non-HP HDDs.
- I've never done Ceph before, so I am not sure whether 512GB is enough cache for that many 8TB disks. Would the NVMe change much compared to the SATA SSD performance-wise in Ceph? (Some rough sizing numbers below as well.)
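For the last two points, some rough numbers (again just assumptions: the ~4% block.db guideline is what the Ceph BlueStore docs suggest, and using one SSD per node as a shared journal/DB device is only one possible layout, not necessarily the best one):

```python
# Rough numbers for the backplane bandwidth and SSD sizing questions
# (assumed values; the ~4% DB guideline comes from the Ceph BlueStore docs).

# 1) Controller -> backplane: 2 cables x 4 SAS lanes, 6 Gbps per lane.
lanes = 2 * 4
lane_mb_s = 600                      # ~600 MB/s usable per 6 Gbps lane after 8b/10b encoding
backplane_mb_s = lanes * lane_mb_s
hdds, hdd_mb_s = 12, 150             # rough sequential rate per 8 TB SATA disk
print(f"backplane bandwidth : ~{backplane_mb_s} MB/s")
print(f"12 HDDs aggregate   : ~{hdds * hdd_mb_s} MB/s")   # spinning disks won't saturate it

# 2) One SSD shared by the 5 OSDs of a node, assuming it is used as a
#    journal / BlueStore DB+WAL device rather than as a cache tier.
ssd_gb = 512
osds_per_node = 5
per_osd_gb = ssd_gb / osds_per_node
db_guideline_gb = 0.04 * 8000        # often-quoted ~4% of an 8 TB OSD for block.db
print(f"SSD share per OSD   : ~{per_osd_gb:.0f} GB")
print(f"~4% DB guideline    : ~{db_guideline_gb:.0f} GB per OSD")
```

If the SSD ends up holding plain FileStore journals, a few GB per OSD is already enough, so 512GB looks fine for that. For BlueStore DB the ~4% guideline would suggest about 320GB per 8TB OSD, so ~100GB per OSD is on the small side, but I'd be happy to be corrected here.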
I also attached a little PDF, which I drew in Visio, as an idea of how to connect everything.