If you had a two-node Supermicro chassis, with 16x Enterprise SSDs, what kind of NAS or fail-over Storage/Performance setup would you do?
/TL;DR
So, I've been curating hardware for a network build using Proxmox and Ceph (small 240 GB enterprise SATA SSDs to satisfy Ceph's fsync requirements for the block.db, plus HDDs for the data backend). I have three low-end servers to use for the cluster. However, last week someone donated a Supermicro dual-node server with 24x drive bays. Figured I'd throw in some RAM and just use them as two more Proxmox compute nodes.
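For context, the block.db-on-SSD layout described above would be built per-OSD with `ceph-volume`, roughly like this. This is a minimal sketch; the device names (`/dev/sdb` for the HDD, `/dev/sda1` for the SSD partition) are placeholders, not the actual hardware:

```shell
# Hypothetical device names -- adjust to the real disks on each node.
# Data lives on the HDD; the RocksDB metadata (block.db) goes on a
# partition of the small enterprise SATA SSD for fast, safe fsyncs:
ceph-volume lvm create --data /dev/sdb --block.db /dev/sda1
```

One SSD can typically host the block.db for several HDD-backed OSDs, which is why small-capacity enterprise drives work fine in that role.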
The machine arrived and... whoa, 16x SAS2 Enterprise 800 GB SSDs!
Ceph needs at least 3x nodes, 5x to be safe. However:
- None of the other 3x nodes has a SAS controller, nor can they fit the 2.5" drives
- The new 2-node chassis can't fit 3.5" drives
IOW, I can't find a mix of hardware that would match across all 5x nodes.
That like... uh, blows the entire plan outta the water for Ceph - I can't make a matching 5x node Ceph cluster. LOL Well, I can still do the original 3-node Ceph cluster; I just don't know how to fully utilize this new dual-node server.
Besides a simple mirror of VMs/data in a failover, what kind of setup can you think of that fully utilizes this stack of SSDs? Is there some vSAN/Ceph-like block layer I could use? I know something like Portworx for Kubernetes could use them as block devices, albeit with the same high risks of running a 2-node Ceph/vSAN cluster (and I don't plan on deploying Kubernetes for this client).
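For what it's worth, the "simple mirror of VMs in a failover" baseline can be done with Proxmox's built-in storage replication (`pvesr`), assuming both nodes of the chassis run local ZFS pools. A rough sketch; the VM ID (`100`), target node name (`nodeB`), and schedule are placeholders:

```shell
# Requires local ZFS datastores on both nodes of the dual-node chassis.
# Replicate VM 100's disks to the sibling node every 15 minutes:
pvesr create-local-job 100-0 nodeB --schedule "*/15"

# Show configured replication jobs and when they last synced:
pvesr status
```

Combined with Proxmox HA, this gives async failover with up to one replication interval of data loss, which is the trade-off versus a true shared/distributed block layer.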