I'm running Ceph and trying to move towards NVMe for future expansion. I'm not super concerned with performance, and at least initially they'll probably be mixed with enterprise SATA SSDs - it just seems like the trend is towards NVMe, and the cost of a U.2 SSD isn't really any higher than other form factors.
Since I'm running Ceph, my goal is to minimize per-host overhead, and to a point more hosts are better than fewer. Servers actually marketed for hosting NVMe drives still seem to be pretty pricey.
Main requirements are at least 4-8GB RAM per NVMe, SFP+ support (can be via a NIC), and then as many enterprise NVMe drives (PLP, decent write endurance, used is fine) as possible per host to amortize the host overhead. CPU isn't much of a concern for these, and integrated graphics would be preferred just to keep the PCIe slots free.
I notice I can get used 64GB workstations like the Z640 for maybe $200 or so, with enough PCIe to handle an SFP+ NIC and at least one HBA, and there are a couple of options for putting NVMe drives in an x16 slot: U.2 adapters, M.2 carrier cards, and so on - typically 4 drives per x16 slot. Getting 4 large U.2 SSDs in a single host would be quite a bit of capacity. There are also proper tri-mode HBAs that will do U.2, though really I just care about NVMe (the SAS drives don't seem to be any cheaper). The Z640 at least seems to support bifurcation, which is probably something to keep an eye on for the cheaper passive adapter options.
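Rough back-of-the-envelope math for that layout, assuming one OSD per NVMe and sizing at the upper end of the 4-8GB-per-OSD range (the drive count and overhead figures below are just placeholder assumptions, not measurements):

```
# Rough RAM check for one Ceph host - all figures are assumptions
osds_per_host = 4        # four U.2 drives on a bifurcated x16 slot
gib_per_osd = 8          # upper end of the 4-8GB per NVMe target
os_overhead_gib = 8      # rough allowance for the OS and other daemons

total_gib = osds_per_host * gib_per_osd + os_overhead_gib
print(f"Plan for roughly {total_gib} GiB RAM")  # ~40 GiB, comfortably inside 64GB
```

So a 64GB box looks like it has headroom for 4 OSDs even with generous memory targets.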
Is this the best way to go for something like this? It seems like the systems targeted at U.2 storage are still new enough to be very expensive, even used, though some of those solutions can fit a very large number of drives and of course tons of RAM.