I've used my "roll your own" system for providing block storage to ESXi hosts using the iSCSI target daemon on CentOS for about 8 years now, but I'm looking to roll out a new physical box and would like some advice on alternatives.
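For context, my current setup is roughly this kind of thing, sketched here assuming a tgtd-style `/etc/tgt/targets.conf` (the IQN, LVM volume path, and subnet are placeholders, not my actual values):

```
# Export one LVM logical volume as an iSCSI LUN to the ESXi subnet
<target iqn.2016-01.example.com:storage.esxi-lun1>
    backing-store /dev/vg0/esxi-lun1
    initiator-address 10.0.0.0/24
</target>
```

Each ESXi host points its software iSCSI adapter at the target and formats the LUN as a VMFS datastore. It works, but everything below the target layer (RAID, caching, tiering) is assembled by hand.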
Where this system lets me down is:
- There is no good storage tiering system. I tried several SSD caching/tiering solutions over the years, but discarded all of them because each one made the underlying storage unavailable if the SSD cache failed in any way.
- When I started, I used real hardware RAID (LSI cards with battery-backed cache) because software RAID didn't seem ready for prime time, and I'm still tied to that hardware.
- Because of both of the above, it's not as scalable or as inexpensive as I'd like for the performance I get.
What I'm looking for:
- I don't want a fully packaged system that bundles hardware and software. I want to build from parts, since I have chassis with plenty of drive bays, plenty of HBAs and RAID cards, and a reasonable amount of core parts (motherboard, CPU, RAM, NICs). What I don't have, I can buy (like Optane). So I really want software recommendations.
- I only care about storage; compute is taken care of with ESXi managed by vCenter, and I don't want to move it to another platform. I have no problem using "all-in-one" software as long as it doesn't require me to use its compute system in order to serve storage to other systems.
- The software needs to be very cheap (on the order of $100/node), with no pricing based on total storage. Free is of course better, and I'd prefer open source, but high-quality closed-source freeware beats average open source.
- In the long term, it would be nice to scale horizontally, with multiple nodes controlled through a single interface. It would also be useful if the same system could turn spare controllers/disks in my compute nodes into additional storage.