So, I've been trying to figure out a problem with my lab design. I currently have a 2-node ESXi vSAN setup with 4x 365GB FIO2, 4x 500GB Cons S.2, and 4x Intel S3610 400GB. Great. But I'm expanding, and based on what I could get my hands on, I will be expanding using Hyper-V and SCVMM. Also great. The problem comes in with what I have available to create my S2D cluster.

Just a disclaimer: what I'm about to discuss I would never do in a production environment (if it can even be done). I have a lot of experience with S2D and it's been great for most of the people I've worked with. I always set it up as a traditional all-flash or hybrid deployment using physical disks, and I've never ventured away from that. For my home lab, though, I'm looking to make the most out of my setup (and money) while still maintaining performance AND redundancy.

I have two physical servers that will act as clustered storage. Each has 3x 1.6TB P3608 (essentially 2x 765GB Intel NVMe each), 6x Intel 1.6TB S3610, and 12x 500GB Cons S.2. For the most part, that will do fine: I can lose an entire server and still have storage available. However, I've been looking at doing something kind of radical and strange, but I think it would technically work, and I would gain higher storage efficiency while still maintaining resiliency and most of the performance.

I want to create 16 virtual machines spread across the two current ESXi nodes (2 VMs each) and the two S2D nodes (6 VMs each), for a total of 16 VM-based SOFS nodes with 1.0-1.4 TB each. The reason I'm doing this is that I need to be able to lose an entire server for maintenance at any given time, and I was going to use parity+mirroring, which gives me more efficiency but requires a minimum of 4 nodes to work correctly. The diagram below (thrown together quickly, so try to ignore some of the naming issues) highlights what I'm after...
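For a rough sense of the efficiency trade-off I'm chasing, here's a back-of-the-envelope calculation. The node count and per-VM capacity come from the plan above (I used 1.2 TB as the midpoint of 1.0-1.4 TB); the efficiency ratios are idealized textbook figures, not exact S2D numbers, which vary with column count and layout:

```python
# Back-of-the-envelope usable capacity for the proposed 16-node virtual SOFS.
# Assumptions: 16 virtual nodes at 1.2 TB each; efficiency ratios are
# idealized (real S2D dual parity efficiency depends on column count).
NODES = 16
TB_PER_NODE = 1.2
raw_tb = NODES * TB_PER_NODE

resiliency = {
    "two-way mirror": 1 / 2,                         # 2 copies of every block
    "three-way mirror": 1 / 3,                       # 3 copies
    "dual parity (idealized)": (NODES - 2) / NODES,  # n-2 data columns per stripe
}

for name, eff in resiliency.items():
    print(f"{name:25s} usable ~ {raw_tb * eff:5.1f} TB ({eff:.0%})")
```

The point is just that parity-based layouts recover a lot of the capacity that a plain mirror burns, which is why I want the 4+ node minimum that mirror-accelerated parity needs.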
The goal with this is to gain some redundancy. For example, if I lose a host (or even two), one hypervisor could in theory handle most of the additional virtual machines (although with much heavier load placed on the underlying disks). The VM disk sizes will be kept low enough that, in the event of a failure, I have storage available to handle the additional machines. My question is: would this work? I have seen many multi-failure scenarios where a 16-node cluster could tolerate some pretty crazy stuff, but those are built on supported Microsoft configurations, which this would not be, for many reasons. Looking at it, at first glance it seems to make sense and like it would work. I know it's crazy, and might just be stupid, but I'm curious if anyone else has any thoughts about it.
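One way to sanity-check the "lose an entire server" requirement is to model which virtual S2D nodes share a physical host. The toy model below (hypothetical VM names; placement taken from the plan: 2 VMs per ESXi host, 6 per S2D host) counts how many cluster nodes a single host failure takes down versus the 2 simultaneous node failures that dual parity or three-way mirror tolerate. Real S2D addresses exactly this with fault domain awareness (e.g. the `Set-ClusterFaultDomain` cmdlet), so copies land on different physical hosts:

```python
# Toy fault-domain model for the proposed 16-VM virtual SOFS.
# Placement per the plan: 2 ESXi hosts x 2 VMs, 2 S2D hosts x 6 VMs.
# Host and VM names are hypothetical placeholders.
placement = {
    "esx1": ["vm01", "vm02"],
    "esx2": ["vm03", "vm04"],
    "s2d1": [f"vm{n:02d}" for n in range(5, 11)],   # 6 virtual nodes
    "s2d2": [f"vm{n:02d}" for n in range(11, 17)],  # 6 virtual nodes
}

TOLERATED = 2  # dual parity / three-way mirror survive 2 simultaneous node losses

for host, vms in placement.items():
    lost = len(vms)
    verdict = "survivable" if lost <= TOLERATED else "EXCEEDS tolerance without host-aware fault domains"
    print(f"lose {host}: {lost} virtual nodes down -> {verdict}")
```

This shows the catch: losing one S2D host takes out 6 of the 16 virtual nodes at once, which blows past the 2-failure tolerance unless the cluster is told which VMs share a chassis, so fault-domain configuration would be the make-or-break piece of this design.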