I hear you; I wanted a small Storage Spaces config for a backup unit I was building in my lab, but it just didn't make sense compared to putting in a local RAID, if one accepts a single node with resilience as opposed to two nodes without local resilience. I'm somewhat surprised that StarWind vSAN supports hardware RAID - I really need to look at that at some stage.

Agreed, I should have elaborated on this point. My concern is not that one node is a failure domain, which is completely normal for a 2-node configuration, but that all disks in the cluster also fall under FTT=1. For example, with HPE VSA or StarWind vSAN I can configure local redundancy using a hardware RAID controller. Imagine a 2-node all-flash setup where you can lose one of the nodes entirely, plus two disks in the surviving node, with little to no impact on production availability.
I know that S2D features a self-healing capability via the "Reserved Capacity" mechanism. However, until rebalancing finishes, production would be running with zero redundancy.
That's why I think a "good-enough" S2D cluster starts at four all-flash nodes in a mixed-resiliency configuration (mirror + parity).
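For reference, mixed-resiliency (multi-resilient) volumes on S2D are created by spanning storage tiers - a mirror tier for hot writes and a parity tier for capacity. A rough sketch of what that looks like in PowerShell (the friendly names, pool wildcard, and tier sizes here are placeholders, not taken from any specific setup):

```powershell
# Sketch: create a multi-resilient volume on a 4+ node S2D cluster.
# "Performance" defaults to three-way mirror, "Capacity" to dual parity.
New-Volume -FriendlyName "MixedVol01" `
    -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 1TB, 4TB
```

ReFS handles rotating data between the mirror and parity portions in the background, which is why this layout only really makes sense from four nodes up (dual parity needs four fault domains).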
I run an HP VSA in my lab; it's RAID-10 throughout.