"No free ones to my knowledge..."

This is a viable option since the $200/year VMUG EvalExperience includes VSAN AFA.
VMware VSAN?
"This is a viable option since the $200/year VMUG EvalExperience includes VSAN AFA."

I am running vSAN 6.x AFA as well on my 3-node vSphere Enterprise Plus cluster. It SCREAMS with an S3700 200GB for the cache tier and an S3610 800GB for the capacity tier.
Side note: I'm running VSAN AFA on 3 nodes, each with 1 Samsung 950 Pro (write/performance tier) and 4 Samsung PM853Ts (read/capacity tier). The 950s aren't DC or Enterprise class and lack PLP (power loss protection), but I have the cluster on a UPS (and it's just a home lab).
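For anyone wanting to build a disk-group layout like the ones above from the command line, here is a minimal sketch, assuming an ESXi 6.x host and placeholder device IDs (substitute real IDs from `esxcli storage core device list`). In an all-flash setup the capacity SSDs are tagged as capacity-tier flash before the disk group is created; this is a generic illustration, not a vendor-blessed procedure.

```python
# Minimal sketch (assumptions: ESXi 6.x shell, placeholder device IDs below).
# Tags the capacity SSDs for all-flash use, then builds one vSAN disk group:
# one cache-tier device plus the capacity-tier devices.
import subprocess

CACHE_DEVICE = "naa.cache_device_id"      # e.g. the 950 Pro / S3700
CAPACITY_DEVICES = [                      # e.g. the PM853Ts / S3610
    "naa.capacity_device_1",
    "naa.capacity_device_2",
]

def run(cmd):
    """Echo and run an esxcli command, stopping on any error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Mark each capacity SSD as capacity-tier flash (all-flash configs only).
for dev in CAPACITY_DEVICES:
    run(["esxcli", "vsan", "storage", "tag", "add", "-d", dev, "-t", "capacityFlash"])

# Create the disk group: -s is the cache device, -d is repeated per capacity device.
cmd = ["esxcli", "vsan", "storage", "add", "-s", CACHE_DEVICE]
for dev in CAPACITY_DEVICES:
    cmd += ["-d", dev]
run(cmd)
```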
"Or if you want to build your own there are a lot of options out there. You could even do something crazy if you wanted, like setting up 4 Hyper-V nodes, passing the disks in each node through to some type of Linux, creating a Ceph/GlusterFS/other storage cluster across those VMs, and presenting that storage back to all 4 hosts as shared storage."

"Well, that's insane for sure. Passing storage through to a VM and back sounds like a recipe for disaster."

Actually, that is exactly the architecture of Nutanix (at least when not on Acropolis), and I'm reasonably certain a few other hyperconverged stacks as well. Any hyperconverged stack that supports vSphere as a hypervisor (except VSAN) runs in that mode, as no one except VMware can add that kind of functionality at the host level. Most of the stacks that support Hyper-V also run in that model - I believe mostly because they are cross-hypervisor platforms, and keeping their intelligence isolated to a VM is what allows them to support multiple hypervisors easily.
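Purely to illustrate the "build your own" idea quoted above, here is a minimal sketch of the storage-VM half, assuming four Linux guests (hypothetical hostnames stor1-stor4) that each already have a passed-through disk formatted and mounted under /bricks/brick1. It uses stock GlusterFS commands to pool the bricks into one replicated volume that could then be presented back to the hosts; it is not tied to any particular hyperconverged product.

```python
# Minimal sketch (assumptions: 4 storage VMs named stor1..stor4, each with a
# passed-through disk mounted at /bricks/brick1, GlusterFS already installed).
import subprocess

NODES = ["stor1", "stor2", "stor3", "stor4"]   # hypothetical VM hostnames
BRICK = "/bricks/brick1/data"                  # brick directory on each node
VOLUME = "labshared"                           # hypothetical volume name

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# From stor1: join the other storage VMs into one trusted pool.
for node in NODES[1:]:
    run(["gluster", "peer", "probe", node])

# Create a replica-2 volume across all four bricks, then start it.
bricks = [f"{node}:{BRICK}" for node in NODES]
run(["gluster", "volume", "create", VOLUME, "replica", "2"] + bricks + ["force"])
run(["gluster", "volume", "start", VOLUME])

# The hypervisor hosts could then mount this volume as shared storage,
# e.g. via NFS or an iSCSI gateway sitting in front of it.
```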
"Each node must contain 1+1 SSD/HDD. It appears that there is no way to run all-flash."

Out of curiosity, is there any real issue with this? It doesn't touch the mechanical disks until it runs out of SSD, and placement is driven purely by the data heatmap. If you're doing DB stuff it should be even less of an issue. Also, considering they put out this "All Flash" system back in 2014, there must be some way to run an all-flash array. Perhaps check the forums?
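As a toy illustration of the heatmap idea (assumed, simplified logic - not the actual tiering algorithm of any product), the placement decision amounts to keeping the hottest extents on flash until the flash tier fills and letting everything colder land on the mechanical disks:

```python
# Toy model only: rank extents by recent access count ("heat") and keep the
# hottest ones on the SSD tier until its capacity is exhausted; everything
# colder goes to the HDD tier. Real products use far richer policies.
from typing import Dict, List, Tuple

def place_extents(heatmap: Dict[str, int],
                  extent_size_gb: int,
                  ssd_capacity_gb: int) -> Tuple[List[str], List[str]]:
    """Return (extents_on_ssd, extents_on_hdd) given per-extent access counts."""
    hottest_first = sorted(heatmap, key=heatmap.get, reverse=True)
    ssd_slots = ssd_capacity_gb // extent_size_gb
    return hottest_first[:ssd_slots], hottest_first[ssd_slots:]

# Example: 4 extents of 1 GB each and 2 GB of SSD -> the two hottest stay on flash.
heat = {"db-index": 900, "db-data": 500, "iso-images": 3, "backups": 1}
ssd, hdd = place_extents(heat, extent_size_gb=1, ssd_capacity_gb=2)
print("SSD tier:", ssd)   # ['db-index', 'db-data']
print("HDD tier:", hdd)   # ['iso-images', 'backups']
```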
"Actually, that is exactly the architecture of Nutanix (at least when not on Acropolis), and I'm reasonably certain a few other hyperconverged stacks as well. Any hyperconverged stack that supports vSphere as a hypervisor (except VSAN) runs in that mode, as no one except VMware can add that kind of functionality at the host level. Most of the stacks that support Hyper-V also run in that model - I believe mostly because they are cross-hypervisor platforms, and keeping their intelligence isolated to a VM is what allows them to support multiple hypervisors easily."

Actually, that's incorrect. For vSphere we use DirectPath I/O, which is much different from passthrough disks with Hyper-V. We do PCIe passthrough to each of our controller VMs for direct storage access.
If you ever want to look at a really convoluted I/O path, get a good technical deep dive on Compellent FluidCache working on top of ESX.
"Actually, that's incorrect. For vSphere we use DirectPath I/O, which is much different from passthrough disks with Hyper-V. We do PCIe passthrough to each of our controller VMs for direct storage access."

I never went into detail on how the disks are passed through to the VM, only that they are. Whether you pass individual drives under Hyper-V or an entire SAS HBA under ESX, the architecture remains the same and is as I described above - the drives are passed to the VM(s), where software pools them together (and adds redundancy and all the other fancy bits) and presents a shared volume back to the host(s). From a high-level point of view they all work the same.
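To make the "pool it and hand it back" pattern concrete, here is a minimal sketch of what the software inside one such storage VM might do, assuming four passed-through drives show up as /dev/sdb through /dev/sde: pool them with mdadm, put a filesystem on top, and export the result back to the hypervisor hosts over NFS. This is a generic Linux example, not the mechanism any particular vendor's controller VM actually uses.

```python
# Minimal sketch (assumptions: a Linux storage VM that received four
# passed-through drives as /dev/sdb..sde, with mdadm and an NFS server installed).
import subprocess

DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # passed-through disks
ARRAY = "/dev/md0"
EXPORT = "/export/shared"
HOST_SUBNET = "192.168.10.0/24"   # hypothetical storage network for the hosts

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Pool the drives: RAID10 adds redundancy while keeping reasonable write speed.
run(["mdadm", "--create", ARRAY, "--level=10",
     f"--raid-devices={len(DRIVES)}"] + DRIVES)
run(["mkfs.xfs", ARRAY])
run(["mkdir", "-p", EXPORT])
run(["mount", ARRAY, EXPORT])

# Present the pooled volume back to the hosts as an NFS share.
run(["exportfs", "-o", "rw,no_root_squash", f"{HOST_SUBNET}:{EXPORT}"])
```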
"Can it run six SSDs per node with four nodes, then?"

The max drive count per node is 4 drives. The reason for this is that Community Edition does "LUN" passthrough rather than PCI passthrough, which impacts queue depth greatly.
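As a rough back-of-the-envelope illustration of why that matters (the numbers below are assumptions for illustration, not measured values for Nutanix CE or any specific HBA): per-LUN passthrough typically caps each virtual disk at a small per-device queue depth, while passing the whole controller lets the VM use the HBA's much larger native queue.

```python
# Back-of-the-envelope comparison only; both queue depths are assumed,
# typical-looking values rather than specs for any particular product.
PER_LUN_QUEUE_DEPTH = 32        # common per-virtual-disk / per-LUN cap
HBA_QUEUE_DEPTH = 600           # e.g. a mid-range SAS HBA passed through whole
DRIVES_PER_NODE = 4             # the CE limit discussed above

lun_path_outstanding = PER_LUN_QUEUE_DEPTH * DRIVES_PER_NODE
hba_path_outstanding = HBA_QUEUE_DEPTH      # shared across all drives behind it

print(f"LUN passthrough: up to {lun_path_outstanding} outstanding I/Os per node")
print(f"HBA passthrough: up to {hba_path_outstanding} outstanding I/Os per node")
# -> 128 vs 600 with these assumptions, which is why PCI/VT-d passthrough is
#    preferred when the hypervisor and license allow it.
```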
"I never went into detail on how the disks are passed through to the VM, only that they are. Whether you pass individual drives under Hyper-V or an entire SAS HBA under ESX, the architecture remains the same and is as I described above - the drives are passed to the VM(s), where software pools them together (and adds redundancy and all the other fancy bits) and presents a shared volume back to the host(s). From a high-level point of view they all work the same."

The data-path resiliency is the same, but the core mechanism of handing physical hardware to the CVM is not: passthrough disks do not use the native storage driver the way VT-d allows, since with VT-d you're passing the actual controller to the VM.
Yes, there are performance implications in how you accomplish that, and VT-d is good. But it's still a somewhat odd I/O path, especially if you understand how the Nutanix CVM Autopathing feature works.
"You mean we can't put a heavy load on it and push the system to the limit with full HA?!"

You can do what you want... again, I urge you guys/gals to research and see what CE offers. You can run 1, 3, or 4 nodes, and yes, it provides HA, etc. I'm not saying performance is bad by any means, but don't expect it to compare to a real Nutanix solution, given the limitations I outlined.
BTW, Derrick Ho is awesome!