
miraculix

Active Member
Mar 6, 2015
No free ones to my knowledge...
VMware VSAN?
This is a viable option since the $200/year VMUG EvalExperience includes VSAN AFA.

Side note: I'm running VSAN AFA on 3 nodes, each with 1 Samsung 950 Pro (write/performance tier) and 4 Samsung PM853Ts (read/capacity tier). The 950s aren't DC or Enterprise class and lack PLP (power loss protection), but I have the cluster on a UPS (and it's just a home lab).
 

Patrick

Administrator
Staff member
Dec 21, 2010
You might also want to at least test Proxmox VE 4.0. It is much less fancy than the other options, but it has Ceph, GlusterFS and ZFS on Linux for primary storage, with KVM as the hypervisor. The 7-node cluster hosting STH and the forums right now is all NVMe, SAS and SATA SSD based, using a mix of NVMe ZFS mirrors and Ceph on the SAS/SATA SSDs.

It is Debian based, so it is certainly more of an open-source experiment than a solution from a big vendor.
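For anyone curious what that actually looks like, the rough shape of bringing a node into that kind of setup is below. This is only a sketch, driven from Python purely for illustration: the pvecm/zpool/pveceph commands are the stock Proxmox VE 4.x tooling, but exact syntax changes between versions, and every device name and IP here is a placeholder, so check the docs before running anything.

Code:
# Rough sketch only: joins a node to an existing Proxmox VE cluster, builds a
# local NVMe ZFS mirror for VM disks, and adds a SATA/SAS SSD as a Ceph OSD.
# Device names and the cluster IP are placeholders; verify the pvecm/pveceph
# syntax against the docs for your Proxmox version.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Join an existing cluster (the first node would use "pvecm create <name>").
run(["pvecm", "add", "10.0.0.1"])            # placeholder cluster IP

# Local ZFS mirror across two NVMe drives.
run(["zpool", "create", "nvme-mirror", "mirror",
     "/dev/nvme0n1", "/dev/nvme1n1"])        # placeholder devices

# Ceph on the SATA/SAS SSDs: a monitor on this node, one OSD per data disk.
run(["pveceph", "createmon"])
run(["pveceph", "createosd", "/dev/sda"])    # placeholder SSD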
 

whitey

Moderator
Jun 30, 2014
This is a viable option since the $200/year VMUG EvalExperience includes VSAN AFA.

Side note: I'm running VSAN AFA on 3 nodes, each with 1 Samsung 950 Pro (write/performance tier) and 4 Samsung PM853Ts (read/capacity tier). The 950s aren't DC or Enterprise class and lack PLP (power loss protection), but I have the cluster on a UPS (and it's just a home lab).
I am running vSAN 6.x AFA as well on my 3-node vSphere Enterprise Plus cluster. It SCREAMS with an S3700 200GB for the cache tier and an S3610 800GB for the capacity tier.
 

TuxDude

Well-Known Member
Sep 17, 2011
There are actually quite a few options out there for building your own HyperConverged cluster on top of open-source components. And besides being free, most of the open-source options have few if any restrictions on how they are configured (and yes, that also allows you to create some very bad configurations - they give you enough rope to hang yourself). Two solutions come to mind: Proxmox 4 (as @Patrick mentioned above) gives you a KVM+Ceph hyper-converged cluster with some other interesting features (LXC containers, ZFS, etc.), and oVirt (the open-source equivalent of Red Hat Enterprise Virtualization) gives you KVM+GlusterFS and, with a self-hosted engine, also fits into the HyperConverged space - it's also what I've now got running in my basement.

Or, if you want to build your own, there are a lot of options out there. You could even do something crazy if you wanted, like setting up 4 Hyper-V nodes, passing through the disks in each node to some type of Linux VM, creating a Ceph/GlusterFS/other storage cluster across those VMs, and presenting that storage back to all 4 hosts as shared storage (there's a rough sketch of the Gluster flavor of that below).

Really - HyperConverged is just the multi-node version of the All-in-One systems that have been popular for home use (the storage part being multi-node as well).
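To make the "crazy" option above a bit more concrete, here's a rough sketch of the storage-VM side of it, assuming one Linux storage VM per host with a passed-through disk formatted and mounted as a Gluster brick. The gluster commands are the standard CLI, but the hostnames, brick paths and replica count are all placeholders - treat it as an outline, not a recipe.

Code:
# Rough outline: build a replicated GlusterFS volume across 4 storage VMs (one
# per Hyper-V host, each with a passed-through disk mounted at /bricks/brick1),
# then mount it back on the hosts as shared storage. Names and paths are placeholders.
import subprocess

STORAGE_VMS = ["stor1", "stor2", "stor3", "stor4"]   # one Linux VM per host
BRICK = "/bricks/brick1/vmstore"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# From stor1: probe the other storage VMs into the trusted pool.
for peer in STORAGE_VMS[1:]:
    run(["gluster", "peer", "probe", peer])

# Replica-4 volume, one brick per storage VM (replica 2/3 plus an arbiter would
# be saner for capacity; 4-way replication just mirrors the 4-node example).
bricks = [f"{vm}:{BRICK}" for vm in STORAGE_VMS]
run(["gluster", "volume", "create", "vmstore", "replica", "4", *bricks])
run(["gluster", "volume", "start", "vmstore"])

# Each host then mounts it back as shared storage, e.g.:
#   mount -t glusterfs stor1:/vmstore /mnt/vmstore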
 

Patrick

Administrator
Staff member
Dec 21, 2010
Just wanted to add on to what @TuxDude mentioned:
  • Proxmox 4 is really just an easier way to get up and running with a KVM/LXC setup. The web GUI is really nice. There are a LOT of things I do not like, but after using it for years, the newer versions are much better, and it does save a lot of time over trying to cobble everything together yourself.
  • One other EXTREMELY interesting idea here: build an OpenStack cluster. Ubuntu Autopilot or Mirantis should get you up and running fairly quickly. Learning OpenStack is a much more marketable skill than learning Proxmox IMO.
Let's put it this way: I bought a C6220 and have a fifth node just for that project.
 

PigLover

Moderator
Jan 26, 2011
Agreeing with Patrick (with a small twist):
  • If your goal is a really functional, simple-to-manage-and-operate Hyperconverged lab, go with Proxmox 4 on 3-5 nodes (though for a cluster that small I'd set it up with Gluster rather than Ceph, as Ceph doesn't really shine until you get into 10+ nodes and 30+ OSDs).
  • If your goal is a cluster for learning marketable skills, set up OpenStack on a 5-node cluster (or, to really learn it, set it up with full HA on at least 7 nodes).
  • But if you want something practical for home/SMB, an All-in-one (single node hyperconverged?) with better hardware is probably the best answer.
 

TuxDude

Well-Known Member
Sep 17, 2011
Or, if you want to build your own, there are a lot of options out there. You could even do something crazy if you wanted, like setting up 4 Hyper-V nodes, passing through the disks in each node to some type of Linux VM, creating a Ceph/GlusterFS/other storage cluster across those VMs, and presenting that storage back to all 4 hosts as shared storage.
Well, that's insane for sure; passing storage through and then back out sounds like a recipe for disaster.
Actually, that is exactly the architecture of Nutanix (at least when not on Acropolis), and I'm reasonably certain of a few other HyperConverged stacks as well. Any HyperConverged stack that supports vSphere as a hypervisor (except VSAN) runs in that mode, as no one except VMware can add that kind of functionality at the host level. Most of the stacks that support Hyper-V also run in that model - I believe mostly because they are cross-hypervisor platforms, and keeping their intelligence isolated to a VM is what allows them to support multiple hypervisors easily.

If you ever want to look at a really convoluted IO path, get a good technical deep-dive on Compellent FluidCache working on top of ESX.
 

capn_pineapple

Active Member
Aug 28, 2013
each node must contain 1+1 SSD/HDD. It appears that there is no way to run all-flash.
Out of curiosity, is there any real issue with this? It doesn't touch the mechanical drives until the SSD tier fills up, and tiering is driven purely by the data heatmap. If you're doing DB stuff it should be even less of an issue. Also, considering they put out this "All Flash" system back in 2014, there must be some way to run an all-flash array. Perhaps check the forums?
 

msvirtualguy

Active Member
Jan 23, 2013
msvirtualguy.com
You CAN run all-flash with Nutanix CE. I've done it and had a single-node system up for a month before I moved to a three-node hybrid in favor of more capacity.
 

msvirtualguy

Active Member
Jan 23, 2013
msvirtualguy.com
Actually that is exactly the architecture of Nutanix (at least when not on Acropolis), and I'm reasonably certain also a few other HyperConverged stacks. Any HyperConverged stack that supports vSphere as a hypervisor (except VSAN) runs in that mode, as no-one except VMware can add that kind of functionality at the host level. Most of the stacks that support HyperV also run in that model - I believe mostly because they are cross-hypervisor platforms and keeping their intelligence isolated to a VM is what allows them to support multiple hypervisors easily.

If you ever want to look at a really convoluted IO path, get a good technical deep-dive on Compellent FluidCache working on top of ESX.
Actually, that's incorrect. For vSphere we use DirectPath I/O, which is much different from passthrough disks with Hyper-V. We do PCIe passthrough to each of our controller VMs for direct storage access.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
Actually, that's incorrect. For vSphere we use DirectPath I/O, which is much different from passthrough disks with Hyper-V. We do PCIe passthrough to each of our controller VMs for direct storage access.
I never went into detail on how the disks are passed through to the VM, only that they are. Whether you're passing individual drives under Hyper-V or an entire SAS HBA under ESX, the architecture remains the same and is as I described above - the drives are passed to the VM(s), where software pools them together (and adds redundancy and all the other fancy bits) and presents a shared volume back to the host(s). From a high-level point of view they all work the same.

Yes, there are performance implications in how you accomplish that, and VT-d is good. But it's still a somewhat odd IO path, especially if you understand how the Nutanix CVM Autopathing feature works.
 

msvirtualguy

Active Member
Jan 23, 2013
msvirtualguy.com
Can it run six SSDs per node with four nodes then?
The max drive capacity per node is 4 drives. The reason for this is that Community Edition does "LUN" passthrough and not PCI passthrough, which impacts queue depth greatly.

I did run 6 Samsung 843Ts in my single-node box and didn't have any issues.

I want to point out, as I have talked about in other posts, that Community Edition is meant to give people as close to a Nutanix experience as we could provide, on a broad range of hardware and platforms, in a non-production setting.

It's not meant for production use, as is clearly stated on our website. If this is a lab, then have at it, but don't expect production-like performance or reliability, given how CE is architected.
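As a back-of-the-envelope way to see why queue depth matters so much: by Little's law, the IOPS you can sustain is roughly the number of outstanding IOs divided by the per-IO latency. The numbers below are made up for illustration only and are not CE measurements.

Code:
# Little's law illustration of how a shallow queue depth caps throughput.
# Latency and queue-depth values are made up for illustration; they are not
# Nutanix CE measurements.
LATENCY_S = 0.0001          # assume ~100 microseconds per IO on a decent SSD

def max_iops(queue_depth, latency_s=LATENCY_S):
    """Rough ceiling on IOPS: outstanding IOs / per-IO latency."""
    return queue_depth / latency_s

for qd in (1, 32, 256):     # serial IO vs. a shallow per-LUN queue vs. a deep HBA queue
    print(f"QD {qd:>3}: ~{max_iops(qd):,.0f} IOPS ceiling")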
 

markpower28

Active Member
Apr 9, 2013
You mean we can't put a heavy load on it and push the system to the limit with full HA?!

BTW, Derrick Ho is awesome! :)
 

msvirtualguy

Active Member
Jan 23, 2013
msvirtualguy.com
I never went into detail on how the disks are passed through to the VM, only that they are. Whether you're passing individual drives under Hyper-V or an entire SAS HBA under ESX, the architecture remains the same and is as I described above - the drives are passed to the VM(s), where software pools them together (and adds redundancy and all the other fancy bits) and presents a shared volume back to the host(s). From a high-level point of view they all work the same.

Yes, there are performance implications in how you accomplish that, and VT-d is good. But it's still a somewhat odd IO path, especially if you understand how the Nutanix CVM Autopathing feature works.
The DataPath resiliency is the same, but the core mechanism of passing physical hardware to the CVM is not: passthrough disks do not use the native storage driver the way VT-d allows, since with VT-d you're passing the actual controller to the VM.

It's because we replicate data within the cluster that we are able to accomplish DataPath redundancy: the VMs/hypervisor on a host where the CVM has failed still have access to storage without having to go through an APD or PDL scenario, and IO can continue within the default timeout values at the hypervisor level without the need for an HA event, etc.
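Conceptually, the redirection part looks something like the toy sketch below. To be clear, this is not Nutanix code, just an illustration of the idea: the hypervisor's storage target fails over from the local CVM to a healthy remote one, and because the data is replicated across the cluster it is still reachable there.

Code:
# Toy illustration (not Nutanix code) of the autopathing idea discussed above:
# the hypervisor normally talks to its local controller VM (CVM), and if that
# CVM goes down, storage traffic is redirected to a healthy remote CVM.
# Because data is replicated across the cluster, the remote CVM can serve it.

def pick_storage_target(local_cvm, remote_cvms, is_healthy):
    """Return the CVM the hypervisor should send IO to."""
    if is_healthy(local_cvm):
        return local_cvm                      # normal case: data-local IO
    for cvm in remote_cvms:
        if is_healthy(cvm):
            return cvm                        # failover: redirect, no HA event needed
    raise RuntimeError("no healthy CVM reachable")  # would surface as APD/PDL

# Example: local CVM down, IO transparently goes to cvm-b.
healthy = {"cvm-a": False, "cvm-b": True, "cvm-c": True}
print(pick_storage_target("cvm-a", ["cvm-b", "cvm-c"], lambda c: healthy[c]))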
 

msvirtualguy

Active Member
Jan 23, 2013
msvirtualguy.com
You mean we can't put a heavy load on it and push the system to the limit with full HA?!

BTW, Derrick Ho is awesome! :)
You can do what you want... again, I urge you to research and see what CE offers. You can run 1, 3, or 4 nodes, and yes, it provides HA, etc. I'm not saying performance is bad by any means, but don't expect it to compare to a real Nutanix solution, given the limitations I outlined.