Alternatives to VMware vSAN for a hyperconverged environment (Home)

Rand__

Well-Known Member
Mar 6, 2014
4,634
919
113
Yes, but that one will not support ESXi as the hypervisor, and NVMe drives seem to be rather slow there. So a possible solution for some, but unfortunately not for me.
Besides that, I am not really happy with their terms & conditions (web account login, all statistics are sent to them), but that's another story ;)
 

Connorise

Member
Mar 2, 2017
62
11
8
29
US. Cambridge
If you are looking for a high-performance setup in a compact (2-3 node) deployment, check out Storage Spaces Direct or the smaller vSAN vendors mentioned before, who favor data locality over the common "cross-node" erasure-coding approach.
 

Rand__

I have not taken a look at Storage Spaces Direct since I have only seen complaints about its speed, so I did not take it into consideration.
Which other vSAN vendor in particular do you refer to? I thought I had checked/discussed most of them.
 

Evan

Well-Known Member
Jan 6, 2016
3,149
530
113
I thought Storage Spaces Direct was good. The need for enterprise licensing seems to upset people, but in reality it's probably well worth it if you are running lots of Windows; if mostly Linux, it makes less sense.
 

Rand__

Ok, will have a look :) Need to check what's included in the DevNetwork licenses.

Edit:
Ah, per-core Datacenter Edition licenses - that's probably going to be difficult at home.

Also don't have 4 NVMe drives per server ;)
 

i386

Well-Known Member
Mar 18, 2016
2,118
562
113
31
Germany
@Rand__ People complain about the write performance of parity spaces (think RAID 5/6, RAID-Z1/2/3); mirror spaces (think RAID 10) don't have that problem (but require more SSDs/HDDs -> higher costs).
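The parity-vs-mirror trade-off can be put in rough numbers. A back-of-envelope sketch in Python; the IOPS figure and the write penalties are textbook RAID assumptions, not S2D measurements:

```python
# Back-of-envelope comparison of mirror vs parity resiliency.
# Figures are illustrative assumptions, not benchmarks.

def usable_fraction_mirror(copies: int) -> float:
    """A 2-way mirror stores every block twice, a 3-way mirror three times."""
    return 1.0 / copies

def usable_fraction_parity(data_disks: int, parity_disks: int) -> float:
    """Classic N+M parity (RAID 5 = N+1, RAID 6 = N+2)."""
    return data_disks / (data_disks + parity_disks)

def effective_write_iops(raw_iops: float, write_penalty: float) -> float:
    """Mirror: penalty 2 (two copies written). RAID 5-style parity:
    penalty 4 for small random writes (read data, read parity,
    write data, write parity)."""
    return raw_iops / write_penalty

if __name__ == "__main__":
    raw = 100_000  # assumed aggregate raw write IOPS of the pool
    print(f"2-way mirror: {usable_fraction_mirror(2):.0%} usable, "
          f"{effective_write_iops(raw, 2):,.0f} effective write IOPS")
    print(f"parity (3+1): {usable_fraction_parity(3, 1):.0%} usable, "
          f"{effective_write_iops(raw, 4):,.0f} effective write IOPS")
```

So parity buys capacity efficiency at roughly half the small-write throughput of a mirror, which matches the complaints above.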
 

Rand__

Ah ok, thanks for clearing that up.
Didn't read the complaints, to be honest, because I never considered it until now.
 

Connorise

@Rand__ Well, yes.. Storage Spaces Direct with parity resiliency is pathetically weak in terms of performance unless you throw in an enormous amount of NVMe or flash cache to fully cover your working set. However, S2D performs quite well in "2-way mirror" mode. The only major drawback of a 2-node S2D deployment is the limited redundancy (FTT=1). You can fix this by adding a third node and using a "3-way mirror", though.

As for smaller vendors, I'm referring mostly to StarWind vSAN, which is a performance beast even in a 2-node configuration. HPE VSA is also great, but its inability to run in kernel mode leads to noticeable performance compromises. Anyway, from what I have tested/learned, you should not consider VMware vSAN if you are building a performance-oriented cluster. vSAN has other benefits, but performance is not one of them.
 

Rand__

How difficult is the StarWind CLI for the free version? Can one set up using the trial (GUI), then stay on the free version and only use the CLI?

Edit:
Found it:
Compared to the StarWind Virtual SAN commercial version, the Free version provides access to the StarWind Management Console for a period of 30 days after installation. After this period, StarWind Free can be managed through Windows PowerShell or SCVMM, just like you would manage Microsoft Storage Spaces Direct.
 

Rand__

Trying to get a feel for their offering, but without registering it's difficult. Most data is in some whitepaper or other...

Looking for info on:
-hardware recommendations (i.e. which existing components can I reuse)
-can I use the Linux appliance with the free version, or do I need Windows
-does the Linux appliance support IB devices / RDMA
-which tiering do they have
-do I need a RAID controller or not ...

I get that they want contact info, but some info should be available without it - they are almost as closed-mouthed as Nutanix, from my initial impression.
 

ecosse

Active Member
Jul 2, 2013
381
66
28
@Rand__ Well, yes.. Storage Spaces Direct with parity resiliency is pathetically weak in terms of performance unless you throw in an enormous amount of NVMe or flash cache to fully cover your working set. However, S2D performs quite well in "2-way mirror" mode. The only major drawback of a 2-node S2D deployment is the limited redundancy (FTT=1). You can fix this by adding a third node and using a "3-way mirror", though.

As for smaller vendors, I'm referring mostly to StarWind vSAN, which is a performance beast even in a 2-node configuration. HPE VSA is also great, but its inability to run in kernel mode leads to noticeable performance compromises. Anyway, from what I have tested/learned, you should not consider VMware vSAN if you are building a performance-oriented cluster. vSAN has other benefits, but performance is not one of them.
Spaces Direct - my mind still can't get around a 2-node system having one failure domain.

Not convinced on the kernel-mode point; there have been a number of discussions / posts / stats that appear to debunk that particular claim. I have no actual experience, other than to note that my HPE VSA with tiered storage is faster than my all-flash 4-node vSAN at this point in time. Which kind of alludes to your last point :)

In Kernel or Not In Kernel – This Is The Hyperconverged Question
 

Rand__

Is Adaptive Optimization a must-have feature on HPE VSA?
Just found the free-for-the-first-TB version ;) Hewlett Packard Enterprise

Edit: Looks like it :/
Adaptive Optimization (AO) is HPE’s sub-volume automated storage tiering feature. Using AO, you can store the most active part of your volume on fast disk or SSD to make it readily accessible. The remainder of your volume is stored on a less expensive, typically slower, higher capacity disk. The data is balanced between tiers automatically and continuously in real-time. To get the AO feature, purchase StoreVirtual VSA Ready Nodes or a full StoreVirtual VSA license (10TB or 50TB).
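The AO blurb boils down to ranking sub-volume extents by heat and pinning the hottest ones to the fast tier. A toy Python sketch of that placement logic; this is my own illustration of the idea, not HPE's actual algorithm:

```python
# Toy sub-volume tiering: put the most-accessed extents on the fast
# tier until it is full, everything else on the capacity tier.
# Illustrative only - not HPE's AO implementation.

def place_extents(access_counts, fast_tier_slots):
    """access_counts: dict of extent_id -> access count.
    Returns (fast_tier_ids, slow_tier_ids) as sets."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return set(ranked[:fast_tier_slots]), set(ranked[fast_tier_slots:])

# Hottest two extents land on the (hypothetical) SSD tier:
fast, slow = place_extents({"e1": 900, "e2": 3, "e3": 120}, fast_tier_slots=2)
```

The real feature re-runs this kind of balancing continuously; the point here is just that without the license, everything stays on one tier.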
 

Connorise

Spaces Direct - my mind still can't get around a 2-node system having one failure domain.
Agree, I should have elaborated on this point. My concern is not having one node as a failure domain, which is completely normal for a 2-node configuration, but the fact that all disks in the cluster also fall under FTT=1. For example, with HPE VSA or StarWind vSAN I can configure local redundancy using a hardware RAID controller. Imagine a 2-node all-flash setup where you can lose one of the nodes entirely, plus 2 disks in the other node, with little to no impact on production availability.

I know that S2D features a self-healing capability using the "Reserved Capacity" mechanism. However, until the rebalancing is finished, production would have zero redundancy.

That's why I think a "good-enough" S2D cluster starts from 4 all-flash nodes in a mixed-resiliency configuration (mirror + parity).
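The failure-domain argument can be made concrete by counting which failure combinations each layout survives. A hedged sketch in Python (my own framing of the point, not vendor math):

```python
# Which failures does a small 2-node cluster survive?
# Layout A: plain 2-way mirror across nodes (FTT=1).
# Layout B: 2-way mirror across nodes, plus a local hardware RAID
# layer inside each node (the HPE VSA / StarWind vSAN approach).
# Illustrative reasoning only, not vendor-documented behavior.

def survives_plain_mirror(nodes_lost: int, disks_lost_on_survivor: int) -> bool:
    """FTT=1: exactly one failure of any kind is tolerated."""
    return nodes_lost + disks_lost_on_survivor <= 1

def survives_mirror_plus_local_raid(nodes_lost: int,
                                    disks_lost_on_survivor: int,
                                    local_raid_tolerates: int = 2) -> bool:
    """Local RAID (e.g. RAID 6 -> tolerates 2 disks) absorbs disk
    failures inside the surviving node independently."""
    return nodes_lost <= 1 and disks_lost_on_survivor <= local_raid_tolerates

# Lose one whole node AND two disks in the other node:
print(survives_plain_mirror(1, 2))             # False
print(survives_mirror_plus_local_raid(1, 2))   # True
```

The extra local RAID layer is what lets the 2-node setup ride out disk failures during exactly the window when S2D's reserved-capacity rebuild would leave it with zero redundancy.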
 

cheezehead

Active Member
Sep 23, 2012
718
174
43
WI
Is Adaptive Optimization a must-have feature on HPE VSA?
Just found the free-for-the-first-TB version ;) Hewlett Packard Enterprise

Edit: Looks like it :/
Adaptive Optimization (AO) is HPE’s sub-volume automated storage tiering feature. Using AO, you can store the most active part of your volume on fast disk or SSD to make it readily accessible. The remainder of your volume is stored on a less expensive, typically slower, higher capacity disk. The data is balanced between tiers automatically and continuously in real-time. To get the AO feature, purchase StoreVirtual VSA Ready Nodes or a full StoreVirtual VSA license (10TB or 50TB).
Yep, the free version is limited. On the plus side, vs vSAN it works well with spinners, can function in a 2-node config, and can run replication over higher-latency links (relative to your load requirements).
 

NISMO1968

[ ... ]
Oct 19, 2013
78
13
8
San Antonio, TX
www.vmware.com
This is a really outdated link. If you pass your hardware through to the VM using SR-IOV and assign a proper amount of vCPUs to it, you'll get excellent performance. This is how the Microsoft Xbox works: a Hyper-V kernel, with all the hardware running inside its own virtual machine :)

Spaces Direct - my mind still can't get around a 2-node system having one failure domain.

Not convinced on the kernel-mode point; there have been a number of discussions / posts / stats that appear to debunk that particular claim. I have no actual experience, other than to note that my HPE VSA with tiered storage is faster than my all-flash 4-node vSAN at this point in time. Which kind of alludes to your last point :)

In Kernel or Not In Kernel – This Is The Hyperconverged Question
 

ecosse

This is a really outdated link. If you pass your hardware through to the VM using SR-IOV and assign a proper amount of vCPUs to it, you'll get excellent performance. This is how the Microsoft Xbox works: a Hyper-V kernel, with all the hardware running inside its own virtual machine :)
I don't think it is - the direct I/O path is not in-kernel and yet offers excellent performance. Struggling to understand the relevance of the Xbox (One).