Alternatives to vmWare vSan for hyperconverged environment (Home)


Evan

Well-Known Member
Jan 6, 2016
Why do people always underestimate the new product? Sure, everybody is right to question Nutanix, and speaking for a very large company, I can say we did look into it. Ultimately we chose not to proceed: it only worked for us in smaller sites, and we already had everything in place with a collection of other products that did the job and were already in use.

It did not make sense to look any further, given the huge discounts we get from other vendors and the processes already in place for management, reporting, alerting, capacity management, etc.

Having said all this, if implemented correctly, we did not see any concerns or big holes in the product; it does what it says it does.
Yes, the sales pitch about 'instant' replication and the like is all smoke and mirrors, as all the data is pre-replicated, but anybody can see through those misrepresentations without even having to think.

I have never looked at CE, but I am sure it is probably a great product for home use at the right price :)
 

msvirtualguy

Active Member
Jan 23, 2013
msvirtualguy.com
Yes, the sales pitch about 'instant' replication and the like is all smoke and mirrors, as all the data is pre-replicated, but anybody can see through those misrepresentations without even having to think.
I'm sorry, I'm not following the above, but I appreciate the comments.

I would also say small sites are actually something "newer" for us, as we now have a cost-competitive ROBO platform for which more options will be forthcoming. We are in 700 of the Global 2000 and 50 of the Fortune 100 running Tier 1 workloads, along with the other 6,200 customers running a variety of workloads. With all due respect, what I don't want taken away from your comment is the idea that Nutanix is only a good fit for "small sites" and workloads, etc.

This isn't the standard HCI that you get from the incumbents out there, who need to keep the revenue stream going for all their other "data center infrastructure" products. It's not like Dell EMC is going to throw Unity/VMAX/Compellent out the door because they have VxRail; same for HPE with 3PAR because they acquired SimpliVity. This is where the messaging gets confusing to the end user. They have shoehorned their products into specific workloads for exactly that reason. We (Nutanix), however, do not have any other products, so we don't have to say our product is great only for smaller environments or only for this one "use case", because:

1. We have one product, and we put all our R&D dollars into that product and its support, and it's proven with real customers and workloads (Tier 1-Tier 3).
2. We believe we have an architecture/maturity advantage over the incumbents' "HCI" products in the market, because we have a time advantage and a mature architecture that continually gets better with each release.

Where's the proof? Just a small snapshot, but our product is in 700 of the Global 2000 and 50+ of the Fortune 100 running mission-critical, Tier 1 applications.

I personally have customers that are 100% Nutanix, some 100% AHV, etc.: DoD, healthcare, all verticals that run mission-critical workloads such as OLTP Oracle, data warehouse, big data, SAP, etc.

Understand that this is where the market is heading. Why else would every major vendor jump on board by acquiring an HCI product or building their own? Funny, they used to laugh at us when we first came out with our product and told everyone this idea of HCI wasn't going to go anywhere. Look where we are today.


To get back on track, though, let me just end with this statement, as I don't want to further derail from the OP's intent here. If you want production HCI for the home, then Community Edition is not for you. If you want a great home lab setup for testing, etc., then it's a great platform option.
 

Evan

Well-Known Member
Jan 6, 2016
Maybe it was just the way the sales guys here presented it. As I mentioned, it did not matter, as smarter IT departments will always check fit and test anything.

I would say the strength is the maturity of Nutanix as an HCI; being one of the first solutions to market certainly helps.

Anyway, it was just an opinion: HCI in general is great in some situations, but in others, what I guess I'd call the big-datacenter end of the market, it's not as well suited. Maybe you're right that others see this differently; no issues with that at all.

Either way, although we don't run the product for our own reasons, after some work with it in a small POC I was impressed and could certainly see when I could or would use it.
 

cheezehead

Active Member
Sep 23, 2012
Midwest, US
Fwiw, Nutanix does scale very large and for a wide variety of workloads (it's the primary array for at least one Fortune 40 company that I know of). The real question, with them as with any other vendor/solution, is what fits best for your organization.
 

NISMO1968

[ ... ]
Oct 19, 2013
San Antonio, TX
www.vmware.com
Let's start in baby steps :)

This guy is asking about some SOFTWARE to install at HOME, so he a) has server hardware already, and b) has no intention of buying your HARDWARE anytime soon. Most probably... So a) + b) = c): we're talking about your CE, aka Community Edition, here, the one downloadable from your site. And from what I know, d) you don't have a commercial software trial except CE, and e) CE can't do PCIe pass-through. At least it couldn't the last time I played with it, this week, but I'd appreciate it if you'd point out how to turn it ON :) I mean it!

So I have a (rhetorical?) question: who's throwing stones and FUD here? :(

P.S. I don't work for HPE, but we use HPE StoreVirtual, and we've been an HPE/Nimble customer since... forever!

Actually, Nutanix does do PCIe passthrough: the CVM has direct access to the HBA(s) across the PCIe bus, and therefore to the disks. I'm beginning to think you either purposely do not take the time to get the right information or you just like to throw stones and FUD. Either way, I'm not playing until you put some effort into understanding what it is we do. This is the 4th post now where you have made unfounded claims and spread FUD. Do you work for HP?
 

NISMO1968

[ ... ]
Oct 19, 2013
San Antonio, TX
www.vmware.com
Tiny correction: it was actually LeftHand Networks (now part of HPE!) who came out with their VSA and HCI product first. That was... 6 years before you launched yours?

[ ... ]

Understand that this is where the market is heading. Why else would every major vendor jump on board by acquiring an HCI product or building their own? Funny, they used to laugh at us when we first came out with our product and told everyone this idea of HCI wasn't going to go anywhere. Look where we are today.


[ ... ]
 

NISMO1968

[ ... ]
Oct 19, 2013
San Antonio, TX
www.vmware.com
That's a good point, but... as long as an HCI vendor supports storage-only and/or compute-only nodes, non-symmetric scalability isn't an issue.
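To make the point concrete, here's a toy sketch of asymmetric scaling; the node specs and numbers below are made up for illustration, not any vendor's sizing guidance:

```python
# Toy model of asymmetric HCI scaling: storage-only and compute-only nodes
# let capacity and compute grow independently instead of in lockstep.
# All capacities/vCPU counts below are illustrative assumptions.

def cluster_totals(nodes):
    """Sum usable TB and vCPUs over a list of (kind, tb, vcpus) nodes."""
    tb = sum(n[1] for n in nodes)
    vcpus = sum(n[2] for n in nodes)
    return tb, vcpus

cluster = [
    ("hci", 20, 64),           # standard converged node
    ("hci", 20, 64),
    ("storage-only", 40, 0),   # adds capacity without adding VM compute
    ("compute-only", 0, 128),  # adds compute without adding capacity
]
print(cluster_totals(cluster))  # (80, 256)
```

The point is simply that mixing node types decouples the two axes, so a capacity-heavy or compute-heavy workload doesn't force you to buy both in fixed proportion.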

Maybe it was just the way the sales guys here presented it. As I mentioned, it did not matter, as smarter IT departments will always check fit and test anything.

I would say the strength is the maturity of Nutanix as an HCI; being one of the first solutions to market certainly helps.

Anyway, it was just an opinion: HCI in general is great in some situations, but in others, what I guess I'd call the big-datacenter end of the market, it's not as well suited. Maybe you're right that others see this differently; no issues with that at all.

Either way, although we don't run the product for our own reasons, after some work with it in a small POC I was impressed and could certainly see when I could or would use it.
 

frogtech

Well-Known Member
Jan 4, 2016
Hate to beat a dead horse here, but what's the real problem with using ScaleIO in the long run for home lab use? What's the best-case alternative? For home use I've always tended towards HCI because it means I can use tower chassis in an all-inclusive setup (storage + compute) and not have to deal with rackmount SANs or storage-centric chassis; essentially none of my hardware is 'wasted'. I just want a somewhat resilient environment where I can use all of the resources for a virtual environment that lets me drop in VMs, learn new tech/software, and do some networking (literally, not social, lol).
 

Rand__

Well-Known Member
Mar 6, 2014
@frogtech - have you ever tried ScaleIO?
Still not happy with my vSAN, and the idea of using any disk in any box (even via SATA) sounds appealing - just no idea what to really expect from it performance-wise...
 

frogtech

Well-Known Member
Jan 4, 2016
@frogtech - have you ever tried ScaleIO?
Still not happy with my vSAN, and the idea of using any disk in any box (even via SATA) sounds appealing - just no idea what to really expect from it performance-wise...
I did try ScaleIO; however, I don't think I had the right setup for it. It was kind of slow, and the white-paper performance tweaking wasn't much to bat an eye at. I think I was running 6 nodes, but I didn't have that many disks in each: 1x 960GB SanDisk CloudSpeed Ascend per node, and 4-6 2TB hard drives per node. They weren't the best hard drives, just a random mix of Seagate Barracuda and some Toshiba/WD enterprise drives. IOPS were pretty low even with the SSDs.

Someone suggested that if you want to use an SSD for caching, it's more performant to use a controller's built-in caching tech, for example CacheCade on LSI controllers.

Honestly, if you're going to use it with platter storage, you probably need to stuff as many disks per chassis as you can, in as many chassis as you can. Maybe 4 nodes at a minimum? Supermicro 826/816 chassis are cost-effective for this, I think.

I'm currently in the process of revamping my home lab to use ScaleIO or something like S2D in Server 2016, cutting out magnetic disks completely and going all-flash. I think the more devices you have, the better performance you'll get. Note, this is pure speculation on my end, but it seems to be the underlying suggestion from 'experts' of the platform. So when I finally get the rest of my chassis, my initial setup will be 8 Oracle F40 PCIe drives, 2 per chassis (4 nodes), which is essentially 8 x 4 = 32 devices in a 4-node cluster, since each card exposes 4 flash modules. Yes, they're 100GB each, but it doesn't matter. Eventually I would double that number, adding 2 more F40 PCIe accelerators per node, for 64 individual devices across 4 nodes. I am sure the IOPS would be quite respectable then.
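The device math above can be sketched as a quick back-of-the-envelope calculation; the per-module IOPS figure is purely a placeholder assumption, not a measured number:

```python
# Back-of-the-envelope device math for the all-flash ScaleIO plan above.
# Each Oracle F40 card exposes 4 independent 100 GB flash modules (FMods),
# and ScaleIO treats each module as a separate device.

def total_devices(nodes: int, cards_per_node: int, fmods_per_card: int = 4) -> int:
    """Number of individual devices the cluster presents to ScaleIO."""
    return nodes * cards_per_node * fmods_per_card

initial = total_devices(nodes=4, cards_per_node=2)  # 8 cards  -> 32 devices
doubled = total_devices(nodes=4, cards_per_node=4)  # 16 cards -> 64 devices
print(initial, doubled)  # 32 64

# Hypothetical aggregate IOPS, assuming throughput scales roughly linearly
# with device count (the per-module figure is an assumed placeholder):
ASSUMED_IOPS_PER_FMOD = 20_000
print(initial * ASSUMED_IOPS_PER_FMOD)  # 640000
```

The linear-scaling assumption is the "more devices = more performance" speculation from the post, not a guarantee; real aggregate IOPS also depend on network, replication overhead, and queue depths.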
 

Rand__

Well-Known Member
Mar 6, 2014
Ok, maybe I should give it a try. I've accumulated a lot of SSDs and NVMe drives... just often not in vSAN-compatible numbers/layout (it needs a homogeneous setup), so they're not much use there, but if each gets used as-is as a single device, that would be great.

For example, I've got a bunch of 800GB P3700s... no place for them in my 3700/750 vSAN pool, but they might be great for ScaleIO - I hope.
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
Ok, maybe I should give it a try. I've accumulated a lot of SSDs and NVMe drives... just often not in vSAN-compatible numbers/layout (it needs a homogeneous setup), so they're not much use there, but if each gets used as-is as a single device, that would be great.

For example, I've got a bunch of 800GB P3700s... no place for them in my 3700/750 vSAN pool, but they might be great for ScaleIO - I hope.
Start a new thread with that so I can follow along.
 

Rand__

Well-Known Member
Mar 6, 2014
So, some 6 months later, ScaleIO Free is dead, but that was not a shocker anyway. Performance was not bad (better than vSAN), since it scaled up with additional drives, but management was kind of annoying. It had promise, and I had been waiting for 3.0 for a while, but so be it.

Still have not found a solution to my problem... I think I am going to try StarWind next. Likely NVMe, to avoid needing a bunch of RAID adapters...
 

dwright1542

Active Member
Dec 26, 2015
So, some 6 months later, ScaleIO Free is dead, but that was not a shocker anyway. Performance was not bad (better than vSAN), since it scaled up with additional drives, but management was kind of annoying. It had promise, and I had been waiting for 3.0 for a while, but so be it.

Still have not found a solution to my problem... I think I am going to try StarWind next. Likely NVMe, to avoid needing a bunch of RAID adapters...
I rolled out StarWind in a production environment, and while I liked the product, the support was a disaster. I switched to StorMagic and have been thrilled. Unfortunately, there's no free edition.

The key, I've found with any product, is not necessarily using flash to cache the spinners, but rather identifying what should go on flash and what is fine on spinners. It's been WAY more effective running all our HCI that way.
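The placement idea can be sketched as a toy policy; the thresholds here are made-up assumptions for illustration, not vendor guidance:

```python
# Toy tiering policy for an HCI datastore: instead of fronting spinners with
# a flash cache, decide up front which volumes live on flash and which on
# spinners. Both thresholds below are illustrative assumptions.

FLASH_MAX_LATENCY_MS = 5.0    # assumed latency bar for flash placement
FLASH_MIN_RANDOM_IOPS = 1000  # assumed random-I/O bar for flash placement

def place_volume(random_iops: int, latency_target_ms: float) -> str:
    """Return 'flash' for latency-sensitive or random-I/O-heavy volumes,
    'spinner' for cold or sequential data."""
    if latency_target_ms <= FLASH_MAX_LATENCY_MS or random_iops >= FLASH_MIN_RANDOM_IOPS:
        return "flash"
    return "spinner"

print(place_volume(5000, 1.0))  # flash   (OLTP-style volume)
print(place_volume(50, 50.0))   # spinner (backup/archive volume)
```

The design choice this illustrates: a cache makes every volume pay the miss penalty sometimes, while explicit placement gives predictable latency to the volumes that need it.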
 

Rand__

Well-Known Member
Mar 6, 2014
S2D has the issue that it only presents SMB shares, not NFS/iSCSI. I suppose I could switch to Hyper-V, but I have not really considered that.

And yes, using the right storage type is probably important, but I am not getting the speed I want even with NVMe, so I'm not even looking at spinners... ;)
 

Evan

Well-Known Member
Jan 6, 2016
S2D has the issue that it only presents SMB shares, not NFS/iSCSI. I suppose I could switch to Hyper-V, but I have not really considered that.

And yes, using the right storage type is probably important, but I am not getting the speed I want even with NVMe, so I'm not even looking at spinners... ;)
Spinning disks for VMs in 2018 :-O
Even with SAS/SATA, in this type of setup I agree with @Rand__: it's flash or nothing.

As for S2D: if you're running more than a couple of Windows VMs, you're using Datacenter licensing anyway, so it's always worth a try. The advantage is end-to-end Microsoft, with no extra stuff to install or have conflicts with.
 

Rand__

Well-Known Member
Mar 6, 2014
S2D can present target iSCSI vhdx. Just put them in the csv.
Details? Links? :)

Edit:
Ok, I think I found it, something like this:

Container Storage Support with Cluster Shared Volumes (CSV), Storage Spaces Direct (S2D), SMB Global Mapping

Set up S2D, add a CSV on top, and share that out via iSCSI?

And just fresh from another thread: Windows Server 2016 cluster sharing iSCSI storage for HA vSphere datastores • r/storage

Edit2:
Does not look like a good option for a 2-node setup:
Another catastrophic failure on our Windows Server 2016 Storage Spaces Direct (S2D) setup • r/sysadmin
 