Rant about VMware...


b-rex

Member
Aug 14, 2020
I work with VMware every day at work in an engineering capacity, which at times can be painful, but using it in a home lab is excruciating. I'm using equipment that, while old, is not that far out of its extended lifecycle; it's enterprise grade and was mainstream not that long ago, if it isn't still. Yet half of what I have no longer works after the vmklinux deprecation (I will never be able to get to 7 with my use case... or even 6.7 for that matter). While Mellanox CX3s support SR-IOV (the card actually shows up in the GUI as SR-IOV capable in 6.5/6.7), VMware's native drivers don't. It appears there isn't a single 6 Gbps HBA supported anymore. VMware still hasn't added 4K native drive support for datastores or vSAN. They randomly removed the ability to use PCI passthrough with hardware virtualization enabled. The licensing model is ridiculous. The performance from vSAN and other products is absolutely atrocious unless you tune it perfectly, and it still costs an insane amount of money.

The latest irritant is the one above: I need SR-IOV passthrough from the single CX3 in each of my two ESXi hypervisors (I'm out of PCI slots, so I can't just add another card). Admittedly, at work I can buy whatever I need off VMware's HCL, so it never occurred to me just how bad their support for older devices is until I realized today that they literally deprecated SR-IOV support for the CX3 in VMware 6, only two years after the devices were first released.

If anyone has any tricks to shoehorn SR-IOV onto CX3s in 6.5, please let me know. Otherwise I'm going to end up dumping more money into some CX4s or XL710s (which are still insanely expensive). That, or I'm just going to dump VMware completely and move on to something comparable but more flexible. I also run Hyper-V with VMM and S2D, and while some things just aren't the same, it's far more friendly than VMware and covers the use cases of almost everyone. It also performs better and isn't nearly as rigid. I just have a unique need for VMware at the moment, and I'm really wishing I didn't. If anyone's got some crazy workaround, I'm all ears.
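For reference, here's roughly what I've been trying from PowerCLI. The module name (nmlx4_core) and the max_vfs parameter are what the native ConnectX-3 driver is supposed to take; the host names, NIC match, and VF count are just placeholders from my lab, so treat this as a sketch rather than a recipe:

# Sketch: request SR-IOV VFs from a ConnectX-3 on ESXi 6.5 via PowerCLI.
# Assumes the native nmlx4_core driver and its max_vfs parameter; confirm with
# "esxcli system module parameters list -m nmlx4_core" on your build first.
Connect-VIServer -Server "vcenter.lab.local"
$vmhost = Get-VMHost -Name "esx01.lab.local"

# See what options the driver currently has set
Get-VMHostModule -VMHost $vmhost -Name "nmlx4_core" | Select-Object Name, Options

# Ask for 4 VFs, then reboot the host (maintenance mode first)
Get-VMHostModule -VMHost $vmhost -Name "nmlx4_core" | Set-VMHostModule -Options "max_vfs=4"
Set-VMHost -VMHost $vmhost -State Maintenance
Restart-VMHost -VMHost $vmhost -Confirm:$false

# After the reboot, the VFs should show up as passthrough-capable devices --
# in my case they never do, which is the whole problem.
Get-PassthroughDevice -VMHost $vmhost | Where-Object { $_.Name -match "Mellanox" }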
 
  • Like
Reactions: poto

dswartz

Active Member
Jul 14, 2011
610
79
28
" The licensing model is ridiculous".

True enough. Do you have a VMUG membership, though? $200 or so per yr, and you get the whole enchilada.
 

b-rex

Member
Aug 14, 2020
" The licensing model is ridiculous".

True enough. Do you have a VMUG membership, though? $200 or so per yr, and you get the whole enchilada.
I used to, but then I got access to what I needed through VMAP for free... although I no longer have that access, so I might need to rethink it.
 

Rand__

Well-Known Member
Mar 6, 2014
The one thing really missing from VMUG is a (cost effective) support option.
It would be fine if it weren't entirely free (maybe a nominal fee of 20 or even 50 bucks per case), only covered hardware on the compatibility list, and was handled at low priority.

But when you use the lab to simulate real-life scenarios or experiment with newer hardware and things don't work out, and your company doesn't have that hardware or you don't have access to your company's support infrastructure (different team, 3rd-party provider), it plain sucks not to be able to do anything about it except post in the VMware forum. Sure, they try to help, but it's "Weird, please open a ticket" every so often...
 
  • Like
Reactions: b-rex and dswartz

DavidRa

Infrastructure Architect
Aug 3, 2015
Central Coast of NSW
www.pdconsec.net
For a start, Hyper-V S2D is just better for storage IO out of the box, in my experience. For example, vSAN uses 256GB chunks, I think, where S2D is more like 256kB? I forget the source (it was a TechNet article) and I might be misremembering, so check the actual numbers before quoting me. In any case, if you do a 32GB read on vSAN, you might be limited to IO on one or maybe two physical disks, while on S2D it could easily be 24+. I'm not sure if or when vSAN added parallel reads from mirror copies either.
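If you want to see how wide a given S2D volume actually stripes, the column count and interleave are right on the virtual disk object; a quick check from any cluster node (purely illustrative, using the standard Storage module):

# Sketch: show the stripe width of S2D volumes.
# NumberOfColumns x Interleave is how much data one "row" spreads across disks per copy.
Get-VirtualDisk |
    Select-Object FriendlyName, ResiliencySettingName, NumberOfColumns,
                  @{ Name = "InterleaveKB"; Expression = { $_.Interleave / 1KB } } |
    Format-Table -AutoSize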

On the flip side, though, vSAN is easier to manage, I think, and in most scenarios you wouldn't hit the types of IO that would be "bad" for vSAN. You can do distributed parity and triple mirror on both. And frankly, I think both have demonstrated multi-million-IOPS setups anyway.

The hypervisors are fairly comparable for basic use cases: vMotion / Live Migration is basically the same, and Storage vMotion and Storage Migration in Hyper-V are comparable (though the licensing costs differ). I think Hyper-V's DR capability is light-years ahead of SRM (and again, it's included, not extra $$). VMware makes device passthrough easy, and I think vCenter is slightly better than WAC (though give it another year and it might be reversed).
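On the DR point, for anyone who hasn't touched it: Hyper-V Replica is a couple of cmdlets end to end and needs no extra licensing. A rough sketch (server names, VM name, and storage path are made up, and the firewall rule for the HTTP listener isn't shown):

# Sketch: Hyper-V Replica between two hosts, Kerberos over HTTP.
# On the replica (DR) host -- accept inbound replication:
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replica"

# On the primary host -- replicate a VM every 5 minutes and kick off the initial copy:
Enable-VMReplication -VMName "lab-vm01" `
    -ReplicaServerName "hv-dr01.lab.local" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName "lab-vm01"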
 
  • Like
Reactions: high and Evan

b-rex

Member
Aug 14, 2020
Does Hyper-V really perform better? In what use cases have you experienced better performance?
In all-flash deployments larger than four nodes, whether hyperconverged or using SOFS, S2D performs better than vSAN; few would disagree. Even hybrid configurations perform better. For comparison, a two-node demo cluster I built with consumer-grade SSDs and HDDs (both SATA) outperformed vSAN by a fairly large margin. It wasn't a hugely scientific comparison, but it demonstrated the real out-of-the-box performance advantage S2D has over vSAN. This comes down to how read caching can be configured and the fact that, by default, as mentioned elsewhere, the maximum number of columns is used. It also does this without the ridiculous memory overhead vSAN incurs as disk groups increase.
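To illustrate the caching point, S2D lets you see and change the cache behaviour per media type after the cluster is built, which vSAN simply doesn't expose. A rough sketch against an existing S2D cluster (defaults differ between hybrid and all-flash, so check before changing anything):

# Sketch: inspect and adjust the S2D cache behaviour (run on any cluster node).
Get-ClusterStorageSpacesDirect |
    Select-Object CacheState, CacheModeSSD, CacheModeHDD, CachePageSizeKBytes

# Hybrid clusters default to read+write caching in front of HDD capacity;
# all-flash defaults to write-only caching. Both can be overridden:
Set-ClusterStorageSpacesDirect -CacheModeHDD ReadWrite -CacheModeSSD WriteOnly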

The issues with S2D come down to reliability in smaller deployments (a two-node dev vSAN cluster will actually work somewhat reliably; S2D will crap the bed the first time a node goes down, even with a cloud witness), storage efficiency, ReFS, and configuration. For example, S2D allows more options in how its caches and storage pools are set up; vSAN doesn't... you get what you get, and that's especially true with all-flash. That being said, as mentioned above, vSAN is easier to set up, you have more control over storage usage/resiliency, and with vCenter it's far easier to manage. With VMM the disparity shrinks substantially, but I still personally prefer vCenter over VMM. WAC works... just not as well as VMM, in my opinion (VMM being the more comparable offering to vCenter). I'd also second what's said above about some of the features: passthrough, although uncommon at this point other than for GPUs and SR-IOV, seems to be easier with VMware. I've had a lot of issues with SR-IOV on both, but VMware makes it easier. That's not saying much either.
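For what it's worth, this is roughly the Hyper-V side of the SR-IOV dance (adapter, switch, and VM names are placeholders); the *SupportReasons properties are usually where you find out why it silently isn't working:

# Sketch: SR-IOV on Hyper-V (this Get-VMHost is the Hyper-V module's, not PowerCLI's).
Get-VMHost | Select-Object IovSupport, IovSupportReasons
Get-NetAdapterSriov -Name "Mellanox CX3 Port 1"

# IOV-enabled external switch, then give the VM's vNIC a VF:
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Mellanox CX3 Port 1" -EnableIov $true
Set-VMNetworkAdapter -VMName "lab-vm01" -IovWeight 100

# Confirm what actually got assigned:
Get-VMNetworkAdapter -VMName "lab-vm01" | Select-Object Name, IovWeight, Status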

What I will say is that, in my experience deploying both at close to PB scale, I prefer vSAN, simply because very few shops at that scale want to touch Hyper-V. Many of my clients prefer VMware too, despite the costs... and it seems Gartner agrees. Performance-wise, though, S2D beats out vSAN in almost every case I've had a chance to compare.
 
  • Like
Reactions: dawsonkm

high

New Member
Feb 24, 2016
Thanks for the insights. For an avid home-labber, petabyte-scale vSAN is way outside my reality, but still very interesting.