Hyper-V vs vSphere - my initial take


dswartz

Active Member
Jul 14, 2011
So I have my first Hyper-V cluster spun up. Only one node so far - the parts for the second one are trickling in. Most of the issues I had were the typical Microsoft "we have it so the defaults are 95% right!" experience. Observations to date:

#1 If you want local storage on a hypervisor, vSphere makes it easy. Just stick one or more disks in the node, and you can Storage vMotion guests to it at will.
#2 Hyper-V apparently doesn't allow that. At all. None of the sites I visited had a hint as to how to do it, and Microsoft 'experts' pooh-poohed the very idea. I would have pointed out that if you have a setup with only one SAN/NAS for storage, you can't shut down the active storage. That is where vSphere has an edge: I could migrate all my guests' storage to the 1TB SSD on that node, reboot the SAN, and migrate back. I tried to use the 1TB SSD by sharing a folder on it and giving that to the Hyper-V cluster 'move to' GUI, but when you click 'add share' and give the share info, it fails. Googling revealed the usual Microsoft idiocy of requiring certain very specific permissions for very specific accounts (see the sketch after this list). This is a general nuisance with anything to do with SMB in the Hyper-V cluster system (e.g. I had the same problem trying to use an SMB share hosted on an OmniOS NAS/SAN appliance that I spun up for testing - I finally gave up on SMB and stood up an iSCSI volume; OmniOS makes that very simple).
#3 Not having to deal with VCSA? Priceless! I've found that virtual appliance to be a pain: full of bloated crap and often slowing down for no apparent reason, such that the spinning wheel would appear for 1-2 minutes when trying to navigate to a different page. Hyper-V presents a virtual management IP that maps to one host, and you just roll with that. Much more convenient.
#4 Cost: when my VMUG membership lapses, I probably won't renew it, saving about $200/yr there. Note that I have to pay for two WS2019 Standard licenses anyway, since I have two WS2019 Standard VMs acting as AD/RADIUS/DNS/DHCP servers, and WS2019 licensing lets me license both virtual servers as long as both physical hosts are fully licensed.
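For what it's worth, the permissions the Hyper-V side wants on an SMB share are Full Control for the cluster nodes' computer accounts (and for the admin doing the move), at both the share and NTFS level. A minimal sketch on a Windows file server, with placeholder names (LAB domain, hosts HV1/HV2, path D:\VMShare):

    # Create the folder and share it, granting Full Control to the Hyper-V
    # hosts' computer accounts and the admin account (all placeholder names).
    New-Item -Path 'D:\VMShare' -ItemType Directory -Force
    New-SmbShare -Name 'VMShare' -Path 'D:\VMShare' `
        -FullAccess 'LAB\HV1$', 'LAB\HV2$', 'LAB\Administrator'

    # Copy the share permissions down onto the NTFS ACL so both layers agree.
    Set-SmbPathAcl -ShareName 'VMShare'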
 

Marjan

New Member
Nov 6, 2016
About Hyper-V, I think you need Virtual Machine Manager (VMM) for this kind of VM storage migration. It should be doable from SAN/NAS to local disk. Of course, you need a license for VMM.
I have never tried to migrate VM storage from SAN/NAS to local disk. I can say that from local disk to local disk it works from Hyper-V Manager (I can't remember whether the VM must be down).
Also, the best thing is to have AD and all hosts be members of the domain, with proper access rights, delegations, etc.

For storage on SMB, the best option is a Windows server acting as the file server for Hyper-V. Many *nix SMB implementations lack features required for Hyper-V. So either iSCSI on a proper NAS, like you tried with OmniOS, or a Windows file server.
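On the iSCSI route, the Windows initiator side is only a couple of commands once the target exists on the appliance. A minimal sketch from a Hyper-V host, with a made-up portal address for the OmniOS box:

    # Make sure the initiator service is running, point it at the OmniOS
    # portal, then log in to the advertised target and persist the session.
    Start-Service MSiSCSI
    Set-Service MSiSCSI -StartupType Automatic
    New-IscsiTargetPortal -TargetPortalAddress '10.0.0.10'
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true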

I agree about VCSA; it can be slow at times, even on new enterprise hardware with SSDs, ridiculous amounts of RAM, etc.

I have tried quite a few virtualization solutions, some of which I have been using for many years now, and all of them have things that are a real pain to deal with.

I wonder if anyone else has something to add; we could put together a list of pros and cons for the various virtualization solutions.
 

dswartz

Active Member
Jul 14, 2011
AFAIK, the Hyper-V VMM comes bundled; I don't see any reference to a license (maybe I am wrong?). In a clustered environment, you have to do migrations from Failover Cluster Manager. Meh...
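The Failover Cluster Manager move can be scripted too, if that softens the 'meh'. A minimal sketch, assuming a clustered guest named 'vm1' and a second node named 'node2' (both placeholders):

    # Live migrate a clustered VM role to another node without the GUI.
    Move-ClusterVirtualMachineRole -Name 'vm1' -Node 'node2' -MigrationType Live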
 

Marjan

New Member
Nov 6, 2016
Good, you already have VMM.
I don't think you need to migrate VMs from Failover Cluster Manager. I haven't used VMM for about five years, but from what I remember it can all be done from VMM, which should be the proper way to do it.
Since VMM 2012, I think, there has been shared-nothing live migration for VMs, both compute and storage, and it works quite nicely.
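As far as I know, plain Hyper-V exposes the same shared-nothing move through Move-VM, without VMM. A rough sketch, with placeholder names ('vm1', destination host 'hv2', destination path 'D:\VMs\vm1'):

    # Shared-nothing live migration: compute and storage move together to the
    # destination host's local path; live migration must be enabled on both hosts.
    Move-VM -Name 'vm1' -DestinationHost 'hv2' `
        -IncludeStorage -DestinationStoragePath 'D:\VMs\vm1'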

You've got me intrigued now. I am tempted to stand up a Hyper-V cluster and VMM and try a few things.
 

dswartz

Active Member
Jul 14, 2011
Well, yes and no. As long as you aren't touching clustered storage, sure, VMM works fine. As soon as you tell FCM to make the guest HA, and you move it to clustered storage, VMM is forbidden from messing with things.
 

edge

Active Member
Apr 22, 2013
I live migrate between my two Hyper-V nodes, and they aren't clustered. A Hyper-V cluster is an actual cluster with Cluster Shared Volumes (CSVs). There is also replication between nodes. You have some jargon to learn and some conceptual differences to bridge between the two environments.
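Enabling live migration between standalone hosts is just a couple of per-host settings; a minimal sketch (Kerberos assumes both hosts are domain-joined and constrained delegation is set up, which is left out here):

    # Run on each Hyper-V host: allow inbound/outbound live migration and
    # pick the authentication type (CredSSP is the simpler, delegation-free option).
    Enable-VMMigration
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
               -UseAnyNetworkForMigration $true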
 

dswartz

Active Member
Jul 14, 2011
I understand the concepts - I've been running a vSphere cluster for several years now in my home lab - it's mostly nomenclature, plus the specific details of how things are done.
 

psannz

Member
Jun 15, 2016
If you want to use local server storage for a Hyper-V cluster AND have other nodes use that storage too, then you need to present it as an SMB 3 application share on a Scale-Out File Server role.
Basically, you install the Failover Clustering feature and create the cluster without storage. Once that's done, you add the File Server role and add your share.

How to create a Failover Cluster without Active Directory
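A rough sketch of those steps in PowerShell, following the outline above rather than a tested recipe, with placeholder node and role names; the share itself would be published much as sketched earlier in the thread:

    # Build the cluster with no shared storage, then add the Scale-Out File
    # Server role that will front the SMB 3 application share.
    New-Cluster -Name 'hvclus' -Node 'hv1', 'hv2' -NoStorage
    Add-ClusterScaleOutFileServerRole -Name 'sofs1'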
 

dswartz

Active Member
Jul 14, 2011
One interesting d'oh I experienced: I could not for the life of me understand why I couldn't operate on a VM on the other node. Cryptic error message with no real help online. Finally figured out that, yes, Virginia, you probably wanted to log in as domain\administrator, not administrator. On a different note, it's a little annoying that the create-new-VM dialog defaults secure boot to on, which breaks Linux guests. I don't always remember to uncheck that box :)
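For the record, the secure boot default can be fixed after the fact from PowerShell; a minimal sketch, assuming a hypothetical Gen 2 guest named 'centos1' that is powered off:

    # Either turn secure boot off for the Linux guest...
    Set-VMFirmware -VMName 'centos1' -EnableSecureBoot Off

    # ...or keep it on and switch to the UEFI CA most Linux distros are signed against.
    Set-VMFirmware -VMName 'centos1' -EnableSecureBoot On `
        -SecureBootTemplate 'MicrosoftUEFICertificateAuthority'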
 

edge

Active Member
Apr 22, 2013
That burns me too, but it is all part of the MS push to be secure by default.
 

dswartz

Active Member
Jul 14, 2011
One last thing that has me pulling my last hair out. One host has a single 1.7 GHz E5-2603 v4; the other has two 2.1 GHz E5-2620 v4s. No hyperthreading on the first. Normally I'd make sure to have the same chip in both, but this one was lying around. Here's the crazy thing: I can't reliably live migrate from the slow host to the fast host, but I can always do so in the opposite direction. When I say reliably, it's crazier than you might think - some guests can migrate, others can't, even with the 'CPU compatibility' box checked and the guest cold-booted. I read articles about making sure the BIOS revisions are the same, etc., which seems like BS to me. I checked the basic things like C-states and such, and it all matches up. I'm pretty sure I was using these same two hosts in my two-node VMware cluster and never had an issue. I'd love to get this last glitch out of the way, but at this point I have no idea what to look at.
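For completeness, the 'CPU compatibility' checkbox maps to a per-VM processor setting that can be checked and set from PowerShell; a minimal sketch, assuming a hypothetical guest 'vm1' that is shut down:

    # Show the current setting, then enable compatibility mode so the guest is
    # offered a reduced CPU feature set that both steppings should share.
    Get-VMProcessor -VMName 'vm1' |
        Select-Object VMName, CompatibilityForMigrationEnabled
    Set-VMProcessor -VMName 'vm1' -CompatibilityForMigrationEnabled $true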
 

dswartz

Active Member
Jul 14, 2011
Guests that can migrate: 2 WS2019 servers, 2 CentOS 7 servers.
Guests that cannot migrate: 6 assorted Linux servers (some CentOS, some not), 1 Windows 7 guest, 1 Windows 10 guest.
 

dswartz

Active Member
Jul 14, 2011
I've been through pretty much everything at this point, other than making sure the same f*cking BIOS rev is in place. It was fun while it lasted, but back to vSphere, I guess.
 

edge

Active Member
Apr 22, 2013
Probably not your issue, but only a couple of things have given me trouble with live migration.

The first was when the volume hosting the VM was compressed and the volume on the target server was not - the VM would copy over to the target but not boot.

The second has been with older OS versions like Win 7: I found some of them work better in generation 1 VMs.
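Both of those are quick to check from PowerShell; a small sketch (the D:\VMs path is just an example):

    # VM generation per guest (Win 7 era guests need Generation 1).
    Get-VM | Select-Object Name, Generation

    # The Attributes field will include 'Compressed' if the folder holding the
    # virtual disks is NTFS-compressed on this host.
    (Get-Item 'D:\VMs').Attributes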
 

dswartz

Active Member
Jul 14, 2011
I ended up switching back. The thing is: vSphere has a lot of complicated sh*t, but you can roll with a fairly simple setup. I found too many sharp edges in the Hyper-V ecosystem - stuff that didn't work at all (or as expected) unless you also did X, or Y, or Z. One example: the live migration 'incompatible CPU blah blah blah' failures? They totally went away back on vSphere. And the Microsoft 'help' sites on this question were telling me to make sure the BIOS versions were the same, and other horsesh*t. And people condescendingly telling someone (not me) that 'of course Hyper-V does not support local storage - then it would not be clustered!'

I've used this feature with vSphere when I had only one storage appliance and needed to reboot it for some reason: svMotion the guests to the NVMe drive on the vSphere host, reboot the storage server, svMotion back to shared storage. Nope, can't do that with Hyper-V. Also, the storage server has an NFS share with tons of ISO files on it. I couldn't use them for Hyper-V guest installs for whatever reason, so I ended up having to copy them to the NVMe drive - for no good reason. Other issues too, where I spent days googling to try to understand why something seemingly obvious was not working.

End of the day: Hyper-V seemed to have too many Windows-specific gotchas that required a level of expertise I not only don't have, but don't have the time or inclination to acquire (not for a home lab, at least). Other people's mileage may vary, of course.
 

edge

Active Member
Apr 22, 2013
Oh, BTW, moving the storage of a Hyper-V VM from one device to another is called storage migration, and it can be done live. That is how you would move the files to the NVMe drive.

I apologize for the idiots who told you Hyper-V doesn't support local storage (how do they think it works in Win 10?).
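The equivalent one-liner, for reference; a minimal sketch assuming a guest 'vm1' and a local NVMe path (both placeholders):

    # Live storage migration: moves the virtual disks, config, checkpoints and
    # smart paging files to the new location while the VM keeps running.
    Move-VMStorage -VMName 'vm1' -DestinationStoragePath 'D:\NVMe\vm1'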
 

dswartz

Active Member
Jul 14, 2011
My apologies - I re-read that post. The comment was about Hyper-V in a clustered environment; that is what they said can't be done. I've run vSphere in a cluster, and you certainly can use local storage - the guest just can't vMotion after that. Hope that is clearer...
 

cesmith9999

Well-Known Member
Mar 26, 2013
You can run VMs on cluster nodes with local storage. What you lose is the automatic failover, and you have to manage them with Hyper-V Manager instead of Failover Cluster Manager.
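A quick sketch of that, with placeholder names: a guest created straight on a node's local disk shows up in Hyper-V Manager on that node only, not as a cluster role.

    # Created outside the cluster, so Failover Cluster Manager never sees it and
    # it will not fail over; manage it with Hyper-V Manager or PowerShell.
    New-VM -Name 'localvm1' -MemoryStartupBytes 2GB -Generation 2 `
        -Path 'D:\LocalVMs' `
        -NewVHDPath 'D:\LocalVMs\localvm1\localvm1.vhdx' -NewVHDSizeBytes 40GB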

Chris