Hyper-V vs vSphere - my initial take


dswartz

Active Member
Jul 14, 2011
You can run VMs on cluster nodes using local storage. What you lose is the automatic failover, and you have to manage them with Hyper-V Manager instead of Failover Cluster Manager.

Chris
I understand. What the other dude wanted to do (as did I) was to take a guest in a clustered role and (temporarily) move it to local storage. There is apparently no obvious way to do that directly. What I think you need to do: remove the guest from the cluster, storage-migrate it to local disk, do whatever you need to, storage-migrate it back, then add it back to the clustered role? If so, you'd think the Microsoft 'expert' could have said so and not been so dismissive.
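For what it's worth, that workflow maps onto a handful of PowerShell cmdlets. A rough, untested sketch (the VM name and paths are placeholders):

```powershell
# Run on the node that currently owns the VM (all names/paths are placeholders).
$vm = 'TestGuest'

# 1. Take the guest out of the cluster; the VM itself stays registered in Hyper-V.
Remove-ClusterGroup -Name $vm -RemoveResources -Force

# 2. Storage-migrate it to local disk (works while the VM is running).
Move-VMStorage -VMName $vm -DestinationStoragePath 'D:\LocalVMs\TestGuest'

# ... do whatever maintenance required moving off the shared storage ...

# 3. Storage-migrate it back onto the cluster shared volume.
Move-VMStorage -VMName $vm -DestinationStoragePath 'C:\ClusterStorage\Volume1\TestGuest'

# 4. Re-add it as a clustered role so automatic failover works again.
Add-ClusterVirtualMachineRole -VMName $vm
```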
 

cesmith9999

Well-Known Member
Mar 26, 2013
There is a workaround.

Have the local storage presented as an iSCSI target volume. Then you can add that iSCSI volume as shared storage in Failover Cluster Manager and migrate your storage onto it.

Chris
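In PowerShell terms, that workaround would look roughly like the sketch below, using the built-in iSCSI Target Server feature (untested here; paths, sizes, IQNs, and names are all placeholders):

```powershell
# --- On the host that owns the local disk (needs the iSCSI Target Server feature) ---
Install-WindowsFeature FS-iSCSITarget-Server

# Back the target with a VHDX on the local disk and allow the cluster nodes to connect.
New-IscsiVirtualDisk -Path 'D:\iSCSI\ClusterScratch.vhdx' -SizeBytes 500GB
New-IscsiServerTarget -TargetName 'ClusterScratch' `
    -InitiatorIds 'IQN:iqn.1991-05.com.microsoft:node1.lab.local',
                  'IQN:iqn.1991-05.com.microsoft:node2.lab.local'
Add-IscsiVirtualDiskTargetMapping -TargetName 'ClusterScratch' -Path 'D:\iSCSI\ClusterScratch.vhdx'

# --- On each cluster node: connect to the new target ---
New-IscsiTargetPortal -TargetPortalAddress '10.0.0.10'
$target = Get-IscsiTarget          # assumes only the new target is visible
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true

# --- After the disk is onlined/initialized on one node, hand it to the cluster ---
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name 'Cluster Disk 2'   # optional: make it a CSV
# Storage migration (Move-VMStorage) can then target the new volume.
```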
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
How many nodes/servers in the cluster? I'm asking because you could run Nutanix CE absolutely free on up to four nodes. There is no real limitation on storage (except that each node must have at least one SSD). Some people on these forums have even managed to pass through the storage controller to the CVM to improve performance. (That is a standard feature on paid Nutanix, but disabled on CE to improve compatibility.)
Another gotcha is that CVMs are memory hungry: 8 GB reserved is about the bare minimum, and 16 GB is recommended.
 

dswartz

Active Member
Jul 14, 2011
You can run VMs on cluster nodes using local storage. What you lose is the automatic failover, and you have to manage them with Hyper-V Manager instead of Failover Cluster Manager.

Chris
Understand that now. No one on that site, including a couple of people purporting to be experts, explained it as clearly as you did. They just said 'no, you can't do that; why would you want to? That's stupid' (paraphrasing).
 

edge

Active Member
Apr 22, 2013
Understand that now. No one on that site, including a couple of people purporting to be experts, explained it as clearly as you did. They just said 'no, you can't do that; why would you want to? That's stupid' (paraphrasing).
My experience is that those claiming to be experts aren't. I have given many presentations at tech conferences, and I try to be careful to stay within the bounds of what I have done and tested, because I have learned, embarrassingly, that the moment you step beyond that, there is someone in the audience who knows better than you and proves it.
 

trippinnik

New Member
Oct 13, 2018
I've had local storage in a Hyper-V cluster and moved stuff to it and back. Also, this idea of moving VMs to local storage and back for a SAN reboot sounds silly; the SAN should have dual controllers, so VMs should be fine during SAN reboots.

My experience with vSAN vs S2D is what sets the MS solution ahead. It would take 24 hours for me to put a host into maintenance mode in VMware, and even all-SSD storage still needed a dedicated SSD cache. MS seems smarter and more flexible and has more tools to see what is going on. I haven't used vSAN in a while, but I haven't heard anyone really talking it up or using it in production either.
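As an illustration of that visibility point (a sketch, not exhaustive, and only an assumption about which tools are meant), S2D exposes its state through ordinary cmdlets that can be run from any cluster node:

```powershell
# Read-only checks on a Storage Spaces Direct cluster, from any node.
Get-StorageJob                                   # rebuild/repair jobs and their progress
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, Usage, HealthStatus
Get-StorageSubSystem Cluster* | Get-StorageHealthReport   # aggregate capacity, IOPS, latency
Get-ClusterStorageSpacesDirect                   # overall S2D state and cache settings
```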
 

dswartz

Active Member
Jul 14, 2011
I've had local storage in a Hyper-V cluster and moved stuff to it and back. Also, this idea of moving VMs to local storage and back for a SAN reboot sounds silly; the SAN should have dual controllers, so VMs should be fine during SAN reboots.

My experience with vSAN vs S2D is what sets the MS solution ahead. It would take 24 hours for me to put a host into maintenance mode in VMware, and even all-SSD storage still needed a dedicated SSD cache. MS seems smarter and more flexible and has more tools to see what is going on. I haven't used vSAN in a while, but I haven't heard anyone really talking it up or using it in production either.
Nice that your home lab can afford that :)
 

trippinnik

New Member
Oct 13, 2018
Nice that your home lab can afford that :)
You mean the dual controllers? Can't afford that, though I could have had an old EqualLogic that I got a lot of drives from, or even the Compellent, but too much noise and space. If you've ever dealt with VMware support, you know it's always the network or the storage that's at fault. I definitely feel like MS has more intimate knowledge of storage, and since they have a huge cloud footprint, they are running and managing what they make at scale.
 

b-rex

Member
Aug 14, 2020
I've had local storage in a Hyper-V cluster and moved stuff to it and back. Also, this idea of moving VMs to local storage and back for a SAN reboot sounds silly; the SAN should have dual controllers, so VMs should be fine during SAN reboots.

My experience with vSAN vs S2D is what sets the MS solution ahead. It would take 24 hours for me to put a host into maintenance mode in VMware, and even all-SSD storage still needed a dedicated SSD cache. MS seems smarter and more flexible and has more tools to see what is going on. I haven't used vSAN in a while, but I haven't heard anyone really talking it up or using it in production either.
To be honest, nobody other than those willing to ride the edge is willing to talk up either S2D or vSAN. I've deployed both, and in my experience they each have their niches, but for on-prem deployments a lot of admins, particularly storage admins, hate the idea of vSAN and S2D. That's even with ready nodes; they just don't trust it. I've had less experience with Nutanix; they're also a leader, though still not widely adopted. I've had some experience and was impressed... that being said, for those that want to walk into the SDS/HCI space, vSAN and S2D are the two most common I've worked with. Even so, most clients go with more traditional SAN products. It's hard to sell someone on a new solution when they've run their old Hitachi into the ground with almost no unplanned downtime to speak of, while there are pages upon pages of easily found disappointment with S2D and vSAN.

It's for these reasons that my on-prem work is a fraction of my cloud work now. Things are changing fast, faster than I ever thought they would, and a lot of it has to do with the real disappointment and unrealized ROI that the aforementioned solutions typically present, along with the astronomical prices of more reliable, more traditional on-premise solutions.
 

JTF195

New Member
Nov 15, 2017
I've been using Hyper-V on S2D at home off and on for a few years now and I love it.
Admittedly I'm using eight dirt-cheap 120GB SSDs, so my usable space is very limited after the redundancy, but even after a few years it still feels like black magic watching a VM live-migrate in just a few seconds.
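For anyone who hasn't seen it, the "black magic" itself is a one-liner once the VM is a clustered role (the VM and node names below are just placeholders):

```powershell
# Live-migrate a clustered VM to another node; it keeps running throughout.
Move-ClusterVirtualMachineRole -Name 'TestGuest' -Node 'S2D-NODE2' -MigrationType Live
```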
 

edge

Active Member
Apr 22, 2013
I was an SA at HPE. I started working with S2D on our Enterprise Data Warehouse. Our S2D network was InfiniBand/RDMA.

A node in the system consisted of two DL380s connected to shared dual-ported SAS JBODs. There could be up to four nodes in a rack. At the top of the rack there were either one or two DL380s that were for failover. If a DL380 in a node failed, its partner took over the SAS drives and exposed them to the failover node via RDMA, and the cluster kept humming. Everything was part of a single cluster, and it scaled from one node in one rack up to seven racks. We saw a 4% CPU hit on the node sharing out the drives during a failover.

Never had an issue with S2D, but we were extremely careful with configuration: the firmware, hardware, driver stack, and software stack were all strictly defined for each revision.
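A minimal sketch of what that kind of pre-flight discipline can look like with the stock tooling (node names are placeholders; this is the generic validation pass, not HPE's internal process):

```powershell
# Validate inventory, networking, system configuration, and S2D suitability before changes.
Test-Cluster -Node 'NODE1','NODE2','NODE3','NODE4' `
             -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'

# Capture the firmware/driver surface so every node can be checked against the defined baseline.
Get-PhysicalDisk | Select-Object FriendlyName, Manufacturer, Model, FirmwareVersion
Get-NetAdapter   | Select-Object Name, InterfaceDescription, DriverVersionString, DriverDate
```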