Taking the ESXi --> Hyper-V plunge...few questions


JimPhreak

Active Member
Oct 10, 2013
So I've decided to give Hyper-V a real shot on my new home VM server. I haven't given up on ESXi yet, but with all the trouble it's giving me on my new "cutting edge" hardware, I think it's smart to at least give Hyper-V a try and see how well it works on the same hardware. And given that I already know my two 10gig NICs work in Windows, that's one leg up.

While I'm anxious to give Hyper-V a shot, I'm completely new to it. I'm a Windows admin, so I'm familiar with Windows Server and guest OSes, but I've never messed with Hyper-V; I've worked with VMware since 5.0 was released. So I have a few questions/concerns right off the bat.


1) Hyper-V Server Core, Full Hyper-V Server, or Full Server 2012 R2?

Since this is my first run at Hyper-V, I'm unsure which option is best for me. In theory, Hyper-V Server Core sounds like the way to go for the lower overhead and fewer patches needed. It seems to be the closest to a "bare-metal" hypervisor of the three Windows options.

However, I'm concerned about my ability to configure and manage it, being a total noob in this regard. For example, how would I even go about installing drivers (such as my NIC drivers)? And how easily is it managed without a domain? (My plan is to run a Server 2012 R2 DC as a guest VM, and I'd imagine a guest can't manage the host hypervisor it's a part of.)

Also, are there any recommended remote administration tools for managing Hyper-V from a Windows PC, as opposed to the standard Windows 8 RSAT tools?


2) How does hardware passthrough work in Hyper-V?

A deal breaker for me is being able to get my storage solution (unRAID) working in Hyper-V. I haven't seen any documented cases of users getting it to work; however, the folks at Lime Technology have indeed provided Hyper-V guest drivers, so in theory it should work.

However, I need to be able to pass through a bootable USB flash drive and my M1015 storage controller to get unRAID up and running the way I need it. How does hardware (and most notably USB) passthrough work in Hyper-V? I've heard Microsoft only added native support for this in R2?


3) Any other gotchas for those who have made the switch from ESXi over to Hyper-V?

If I'm to give Hyper-V a real shot, and not just spend a few days on it before deciding to go back to VMware, it would help to avoid any early missteps. So if anyone who has made this transition can lend some advice, I would GREATLY appreciate it!

I plan to run the following guest VMs on my host. If anything stands out as a possible issue, please speak up!
  • pfSense (router/firewall for home network)
  • unRAID (storage + Plex media server docker)
  • Server 2012 R2 AD DC
  • Backup Server
  • Torrent VM
  • Few test Windows VMs
 

markarr

Active Member
Oct 31, 2013
I have used Hyper-V at my last two jobs. It works great, but the management is lacking.

1) If you have a Windows 8.1 computer to manage it from, then install Hyper-V Server; if not, use the full GUI install. You can do everything from PowerShell, but there's a bit of a learning curve.
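
If you go the Hyper-V Server route without a domain, the workgroup remote-management setup is the fiddly part. A minimal sketch of what it looks like from the managing PC (the host name HV01 and the local admin account are placeholders):

Code:
# On the managing Windows 8.1 PC, in an elevated PowerShell prompt.
# Trust the workgroup Hyper-V host for WinRM (required without a domain).
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "HV01" -Force

# Cache credentials for the host's local administrator account.
cmdkey /add:HV01 /user:HV01\Administrator /pass

# Sanity check: open a remote PowerShell session and list VMs.
Enter-PSSession -ComputerName HV01 -Credential HV01\Administrator
Get-VM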

2) This will be the deal breaker. There is no PCIe hardware passthrough like there is in ESXi. You can pass individual disks through, very similar to RDM on ESXi (much easier on Hyper-V, but not recommended outside home/test environments). This also means no USB passthrough.
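
Passing a physical disk through looks something like this in PowerShell (the disk number and VM name are placeholders; the disk has to be offline on the host before Hyper-V will attach it):

Code:
# Take the physical disk offline on the host (disk 2 is a placeholder).
Set-Disk -Number 2 -IsOffline $true

# Attach it to the VM as a pass-through disk on the SCSI controller.
Add-VMHardDiskDrive -VMName "unRAID" -ControllerType SCSI -DiskNumber 2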

3) Once you get past the management side, Hyper-V works. There is very little reporting available and you are dealing with Windows logs (for better or worse, depending on your perspective), PXE booting is quirky, replication works great out of the box, dynamic memory is a little quirky, and live migration is easy and can even be done with shared-nothing storage. That's the short list off the top of my head.
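
That shared-nothing live migration is basically a one-liner once both hosts are configured for it; something like this (VM name, destination host, and path are placeholders):

Code:
# Move a running VM and its storage to another host with no shared storage.
Move-VM -Name "testvm" -DestinationHost "HV02" -IncludeStorage `
    -DestinationStoragePath "D:\VMs\testvm"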

Outside of the hardware passthrough on the unRAID VM, everything else should work just fine.

EDIT: You can pass USB disk devices through to the VM.
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
I have used Hyper-V at my last two jobs. It works great, but the management is lacking.

1) If you have a Windows 8.1 computer to manage it from, then install Hyper-V Server; if not, use the full GUI install. You can do everything from PowerShell, but there's a bit of a learning curve.

2) This will be the deal breaker. There is no PCIe hardware passthrough like there is in ESXi. You can pass individual disks through, very similar to RDM on ESXi (much easier on Hyper-V, but not recommended outside home/test environments). This also means no USB passthrough.

3) Once you get past the management side, Hyper-V works. There is very little reporting available and you are dealing with Windows logs (for better or worse, depending on your perspective), PXE booting is quirky, replication works great out of the box, dynamic memory is a little quirky, and live migration is easy and can even be done with shared-nothing storage. That's the short list off the top of my head.

Outside of the hardware passthrough on the unRAID VM, everything else should work just fine.
Hyper-V doesn't offer any hardware passthrough at all? Wow, that really is going to be the deal breaker, I guess. That's really frustrating.
 

JimPhreak

Active Member
Oct 10, 2013
Honestly, I think you're approaching this with the wrong purpose. When you switch hypervisors, you shouldn't try so hard to recreate the exact system you had on the previous hypervisor. And if you don't have a variety of storage methods available, you've got it wrong as well; after all, part of the point of virtualization is having more choices, unlike bare metal.

Just my two cents.

I'm just trying to find the best setup for my needs. I'm open to a better solution than what I have now (unRAID) for my storage, but from what I've read, unRAID is pretty ideal for bulk static storage. I like that all my drives remain spun down except the ones with files being accessed (since data is not striped).

Currently I have two unRAID servers for my storage (90% of which is static media) that mirror each other (they are in different geographic locations and back up over a site-to-site VPN). If I can't do hardware passthrough, what are my options for storage on Hyper-V?

My storage hardware is as follows (all just purchased and unchangeable):
  • 64GB SATA DOM
  • 512GB Samsung SM951 M.2 SSD
  • 2 x 480GB Intel 730 SSDs
  • 4 x 8TB Seagate Archive Shingled Drives
  • M1015 controller currently flashed to IT mode.
And my storage needs are as follows:
  • Bulk media storage (have about 12TB worth right now and that continues to grow)
  • Storage for VMs themselves
  • Storage for VM backups (currently use Veeam)
  • Storage for some Windows shares (not a deal breaker since I don't need great performance)
 

markarr

Active Member
Oct 31, 2013
One thing with Hyper-V: because it is Windows, you can use software RAID for some things. As for unRAID, if I understand it correctly, you should be able to just pass the drives through; they will show up as local drives to the VM and should work, unless unRAID does some specific drive mapping.
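
If unRAID identifies drives by serial number (as I understand it does), you'd want to map serials to Hyper-V disk numbers on the host before attaching anything. Something like this, purely illustrative:

Code:
# Map host disk numbers to serial numbers and sizes to pick the right drives.
Get-Disk | Select-Object Number, FriendlyName, SerialNumber, Size | Format-Table

# Then offline and attach each data drive as above, e.g. disk 3 (placeholder).
Set-Disk -Number 3 -IsOffline $true
Add-VMHardDiskDrive -VMName "unRAID" -ControllerType SCSI -DiskNumber 3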
 

JimPhreak

Active Member
Oct 10, 2013
One thing with Hyper-V: because it is Windows, you can use software RAID for some things. As for unRAID, if I understand it correctly, you should be able to just pass the drives through; they will show up as local drives to the VM and should work, unless unRAID does some specific drive mapping.
You're saying I should be able to just pass through each drive individually to the VM, instead of passing through the entire controller, which in turn presents each of the disks individually to the VM?
 

JimPhreak

Active Member
Oct 10, 2013
I was gonna mention that too, but I felt the whole idea was a bit off, so I stuck to my basic opinion. But yeah, this is what I would have done.
I'm not sure which idea you think is off. I'm looking to go from one hypervisor to another due mainly to the hardware support for my motherboard and devices. I don't have the need or desire to change the functionality of the guest VMs within the hypervisor.
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Oh nice! Thanks for the link; that seems promising. I'll have to test this with unRAID to see if I'd still get disk stats like SMART data, temps, spin-down options, etc. That's why most people pass through their controllers in ESXi.
My all-in-one home VM server went like this: Hyper-V -> ESXi -> Proxmox. In Hyper-V, SMART data will not be passed through and hdparm will not work to spin down the disks, so that didn't work for my bulk media storage VM (I use Ubuntu + SnapRAID + AUFS to achieve a bulk media storage system similar to unRAID). This ruled out Hyper-V for me for this purpose (Hyper-V is otherwise great).

The main thing that pushed me away from ESXi was managing the VMs. The older thick client required a Windows box, and the new vSphere client requires WAY too many resources for a simple homelab. Also, the VM backup options either cost a bunch of money or felt like a hack.

I ended up on Proxmox (I've used it for years on my colo'd VM servers). It can do PCIe passthrough, it has a lightweight web GUI for management, and VM snapshots are super simple as well. Also, I love *nix, so Proxmox ended up being the best solution for me at home.
 

JimPhreak

Active Member
Oct 10, 2013
My all-in-one home VM server went like this: Hyper-V -> ESXi -> Proxmox. In Hyper-V, SMART data will not be passed through and hdparm will not work to spin down the disks, so that didn't work for my bulk media storage VM (I use Ubuntu + SnapRAID + AUFS to achieve a bulk media storage system similar to unRAID). This ruled out Hyper-V for me for this purpose (Hyper-V is otherwise great).

The main thing that pushed me away from ESXi was managing the VMs. The older thick client required a Windows box, and the new vSphere client requires WAY too many resources for a simple homelab. Also, the VM backup options either cost a bunch of money or felt like a hack.

I ended up on Proxmox (I've used it for years on my colo'd VM servers). It can do PCIe passthrough, it has a lightweight web GUI for management, and VM snapshots are super simple as well. Also, I love *nix, so Proxmox ended up being the best solution for me at home.
Proxmox is a KVM-based distro, right? Does it have good driver support? That's my big issue with ESXi right now, since my motherboard is so new.
 

rubylaser

Active Member
Jan 4, 2013
Michigan, USA
Proxmox is a KVM-based distro, right? Does it have good driver support? That's my big issue with ESXi right now, since my motherboard is so new.
Yes, Proxmox is KVM-based, built on Debian Wheezy. I've read in your posts that you are using the D-1540. I'm using the 3.10 kernel in Proxmox (the optional, more recent kernel over the standard 2.6 one). Normally, these drivers would be built into the kernel, but that hardware is so new that you would need a newer kernel, or to just add the drivers to it like this.

Cuong's Technical Notes: Xeon Processor D-1540: 10GbE Driver in Linux (8086:15ad)

You should only need to do the first part of those directions.

Code:
sudo -i
# Kernel headers are needed to compile the out-of-tree ixgbe module.
apt-get install linux-headers-`uname -r`
# Fetch and unpack the Intel ixgbe 4.0.3 driver source.
wget http://sourceforge.net/projects/e1000/files/ixgbe%20stable/4.0.3/ixgbe-4.0.3.tar.gz
tar zxf ixgbe-4.0.3.tar.gz
cd ixgbe-4.0.3/src
make
# Unload any in-tree ixgbe module, install the new one, and load it.
modprobe -r ixgbe
make install
modprobe ixgbe
Hopefully, they show up now.
Code:
lspci -nn | grep Ethernet
I would try this out for you, but I don't have anything as current as that board at home or work yet :)
 
Last edited:

JimPhreak

Active Member
Oct 10, 2013
553
55
28
Yes, Proxmox is KVM-based, built on Debian Wheezy. I've read in your posts that you are using the D-1540. I'm using the 3.10 kernel. Normally, these drivers would be built into the kernel, but that hardware is so new that you would need a newer kernel, or to just add the drivers to it like this.

Cuong's Technical Notes: Xeon Processor D-1540: 10GbE Driver in Linux (8086:15ad)

You should only need to do the first part of those directions.

Code:
sudo -i
apt-get install linux-headers-`uname -r`
wget http://sourceforge.net/projects/e1000/files/ixgbe%20stable/4.0.3/ixgbe-4.0.3.tar.gz
tar zxf ixgbe-4.0.3.tar.gz
cd ixgbe-4.0.3/src
make
modprobe -r ixgbe
make install
modprobe ixgbe
Hopefully, they show up now.
Code:
lspci -nn | grep Ethernet
I would try this out for you, but I don't have anything as current as that board at home or work yet :)

Awesome! Thanks for that link, man; that's super helpful. I'll try to test out Proxmox this weekend. It does have a nice, simple look, which would be great for home.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Awesome! Thanks for that link, man; that's super helpful. I'll try to test out Proxmox this weekend. It does have a nice, simple look, which would be great for home.
No problem. I hope that works for you. Another option would be to install a newer 4.0.5 kernel, but I would suggest staying with the standard Proxmox kernels and just adding the drivers as your first step.
 

Chuntzu

Active Member
Jun 30, 2013
Just my two cents, but I'm unsure if unRAID will work as a guest VM, since (and this is where I may be wrong) you have to boot unRAID from an unRAID USB key/drive, right? It's been about 8 years since I last used it. If you can clone the USB drive to the VHDX that the VM boots off of, then it could work. Otherwise, USB passthrough to boot from is not going to work.

I am a fan of Storage Spaces (the learning curve sucked), but once you figure out an arrangement of drives that meets your performance requirements, I have found it to be very reliable and extremely fast.
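
For flavor, a bare-bones parity space is only a few cmdlets (the pool and disk names here are just examples):

Code:
# Pool every physical disk that is eligible for pooling.
New-StoragePool -FriendlyName "MediaPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "*Storage Spaces*").FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Carve out a parity virtual disk using the whole pool.
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "Media" `
    -ResiliencySettingName Parity -UseMaximumSize

# Initialize, partition, and format the new disk.
Get-VirtualDisk -FriendlyName "Media" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS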
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
Just my two cents, but I'm unsure if unRAID will work as a guest VM, since (and this is where I may be wrong) you have to boot unRAID from an unRAID USB key/drive, right? It's been about 8 years since I last used it. If you can clone the USB drive to the VHDX that the VM boots off of, then it could work. Otherwise, USB passthrough to boot from is not going to work.

I am a fan of Storage Spaces (the learning curve sucked), but once you figure out an arrangement of drives that meets your performance requirements, I have found it to be very reliable and extremely fast.
Booting from a VHDX or VMDK works fine for unRAID, but you still need to pass through the USB drive, because once booted, unRAID still needs that USB drive present. That's how I have it running on ESXi at the moment.

As for Storage Spaces, the main reason I don't want to use it is that I don't want to stripe data across multiple disks on my home media server. I like that only the disk being accessed needs to spin up, while the rest can remain spun down.
 

Chuntzu

Active Member
Jun 30, 2013
383
98
28
I guess ours are two different use cases; spinning drives up and down can be kind of nice, though. It may decrease the longevity of the drives, but I'm sure that's only by a little bit, so that's a moot point from me.

I am using quite a few disks along with 10Gb/40Gb Ethernet and 40Gb InfiniBand networks, so my performance needs and expectations are a little higher. For a low-power, always-on setup, it sounds like ESXi is the way to go, minus the issues you are having with your new gear.
 

JimPhreak

Active Member
Oct 10, 2013
553
55
28
I'm trying to get Hyper-V set up so I can do some comparison testing.

However, I'm unable to create any Generation 2 VMs without getting an error message. My host is Server 2012 R2 Standard, and the guest OSes I've tried are 8.1 and 10.

EDIT: Never mind, fixed it. The issue was the network share I was mounting the ISO from. As soon as I copied the ISO to the local host, it worked fine.
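
For anyone who hits the same thing, this is roughly the working sequence once the ISO is local (names and paths are placeholders from my setup):

Code:
# Create a Generation 2 VM with a new VHDX.
New-VM -Name "Win10-Test" -Generation 2 -MemoryStartupBytes 4GB `
    -NewVHDPath "D:\VMs\Win10-Test\os.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "vSwitch"

# Attach the installer ISO from LOCAL storage (a UNC path was the problem),
# then boot from the DVD drive first.
Add-VMDvdDrive -VMName "Win10-Test" -Path "D:\ISOs\Win10.iso"
Set-VMFirmware -VMName "Win10-Test" `
    -FirstBootDevice (Get-VMDvdDrive -VMName "Win10-Test")
Start-VM -Name "Win10-Test"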
 

whitey

Moderator
Jun 30, 2014
Trading vSphere for Hyper-V...EWWWW

Hahah, j/k. I guess it's ESXi (free edition) for Hyper-V. Man, if you had all the goodies (license access) of vSphere, I don't think you'd ever look back!
 

whitey

Moderator
Jun 30, 2014
I'm not sure which idea you think is off. I'm looking to go from one hypervisor to another due mainly to the hardware support for my motherboard and devices. I don't have the need or desire to change the functionality of the guest VMs within the hypervisor.
Have patience, grasshoppa'...either VMware or someone in the virt community WILL patch/provide a VIB for those 10G NICs on the Intel Xeon D-1540 based systems w/ the 10G option...just a matter of time.

You don't think they are excited about this platform as well for SMB? :-D