Performance of WHS Vail on ESXi



New Member
Feb 6, 2011
Hi all - this is my first post!

I am a current WHS v1 user - have been for a couple of years - and I have just downloaded the Vail RC, so now I am looking to get some new hardware.

I have been offered a nice box - a 6-core AMD with 16GB RAM and 2TB 7200rpm Barracuda disks (no H/W RAID at the moment) - running ESXi and then Vail in a VM.

However what I am concerned about is the disk performance - I stream a lot of HD content around my LAN from my current WHS box and wouldn't want the performance to suffer.

How much impact does running Vail in a VM have compared to installing it natively?

Are there any special steps/recommendations for maximising performance?


Jan 26, 2011
Disk performance under a good hypervisor (ESXi or Hyper-V) is really quite good. You won't notice much degradation at all compared to a native install. I've been running WHSv1 under Hyper-V for quite a while and really like the flexibility it gives me. Many people have reported good success with ESXi as well. For a number of reasons, I am getting set to rebuild using ESXi right now.


Staff member
Dec 21, 2010
Adding to PigLover's solid points, two things have a pretty big impact:
1. Dynamically expanding disks vs. fixed-size virtual disks
1a. Dynamically expanding disks start with a small footprint but are allowed to "grow" up to a set maximum. For example, you can have 100MB of data on a 400GB dynamically expanding disk and it will take up maybe 110MB of actual disk space. This lets you oversubscribe a physical drive while fooling VMs into thinking they have a 400GB drive with almost 399.9GB free. There is a performance penalty because the disk file has to expand whenever new data is written. Furthermore, data for a virtual disk may not end up written sequentially on the underlying physical disk.
1b. A fixed-size disk is generally better because space is allocated beforehand and reads/writes look more sequential. The downside is that if each virtual disk uses less than a full physical disk, one physical disk may service multiple virtual drives and end up seeking from the innermost to the outermost tracks, which is bad for performance. Dedicating a full physical disk to one fixed-size virtual disk is generally not bad at all.
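On ESXi the choice between the two boils down to a flag when you create the VMDK with vmkfstools. A rough sketch - the datastore name "datastore1" and the "vail" folder are just placeholders for illustration:

```shell
# Thin (dynamically expanding) disk: blocks are allocated on first write,
# so the file starts small and grows - cheaper on space, slower on writes.
vmkfstools -c 400G -d thin /vmfs/volumes/datastore1/vail/vail-thin.vmdk

# Fixed-size disk, zeroed up front: all 400GB is allocated (and wiped) at
# creation time, so later writes land in pre-allocated, mostly sequential blocks.
vmkfstools -c 400G -d eagerzeroedthick /vmfs/volumes/datastore1/vail/vail-fixed.vmdk
```

There is also a plain `zeroedthick` option that pre-allocates the space but zeroes blocks lazily on first write - a middle ground between the two.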

2. Use of disk or controller (including attached disks) pass-through. This gives you very close to bare-metal disk performance in most instances you will encounter in home use and is generally better than either method in 1 above.
2a. See Hyper-V disk pass-through guide here.
2b. See ESXi VMDirectPath Controller Pass-through guide here.

Raw Device Mapping (RDM) in VMware ESXi is more complicated to set up than any of the above, but it lets you achieve Hyper-V-like disk pass-through of SATA disks or SAN LUNs (its primary use) to VMs. I actually have a half-written RDM guide on the main site that I will finish up at some point.
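Until that guide is done, the gist of RDM is creating a small mapping file that points at the raw device. A sketch, assuming the same hypothetical "datastore1" datastore - the long device identifier below is a placeholder you would replace with your disk's actual name:

```shell
# List the raw disks ESXi can see, to find the device identifier.
ls /vmfs/devices/disks/

# Create a physical-mode RDM mapping file for that disk
# (t10.ATA_____EXAMPLE_DISK is a placeholder - use your own identifier).
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK \
  /vmfs/volumes/datastore1/vail/vail-rdm.vmdk
```

You then attach vail-rdm.vmdk to the VM as an existing disk. Using `-r` instead of `-z` creates a virtual-mode RDM, which supports snapshots at the cost of a little more overhead.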

For your application above, you should be OK. If that is a modern six-core CPU with an IOMMU, you could buy a cheap ~$100 SAS controller and pass the controller with its disks through using VMDirectPath and not really have to worry about performance.