How can I improve disk performance for my ESXi build?


Ch33rios

Member
Nov 29, 2016
102
6
18
43
I've been playing around with ESXi for almost a month now. My first hurdle was getting my GPU to pass through successfully, and while it was a bit bumpy, I managed to get that fully working, which was exciting.

Then, while setting up my VMs, I started to notice pretty mediocre performance on my spinning HDDs. I wondered why that would be the case, but after a bunch of research it seems local storage and ESXi (out of the box) don't mix well.

First, my setup:

Xeon E3-1230 v5
Gigabyte MX31-BS0 - Intel C232 chipset (mATX)
32GB DDR4 (2x16GB)
PCIe USB Controller passed through to Win10 VM
Nvidia GTX970 passed through to Win10 VM
1 x 120GB SSD
2 x 480GB SSD
1 x 3TB HDD
2 x 2TB HDD

I'd look into getting a hardware RAID card, but as it stands right now I literally don't have space on my board. Call it poor planning/lack of knowledge on my part, but the GPU and USB card are taking up the only expansion slots on the mobo.

I've been looking around this site more and more, and there are some references to building a VM within ESXi that hosts a storage array which is then presented back to ESXi as a datastore, but it seems like a key component of that is passing the existing onboard SATA controllers through to the VM. Unfortunately, my SATA controllers are grayed out in the vSphere web client and not available for passthrough... not sure why, as I feel like I've read about people successfully passing through their drives using a mobo with the same C232 chipset.
[Attached screenshot: HardwarePage.PNG]

So to all you experts out there, any thoughts or recommendations on how I might proceed? Is there a more 'custom' way to pass through the controller on the board by just editing the /etc/vmware/esx.conf file? That seemed to work pretty well for enabling the then-grayed-out GPU.
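
For reference, the GPU trick was roughly along these lines from an SSH session (the PCI address below is a placeholder, not my actual device; the exact path format is whatever the existing line in esx.conf shows):

Code:
# back up first
cp /etc/vmware/esx.conf /etc/vmware/esx.conf.bak
# find the controller's PCI address and vendor/device ID
esxcli hardware pci list
# in /etc/vmware/esx.conf, change the device's owner from "vmkernel" to "passthru", e.g.
#   /device/000:00:17.0/owner = "passthru"
vi /etc/vmware/esx.conf
# reboot the host for the change to take effect
reboot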

Appreciate the time and help!
 

nk215

Active Member
Oct 6, 2015
412
143
43
49
If you want to "pass-thru" the HDD connected to a built-in controller to a guest VM, you should look into raw device mapping (RDM).
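
From the ESXi shell it looks roughly like this (device name and datastore paths below are examples only; adjust to your disks):

Code:
# list local disks to get the device identifier
ls /vmfs/devices/disks/
# create a physical-compatibility RDM pointer file on an existing datastore
mkdir /vmfs/volumes/datastore1/rdms
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_2TB_DISK /vmfs/volumes/datastore1/rdms/hdd1-rdm.vmdk
# then attach the resulting .vmdk to the guest VM as an existing disk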
 

FullMetalJester

New Member
Jan 5, 2017
9
0
1
41
TBH you need to invest in a RAID controller and set up RAID10 locally for the spinning disks. You could also set up a storage server like FreeNAS or similar, run ZFS (no RAID controller needed, just onboard SATA ports or an HBA for additional ports), and share the resulting disks over iSCSI. This gives you the added benefit of shared storage if you want to add a second ESXi host. You may need the VMUG subscription to get iSCSI (I don't remember off the top of my head), so this could incur additional costs. All depends on your budget.
 

Ch33rios

Member
Nov 29, 2016
102
6
18
43
TBH you need to invest in a RAID controller and set up RAID10 locally for the spinning disks. You could also set up a storage server like FreeNAS or similar, run ZFS (no RAID controller needed, just onboard SATA ports or an HBA for additional ports), and share the resulting disks over iSCSI. This gives you the added benefit of shared storage if you want to add a second ESXi host. You may need the VMUG subscription to get iSCSI (I don't remember off the top of my head), so this could incur additional costs. All depends on your budget.
Yeah, I know this is probably the better long-term setup, but unfortunately it would require two additional purchases: an ATX motherboard for additional PCIe slots and the RAID controller. In fact, it would probably also require RAM, as right now I'm running non-ECC DDR4 and the options for server boards that take non-ECC RAM are pretty minimal. Building a separate FreeNAS box would also cost some coin... I do have an extra 3570K lying around, but it'd need at least a new mobo (although this might be cheaper since I would really only need a mobo for it).
 

DaveBC

New Member
Apr 7, 2015
20
5
3
42
I believe you cannot pass through your disk controller because the disk with the ESXi install is attached to it.

Regarding your disk performance,
1) AHCI controllers perform poorly because the devices (or is it just the drivers?) have a low queue depth; a quick way to check yours is in the sketch at the end of this post. Many folks hit this limit hard when they tried to do vSAN with onboard SATA.
Disk Controller features and Queue Depth?
Why Queue Depth matters!
SSDs on AHCI on vSphere do better than HDDs, but neither performs as well as it would on physical hardware.

2) A controller with BBWC or FBWC will buffer random writes through cache and allow the VMs to proceed to the next I/O as if those writes were committed to disk.

3) When you put more than one virtualized workload through the same physical disk subsystem, you trigger the I/O blender effect.

I'm honestly unsure what your disk performance would be like with physical RDMs vs. a VMDK on a datastore, given the same controller, and now I'm curious. Try each and run CrystalDiskMark to see. While you're at it, try the disk attached to a bare-metal Windows 10 install and measure that too.
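
To see the queue depth ESXi reports for your devices, something like this from the ESXi shell should show it (field names may vary slightly between versions):

Code:
# per-device maximum queue depth
esxcli storage core device list | grep -E "Display Name|Device Max Queue Depth"
# per-adapter view, to see which controller each device sits behind
esxcli storage core adapter list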
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
I would consider exchanging your current mainboard for one with three slots, if you can find one with non-ECC support.
Then you can add either a RAID card for locally attached storage or an HBA to pass through to a storage appliance.
If you can sell your Gigabyte board, this might be the cheapest solution.

Without further info on use case, required space, budget, number of VMs, etc., it's difficult to offer more sensible recommendations (e.g. a second server could make sense, but might not, depending on all that).
 

CreoleLakerFan

Active Member
Oct 29, 2013
485
180
43
Frankly, the suggestions in this thread are terrible:

  • Purchase of a HW RAID-10 card - !!EXPENSIVE!!
  • Swap motherboard for one without ECC support - !!DATA AT RISK and LOTS OF WORK!!
  • Use Raw Device Mappings - !!POOR PERFORMANCE!!

One thing that has been stated correctly is that you cannot use the integrated SATA controller as a passthrough device because you are using it for local datastores. The best course of action for the OP is to acquire a cheap rebranded LSI 9211 HBA, such as an IBM M1015, attach the drives to it, and pass the device through to a VM which will host the storage under ZFS. Configure a zvol, use an SSD as a ZIL/SLOG write cache, and present datastores to the ESXi host on an internal vSwitch via NFS/iSCSI. You will still need to use your integrated SATA controller to host your ZFS VM.
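
In a FreeNAS-type storage VM you would do this through the GUI, but at the command line it boils down to roughly the following (pool, dataset, and device names are examples; adjust for your disks):

Code:
# mirrored pool from the two 2TB spinners
zpool create tank mirror /dev/da1 /dev/da2
# one of the SSDs as a dedicated ZIL/SLOG device
zpool add tank log /dev/da3
# a dataset to export over NFS as a datastore...
zfs create tank/vmstore
# ...or a zvol to present over iSCSI instead
zfs create -V 1T tank/vmfs-lun

The NFS/iSCSI sharing itself is then configured in the storage VM's UI and mounted on the ESXi host as a datastore over the internal vSwitch.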

Alternatively, you can hack up ESXi to use a USB stick as a datastore for your ZFS VM and pass the C232 SATA controller through to the VM. This may be an option for you since it seems all of your PCIe slots are occupied.
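
The USB-datastore hack looks roughly like this from an SSH session (device name is an example, and this is from memory, so double-check against a current guide before trying it):

Code:
# stop the USB arbitrator so the stick can be used as storage instead of passthrough
/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off
# find the USB device name
ls /vmfs/devices/disks/
# label the stick and create a single VMFS partition on it
partedUtil mklabel /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0 gpt
# end sector comes from the geometry reported by: partedUtil getptbl <disk>
partedUtil setptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0 gpt "1 2048 END_SECTOR AA31E02A400F11DB9590000C2911D1B8 0"
# format it as a datastore
vmkfstools -C vmfs5 -S usb-datastore /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:1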

I don't know how many VMs you are running, but either of the above will likely require a memory upgrade, as you should allocate a healthy amount of RAM (I have 16GB) to your ZFS VM. ZFS uses RAM for read caching, so read performance on your ZFS datastore gets much better as you allocate more RAM to the ZFS VM. Also, when you pass a PCI device through to a VM, all of that VM's RAM is reserved, reducing the available RAM in the shared memory pool for the other VMs.
 
  • Like
Reactions: whitey

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Frankly, the suggestions in this thread are terrible.
LOL, I had a good chuckle/wince moment as well. Get an HBA and be done w/ it. If there's no room for an HBA you're SOL, unfortunately, or move that PCIe USB controller to a PCIe flex card mounted somewhere else in the chassis if another card is potentially blocking a PCIe slot.

Something like this IF it helps to resolve the issue.

PCI Express x4 Flex Riser Card | Logic Supply
 
  • Like
Reactions: CreoleLakerFan

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
  • Swap motherboard for one without ECC support - !!DATA AT RISK and LOTS OF WORK!!
That is what he has now already.
Of course it's more sensible to get a proper combo, but with his other consumer-oriented gear he's not likely to be willing to do so.
The board he currently has only has two slots and he needs three.
So options are:
-Get a second box
-Get a new mainboard (and optionally new RAM)

Alternatively, you can try to use riser cards to split your existing PCIe slots, or convert the M.2 slot to a PCIe x4 slot (which might actually be easier/cheaper than replacing the board, depending on the HBA you'd get).
 

nk215

Active Member
Oct 6, 2015
412
143
43
49
If you are really stuck and don't want to spend the $$, you can pass through the onboard USB controller to handle your keyboard/mouse and save a PCIe slot.

Before doing this, move your ESXi installation to an SSD or HDD.

Passing through the entire USB controller does work on some motherboards, but not all.
 

CreoleLakerFan

Active Member
Oct 29, 2013
485
180
43
That is what he has now already.
Of course it's more sensible to get a proper combo, but with his other consumer-oriented gear he's not likely to be willing to do so.
The board he currently has only has two slots and he needs three.
So options are:
-Get a second box
-Get a new mainboard (and optionally new RAM)
Advising him to swap out his server-grade board, which supports ECC RAM and has IPMI, for a consumer board which does neither, for the sake of acquiring an additional PCIe slot, is foolish in general, but particularly so when he can use a USB datastore, pass the onboard SATA controller through to a ZFS VM, and serve high-performance block storage to the ESXi host through the PCIe bus.

Come on, guys! This is "ServeTheHome," not Tom's.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
:)
He does not have ECC RAM ;)
He was not supposed to swap in a consumer board

And agreed, passing the SATA controller to a storage VM might be a good option - but that alone will not get him to the performance he wants.

In the end it will all be a dirty workaround, because the hardware does not fit the requirements.
The question is - what is the best solution he can get with what he has :)
 

Ch33rios

Member
Nov 29, 2016
102
6
18
43
Frankly, the suggestions in this thread are terrible:

  • Purchase of a HW RAID-10 card - !!EXPENSIVE!!
  • Swap motherboard for one without ECC support - !!DATA AT RISK and LOTS OF WORK!!
  • Use Raw Device Mappings - !!POOR PERFORMANCE!!

One thing that has been stated correctly is that you cannot use the integrated SATA controller as a passthrough device because you are using it for local datastores. The best course of action for the OP is to acquire a cheap rebranded LSI 9211 HBA, such as an IBM M1015, attach the drives to it, and pass the device through to a VM which will host the storage under ZFS. Configure a zvol, use an SSD as a ZIL/SLOG write cache, and present datastores to the ESXi host on an internal vSwitch via NFS/iSCSI. You will still need to use your integrated SATA controller to host your ZFS VM.

Alternatively, you can hack up ESXi to use a USB stick as a datastore for your ZFS VM and pass the C232 SATA controller through to the VM. This may be an option for you since it seems all of your PCIe slots are occupied.

I don't know how many VMs you are running, but either of the above will likely require a memory upgrade, as you should allocate a healthy amount of RAM (I have 16GB) to your ZFS VM. ZFS uses RAM for read caching, so read performance on your ZFS datastore gets much better as you allocate more RAM to the ZFS VM. Also, when you pass a PCI device through to a VM, all of that VM's RAM is reserved, reducing the available RAM in the shared memory pool for the other VMs.

Y'all are great!

Very much appreciate the information here and I've actually already (sort of) done a proof of concept regarding this exact setup. Granted, I have not acquired an HBA but here's what I did do:

1) Leveraging raw device mapping, I mapped my 2x2TB disks to a RockStor VM (I started down the FreeNAS path but got a bit frustrated with BSD... I'm not completely giving up on it, but I've never had good luck with FreeBSD back when I was using pfSense)
2) Set up RockStor and created a RAID0 pool for the disks
3) Created an NFS export to expose to ESXi as a datastore
4) Successfully setup the NFS datastore in ESXi and could see the full size of the NFS store as configured within the RockStor VM
5) Added a new vdisk to my Win10 VM
6) Booted up the Win10 VM, and it saw the new disk and formatted it successfully.
7) Did some basic tests: copied a 3GB ISO file, copied an 8GB folder with many files of various sizes, and then ran CrystalDiskMark. Results were ridiculously improved!!

I don't have the screenshots available to me now, but I do remember the copy of the 8GB folder full of stuff to the newly created disk within Win10 yielded an average of about 270MB/s 'copy speed' as reported by Windows Explorer. CrystalDiskMark was also vastly improved! So in short, my little POC worked perfectly! I can only assume the same would occur for FreeNAS/ZFS (RockStor does use btrfs, but I don't think that makes a huge difference in raw speed... memory consumption, yes, though, as I only gave my RockStor VM 8GB).

In short, I will absolutely go down this path once I get a proper HBA card as recommended. I'm interested, though, in understanding whether there are any issues with the 'order of operations' for auto-starting systems from a cold boot. In other words, assuming my RockStor/FreeNAS -- whichever I choose -- VM starts first, then any VMDK stored there will be accessible for subsequent VM starts, right?
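
From what I've read so far, that ordering is handled by the host's autostart settings; from the ESXi shell it looks roughly like this (VM IDs are examples and the argument order is from memory, so I'd double-check the command's usage output first):

Code:
# list VM IDs
vim-cmd vmsvc/getallvms
# enable autostart on the host
vim-cmd hostsvc/autostartmanager/enable_autostart true
# storage VM first, with a delay so its NFS export is up before anything else starts
vim-cmd hostsvc/autostartmanager/update_autostartentry 1 powerOn 120 1 guestShutdown 60 systemDefault
# Win10 VM second
vim-cmd hostsvc/autostartmanager/update_autostartentry 2 powerOn 120 2 guestShutdown 60 systemDefault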

Thanks for all the information thus far!
 
  • Like
Reactions: CreoleLakerFan

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
FreeNAS with NFS will be significantly slower (google 'Freenas slow nfs esx'). Try iSCSI if you don't care about VM integrity.

Make sure to have external backups; an all-in-one is a fragile construct. Works great until it breaks down ;)
 

Ch33rios

Member
Nov 29, 2016
102
6
18
43
FreeNAS with NFS will be significantly slower (google 'Freenas slow nfs esx'). Try iSCSI if you don't care about VM integrity.

Make sure to have external backups; an all-in-one is a fragile construct. Works great until it breaks down ;)
Hah, yeah, for sure gotta have my backups set up. Single point of failure :)

I'm liking RockStor, so I might stick with it for a while just to complete a longer-term proof of concept. I know ZFS has been around for a while and btrfs is newer, but it seems like a cool up-and-comer... something different, at least.
 

CosmoJoe

New Member
Jan 24, 2017
6
0
1
50
I have to jump in and give props to FreeNAS as a storage option. I was originally running a single host in my home lab - a Supermicro X9DR3-LN4F+ (can't say enough good things about this board) with an SSD as a local datastore. Performance was great, but I wanted to run two hosts for clustering and thus had to tackle the whole shared storage quandary. I have a Synology NAS, but it has spinning drives and the performance, frankly, was abysmal. I also didn't want to bog it down since it serves as the media center in our home.

I ended up turning to FreeNAS. There are lots of options in regard to hardware, but I can suggest an extremely affordable and efficient one - the ASRock C2550D4I. You get ECC support, IPMI, and 12 SATA ports, giving you lots of expandability. The installation is very easy, there is plenty of documentation, and if you have some SSDs lying around, the performance over iSCSI is solid. If you are concerned about backups, register a free NFR license for Veeam and you can back up to an external drive for added peace of mind. I would also recommend a separate switch/network for your iSCSI traffic if you can swing it.
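
On the ESXi side, pointing the software iSCSI initiator at the FreeNAS box only takes a few commands (adapter name and IP below are examples; the same thing can be done in the host client UI):

Code:
# enable the software iSCSI adapter
esxcli iscsi software set --enabled=true
# find its vmhba name
esxcli iscsi adapter list
# add the FreeNAS portal as a dynamic discovery target, then rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.10:3260
esxcli storage core adapter rescan --adapter=vmhba64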
 

Peanuthead

Active Member
Jun 12, 2015
839
177
43
44
If you can get a spare slot, I'll give you an Adaptec ASR-3405 card for the cost of shipping. Not the best, but it may be better than nothing.
 
  • Like
Reactions: Tom5051

CreoleLakerFan

Active Member
Oct 29, 2013
485
180
43
FreeNAS with NFS will be significantly slower (google 'Freenas slow nfs esx').
NFS is slower with spinning disks because ESXi issues synchronous writes by default. Using an SSD as a write cache (ZIL/SLOG) speeds things up tremendously.

Try iSCSI if you don't care about VM integrity.
iSCSI, on the other hand, disables synchronous writes by default. They can be enabled, improving data integrity but decreasing performance. The answer to that, as in the case of NFS, is to use an SSD as a write cache.
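
In ZFS terms, the knobs involved look roughly like this (pool and dataset names are examples):

Code:
# see how synchronous writes are currently handled
zfs get sync tank/vmfs-lun
# force sync semantics on the iSCSI zvol so you keep integrity...
zfs set sync=always tank/vmfs-lun
# ...and add an SSD SLOG so those sync writes land on flash instead of the spinners
zpool add tank log /dev/da3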