ESXi 5.5 - Maximum Number of Passthrough Devices >4?


yu130960

Member
Sep 4, 2013
Canada
ESXi 5.5 - Maximum Number of Passthrough Devices >5 per VM

CONFIRMED: I have passed through the following 5 HBAs (including a 9201-16e):

3 x IBM M1015 (crossflashed to 9211-8i IT mode)
1 x 9200-8e
1 x 9201-16e


_________________

I currently have the following setup:

Supermicro X8DTH-iF dual LGA 1366 motherboard with 7 PCIe x8 slots
2 x L5639
48 GB of registered ECC RAM
3 x IBM M1015
1 x LSI 9200-8e

2 x SC846 cases (24 drive bays each)

I have an ESXi all-in-one with napp-it and I am loving it. I was thinking of taking this setup all the way to 48 drives by adding 2 more 9200-8e cards and using the second SC846 case as a direct-attached expansion chassis (no expanders). However, I ran across the configuration maximums, which say that I can only pass through 4 devices to a VM. That would mean I am maxed out now.

Has anyone run into this, and is it a hard cap or just a soft "not supported" cap? Further, I have heard conflicting reports that the cap is actually 6 per VM.
 

gea

Well-Known Member
Dec 31, 2010
DE
yu130960 said:

I currently have the following setup:

Supermicro X8DTH-iF dual LGA 1366 motherboard with 7 PCIe x8 slots
2 x L5639
48 GB of registered ECC RAM
3 x IBM M1015
1 x LSI 9200-8e

2 x SC846 cases (24 drive bays each)

I have an ESXi all-in-one with napp-it and I am loving it. I was thinking of taking this setup all the way to 48 drives by adding 2 more 9200-8e cards and using the second SC846 case as a direct-attached expansion chassis (no expanders). However, I ran across the configuration maximums, which say that I can only pass through 4 devices to a VM. That would mean I am maxed out now.

Has anyone run into this, and is it a hard cap or just a soft "not supported" cap? Further, I have heard conflicting reports that the cap is actually 6 per VM.

"max 4 devices
A virtual machine can support 6 devices, if 2 of them are Teradici devices"

http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf

You can work around this by:
- using two storage VMs
- using 16-channel HBAs
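
For anyone counting toward that maximum: each passed-through PCI device gets its own pciPassthruN entry in the VM's .vmx file. Below is a rough sketch of what two passed-through HBAs might look like; the PCI addresses and IDs are made-up placeholders, and the vSphere Client normally writes a few more keys (systemId, pciSlotNumber) than shown here.

    pciPassthru0.present = "TRUE"
    pciPassthru0.id = "00:0b:00.0"
    pciPassthru0.vendorId = "0x1000"
    pciPassthru0.deviceId = "0x72"
    pciPassthru1.present = "TRUE"
    pciPassthru1.id = "00:0c:00.0"
    pciPassthru1.vendorId = "0x1000"
    pciPassthru1.deviceId = "0x72"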
 

yu130960

Member
Sep 4, 2013
Canada
On a default napp-it install, if I switch out an 8-channel HBA for a 16-channel HBA and reattach all the drives, will napp-it hiccup at all? This is presuming I am switching out an M1015 for a dual-chip variety of the same SAS2008 flavour (9202-16e or 9201-16i).
 

PigLover

Moderator
Jan 26, 2011
It's not clear that using the dual-chip SAS2008 cards will solve your problem. They present themselves as two independent devices to the OS, so swapping two M1015s for a 9202-16e might still count as two devices to pass through...

You should confirm before you spend the money.
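One way to confirm before spending the money is to list the PCI devices on the ESXi host and count how many entries a given card contributes. A rough sketch, assuming shell access to the ESXi 5.x host (exact output formats vary between builds):

    # List PCI devices known to the host and filter for LSI controllers;
    # a dual-chip card (e.g. 9202-16e) would be expected to show up as two
    # entries, a single-chip card (e.g. 9201-16e) as one.
    esxcli hardware pci list | grep -i lsi
    # The host's lspci gives a terser one-line-per-device view:
    lspci | grep -i lsi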
 

yu130960

Member
Sep 4, 2013
Canada
PigLover said:

It's not clear that using the dual-chip SAS2008 cards will solve your problem. They present themselves as two independent devices to the OS, so swapping two M1015s for a 9202-16e might still count as two devices to pass through...

You should confirm before you spend the money.
I just logged in to post the exact same point and to ask if anyone has experienced this. I imagine the limit is counted per device, not per card slot.

I also wonder what error message, if any, people get when they pass through a fifth device. I don't have a fifth HBA to try it with.

I also see that from ESXi 4.0 to 4.1 the number of devices that can be passed through increased from 2 to 4. I was just wondering whether it was a software or a hardware change that allowed for the increase.
 

mrkrad

Well-Known Member
Oct 13, 2012
Yeah, the problem is that older cards are just PLX bridges (e.g., 6-port cards built from 3 Intel NIC chips behind a PLX bridge). What is worse is that the bridge then sits on one IRQ, so any time one of those 6 NICs fires, each one has to be checked.

You would be better off using a modern 16-24 port adapter than a bridged adapter. When pushing 25 million packets per second, it is indeed more efficient to give each device a dedicated IRQ, even in this modern day!

Why don't you just skip ESXi? Does it make sense to run a hypervisor on an I/O device? VSAs are not as efficient as physical, that is known. You are stacking crap on top of crap.

Running more than one VSA per host without massive performance loss is extremely difficult, due to poor I/O sharing routines on ESXi's part.
 

gea

Well-Known Member
Dec 31, 2010
DE
mrkrad said:

Why don't you just skip ESXi? Does it make sense to run a hypervisor on an I/O device? VSAs are not as efficient as physical, that is known. You are stacking crap on top of crap.

Running more than one VSA per host without massive performance loss is extremely difficult, due to poor I/O sharing routines on ESXi's part.
I use storage boxes with 6 x IBM M1015 (50-bay Chenbro), but not in an all-in-one config; they are barebone backup systems. Still, I would not expect performance problems with two storage VMs, as this is not VMware VSA but a virtualized Solarish ZFS SAN with hardware pass-through to the storage controllers and disks. ESXi is not involved in storage I/O (only in transfers via the virtual ESXi switch).

But this discussion is relevant, as I am thinking about moving to an all-in-one on the backup system as well, so I can start up backup VMs there in case of problems on the main system and avoid slow physical network access.
 

yu130960

Member
Sep 4, 2013
Canada
Got a deal on a 9201-16e, so I am going to try it out.

Still waiting on some SFF-8088 to SFF-8087 mini-SAS external PCI brackets to complete my SC846 JBOD expansion case.
 

mrkrad

Well-Known Member
Oct 13, 2012
gea said:

I use storage boxes with 6 x IBM M1015 (50-bay Chenbro), but not in an all-in-one config; they are barebone backup systems. Still, I would not expect performance problems with two storage VMs, as this is not VMware VSA but a virtualized Solarish ZFS SAN with hardware pass-through to the storage controllers and disks. ESXi is not involved in storage I/O (only in transfers via the virtual ESXi switch).

But this discussion is relevant, as I am thinking about moving to an all-in-one on the backup system as well, so I can start up backup VMs there in case of problems on the main system and avoid slow physical network access.
It is true! VMware doesn't try as hard to go fast by default, for either networking or disk queues.

LSI Parallel = queue depth 1 (fixed)

LSI SAS = queue depth 16 (fixed)

PVSCSI = queue depth 32/64 (adjustable, e.g. up to a queue depth of 255 with ring size = 32; see the sketch after this post)

Remember, ESXi's goal is to make many VMs go fast. It is not to make one VM go fast, nor to make one VM go fast whilst the other VMs run at regular speed.

In Hyper-V, for example, the NIC drivers use all CPU cores by default (RSS / multiple RX/TX queues); ESXi does not do this with vmxnet3, it must be set manually!

Hyper-V has a default queue depth of 255; ESXi defaults to 16 or 64!

Does that make a big difference for 10Gb networking and SSDs? Very much so!
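
For what it's worth, the adjustable PVSCSI queue depth mentioned above is a guest-side setting. Here is a minimal sketch for a Linux guest using the PVSCSI virtual adapter, based on the vmw_pvscsi module parameters; it does not apply to the OmniOS storage VM in this thread, which uses passed-through HBAs rather than virtual disks.

    # Raise the PVSCSI per-device queue depth and ring size inside a Linux guest
    echo "options vmw_pvscsi cmd_per_lun=254 ring_pages=32" > /etc/modprobe.d/pvscsi.conf
    # Rebuild the initramfs and reboot (or reload the module), then verify:
    cat /sys/module/vmw_pvscsi/parameters/cmd_per_lun
    cat /sys/module/vmw_pvscsi/parameters/ring_pages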
 

yu130960

Member
Sep 4, 2013
Canada
Update: I got my LSI 9201-16e from eBay and it is a newer one, manufactured August 2013. The seller said it was a Dell OEM card. I flashed it to the latest P17 firmware, as all my other LSI cards are on the same firmware. For some reason I was expecting it to present as two cards, but it shows up as only one. In any event, I plugged it in and it seems to be working.

What's more interesting is that ESXi let me pass it through to the VM, so currently I have the following 5 HBAs passed through to the OmniOS VM (I guess the 4-devices-per-VM maximum is not a hard cap):

3 x IBM M1015 crossflashed to 9211-8i (no BIOS)
1 x 9200-8e
1 x 9201-16e

It's been running for a couple of days and it seems solid so far.
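
In case anyone wants to sanity-check a similar build, the passed-through controllers can also be verified from inside the guest. A rough sketch for OmniOS, assuming the stock illumos mpt_sas driver binds to the SAS2008/SAS2116 chips:

    # Inside the OmniOS VM: confirm the HBAs were claimed by the mpt_sas driver
    prtconf -D | grep -i mpt_sas
    # List the attached disks without entering the interactive format menu
    echo | format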
 

yu130960

Member
Sep 4, 2013
Canada
It's been a week now with no problems. It seems crazy that I can have an OmniOS/napp-it VM with 48 direct-attached drives!