ASRock Z68 Extreme 7 Gen 3 - problem with PCIe passthrough


omniscence

New Member
Nov 30, 2012
27
0
0
The 4x1 + 1x4 lane configuration from the PCH is something every other mainboard has, so it is definitely a valid one. Further, a single PCIe lane provides 500 MB/s, not 1 GB/s.
 

Tim

Member
Nov 7, 2012
105
6
18
Way over my head. What is that 20 about???
Hi Jeggs101

vsish, aka the vmkernel sys info shell.
It has the ability to extract system reliability information.
Many of these options are hidden from you if you're only looking at the server from the vSphere client.
And you can manipulate these options with esxcfg-advcfg as usual.

So, using vsish to extract the /hardware/interrupts/vectorList/ information, I know that the motherboard can support/allocate 20 interrupt vectors.
And some of those might be shared too (you'll want unique ones only if you're doing real-time operations).
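
In practice that count comes from a one-liner like this (the same command I use further down in the thread):

~ # vsish -e ls /hardware/interrupts/vectorList/ | wc -l
20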

These interrupts are used by the hardware to tell the software that it needs immediate access to resources (CPU time or other things), as opposed to the software polling the hardware at given intervals to see if it needs attention of some sort.

The number of interrupts needed depends on the hardware you have and what interrupts it uses.
Also, the hardware, via its driver, can choose how many interrupts it allocates - thus freeing up interrupts for other hardware to use if its need is less than the default allocation.
Windows seems to allocate the maximum number of interrupts by default; Linux and others are more conservative.
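
As a sketch of what that allocation looks like from a guest: on a Linux guest, lspci shows both how many vectors a device advertises and whether the driver enabled them (the bus address 03:00.0 is just a placeholder - use the address from a plain lspci listing):

# "Count" = vectors the device advertises, "Enable+" = the driver turned them on
lspci -vv -s 03:00.0 | grep -i msi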

So, the motherboard that I've got today can handle up to 20 interrupt vectors.
That's very few, and in practice it prevents the use of multiple PCIe cards that request many interrupts.

With PCIe 3.0 and MSI-X, each card can allocate up to 2048 interrupts!

Hope this was useful information.
 

Tim

Member
Nov 7, 2012
105
6
18
The 4x1 + 1x4 lane configuration from the PCH is something every other mainboard has, so it is definitely a valid one. Further, a single PCIe lane provides 500 MB/s, not 1 GB/s.
Hi omniscence

Yes, you're right; after some googling I've seen other motherboards with that configuration of PCIe slots too.
And regarding PCIe lane speeds, you're right as well. PCIe 2.0 runs at 5 GT/s, which gives 500 MB/s of throughput per lane according to the wiki.
PCIe 3.0, with its 8 GT/s and lower encoding overhead compared to PCIe 2.0, results in (nearly) 1 GB/s of throughput per lane according to the wiki.
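
For reference, the arithmetic behind those per-lane numbers (per direction, before protocol overhead):

PCIe 2.0: 5 GT/s x 8b/10b encoding = 5 x 8/10 = 4 Gbit/s = 500 MB/s per lane
PCIe 3.0: 8 GT/s x 128b/130b encoding = 8 x 128/130 = 7.88 Gbit/s = ~985 MB/s per lane

These links are also full duplex, so a diagram that sums both directions would show "1 GB/s" per PCIe 2.0 lane.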

But according to Intel, the C216 provides 8 PCIe 2.0 x1 lanes at 1 GB/s.
It might be a typo (though it appears in multiple block diagrams from Intel, so I doubt it), or they mean 1 GB/s in total for the 8 lanes,
which would give each PCIe 2.0 lane from the C216 PCH only 128 MB/s. I doubt that too, but who knows?

Also, the E3-1200v2 family provides 4 lanes of PCIe 2.0 at 4GB/s.

Source for one of the block diagrams claiming 1 GB/s on the C216 PCH PCIe 2.0 and 4 GB/s from the CPU:
http://www.intel.com/content/www/us/en/chipsets/server-chipsets/server-chipset-c202.html

The same block diagram appears in the E3-1200v2 product brief from Intel (page 2):
http://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/xeon-e3-1200v2-brief.pdf

So what is it? Does Intel have a trick to provide 1 GB/s per PCIe 2.0 lane from their C216 PCH and E3-1200v2 CPU family, does every block diagram have this typo, or is the GB/s shared across the available lanes?
I don't know.

Anyway, in the PCIe 2.0 slot I'm using a four-lane dual-port Intel Gb NIC, so there's no bottleneck at all for the NIC.
And the other PCIe 2.0 card is never going to saturate even a single PCIe 2.0 lane, whether that's 500 MB/s or 128 MB/s.
 

Tim

Member
Nov 7, 2012
105
6
18
Update on interrupts.

Sorry for my "wall of text" posts,
but there's a lot of information to process on this topic.

After some googling I found some information, but why is it that I always end up on Google Translate for some Chinese site?
This is, I would think, vital information to have when building systems based on VMware with multiple PCIe cards providing some functionality, where you need hardware passthrough to the VMs.

I was trying to figure out how to check how many interrupts a PCIe card has/needs, and how many a motherboard provides.
Anyway, one site states that the LSISAS9211-8i has 15 interrupts and supports both MSI and MSI-X.
So the claim that MSI-X is exclusive to PCIe 3.0 might not be accurate, since this is a PCIe 2.0 card.

From the wiki we know that the max for plain MSI is 32 interrupts per device.
So if I had Windows in that VM and didn't adjust the driver, it would have used 32 interrupts instead of 15.
As I'm using FreeBSD with it, the VM only allocates the 15 it needs.
But as we know, my motherboard can only handle 20, so this one card is using 15 of them.
It's not hard to see that adding more PCIe cards will prevent me from using passthrough without complications.
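
For anyone who wants to verify the 15 from the FreeBSD side, something like this should show it (a sketch; mps0 assumes the LSISAS9211-8i attaches via the mps driver in your FreeBSD version):

# inside the FreeBSD VM that owns the passed-through LSI card
pciconf -lc mps0   # capability list, including "MSI-X supports N messages"
vmstat -i          # the interrupt vectors actually allocated, with counters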

Here's some info on how to acquire this information from VMware ESXi 5.1.

First, we'll want to know how many vectors the motherboard provides.
SSH to the ESXi host and run vmkvsitools to get some system info:
~ # vmkvsitools irqinfo
This will reveal the IRQ vectors; also, if you run "esxtop" and press "i", it will show you live info on the same vectors.
And, as shown in the previous post, "vsish -e ls /hardware/interrupts/vectorList/ | wc -l" also shows this information.
BTW: all three ways show that my motherboard has 20 IRQ vectors, many of them shared (the LSI card in my case shares a vector with ehci_hcd:usb1 when in PCIe slot 5).

Just to confirm that MSI/MSI-X is enabled on the system:
~ # vsish
/> cd hardware
/hardware/> cat msi
MSI state: 1 -> MSI/MSI-X is enabled.
(remember to type "exit" to quit vsish)
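
The same check should also work non-interactively with the -e flag used earlier:

~ # vsish -e cat /hardware/msi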

BTW: MSI/MSI-X can be disabled on a per-VM basis to force IOAPIC mode instead of MSI/MSI-X.
As far as I know, this is vital to get the DVB-S2 card working without hiccups.
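
For reference, this is done with a VM configuration parameter; a sketch of the relevant .vmx line (pciPassthru0 assumes the card is the first passthrough device in that VM - adjust the index to match):

# in the VM's .vmx file, or via Configuration Parameters in the vSphere client
pciPassthru0.msiEnabled = "FALSE"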

Then I found that my LSI card is not using MSI/MSI-X. Take a look at the vmkernel.log:
cat /var/log/vmkernel.log | grep MSIX | more
It reveals that MSIX is loaded for two units only: the two ports of the onboard Broadcom NIC.
Not a word about MSIX for the LSI card, strange.
And I can't find info on how to enable it either, but I think it should already be on, as MSI/MSI-X is the ESXi 5.1 default setting.
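
One way to dig a bit deeper is to grep the log for the LSI driver itself (a sketch; mpt2sas assumes that's the driver ESXi loaded for the LSISAS9211-8i - check vmkvsitools lspci for the actual name):

~ # grep mpt2sas /var/log/vmkernel.log | grep -i -e msi -e intr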

Anyway, my quest was to figure out how many interrupts my PCIe cards need.
One way to do it seems to be to check in /proc, but that's not available in ESXi 5.1.
That's a shame, since I know a look in /proc would have shown me "Total number of interrupts" and many other things.
Take a look at the output here:
http://www.vmadmin.co.uk/vmware/35-esxserver/216-esxqlogicqueuedepth

I'm supposed to use vmkvsitools instead, but none of its functions gives me the number of interrupts.
vmkvsitools lspci is equal to the regular lspci.
vmkvsitools irqinfo is the same as esxtop with "i" passed to it.
vmkvsitools pci-info gives me the same value as lspci -p: the number 11, which is not the number of interrupts but the "old IRQ" equivalent, if I understand it correctly (and I'm looking for the number 15, if the Chinese site is to be trusted).

No help in any of the many esxcfg commands either,
and nothing I can find with esxcli.
vsish under /hardware gives me nothing either.

The TBS6981 uses IOAPIC instead of MSI/MSI-X, so its count isn't important to know, as I'll have to disable MSI for that one anyway.
And the Intel NIC supports the interrupt levels INTA, MSI and MSI-X. I've not yet found out how many interrupts it has, but I know they can be moderated through the driver.
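
A sketch of where I'd look for those driver knobs in ESXi 5.1 (igb assumes the Intel NIC is handled by the igb driver; parameter names differ per driver and version):

~ # esxcli system module list | grep igb          # confirm which driver is loaded
~ # esxcli system module parameters list -m igb   # list its tunables, e.g. interrupt moderation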

So, does anyone know how to find a PCIe card's number of interrupts in ESXi 5.1?

And does anyone know how many the Supermicro X9SAE motherboard can provide (and how many of them are not shared!)?
 

Tim

Member
Nov 7, 2012
105
6
18
Got a reply from Supermicro.

The X9SAE can handle 910 non-shared interrupts per device function when running Microsoft Windows 7/Vista,
and 2048 non-shared interrupts per device function when running Microsoft Windows 8 (2048 being the max number of interrupts supported by PCIe 3.0/MSI-X).

This means the X9SAE motherboard doesn't have any interrupt limitation in hardware (it fully supports the PCIe 3.0 standard when dealing with interrupts/MSI-X).

So the real question is whether VMware ESXi 5.1 will impose any limits on these numbers,
or whether ESXi 5.1, even when using VMDirectPath (Intel VT-d), will put any limits on these interrupts (like making them shared or something).
I don't know, but I don't think so (I don't need 2048 interrupts per device, and I can't believe ESXi 5.1 would be worse than Windows 7 with its 910 non-shared interrupts).

Still, I would like to know if/where in ESXi 5.1 I can find information about interrupts (how many a PCIe card has/allocates),
and whether there's a limit in ESXi 5.1 on the max number of interrupts like in Windows 7/Vista (and what that number is, if any limit is in place).

Anyway, for now I've put the Supermicro X9SAE motherboard on my shopping list for January 2013.
I'm looking forward to working with this motherboard and continuing my ESXi build, hoping that the DVB-S2 card will play nice with this setup.
 

Tim

Member
Nov 7, 2012
105
6
18
Just a quick update.

The Supermicro X9SAE is sitting on my desk, just waiting for me to get a chassis to put it into.
I've had too much to do the last few months, but when I finally got around to replacing the ASRock (my worst buy to date), I found that the mounting holes for the X9SAE didn't match the holes in the Norco RPC-2212.
Yes, some of the standoffs I can move around to fit the motherboard, but some are missing or in completely the wrong place.

I'm looking into the Supermicro 4U chassis, CSE-846BE16-R920B, to replace it (the cost for this monster in Norway is nearly $2,000, so I'll have to put it on my list for after this summer, I think).

I hope the PSUs in that chassis aren't too loud. I might replace them with a regular PSU, but then I think I'd lose the internal HDD mounting slots, and I plan to use them.
I'll also replace the fans if the noise is too high.