ASRock Z68 Extreme 7 Gen 3 - problem with PCIe passthrough


Tim

Member
Nov 7, 2012
Hi

Would love some help/input on my hardware situation under ESXi 5.1

My old base setup is as follows:
ASRock Z68 Extreme 7 Gen 3, socket 1155
Intel Core i7-2600 @3.4GHz
Crucial Ballistix 32GB (4x8GB) PC3-12800 1600MHz CL8 (8-8-8-24, 1.5V W/XMP)
Norco RPC-2212 chassis

To this I've added the following cards:

Intel E1G42ETBLK (dual port ET Gb nic)
PCIe 2.0 x4 lane

LSI SAS9211-8i HBA
PCIe 2.0 x8 lane

All of this runs on the free VMware vSphere Hypervisor 5.1,
using Intel VT-d to pass through both PCIe cards to their virtual machines.

My first setup was with the cards in the following slots.
Intel NIC in PCIe slot 4
LSI HBA in PCIe slot 5

This was working fine.
But according to the ASRock manual, that's not optimal.
According to the manual, PCIe slot 5 has only 4 lanes (although it's physically an x16 slot).
With the LSI card in it, needing 8 lanes, that would give me only half the bandwidth I need, as far as I understand.

Now I also wanted to add a DVB-S2 PCIe x1 card (TBS6981) and installed it in PCIe slot 3 (an x1 PCIe slot),
keeping the two other cards in slots 4 and 5 for the time being.
That's when my problem started.
1) VMware is not able to see the DVB-S2 card at all (see the quick shell check below).
2) I wasn't able to pass through the Intel Gb NIC in slot 4.
3) The LSI card in slot 5 was OK.
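
For reference, this is the kind of check I run from the ESXi shell to see what the host actually detects (a minimal example, assuming SSH or the local shell is enabled; the grep pattern is just an illustration):

    esxcli hardware pci list | more    # full PCI inventory as ESXi sees it
    lspci | grep -i intel              # quick look for a specific card

The TBS card doesn't show up at all, so it's invisible to ESXi before passthrough even comes into play.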

So I rearranged the cards.
Putting the LSI in slot 6 (figuring it would like to be in an x8 slot).
And the Intel card in slot 5 (figuring it was OK for an x4 card to be in an x4 slot).
And I kept the DVB-S2 card in PCIe slot 3 since it's an x1 card.

Now the problem shifted.
1) Still no DVB-S2 card.
2) The Intel NIC in slot 5 is now OK with passthrough.
3) The LSI card in slot 6 can now not be passed through.

I'll try to install MS Windows 7 tomorrow, just to verify that the DVB-S2 card is in fact working.

But it seems to me that I won't be able to pass through more than two PCIe cards at a time, and adding a third will disable one of the other two completely.

Do you see any options to solve this with my current hardware (the ASRock motherboard)?
Move the cards around to other slots? I can't use slot 2, as that will disable slots 1, 4 and 6.
I think I'm out of options regarding card moves?

Or should I just opt for a new motherboard?

I would like to be able to use these three PCIe cards in passthrough at the same time (to different virtual machines).
And it would be nice to be able to keep the CPU, RAM and Norco chassis.
I'm not able to pass through USB ports on this ASRock board, so a new board should be able to pass through USB ports as well.
Also, it would be nice if I could get readings of the CPU temperatures (not able to do that on the ASRock in ESXi 5.1).
And SMART support is very limited; better support for that would be nice as well.

Is this too much to ask for from a motherboard?

I'm looking at three possible boards at the moment.

ASUS Z9PE-D16
This will force me to change CPU (not the best option) but might be a better choice for running ESXi and passthrough of PCIe and USB stuff?

ASUS Maximus V Formula
ASUS Maximus V Extreme
Both are socket 1155, so no new CPU needed, and both have the Z77 Express chipset.

I do NOT know if these boards give me what I want; they are just the ones I'm able to pick up quickly that support VT-d.
So hoping for any input on them from you.

Or any other motherboard you know of that gives me what I need and will solve my problems.
I'm done playing around and this time need something that just works.

Do I need to go for Xeon (dual Xeon) to get enough PCIe lanes without any conflict?

My ESXi host runs my firewall (pfSense), FreeBSD with ZFS for media storage, Windows server and desktop VMs for development and testing, some Slackware test VMs, and a Slackware VM for the DVB-S2 card (Tvheadend software).
The last one would also need USB passthrough for legal decoding of DVB-S2 signals.

Thank you for reading.
And I'm looking forward to any feedback with a solution (or questions if I've not provided enough info).
 

sotech

Member
Jul 13, 2011
Australia
ESXi won't play nice with the NF200 PCI-E lane splitter - that'll be causing your issues, most likely.

The Z9PE-D16 is an excellent board, and with only one CPU in place you get three x16 or x8 PCI-E 3.0 slots (you need the second CPU for the other three) - if you don't need a ton of clock speed or cores, the entry-level S2011 Xeons are quite reasonably priced for the insane number of RAM slots and PCI-E slots you get. One of my favourite S2011 boards - a few thoughts here and here. Passthrough works as you would expect - no PCI-E splitter chips - and the remote console makes fiddling with the BIOS and things like ESXi reinstalls much easier. There's also plenty of room for future expansion re: PCI-E & RAM, or upgraded CPUs. Onboard quad NICs (latest i350 from Intel) + separate management port means you may well not need an extra card for the network ports.
 

Tim

Member
Nov 7, 2012
Hi

Thank you for the comment and links.

It would seem that it's the NF200 chip that's the root of my troubles.
Is this chip present on the two other ASUS boards that I mention? (See the end of this post; I've added some info on this.)
Or are there any other reasons to leave them out?

After reading up on the Z9PE-D16, it looks like it's going to cost too much for just the possibility to virtualize the DVB-S2 card.
The motherboard itself costs $630, an E5-2650 (to reuse my 1600MHz RAM) costs $1400, and I would need a new PSU and CPU cooler as well.
Or a cheaper CPU (E5-2620 at $520) but with new RAM (32GB for ca. $215).
So at least $1365 or $2030; I'm not sure if it's worth it for what I need.

Also, I'll need a GFX card; right now I'm using the one in the Core i7 and it's covering my needs just barely.
The one on the Z9PE-D16 is not enough (I'm running Photoshop on the server).
And if I'm going to add a GFX card, I would prefer to go for a new chassis as well (no room in my 2U).
4U, to make room for the GFX card, a custom PSU and fans to keep it (more) quiet.
Looking at these from Supermicro (if I can customize them to use a standard PS/2 PSU; I've seen that done on the web somewhere):
846BE16-R920B, 846A-R1200B, 846E1-R1200B or 846E16-R1200B.
Adding at least another $1700 to the total cost. (I know they're around $1400 but I have to add VAT and transport to Norway).

This would be a complete new build, just for the possibility to virtualize the DVB-S2 card.
Chassis, motherboard, CPU, RAM, PSU, cooler, fans and GFX card. Looking at a price tag starting at nearly $4000.
That's just not possible with my budget this time.

So my options are either one of the two other socket 1155 ASUS boards, or dropping the DVB-S2 card for now and sticking to what's working.

All I can find on the other two ASUS boards regarding the NF200 chip is this:
It’s a similar situation that was firstly highlighted on older motherboards with the NF200 chip.
In light of this, ROG engineers took the time to devise a custom system of PCI-Express lane switching
where if only one or two graphics cards are installed then the PCI-Express 3.0 lanes are reassigned
to connect directly to the CPU, bypassing the PLX chip altogether.
I guess that'll give me the same trouble in ESXi as I have today, so I'll just stick to what I've got.
Unless someone can think of a better solution.
 

sotech

Member
Jul 13, 2011
Australia
That is a heck of a lot of money to throw at passing through a single card... so! Have you had a look at the P8C WS? No NF200 there, ECC support, S1155, a darn sight cheaper than the Z9PE. PCI-E 3.0, two x8 slots (or one x16), two x4 slots, an x1 slot and a PCI slot. All the PCI-E slots are x16 physical apart from the x1.

Incidentally, you are unlikely to see any performance hit running an M1015+disks in an x4 electrical slot - an SSD array might, perhaps, but it doesn't have a huge impact for spinning disks.
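
Rough numbers to back that up (ballpark assumptions on my part: ~500MB/s per PCIe 2.0 lane after encoding overhead, ~150MB/s sequential per modern spinning disk):

    8 disks x ~150MB/s ≈ 1.2GB/s peak sequential
    4 lanes x ~500MB/s ≈ 2.0GB/s slot bandwidth

So even a full array of spinners won't saturate the x4 link, while eight decent SSDs easily would.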

Where are you located that the 2650 and Z9PE are costing that much?

Re: the graphics card, a low-profile, passively cooled 6450 is more than enough for 2D Photoshop work and should be $30-40 tops. I use one in my own photo processing workstation and have had zero trouble with it.
 

Tim

Member
Nov 7, 2012
Location: Norway.
Just an example: the ASUS P8C costs $40 more here than on Newegg.
And the Intel Xeon E5-2650 is $300 more.

I took the TBS6981 and put it in PCIe slot 4.
Now I'm able to see it, but not use VMDirectPath on it.
I don't know why, but in some slots it's not working at all, and in the others I can't get VMDirectPath to work.
The idea was to drop the Intel NIC, as I've got an onboard NIC I can use as a backup for now.
But still no solution. I'm tired of swapping the cards around, so I'll just quit that game now.
This ASRock motherboard is starting to annoy me. I might swap it into my old desktop to extend its lifetime.

The Asus P8C WS board looks ok for my needs.
It's a home server so I don't use ECC RAM.

Looks like I'm able to reuse all of my other components too; that's nice.
But I might upgrade to an Ivy Bridge Core i7-3770 to ensure that the PCIe lanes are all working.
And it'll give me a nice upgrade to the HD 4000 GFX too (good enough for my needs).
(The i7-3770 costs $380 and the motherboard $280, so that's affordable.)

Just one last question before I hit "buy" on it: are you sure I'll be able to use VMDirectPath on that board with my three PCIe cards at the same time, without any trouble?
Either with my Sandy Bridge Core i7-2600, or with an upgrade to the Ivy Bridge Core i7-3770.
 

sotech

Member
Jul 13, 2011
Australia
I would recommend against the 3770 and would instead recommend the usually-equally-priced E3-1245 V2 Xeon - near-identical performance but you get ECC memory support. 4GB ECC DIMMs aren't particularly expensive these days and are well worth it, imho. More error checking/correcting and system stability is worth the small extra initial outlay.

The 1245 has HD4000 graphics, too.

We had the P8B WS and had no trouble passing through PCI-E or PCI cards - I haven't had a chance to try the C yet, but I can't see a reason why it wouldn't work. Stranger things have happened, mind, but I would be surprised if there was an issue with the board which prevented passthrough. I do know that it used to be the case that some cards don't play nicely with passthrough (e.g. video cards, some TV tuners) and simply don't work. I haven't tried doing that recently so I don't know how that's come along. I can't guarantee anything, but I'd personally buy one expecting it to work.

Edit: As a further point, even if you're not keen on ECC now the Xeon gives you the option of using it down the track - which you won't get with the i7. Also... the PCI-E problem isn't going to be related to the i7-2600, unless you have a faulty CPU. There's no difference in PCI-E lanes between a Sandy i7 and an Ivy i7. Not that upgrading CPU is going to be a bad thing - ~10% more performance with lower power consumption is good - but it certainly wouldn't be something I'd be doing straight off the bat as a troubleshooting step for the passthrough issue.
 

Tim

Member
Nov 7, 2012
The E3 was cheaper than the i7 here, a $30 difference.

With only 4 memory slots on the ASUS P8C WS board, I would need 8GB modules to reach 32GB.
Not sure what that'll cost me here in Norway.
I can't say that I've had any trouble with non-ECC memory so far, so I might just continue to trust the modules I've got for the time being.

I know that the TBS6981 is capable of being used with VMDirectPath under ESXi 5.1 so it all comes down to the support on the motherboard.

Ok, thank you for your very good help.
I'll take a closer look at other forums to look for any reported trouble (if any) with the P8C board and passthrough.
But looks like this will be the solution.

What CPU cooler do you recommend for the E3-1245 V2 in my 2U chassis?
 

sotech

Member
Jul 13, 2011
Australia
Hmm. 8GB modules are still around $80 here, which is about $40 more than a non-ECC 8GB module. Wasn't long ago they were >$200 so it's good to see that they've come down, though I can understand why you'd be keen to stick with the DDR3 you already have. At least with the Xeon you can drop some in later and use your current RAM elsewhere, should the opportunity arise.

I'm not really up to snuff on the 2U coolers - everything we sell is 3U or 4U in the rack range so I've not ever had to explore the 2U options, sorry! Others here will probably be able to chip in there :)
 

Tim

Member
Nov 7, 2012
8GB reg/ECC is around $160 (that's US dollars, like the rest of my $ values in this thread).
And it's this one: SuperMicro 8GB PC3-12800 DDR3-1600 ECC Registered CL11.
There might be other brands or other webshops with better options on this type of RAM, so I'll take a look at that later.

For now I'm just concerned about VMDirectPath for all the PCIe slots on the P8C WS board (at the same time).
Hoping to grab one as soon as possible.

I know that the Xeon E3-1245 V2 should be the same as the Core i7-3770, but some tests suggest that the i7 is faster.
Not decided yet.

I'll keep you updated on the P8C WS board.
 

Aluminum

Active Member
Sep 7, 2012
One thing to keep in mind is that your tuner card may not work with VT-d passthrough at all. Tuner cards have a long (and well-deserved) reputation for being finicky or just cheap pieces of crap.
VT-d isn't part of the basic PCIe spec; has anyone actually reported your model working? Try it in a barebones config, perhaps.

Over here in the States, some people are interested in passing through a popular PCIe tuner card that works with our brain-dead crypto system (CableCARD), and they have had many problems trying to do so, even though at heart it is just a NIC.
Fortunately we have another option, which is an external Ethernet bridge.
 

Tim

Member
Nov 7, 2012
I know of the troubles with CableCARD, but that's not a problem with the TBS6981.
The TBS6981 is confirmed (by many others) working on ESXi in passthrough mode, running fine on a GNU/Linux based virtual machine with the Tvheadend software that I'm going to use.

So the only problem I've got with my ASRock is the NF200 chip, preventing me from using VT-d on more than two cards at a time.

Hoping that the ASUS P8C WS board is up to the task (and will also allow me to do the same with USB ports if I'm lucky).
 

sotech

Member
Jul 13, 2011
Australia
You don't want registered stuff for that board - look at the unbuffered prices (which may not differ, but registered isn't compatible with that setup).

Have you got a source for the 1245 vs. i7 performance difference? We've got some benchmarks for both; I'd be interested to see what anyone else has found. 100MHz on the top end doesn't result in a tangible difference at those clock speeds.
 

Tim

Member
Nov 7, 2012
Sorry, mixed up registered with ECC there, my bad.
The price is the same as far as I can tell.

The E3-1245 V2 scored 9122 while the Core i7-3770 scored 9450.
Both are 3.4GHz.
I'll admit that it's not much of a difference.
I think I'll end up with the Xeon; see my next post.
Source:
E3-1245v2 and Core i7-3770 benchmark
 

Tim

Member
Nov 7, 2012
OK, I've done some reading and I'm not sure the P8C board is the right/best buy.

ASUS seems to have some trouble with the PCIe bandwidth; some point at them not using the C216 to its full potential.
And the Xeon E3 series only has 20 PCIe lanes, giving the board a hard time when filling up all the slots
(especially when using passthrough on all slots).

I can't figure out why, as the number of lanes used is less than what the E3 and C216 can provide.
So there must be some sort of limitation in the ASUS design somewhere.

OK, I know, it's PCIe 3.0 they complain about and my cards are PCIe 2.0, so I should be safe anyway.
And I'm not going to put an x16 PCIe 3.0 GFX card in it any time soon (that would be a much later build, if needed).
But I'm not so sure, since it might put me in the same position as with the ASRock: PCIe slots turned off.

I think it's a great board, but I need to take it to its extreme and pass through at least three PCIe 2.0 cards at the same time.
I'm just not confident it'll do that nicely.
Feel free to prove me wrong on this, but I've not seen any proof of it working the way I want.

The option seems to be to go for a C216-based Supermicro board, namely the X9SAE-V.

Why? Well, it's based on the C216 and as far as I can tell has all the features enabled,
like vPro (Intel AMT) and the right Intel NIC to support it (ASUS failed on both).

It has:
2x PCIe 3.0 x8 (in x16 slots)
2x PCIe 2.0 x4 (1 in x8 slot)
2x PCIe 2.0 x1
(a total of 26 lanes)

We know that the C216 has PCIe 2.0 with 8 lanes.
Configurations of 2x4, 4x2 or 8x1.
So I guess that the 2x PCIe 2.0 x4 slots on the motherboard are handled by the C216.

That leaves the rest for the Xeon E3-1245 V2, which we know has 20 lanes (16x PCIe 3.0 plus 4x PCIe 2.0).
Configurations of 1x16 & 1x4, 2x8 & 1x4 or 1x8 & 3x4.
That should take care of the 2x PCIe 3.0 x8 slots.
And it leaves room for the 2x PCIe 2.0 x1 slots with a good margin.
(Two PCIe 2.0 x1 slots don't use up the bandwidth of one PCIe 3.0 x4 slot; see the numbers below.)
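
A back-of-the-envelope check on that last point, assuming the usual per-lane figures (roughly 500MB/s for PCIe 2.0 after 8b/10b encoding, roughly 985MB/s for PCIe 3.0 after 128b/130b):

    2 x1 slots @ PCIe 2.0: 2 x 500MB/s ≈ 1.0GB/s
    1 x4 slot  @ PCIe 3.0: 4 x 985MB/s ≈ 3.9GB/s

So the two x1 slots fit within a single 3.0 x4 allocation with plenty of headroom.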

The price is a bit higher; it might end up double the price of the P8C since I'll have to buy it overseas.
But it seems more fit for the "server" and ESXi VMDirectPath task I'm going to use it for.

Any thoughts on what I should choose based on this new information?
 

Patrick

Administrator
Staff member
Dec 21, 2010
Just wondering where you saw all of that about the P8C? Have one sitting here but have not been able to open it yet.
 

Tim

Member
Nov 7, 2012
HardForum, VR-Zone, Newegg, the VMware forums, some ASUS forums and random Asian/Russian forums that Google translated - just what I saw after some random googling on the topic.
Again, I'll admit that not everyone who posts on forums knows what they're talking about, so I'm taking the information they put out there with a grain of salt.
And some might be trying to use more lanes than the board has got, so it's not easy to be sure of the real limitations.
That's the reason I registered on this forum; it seems more trustworthy than most other forums out there.
But too many seem to have trouble with passthrough on the ASUS board to ignore it.

If you find time to test it with the E3-1245 V2 or i7-3770 (or something close to that) and can verify that it's able to pass through at least three PCIe cards at the same time, that would be great.
My passthrough needs for this board are the x8 LSI card, the x4 Intel NIC and the x1 TBS6981. All are PCIe 2.0 cards.

I'm debating whether or not the Intel AMT/vPro feature on the Supermicro board is worth the price.
But if the P8C, due to its C216, is able to forward the CPU temp and more of the SMART info to ESXi, I don't need the AMT/vPro features.
So it would also be nice to get a confirmation of how it reports these in ESXi.
 

Tim

Member
Nov 7, 2012
Just a quick update.

I've seen many people getting the TBS6981 working in passthrough mode.
Even one in Germany with an old Supermicro C2SBC-Q motherboard (socket LGA775).

The trick to get it working seems to involve at least two steps.
One, fast enough hardware (a CPU with enough fast PCIe lanes, a motherboard that supports it, and fast enough RAM).
Two, tweak ESXi with pciPassthru0.msiEnabled = FALSE (sets the physical mode to IOAPIC if the virtual mode is IOAPIC).

The first point I'm still doing research on.
Running DPC Latency Checker seems to reveal the "real time" trap most users end up in when trying to use the DVB-S2 card,
i.e. passthrough is working, but with glitches in the video stream due to latency in the passthrough on slow hardware.
Also, it seems you need PCIe 3.0 speed even if the cards are PCIe 2.0, just to overcome the interrupt timeouts or whatever it is they get (Google Translate might be fooling me on that one).
I'm not sure what socket is needed; I'm hoping 1155 is enough, but I fear I'll have to upgrade to socket 2011 (better VT-d, I guess?).
And when it comes to PCHs, I guess the desktop versions won't cut it; at least the C602 or C216 is probably needed.

I'm going to try the pciPassthru0.msiEnabled = FALSE parameter in a day or two to see if it has any impact on the Z68 motherboard I've got now.
If that trick works, it should at least enable me to pass through the TBS6981 card to a VM, even on the Z68.
Then I'll have to run some tests to see if I've got any latency problems.

A note on the pciPassthru0.msiEnabled parameter:
by default it's TRUE, which means the physical mode is MSI or MSI-X.

Also, I've got an "x2APIC" setting in the UEFI on the Z68 motherboard.
I'm not sure if this is supposed to be enabled or not in my case, so I'll have to test it.
I'm also not sure if it has anything to do with the ESXi tweak pciPassthru0.msiEnabled.
The only hint in the BIOS is that the x2APIC parameter is not supported on some OSes, and the default is disabled.
 

Patrick

Administrator
Staff member
Dec 21, 2010
If only you were in the US! Seems like an exciting project Tim.
 

Tim

Member
Nov 7, 2012
Thank you Patrick.

Sorry for the late update, but I'm waiting for a reply from Supermicro (how long does it take to get an answer from them?).
If I don't hear anything from them I might just end up with the X9SAE-V: http://www.supermicro.nl/products/motherboard/Xeon/C216/X9SAE-V.cfm
Or the X9SAE, if there's any reason the V version would cause trouble with passthrough (I can't see why).
I'll have to wait until January to buy it, and find a reseller that ships to Norway (and to a private person, not a company); any recommendations are welcome.

The reason for going for Supermicro and not the ASUS board is the extra features (at a low cost), and that I'm not sure the ASUS board will behave nicely with three PCIe cards in passthrough at the same time.
Also, a more "server" oriented motherboard might have a better design for handling this kind of setup.
This is shown by the HP Compaq 8200 Elite, which has the Intel Q67 Express chipset and is able to use all of its PCIe slots in passthrough at the same time
(the source here is a bit vague, but I trust it as I've heard of other Q67 boards that can handle it).
Meaning that as long as the motherboard is well designed (no PCIe bridge or other cheating), it's not a big problem which chipset it has, as long as VT-d is fully supported.
But I prefer a server chipset over the desktop ones any day when it comes to this setup.

Regarding the Z68 I've got and the x2APIC UEFI setting:
it had no effect on the DVB-S2 card (TBS6981) in PCIe slot 4.
It also seems that it doesn't matter which PCIe slot I put the card in.

Regarding the pciPassthru0.msiEnabled=false parameter:
first I need to edit the /etc/vmware/passthru.map file.
The format is: vendor-id device-id resetMethod fptShareable.
The first two parameters are known, but I have no clue about the last two for the TBS6981.
The possible options are:
reset methods: flr, d3d0, link, bridge, default
fptShareable: true/default, false
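
So an entry for the TBS6981 would presumably look something like this (the vendor/device IDs below are placeholders I've made up; the real ones have to be read out with lspci -n or esxcli hardware pci list first, and d3d0/default is just my guess until I know better):

    # /etc/vmware/passthru.map
    # vendor-id device-id resetMethod fptShareable
    abcd 1234 d3d0 default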

EDIT:
I don't know if the TBS6981 needs a special setting, but I found the reset method descriptions on page 5 of this PDF from VMware:
http://www.vmware.com/files/pdf/techpaper/vsp_4_vmdirectpath_host.pdf
There's also some info on virtual/physical IRQ sharing on that same page. It might be relevant to the info in the post below.
END EDIT

Next, I have to create a virtual machine and edit its .vmx file to set the pciPassthru0.msiEnabled parameter.
But since I can't get the TBS6981 card into passthrough, it never shows up in the .vmx file.
So this is a tweak to get it running smoothly AFTER I've been able to set it up in passthrough mode, thus first needing a better motherboard.
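
For completeness, my understanding is that when a passthrough device is added to a VM through the vSphere Client, the .vmx picks up a pciPassthru0.* block, and the tweak is just one extra line added by hand. Something like this (the id value here is a made-up example; the client fills in the real PCI address):

    pciPassthru0.present = "TRUE"
    pciPassthru0.id = "02:00.0"
    pciPassthru0.msiEnabled = "FALSE"

The last line is the actual tweak; the rest is what the client generates.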

So, I'll wait for a response from Supermicro, and try to find a good reseller that ships to private persons in Norway.
And I'll post the progress back here as soon as anything happens.
 

Tim

Member
Nov 7, 2012
Got the reply from Supermicro.

Basically they listed the capabilities of the CPU and the C216 PCH and compared them to the X9SAE board.
I asked which of the two boards would be OK, and they responded with only the non-V model.
I take it they did that because it will serve my needs, given the PCIe lanes needed in my setup.

Most of this is known information; this is just a recap to sum it up in one place.

On Intel's web site, the E3-1200 v2 models have 16 PCIe 3.0 lanes at 16GB/s and 4 PCIe 2.0 lanes at 4GB/s.
We also know that the C216 PCH has 8 PCIe 2.0 lanes at 1GB/s (only supporting x1 slots as far as I can tell).

We also know that the CPU PCIe configuration can be 1x16 3.0 with 1x4 2.0.
And that the C216 can give us 2x4, 4x2 or 8x1 (at least that's what Supermicro tells me; as we know, Intel states only 8 lanes in a single configuration).

The response from Supermicro told me that the X9SAE board gives me the following PCIe slots:
PCIe 3.0, 1 slot, 16/16 (physical/electrical)
PCIe 2.0, 1 slot, 8/4 (physical/electrical)
PCIe 2.0, 1 slot, 4/4 (physical/electrical)
PCIe 2.0, 1 slot, 1/1 (physical/electrical)
PCIe 2.0, 1 slot, 1/1 (physical/electrical)

So here we see the 1x16 with 1x4 from the CPU (the x4 in an x8 physical slot) - so far so good.

But the 4-1-1 configuration from the C216 is not one of the listed configurations (2x4, 4x2 or 8x1).
The "missing" two PCIe 2.0 lanes are used for the onboard NICs (ref. the block diagram on page 1-9 of the user manual), so 4+1+1 on the slots plus 2 for the NICs accounts for all 8 PCH lanes.

Anyway, given the hardware capabilities and my needs, I don't see any problems with this motherboard.
The number of lanes from the CPU/PCH on the PCIe slots is 26, and I only need to use 13 of them.

I'll put the LSI SAS9211-8i card in the PCIe 3.0 x16 slot,
as it needs the lanes, given this info from the product brief:
"The LSI SAS 9211-8i provides 8 lanes of 6Gb/s SAS and is matched with 8 lanes of PCI Express 2.0 5Gb/s performance to eliminate bottlenecks."

The Intel NIC has 4 lanes, so that goes into the 8-physical/4-electrical slot (since it provides 4GB/s compared to only 1GB/s in the other x4 slot).

And the TBS6981 fits in one of the two x1 slots.

So at least the hardware lines up when comparing PCIe lanes and slots (with some lanes to spare).

Then the response from Supermicro got interesting.
And I quote:
As long as the host OS (SW) can correctly differentiate what is used onboard
vs what is being supported by processor and chipset, and do HW isolation
for the virtual machine, then there is no HW limitation.
Otherwise, the limitation is on SW.
I guess VMware ESXi 5.1 can cope with this (as my client VMs are BSD/Solaris/Linux only, not Windows).

I know that Windows can have some trouble with the MSI-X (interrupt) vectors, allocating too many of them.
But that's probably another bottleneck.

I ran this on the VMware host:
"vsish -e ls /hardware/interrupts/vectorList/ | wc -l"
The response on the ASRock Z68 Extreme 7 Gen 3 was 20. Yes, only 20.

I'm wondering how many are available on the X9SAE, and how to figure out how many are needed for each of my cards.

Does anyone know how to check this (under VMware ESXi 5.1)?
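
(One idea for the per-card side, though I haven't verified how well it maps to what ESXi allocates: boot bare-metal Linux and check each card's MSI/MSI-X capability lines with lspci, which report how many vectors the device asks for:

    lspci -vv | grep -i msi    # look for the Count= values on the MSI/MSI-X lines

That still leaves the question of the total number available on the X9SAE.)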

EDIT:
Fixed the "missing PCIe lanes" part of the post; they are used for the onboard NICs, ref. the block diagram on page 1-9 of the user manual.
 