Intel S2600CP dual GPU passthrough issue


RyC

Active Member
Oct 17, 2013
359
88
28
I am pulling my hair out trying to boot ESXi 6.0U2 with 2 GPUs installed. With the default BIOS settings (and the latest BIOS version), one GPU will prevent any VM it is attached to from starting at all, and the error logs (and then Google) point to disabling a BIOS option called Memory Mapped I/O Above 4G. However, when I do that, I get a "PCI out of resource" BIOS error at boot and ESXi won't even boot. So then I turn Memory Mapped I/O back on and have tried every size option, but I always get the same error when trying to start a VM with the 2nd GPU.

So I seem to be in a pickle. It doesn't matter which slot I use: whichever GPU ends up in anything other than the CPU1 primary slot stops working (when ESXi is able to boot). I've tried the CPU1 slots and the CPU2 slot. There are no other PCIe cards installed. It seems like turning off option ROMs might help with booting with MMIO above 4G disabled, but I can't find any option to disable the PCIe slot option ROMs, only the onboard Ethernet's. I'm really pulling my hair out here, thanks for any advice anyone may have.
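For anyone chasing the same thing, this is the sort of place to look over SSH on the host (standard ESXi 6.x log locations; the grep patterns are just a starting point, adjust for your own setup):

# list PCI devices so you can match the GPU's address against the passthrough config
esxcli hardware pci list

# pull the passthrough/MMIO-related complaints logged when the VM refuses to power on
grep -i pcipassthru /var/log/vmkernel.log | tail -n 20
grep -i mmio /var/log/vmkernel.log | tail -n 20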
 

epicurean

Active Member
Sep 29, 2014
785
80
28
I am also having similar problems with my Intel 2600IP4.
Thankfully my ESXi 6 can still boot. I intend to add 4 GPU cards for passthrough to 4 Windows VMs.
I made the same BIOS memory map changes, and only my ATI V5700 passes through OK. None of my AMD 6450s are able to, although one of them could initially (BEFORE I added the V5700), so I'm not sure what is going on.

My 4-port USB 3.0 card passes through with no issues, as do the HBA cards (Dell H310 and the Intel SATA module).

I am quite certain there are settings in the BIOS we need to tweak, just not sure which.
 

RyC

Active Member
Oct 17, 2013
359
88
28
I've tried tweaking every BIOS option I can think of. If you have Memory Mapped I/O above 4G set to Disabled (and more than 1 GPU installed), does it still boot, or does it throw you into the BIOS with an error?
 

epicurean

Active Member
Sep 29, 2014
785
80
28
I had to enable Memory Mapped I/O above 4G or it won't boot with my HBA card. I have it set to 8G at this point, not sure what else to set it to.
 

Patriot

Moderator
Apr 18, 2011
1,450
789
113
I had to enable Memory Mapped I/O above 4G or it won't boot with my HBA card. I have it set to 8G at this point, not sure what else to set it to.
ESXi needs it mapped below the 4G mark... so you need to figure out how to get your HBA to comply.
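One thing I've seen suggested elsewhere (untested on this board, so treat it as a sketch only) is pinning the guest's PCI hole in the VM's .vmx so the passed-through device's BARs get placed below the 4G mark; the values are in MB:

pciHole.start = "2048"
pciHole.end = "4096"

No idea whether that helps the H310 specifically, but it's cheap to try.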
 

epicurean

Active Member
Sep 29, 2014
785
80
28
Hi Patriot,
Are you saying that if it's not mapped below the 4G mark, it will not pass through any GPU?

How do I get the H310 to comply?
 

RyC

Active Member
Oct 17, 2013
359
88
28
This is pretty much the same situation I encountered. The host and ESXi will not boot at all with 2 GPUs installed and Memory Mapped I/O above 4G Disabled. With Memory Mapped I/O above 4G Enabled at ANY size, ESXi and the VM with the first GPU attached will boot, but whichever VM has the 2nd GPU attached will not start, with an error in the log pointing to how ESXi doesn't support Memory Mapped I/O above 4G for passthrough.

This seems to be a limitation of Intel boards (plus ESXi), since Supermicro BIOSes have an option to expand the below-4G memory area (MMCFG BASE), and Intel support confirmed that option is missing here.
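For what it's worth, later ESXi releases are supposed to add per-VM .vmx options that let a passthrough device map its BARs above 4G instead (I haven't been able to confirm this on 6.0U2, so treat it as a sketch):

pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"

If those options are honoured, they would sidestep the missing MMCFG BASE knob on the Intel board, but on this build the below-4G limitation still seems to apply.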
 

epicurean

Active Member
Sep 29, 2014
785
80
28
I am not making any progress.
Just like RyC, with Memory Mapped I/O above 4G set and more than 1 GPU installed, it will not boot.
 

evolucian911

Member
Jun 24, 2017
36
6
8
35
I replied to another thread with a fix for this. You cannot resolve this issue in the BIOS. You NEED to buy a powered 16x-to-16x PCIe extension/riser; the one I use has a female Molex connector for power. Plug that into the blue slot at the top, closest to the processors, and the device will work perfectly with ESXi, or even in Windows SLI/CFX.
I am pulling my hair out trying to boot ESXi 6.0U2 with 2 GPUs installed. With the default BIOS settings (and the latest BIOS version), one GPU will prevent any VM it is attached to from starting at all, and the error logs (and then Google) point to disabling a BIOS option called Memory Mapped I/O Above 4G. However, when I do that, I get a "PCI out of resource" BIOS error at boot and ESXi won't even boot. So then I turn Memory Mapped I/O back on and have tried every size option, but I always get the same error when trying to start a VM with the 2nd GPU.

So I seem to be in a pickle. It doesn't matter which slot I use: whichever GPU ends up in anything other than the CPU1 primary slot stops working (when ESXi is able to boot). I've tried the CPU1 slots and the CPU2 slot. There are no other PCIe cards installed. It seems like turning off option ROMs might help with booting with MMIO above 4G disabled, but I can't find any option to disable the PCIe slot option ROMs, only the onboard Ethernet's. I'm really pulling my hair out here, thanks for any advice anyone may have.
Sent from my LG-LS997 using Tapatalk
 

WingsGB

New Member
May 10, 2018
5
0
1
34
I am also having a similar issue, but I am running an S2600CP on Unraid.

At first I couldn't get it to work at all, so I updated my firmware and did some googling. I enabled MMIO above 4G and set the size to 4G.

Now with 1 GTX 1060 in slot 5 I can boot. With both GPUs in the system it just hangs on the first Intel copyright screen. If I remove both cards and just have 1 GPU in slot 3, it hangs and won't even get to the BIOS.

Any suggestions? This is driving me crazy.

Thanks
 

evolucian911

Member
Jun 24, 2017
36
6
8
35
I am also having a similar issue, but I am running an S2600CP on Unraid.

At first I couldn't get it to work at all, so I updated my firmware and did some googling. I enabled MMIO above 4G and set the size to 4G.

Now with 1 GTX 1060 in slot 5 I can boot. With both GPUs in the system it just hangs on the first Intel copyright screen. If I remove both cards and just have 1 GPU in slot 3, it hangs and won't even get to the BIOS.

Any suggestions? This is driving me crazy.

Thanks
Guys, it just won't work the way it is. It just won't work without a powered cable.

Sent from my LG-LS997 using Tapatalk
 

WingsGB

New Member
May 10, 2018
5
0
1
34
Guys, it just won't work the way it is. It just won't work without a powered cable.

Sent from my LG-LS997 using Tapatalk
Thanks for the reply. I am having a different issue though: you state that slot 3 works, but I can't get my system to even boot to the BIOS screen if a GPU is in slot 3.
 

evolucian911

Member
Jun 24, 2017
36
6
8
35
Above 4G setting turned on or off?
Thanks for the reply. I am having a different issue though: you state that slot 3 works, but I can't get my system to even boot to the BIOS screen if a GPU is in slot 3.
Sent from my LG-LS997 using Tapatalk
 

WingsGB

New Member
May 10, 2018
5
0
1
34
Above 4G setting turned on or off?

Sent from my LG-LS997 using Tapatalk

Maximize Memory below 4GB: DISABLED
Memory Mapped I/O above 4GB: ENABLED
Memory Mapped I/O Size: 4G

So if I only install a GTX 1060 into blue slot 3, my system just hangs on the very first Intel screen with the copyright and platform info, before the F2 option.

I tested that slot with a controller card and it works.

If I put the same GPU in slot 5, it boots.

I am willing to melt open the back of one of the black slots, but I would like to see it working in blue slot 3 first.
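In case it helps narrow it down, from a boot that does work (card in slot 5) you can check on Unraid how big the 1060's BARs are and where they were mapped; the bus address below is just an example, substitute your own:

lspci -nn | grep -i vga                   # find the GPU's PCI address
lspci -vv -s 03:00.0 | grep -i region     # BAR sizes and addresses; a large 64-bit BAR is the usual suspect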
 

WingsGB

New Member
May 10, 2018
5
0
1
34
It's not power related.
I know that all the slots are functional.
I know my graphics cards work.
If the GPU is installed in slot 5, it boots.
If the GPU is in slot 3, it fails to boot.

The only difference I can see so far is that CPU2 controls slot 5. Is something happening with CPU1 controlling the GPU? I have my VGA connected to the onboard video.
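If anyone wants to double-check the slot-to-CPU mapping from a working boot, the standard Linux tools should show it (nothing Unraid-specific here):

lspci -tv                                 # tree view: which root port (and therefore which CPU's root complex) each card sits behind
dmesg | grep -iE "pci 0000:.*BAR"         # any BAR assignment problems the kernel logged during boot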