"IIO 1 PCIe Port Bifurcation Control" greyed out - X9SRL-F motherboard


Agrikk

Member
Sep 6, 2013
38
3
8
Oakland, CA
I have purchased a two-port NVMe PCIe adapter and I'm trying to get my Supermicro X9SRL-F motherboard to recognize it. I'm trying to turn on bifurcation for the slot, but the "IIO 1 PCIe Port Bifurcation Control" option I'm supposed to access is greyed out and not selectable.

Is this setting enabled somewhere else that I'm not finding?
 


i386

Well-Known Member
Mar 18, 2016
4,039
1,460
113
34
Germany
I'm not sure whether all X9-based mainboards got BIOS updates with support for bifurcation.
 

Agrikk

Member
Sep 6, 2013
38
3
8
Oakland, CA

View attachment 28756

Those are already only x4 and x8 slots, so there is nothing to bifurcate. This is usually only supported on x16 slots.
The board I was asking about was an X9SRL-F :) but the premise is the same.

An x8 slot can be split into two x4 links, which is what I was looking for. With a 2-slot NVMe card, each M.2 device requires an x4 link to function, so you put the card in an x8 slot and split it.
 

dabl

New Member
Sep 17, 2019
6
2
3
Thanks very much for this post.

I have the same motherboard and am also interested in using a two-port NVMe PCIe adapter and enabling bifurcation control on it.

Can you please post the make and model of the NVMe PCIe adapter you had success with for use in the X9SRL-F?

It would be nice to use an AOC-SLG3-2M2 but from previous threads here I saw one report that said they couldn't get it to work in their X9SRL-F.

Any specifics on the configuration of the X9SRL-F with your NVMe PCIe adapter would be very much appreciated, including which slot you're using it in.

For example, I see people mention having to enable x4x4x4x4 PCIe bifurcation on a given x8 slot where the x4x4x8 selection did not work, etc.

Any ideas/advice on which slots to choose on the X9SRL-F for the cards below (the sum of what's occupying the X9SRL-F slots so far), taking into consideration the NVMe PCIe adapter you had success with?

LSI SAS 9207-8i (SAS2308 based controller) (PCIe x8) currently in SLOT 7

Nvidia Quadro P2000 (PCIe x16) currently in SLOT 4

I'd also be curious about the environment in which you're using your X9SRL-F + NVMe PCIe adapter, I'm using unRAID.

Below is my current IIO PCIe Port Bifurcation Control configuration:

One thing I'm currently confused about is which IIO 1 IOUx setting goes with which physical slot.

Also, it didn't occur to me until now that since I bought the board used, with a Xeon E5-2680 v2 and RAM pulled from a working system, it may still have slot configurations from that system that I don't want now.

The LSI SAS 9207-8i runs the 16 drives in my 836 chassis so I definitely want that running at full speed.

Perhaps a Bios reset to defaults would be a good idea?

I'm hesitant to do that though for fear of what might break or be time consuming to get working again.

Else would be nice to know the defaults for the IIO IOUx settings.

To be clear, I have not changed any BIOS settings so far, but I did update to the latest v3.3.

Obviously I have some learning to do about these IIO 1 IOUx settings in general and will poke around about those.

Meanwhile thanks for any help!

temp.jpg
 

Agrikk

Member
Sep 6, 2013
38
3
8
Oakland, CA
The 2-port NVMe adapter I used was this one:

https://www.amazon.com/gp/product/B09PGDMWKH which is about half the price of the Supermicro card you posted.

That said, the secret sauce here is understanding how each PCIe slot maps to the CPU and chipset. By definition, IIO stands for Integrated Input/Output; it is the controller that manages traffic between PCI Express and a CPU. In our case, the Supermicro X9SRL-F has only one CPU, so there is only one IIO to bifurcate (IIO 1). If there were two CPUs, there would be IIO 1 and IIO 2.

X9SRL-F slot layout.PNG

But what we are more interested in is how each slot maps to the PCIe controller. If we look at the lane diagram, we can see that PCIe slot 5 maps to IOU ports 1A & 1B; slot 6 maps to ports 2A & 2B and slot 7 to ports 2C & 2D; slots 2 and 3 map to ports 3A & 3B, and slot 4 maps to ports 3C & 3D.
X9SRL-F Lane layout.PNG

So looking at your image, your BIOS is showing that:
- IOU3 is controlling PCIe slot 2, 3 and 4
- IOU1 is controlling PCIe slot 5
- IOU2 is controlling PCIe slot 6 and 7
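
The mapping above is easy to lose track of, so here it is as a small lookup table (the names and structure are mine, purely for illustration; the port assignments are transcribed from the lane diagram):

```python
# Slot-to-IOU mapping on the X9SRL-F, transcribed from the lane diagram.
# Structure and names are illustrative, not any official notation.
SLOT_TO_PORTS = {
    2: ("IOU3", ["3A"]),
    3: ("IOU3", ["3B"]),
    4: ("IOU3", ["3C", "3D"]),  # x16-length slot, x8 electrical
    5: ("IOU1", ["1A", "1B"]),
    6: ("IOU2", ["2A", "2B"]),
    7: ("IOU2", ["2C", "2D"]),
}

def iou_for_slot(slot: int) -> str:
    """Return which IOU group a physical PCIe slot belongs to."""
    return SLOT_TO_PORTS[slot][0]

print(iou_for_slot(5))  # IOU1
```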

To bifurcate a PCIe link and map two devices into a single slot (and each NVMe drive is a device), we have to select an x8 link and split it into two x4 links.

So from your BIOS picture and the lane diagram, we could bifurcate PCIe slot 5 into two x4 slots.

This would create a new line item in the list for a port 1B at x4.

Why?

Because we know that slot 5 maps to IOU 1. It runs at x8 so it'll have only a single port. But if we split it into x4x4, then a second port is created and listed as port 1B.

We could also bifurcate IOU3 from x8x4x4 into x4x4x4x4. This would give an x4 link to slot 2, an x4 link to slot 3 and two x4 links to slot 4. When you do this, the BIOS represents the additional link by adding a "Port 3D" line to the list.

Why?

Because we know from the lane diagram that PCIe slot 2 maps to IOU port 3A, slot 3 maps to port 3B and slot 4 maps to port 3C (at x8). Splitting that x8 into x4x4 creates two x4 links out of one, thus creating and enabling port 3D.
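
The "split an x8 and a new port letter appears" rule can be sketched as a few lines of code. This is just a model of the behaviour described above, not a BIOS API, and I'm assuming the widths in a setting string apply to ports A→D in order (the actual BIOS label ordering may differ):

```python
def active_ports(iou: int, setting: str) -> list[str]:
    """Return the port entries the BIOS lists for one IOU.

    Sketch of the rule described above: each IOU exposes up to four
    x4 port positions labelled A-D; an x8 link occupies two positions
    but shows only one entry, so splitting it reveals the next letter.
    """
    labels = "ABCD"
    widths = [int(w) for w in setting.lower().split("x")[1:]]
    ports, pos = [], 0
    for w in widths:
        ports.append(f"Port {iou}{labels[pos]} x{w}")
        pos += w // 4  # an x8 link consumes two x4 positions
    return ports

print(active_ports(1, "x8"))        # ['Port 1A x8']
print(active_ports(1, "x4x4"))      # ['Port 1A x4', 'Port 1B x4']
print(active_ports(3, "x4x4x4x4"))  # ...ends with 'Port 3D x4'
```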


So for my example I have the following hardware stuffed into the case:
2 M1015 SATA HBA adapters
2 2-port NVMe adapters
1 Mellanox ConnectX-3 fiber HBA

I needed an x8 slot for the fiber controller to perform at near max throughput and two bifurcated x4x4 slots for the 2-port NVMe cards, so my final layout looked like this:

Slot 7 - M1015
Slot 6 - NVMe adapter
Slot 5 - ConnectX-3
Slot 4 - NVMe Adapter
Slot 3 - M1015
Slot 2 -
Slot 1 -

Bifurcation Settings.png
 

Agrikk

Member
Sep 6, 2013
38
3
8
Oakland, CA
Edit: actually, I think IIO IOU 2 might be bifurcated as x4x4x8 to allocate two x4 links to slot 6. I took a lot of pictures to document my attempts and I might have them mixed up.

Also, I am using this board as a TrueNAS server with one pool of 16× 500 GB SSDs in RAID10 attached to the two M1015 HBAs, and another pool of 4× 1 TB NVMe SSDs in RAID10 spread across both adapters.

In your case you are going to want to put your LSI adapter in an x8 slot to take advantage of the ~7.9 GB/s offered by a PCIe 3.0 x8 link. I would also reset the BIOS to factory defaults to clear out any lingering settings from previous owners. Better to start with a clean slate than to modify someone else's modifications.
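
That bandwidth figure is easy to sanity-check from the known PCIe 3.0 spec numbers (8 GT/s per lane, 128b/130b encoding), as a quick back-of-the-envelope calculation:

```python
# Per-direction PCIe 3.0 bandwidth for an x8 link.
GTRANSFERS_PER_S = 8e9   # raw transfer rate per lane (8 GT/s)
ENCODING = 128 / 130     # 128b/130b line-code efficiency
LANES = 8                # an x8 link

bytes_per_s = GTRANSFERS_PER_S * ENCODING * LANES / 8  # 8 bits per byte
print(f"{bytes_per_s / 1e9:.2f} GB/s")  # 7.88 GB/s
```

Real-world throughput lands a bit below that once packet headers and flow control overhead are accounted for, but it's still far more than the 9207-8i's SAS drives will saturate.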

FYI, you might take a performance hit on your graphics card because slot 4 is electrically only x8, even though it has an x16 physical connector (this board doesn't have any electrical x16 slots).
 

dabl

New Member
Sep 17, 2019
6
2
3
Thanks very much for taking the time to explain all that, hugely appreciated!

The graphics card is only being used for transcoding in an unRAID Jellyfin Docker container, so it should use very little bus bandwidth, as I understand it.