SMCI X9 Bifurcation | Why some boards but not all?

It's frustrating that bifurcation is not available on the entire X9 range. I could really use it.

My business uses one X9DRH-iTF board and three X9DRH-iF boards. Their BIOSes have sections for bifurcation and even have descriptions explaining multiple possible bifurcation splits when highlighting a slot, but if you actually try to change anything, you find that for each slot there is only one option in the menu - no bifurcation. There is no v3.4 BIOS for these boards, only v3.3, which is what I am using.
 
Reactions: svtkobra7

svtkobra7

Active Member
Jan 2, 2017
It's frustrating that bifurcation is not available on the entire X9 range. I could really use it.
  • I could not agree more, but playing devil's advocate:
  • NVMe rev 1.0 spec: Mar '11 (first NVMe drive released in '13)
  • X9 release: ~Q1 2012 (E5-2600 v1)
In sum, the first NVMe drive dropped after the X9 was released. For the entire X9 lineup to bifurcate for NVMe would require not only compatibility with drives that didn't exist when the X9 was released, but also the ability to split PCIe lanes among those drives. IMO, a somewhat big ask for dated tech (note: I have two X9DRH-7F). At least we get one of the two:
  • To confirm, at least the X9DRH-7F can boot from NVMe (off-topic and no consolation);
  • As to your situation, what about an AOC w/ PLX switch? I think it is fair to allow some compromises when mating 8-year-old tech - the X9 - with (near) current gen storage.
My business uses one X9DRH-iTF board and three X9DRH-iF boards. Their BIOSes have sections for bifurcation and even have descriptions explaining multiple possible bifurcation splits when highlighting a slot, but if you actually try to change anything, you find that for each slot there is only one option in the menu - no bifurcation. There is no v3.4 BIOS for these boards, only v3.3, which is what I am using.
Feature availability across the X9 lineup is not uniform, and the documentation sucks. Surely the X9DRH can support bifurcation, and perhaps one day we'll see somebody handy with AMIBCP (AMI's BIOS Configuration Program) enable that functionality. In the interim, SMCI does have a solution for us, just not one that I favor: X10, X11, etc.

Stepping into their shoes, SMCI has no cause to release an update for an 8-year-old board, which would mean diverting resources that are directly or indirectly supporting their current lineup and driving revenue. Re-tasking that headcount would appease enthusiasts (for the most part), whose X9s were (likely) procured on the secondary market (never impacting SMCI's income statement). Such a decision would do little to ensure SMCI remains a going concern. Consider that enabling bifurcation on X9s could - to some small extent - cannibalize purchases of their current line, causing upgrades to current gen to be deferred. I too dislike the proposition and not having boards that will bifurcate, but I've tried to rationalize the lack of support. ;)
 
Reactions: The Von Matrices
  • NVMe rev 1.0 spec: Mar '11 (first NVMe drive released in '13)
  • X9 release: ~Q1 2012 (E5-2600 v1)
In sum, the first NVMe drive dropped after the X9 was released. For the entire X9 lineup to bifurcate for NVMe would require not only compatibility with drives that didn't exist when the X9 was released, but also the ability to split PCIe lanes among those drives. IMO, a somewhat big ask for dated tech (note: I have two X9DRH-7F). At least we get one of the two:
On the other hand, bifurcation is not a feature of NVMe. I'm sure there were riser cards in 2012 that split one PCIe slot into two slots each with half the number of lanes, and that would require bifurcation even if no NVMe drives were in those slots.

Please don't interpret the following as a slight against you. I'm not disappointed at SMC for not adding the feature to a 9-year-old board. I'm disappointed that the bifurcation feature is not uniform across their entire lineup or even documented anywhere. What I don't understand is, if SMC doesn't support legacy products, then why did they even bother to validate and enable the bifurcation feature on some X9 boards at all? Also frustratingly, their BIOS changelogs don't even mention the feature, so I couldn't find out which boards added it and which didn't without buying one to try it out. What good is a changelog that doesn't document all the changes?

As to your situation, what about an AOC w/ PLX switch? I think it is fair to allow some compromises when mating 8-year-old tech - the X9 - with (near) current gen storage.
The problem with cards with switches is that they are relatively expensive; the cheapest I could find is a card with an ASMedia switch for $79. The reason I'm still using E5-2600v2 servers is that I need large amounts of memory, and DDR3 memory is cheap on the used market. These are 2U servers, and I already have all the PCIe slots filled with 1-drive M.2 cards. So if I want to add a second drive to each slot, I have to buy the second drive and a $79 switched card, which significantly increases the cost of the additional storage. The more switched cards I would buy, the more it makes sense just to upgrade to an X10 system with cheaper passive cards.
 

svtkobra7

Active Member
Jan 2, 2017
Please don't interpret the following as a slight against you.
  • Never would o/c. I completely share your struggle and taking a contrarian position tricks me into feeling better?!? ;)
I'm not disappointed at SMC for not adding the feature to a 9-year-old board. I'm disappointed that the bifurcation feature is not uniform across their entire lineup or even documented anywhere.
  • I wonder if they had plans to roll out X9-wide, but decided not to, and got caught with their pants down?
What I don't understand is, if SMC doesn't support legacy products, then why did they even bother to validate and enable the bifurcation feature on some X9 boards at all? Also frustratingly, their BIOS changelogs don't even mention the feature, so I couldn't find out which boards added it and which didn't without buying one to try it out. What good is a changelog that doesn't document all the changes?
  • Valid points. But if it isn't supported, I don't suppose we can complain too much about it working on some but not others.
On the other hand, bifurcation is not a feature of NVMe. I'm sure there were riser cards in 2012 that split one PCIe slot into two slots each with half the number of lanes, and that would require bifurcation even if no NVMe drives were in those slots.
  • True that bifurcation is not exclusive to NVMe storage, but I struggle to think of a commonplace need to slice up a PCIe slot for anything other than PCIe-lane-hungry NVMe storage. I don't doubt there is one.
  • I consider them to go hand-in-hand, as without bifurcation, or another solution, you end up with your 1-drive-per-slot issue.
Please don't interpret the following as a slight against you. I'm not disappointed at SMC for not adding the feature to a 9-year-old board. I'm disappointed that the bifurcation feature is not uniform across their entire lineup or even documented anywhere. What I don't understand is, if SMC doesn't support legacy products, then why did they even bother to validate and enable the bifurcation feature on some X9 boards at all? Also frustratingly, their BIOS changelogs don't even mention the feature, so I couldn't find out which boards added it and which didn't without buying one to try it out. What good is a changelog that doesn't document all the changes?
  • I didn't take it as such.
  • I concur completely with your sentiment.
  • Documentation is not SMCI's core competency ... I remember a time when I bought 2 x E5-2600 v2 processors and popped them into an X9 motherboard whose own product page advertised compatibility, only to determine that I had to RMA the board for Ivy Bridge compatibility.
Also frustratingly, their BIOS changelogs don't even mention the feature, so I couldn't find out which boards added it and which didn't without buying one to try it out. What good is a changelog that doesn't document all the changes?
  • Amen.
  • I thought I had found a pattern as to why bifurcation works on some, but not others:
    • X9DRi-LN4F & X9DRi-F = Intel® C602 chipset = SATA
      • Motherboards with the C602 chipset like bifurcation.
    • X9DR3-LN4F & X9DR3-F = Intel® C606 chipset = SAS
      • Motherboards with the C606 chipset do not seem to like bifurcation.
    • o/c the X9DRH series w/ C602 + SAS breaks this likely incorrect attempt to rationalize the nonsensical.
The problem with cards with switches is that they are relatively expensive; the cheapest I could find is a card with an ASMedia switch for $79. The reason I'm still using E5-2600v2 servers is that I need large amounts of memory, and DDR3 memory is cheap on the used market. These are 2U servers, and I already have all the PCIe slots filled with 1-drive M.2 cards.
  • What is the use case for all 7x M.2 NVMe cards (out of curiosity)?
So if I want to add a second drive to each slot, I have to buy the second drive and a $79 switched card, which significantly increases the cost of the additional storage.
  • Wouldn't a switch on x16 net you 4x drives instead of 2x drives on the x8 slots? Maybe there is a benefit to only throwing a switch in that x16 slot?
  • I want to say I saw a 4-port AOC with switch recently for $100, so if 4x drives, $25 + tax each is not too bad.
  • I too started down this path, had a handful of M.2 NVMe drives, and realized it wasn't the route I wanted to head down.
The more switched cards I would buy, the more it makes sense just to upgrade to an X10 system with cheaper passive cards.
I think I said SMCI has a solution for you! =>
In the interim, SMCI does have a solution for us, just not one that I favor: X10, X11, etc.
  • How many AOCs with switches = breakeven for increased DDR4 cost, though? I'd actually be curious to calculate, but too lazy.
Good luck!
 
Reactions: The Von Matrices
  • What is the use case for all 7x M.2 NVMe cards (out of curiosity)?
I run the cryptocurrency mining pool Prohashing, and these servers are used to host the clients for cryptocurrencies. We host about 300 different cryptocurrencies on these servers. They average about 50 GB of data per client, but some of the most popular coins like Ethereum are over 500 GB each. They can be tremendously hard on the storage system because for every transaction on the blockchain, the client needs to search the disk to find the input(s) and determine whether the transaction is valid before it can be included in a block. So there isn't much CPU usage; it's mostly limited by storage I/O.
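To make that access pattern concrete, here is a toy sketch (not our actual code; the schema is made up, and real clients such as Bitcoin Core keep the unspent-output set in LevelDB rather than SQLite) of why validation costs roughly one random read per transaction input:

```python
import sqlite3

# Toy model: one point lookup per transaction input into an on-disk
# index that is much larger than RAM, so each lookup is effectively
# a random read. Schema is illustrative only.
db = sqlite3.connect("utxo.db")
db.execute("CREATE TABLE IF NOT EXISTS utxo (outpoint TEXT PRIMARY KEY, amount INTEGER)")

def inputs_unspent(outpoints):
    """outpoints: list of 'txid:vout' strings naming the coins a tx spends."""
    for op in outpoints:
        row = db.execute("SELECT amount FROM utxo WHERE outpoint = ?",
                         (op,)).fetchone()
        if row is None:        # already spent or never existed: reject the tx
            return False
    return True

print(inputs_unspent(["deadbeef:0"]))  # -> False on an empty database
```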

The reason we're still using E5-2600v2 servers is that we bought them years ago and they're still working fine for our needs. The only real problem right now is that the blockchain sizes are growing, especially as cryptocurrency continues to become more popular and people send more transactions, all of which need to be stored on disk. The PCIe disks have plenty of I/O performance for us, but capacity is becoming a problem. I was just looking for a way to get some more storage space without needing to replace all the disks we already have with larger ones. Also, it's really only 5 slots because one slot has a SAS HBA and one has a 10 GigE controller.

  • Wouldn't a switch on x16 net you 4x drives instead of 2x drives on the x8 slots? Maybe there is a benefit to only throwing a switch in that x16 slot?
  • I want to say I saw a 4-port AOC with switch recently for $100, so if 4x drives, $25 + tax each is not too bad.
  • I too started down this path, had a handful of M.2 NVMe drives, and realized it wasn't the route I wanted to head down.
The problem is that these are 2U servers, and I haven't seen anything that would hold 4 M.2 drives on a low profile card. I'm thinking you're right in that I might want to rethink the whole M.2 card route and go with something more scalable.

How many AOCs with switches = breakeven for increased DDR4 cost, though? I'd actually be curious to calculate, but too lazy.
You're right in that it probably would never make sense to buy a new server vs. adding switched cards. I just feel like I'd be wasting money buying these switched cards if a few months later we upgraded to new servers that could use the cheaper cards. I calculated that the difference is about $1000 per server to upgrade with the same number of cores (12 per CPU) and the same memory capacity (256 GB), mostly due to DDR4 memory being more expensive, but also because for some reason X10 motherboards have really held their value in the resale market.
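Putting rough numbers on the break-even question from earlier (a back-of-the-envelope sketch: the $79 card and ~$1000 premium are the figures quoted in this thread, and the 5 free slots per server are assumed from the posts above):

```python
import math

# Figures quoted in this thread: $79 per switched AOC, ~$1000 extra per
# server to rebuild on X10; 5 free slots assumed (after HBA + 10 GigE NIC).
card_cost = 79
upgrade_premium = 1000
free_slots = 5

breakeven_cards = math.ceil(upgrade_premium / card_cost)
max_card_spend = free_slots * card_cost
print(f"X10 upgrade breaks even at {breakeven_cards} switched cards")             # 13
print(f"Only {free_slots} slots free, so card spend caps at ${max_card_spend}")  # $395
```

So per server the switched cards stay cheaper on paper; the argument for upgrading is flexibility, not the math.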
Good luck!
Thanks! :)
 

Rand__

Well-Known Member
Mar 6, 2014
  • I wonder if they had plans to roll out X9-wide, but decided not to, and got caught with their pants down?
Coincidentally, I was recently told by SM support that they have different teams working on different mainboards, and apparently they do not share a real common code base; so when one team implements something for one board, it does not get added to the other boards automatically... O/c with a request at the right time ("please add feature from board X to board Y too") you have a better chance they'll do it (while the boards are still under support); or o/c if you have a financial motivator...

  • True that bifurcation is not exclusive to NVMe storage, but I struggle to think of a commonplace need to slice up a PCIe slot for anything other than PCIe-lane-hungry NVMe storage. I don't doubt there is one.
Riser was the keyword here, e.g.:
a 1U chassis with an x16 slot breaking into 2 vertical x8s
an x32 slot on a special form factor breaking into x16 + 2x x8 due to reduced board width
 

parasubvert

New Member
Jan 17, 2021
I have an X9DAi; the bifurcation settings work fine in the BIOS (x4x4x4x4), but I still can only see a single drive in ESXi or Windows.
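One way to narrow that down (a sketch, assuming you can boot a Linux live USB on the box) is to count how many PCI functions with the NVMe class code actually enumerate at the bus level. If only one shows up here, the BIOS isn't really splitting the slot and the OS is not the problem:

```python
from pathlib import Path

# Count PCI functions with class code 0x010802 (mass storage /
# non-volatile memory / NVMe interface). Standard Linux sysfs paths;
# diagnostic sketch only.
devs = Path("/sys/bus/pci/devices")
nvme = sorted(d.name for d in devs.iterdir()
              if (d / "class").read_text().strip() == "0x010802")
print(f"{len(nvme)} NVMe controller(s): {', '.join(nvme) or 'none'}")
```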

Frustrating; debating whether to buy an X9DRi-F off eBay and sell this one. I have decent chips (E5-2697 v2) and 512 GB RAM.
 

bash

Active Member
Dec 14, 2015
Playing with the BIOS in these boards for bifurcation is probably the most frustrating experience I have ever had with Supermicro.
 

gseeley

New Member
Mar 3, 2018
Found this thread when looking to see if my X9DRD-7LNF with BIOS 3.3 supported bifurcation before I rebooted it to install an AOC-SLG2-2M2 card. It does indeed support it with two E5-2650L v2 processors.

One thing to keep in mind is that the CPU (for the most part) implements PCIe bifurcation, and the BIOS then has to expose that functionality as well. Next, depending on your board and whether it's single-CPU (UP) or multi-CPU (DP), the wiring of the CPU(s) to the PCIe slots is another factor you need to look at, and that wiring depends on the board and its chipset.
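As a rough way to see that wiring from the OS side (a Linux sketch using standard sysfs attributes, nothing board-specific), you can check which NUMA node, i.e. which CPU's root complex on a dual-socket board, each NVMe controller hangs off:

```python
from pathlib import Path

# Print the NUMA node for each NVMe controller; on a dual E5-2600 board,
# node 0/1 indicates which CPU the slot is wired to (-1 = unknown).
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    if (dev / "class").read_text().strip().startswith("0x0108"):
        numa = (dev / "numa_node").read_text().strip()
        print(f"{dev.name}: numa_node={numa}")
```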
 

texmurphy

New Member
Oct 18, 2021
I saw it mentioned in this thread that the C602 PCH seems to like bifurcation, but on my X9DRL-iF it doesn't seem to. Is there any beta or modded BIOS in the wild that I can use to update my 3.3 BIOS and make it work?
 

svtkobra7

Active Member
Jan 2, 2017
I saw it mentioned in this thread that the C602 PCH seems to like bifurcation, but on my X9DRL-iF it doesn't seem to. Is there any beta or modded BIOS in the wild that I can use to update my 3.3 BIOS and make it work?
It is really easy to do, but there is a non-zero chance of something going awry.

Per this thread - https://forums.servethehome.com/index.php?threads/nvme-boot-with-supermicro-x9da7-x9dri-f.13245/ - the relevant deets are here: [HowTo] Get full NVMe support for all Systems with an AMI UEFI BIOS

Also, see: https://forums.servethehome.com/ind...not-bifurcate-on-x9dri-ln4f.19047/post-233965
 

texmurphy

New Member
Oct 18, 2021
1) I don't particularly need the boot option, only to be able to run two NVMe drives on my AOC-SLG3-2M2; is that going to be possible with this mod?
2) Will I need to run my BIOS and OS in UEFI mode instead of Legacy?
 

gkovacs

New Member
Jun 17, 2021
1) I don't particularly need the boot option, only to be able to run two NVMe drives on my AOC-SLG3-2M2; is that going to be possible with this mod?
2) Will I need to run my BIOS and OS in UEFI mode instead of Legacy?
I was unable to get bifurcation working on the X9DRL-iF on BIOS 3.3 (regardless of booting Legacy or UEFI), and there are no newer BIOS versions available. OTOH I was able to get bifurcation working on the X9DRi-LN4F+ after flashing BIOS 3.4.
 

BrianAz

New Member
Jul 20, 2021
Correct, the success story is mine. I have the X9DRD-7LN4F-JBOD.

I finally rebooted my VM host, so I took some screenshots; here is what my bifurcation settings look like:
[screenshots of the BIOS bifurcation settings attached]
Thanks for this. I can also confirm that bifurcation works on the X9DRD-7LN4F-JBOD.

I also have an X9DR7-LN4F which seems to have bifurcation available in the BIOS, but when I try to change the slot it does not give any options, just like this post. I'm still messing about with it to see if I'm missing something.
 

blademan

New Member
Jan 7, 2022
Old thread, but I hope this helps someone. On the X9DRD-7LN4F-JBOD, I was only able to bifurcate slots that initially default to "x8x8", and I was unable to bifurcate slots that default to "x8". On this board, that's PCI slots 2, 4, and 6. Select "x4x4x4x4" and it works.
 
