Sorry for your loss.
I bought the PLX card back in April or May.
It's frustrating that bifurcation is not available on the entire X9 range. I could really use it.
Feature availability across the X9 lineup is not uniform and the documentation sucks. Surely the X9DRH can support bifurcation, and perhaps one day we'll see somebody handy with AMIBCP (AMI BIOS Configuration Program) enable that functionality. In the interim, SMCI does have a solution for us, and not one that I favor: X10, X11, etc.
My business uses one X9DRH-iTF board and three X9DRH-iF boards. Their BIOSes have sections for bifurcation and even show descriptions explaining multiple possible bifurcation splits when highlighting a slot, but if you actually try to change anything, you find that for each slot there is only one option in the menu - no bifurcation. There is no v3.4 BIOS for these boards, only v3.3, which is what I am using.
On the other hand, bifurcation is not a feature of NVMe. I'm sure there were riser cards in 2012 that split one PCIe slot into two slots, each with half the number of lanes, and that would require bifurcation even if no NVMe drives were in those slots.
In sum, the first NVMe drive dropped after the X9 was released. For the entire X9 lineup to bifurcate NVMe would require not only compatibility with drives that didn't even exist when the X9 was released, but also the ability to bifurcate PCIe lanes amongst those drives. IMO, a somewhat big ask for dated tech (note: I have two X9DRH-7F). At least we get one of the two:
- NVMe rev 1.0 spec: Mar '11 (first NVMe drive released in '13)
- X9 release: ~Q1 2012 (E5-2600 v1)
The problem with cards with switches is that they are relatively expensive; the cheapest I could find is a card with an ASMedia switch for $79. The reason I'm still using E5-2600v2 servers is because I need large amounts of memory, and DDR3 memory is cheap on the used market. These are 2U servers, and I already have all the PCIe slots filled with 1-drive M.2 cards. So if I want to add a second drive to each slot, I have to buy the second drive and a $79 switched card, which significantly increases the cost of the additional storage. The more switched cards I would buy, the more it makes sense just to upgrade to an X10 system with cheaper passive cards.
As to your situation, what about an AOC w/ PLX switch? I think it is fair to allow some compromises when mating 8-year-old tech - the X9 - with (near) current-gen storage.
Please don't interpret the following as a slight against you.
I'm not disappointed at SMC for not adding the feature to a 9-year-old board. I'm disappointed that the bifurcation feature is not uniform across their entire lineup or even documented anywhere.
What I don't understand is, if SMC doesn't support legacy products, then why did they even bother to validate and enable the bifurcation feature on some X9 boards at all? Also frustratingly, their BIOS changelogs don't even mention the feature, so I couldn't find out which boards added it and which didn't without buying one to try it out. What good is a changelog that doesn't document all the changes?
I think I said SMCI has a solution for you! => "In the interim, SMCI does have a solution for us, and not one that I favor: X10, X11, etc."
I run the cryptocurrency mining pool Prohashing, and these servers are used to host the clients for cryptocurrencies. We host about 300 different cryptocurrencies on these servers. They average about 50 GB of data per client, but some of the most popular coins, like Ethereum, are over 500 GB each. They can be tremendously hard on the storage system, because for every transaction on the blockchain the client needs to search the disk to find the input(s) and determine whether it is valid before it can be included in a block. So there isn't much CPU usage; it's mostly limited by storage I/O.
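Tangent, but if anyone wants a feel for that access pattern, here's a minimal sketch of it: random 4 KiB reads scattered across one big file, timed to get a rough IOPS figure. The path and counts are made-up placeholders (not anything Prohashing actually runs), and the OS page cache will flatter the numbers; fio is the right tool for a real benchmark.
```python
# Rough illustration of the random-read pattern a blockchain client generates
# when validating transactions: small reads at unpredictable offsets across a
# large file. Path and counts are placeholders, not the real Prohashing setup.
import os
import random
import time

PATH = "/data/chainstate.bin"   # hypothetical large data file
READS = 10_000                  # number of random lookups to sample
BLOCK = 4096                    # 4 KiB per read

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size

start = time.perf_counter()
for _ in range(READS):
    # pick a random 4 KiB-aligned offset and read one block
    offset = random.randrange(0, max(size - BLOCK, 1)) & ~(BLOCK - 1)
    os.pread(fd, BLOCK, offset)
elapsed = time.perf_counter() - start
os.close(fd)

# The page cache will inflate these numbers on repeat runs; this only shows
# the shape of the workload (IOPS-bound, not throughput- or CPU-bound).
print(f"{READS / elapsed:,.0f} random 4K reads/s "
      f"(~{READS * BLOCK / elapsed / 1e6:.1f} MB/s)")
```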
- What is the use case for all 7x M.2 NVMe cards (out of curiosity)?
The problem is that these are 2U servers, and I haven't seen anything that would hold 4 M.2 drives on a low profile card. I'm thinking you're right in that I might want to rethink the whole M.2 card route and go with something more scalable.
- Wouldn't a switch on the x16 slot net you 4x drives instead of 2x drives on the x8 slots? Maybe there is a benefit to only throwing a switch in that x16 slot?
- I want to say I saw a 4-port AOC with a switch recently for $100, so at 4x drives, $25 plus tax each is not too bad.
- I too started down this path and had a handful of M.2 NVMe and realized it wasn't the route I wanted to head.
You're right in that it probably would never make sense to buy a new server vs adding switched cards. I just feel like I'd be wasting money buying these switched cards if a few months later we upgraded to new servers that could use the cheaper cards. I calculated that the difference is about $1000 per server to upgrade with the same number of cores (12 per CPU) and same memory capacity (256 GB), mostly due to DDR4 memory being more expensive, but also because, for some reason, X10 motherboards have really held their value in the resale market.
How many AOCs with switches = breakeven for the increased DDR4 cost, though? I'd actually be curious to calculate, but too lazy.
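Since you said you were curious but too lazy, here's the back-of-the-envelope version. The $79 switched card and the ~$1000-per-server upgrade delta are the numbers from this thread; the passive-card price is my own assumption, so plug in whatever a passive AOC actually costs you.
```python
# Back-of-the-envelope breakeven: how many switched AOCs per server before the
# X10 upgrade (which can use cheap passive bifurcation cards) pays for itself?
SWITCHED_CARD = 79     # cheapest ASMedia-switch AOC mentioned in this thread
PASSIVE_CARD = 25      # ASSUMPTION: price of a passive bifurcation AOC
UPGRADE_DELTA = 1000   # ~extra cost per server to move to X10 + DDR4 (from above)

extra_per_card = SWITCHED_CARD - PASSIVE_CARD
breakeven_cards = UPGRADE_DELTA / extra_per_card

print(f"Each switched card costs ${extra_per_card} more than a passive one.")
print(f"Breakeven is ~{breakeven_cards:.1f} switched cards per server.")
```
With those assumed numbers it works out to roughly 18-19 switched cards per server before the upgrade premium wins - more slots than a 2U box even has, which lines up with "it probably would never make sense to buy a new server."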
Thanks!
Good luck!
Coincidentally, I was recently told by SM support that they have different teams working on different mainboards, and apparently they do not share a real common code base; so when one team implements something for one board, it does not get added to the other boards automatically... Of course, with a request at the right time ("please add feature from board X to board Y too"), you have a better chance they'll do it (while the boards are still under support); or, of course, if you have a financial motivator...
- I wonder if they had plans to roll it out X9-wide, but decided not to, and got caught with their pants down?
Riser was the keyword here, e.g.
- True that bifurcation is not exclusive to NVMe storage, but I struggle to think of a commonplace need to slice up a PCIe slot for anything other than PCIe-lane-hungry NVMe storage. I don't doubt there is one.
I saw it mentioned on this thread that the PCH C602 seems to like bifurcation, but on my X9DRL-iF it doesn't seem to. Is there any beta or modded BIOS in the wild that I can use to update my 3.3 BIOS and make it work?
1) I don't particularly need the boot option, only to be able to run two NVMe drives on my AOC-SLG3-2M2; is that going to be possible with this mod?
It is really easy to do, but there is a non-zero chance of something running afoul.
Per this thread - https://forums.servethehome.com/index.php?threads/nvme-boot-with-supermicro-x9da7-x9dri-f.13245/ - the relevant deets are here: [HowTo] Get full NVMe support for all Systems with an AMI UEFI BIOS
Also, see: https://forums.servethehome.com/ind...not-bifurcate-on-x9dri-ln4f.19047/post-233965
I was unable to get bifurcation working on the X9DRL-iF on BIOS 3.3 (regardless of booting Legacy or UEFI), and there are no newer BIOS versions available. OTOH I was able to get bifurcation working on the X9DRi-LN4F+ after flashing BIOS 3.4.
2) Will I need to run my BIOS and OS on UEFI instead of Legacy?
Thanks for this. I can confirm also that bifurcation works on the X9DRD-7LN4F-JBOD.
Correct, the success story is mine. I have the X9DRD-7LN4F-JBOD.
I finally rebooted my VM host, so I took some screenshots; here is what my bifurcation settings look like:
[Five screenshots of the BIOS bifurcation settings attached.]
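For anyone else flipping these settings: here's a quick way to sanity-check from the Linux side that each NVMe drive really came up on its own x4 link after enabling bifurcation. It just reads standard sysfs attributes, so treat it as a convenience sketch; `lspci -vv` shows the same information.
```python
# Quick sanity check after enabling bifurcation: list each NVMe controller's
# PCI address and negotiated link width/speed from standard Linux sysfs files.
import glob
import os

def read_attr(dev_dir, attr):
    """Read one sysfs attribute, returning '?' if it is missing."""
    try:
        with open(os.path.join(dev_dir, attr)) as f:
            return f.read().strip()
    except OSError:
        return "?"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    # /sys/class/nvme/nvmeX/device is a symlink to the underlying PCI device
    dev = os.path.realpath(os.path.join(ctrl, "device"))
    print(f"{os.path.basename(ctrl)}  pci={os.path.basename(dev)}  "
          f"link=x{read_attr(dev, 'current_link_width')} "
          f"@ {read_attr(dev, 'current_link_speed')}")
```
With an AOC-SLG3-2M2 in a bifurcated x8 slot you'd expect two controllers listed, each reporting x4.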