Home server upgrade I don't need but want to do


nexox

Well-Known Member
May 3, 2023
1,870
918
113
There are hardly any M.2 SSDs with PLP anyway (I can't google up any, and those that exist are probably expensive AF) so it's pointless to obsess over that feature.
There are several in the 2TB range in M.2 22110, but they don't support ASPM, so you take a power consumption hit on the CPU.
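
If you want to check whether ASPM is actually engaged on a given drive, lspci will tell you. A rough sketch for Linux below; the 01:00.0 address is just a placeholder, and lspci needs root to show the link capabilities:

```python
import subprocess

# Rough sketch (Linux, run as root): print the ASPM-related lines from
# `lspci -vv` for one device. "01:00.0" is a placeholder address; find
# your SSD's address with a plain `lspci` first.
out = subprocess.run(
    ["lspci", "-s", "01:00.0", "-vv"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    # LnkCap shows what the device supports, LnkCtl what is enabled.
    if "ASPM" in line:
        print(line.strip())
```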
 

etorix

Active Member
Sep 28, 2021
205
115
43
I guess the EPYC 4004, being a server CPU, would idle pretty high as well?
EPYC 4000 are desktop Ryzen CPUs with official ECC certifications. Same idle power (and same overall specs).
Just like Xeon E/E3 are desktop Core CPUs with ECC.
Do not confuse these with EPYC 5000/8000/9000 or Xeon Scalable/Xeon 6, which are very different beasts.
Not sure if there isn't a misunderstanding: I have 4x SATA SSDs (Samsung PM883) connected to a PCIe 3.0 HBA.
There could have been a misunderstanding from the start then… None of what has been discussed in this thread applies to M.2 SATA SSDs.
the noted lack of any meaningful BIOS settings is kind of off-putting
What's lacking? Server boards are meant to run stable. Stock settings, no overclock.
Server/workstation boards take more time to boot than gaming boards because they perform more checks. But if you think that Supermicro is slow, you have never experienced a Fujitsu/Kontron board…
 
  • Like
Reactions: nexox

Octopuss

Active Member
Jun 30, 2019
590
118
43
Czech republic
EPYC 4000 are desktop Ryzen CPUs with official ECC certifications. Same idle power (and same overall specs).
Just like Xeon E/E3 are desktop Core CPUs with ECC.
OH! I had no idea. So does that mean my current CPU likely idles at noticeably lower power consumption, judging by the earlier replies? I mean, a few watts wouldn't make an upgrade to PCIe 5.0 any less appealing...

There could have been a misunderstanding from the start then… None of what has been discussed in this thread applies to M.2 SATA SSDs.
It does not, because those are the SSDs currently in the server. I was just correcting someone's assumption about what I'm upgrading from.
 

etorix

Active Member
Sep 28, 2021
205
115
43
Yes. Your Xeon E-2136 is basically a Core i7-8700(F) with ECC support (nothing extraordinary, since in that generation even Core i3 chips support ECC), or an i5-8600 with hyperthreading and ECC (and a disabled iGPU, but the 8600F doesn't exist). It enjoys the low idle power you're looking for.
 
  • Like
Reactions: nexox

Octopuss

Active Member
Jun 30, 2019
590
118
43
Czech republic
Can anyone explain the difference in PCIe configuration between the two boards?
[Supermicro board] vs [ASRock Rack board]

The Supermicro is built on an inferior variant of the chipset (B650 vs B650E), yet it seems to have more PCIe 5.0 lanes than the ASRock, and according to this article it should barely have any 5.0 lanes at all. I don't quite get it.

Which one of the boards would you choose anyway, and why?

P.S. Speaking of lanes, when I look at the BIOS changelog for the ASRock (let me take this opportunity to shit on Supermicro and say how pissed I am at their decision to completely stop publishing changelogs), does this sound like an added option for bifurcation?
3. Update PCIE Link Speed/Width items
4. Add PCIe Control Option
 

nexox

Well-Known Member
May 3, 2023
1,870
918
113
Both boards have the same number of 5.0 lanes: x16 in regular slot(s), and the Supermicro has two 5.0 M.2 while the ASRock has one 5.0 x4 regular slot and one 5.0 M.2. That's 24 lanes for both. I don't know what article you're referencing, but the STH review says "PCIe connectivity is relatively robust" about the Supermicro board.
 

nilfisk_urd

Member
Feb 14, 2023
79
37
18
Both are using a B650E; the only difference between B650 and B650E is PCIe 5.0 on the x16 slot. Supermicro just forgot to put the E there.

Both boards have (nearly) the exact same number of PCIe lanes (the ASRock one has one extra PCIe x1 slot).
The Supermicro has two mechanical x16 slots. If only the first is populated, it runs at x16. If both slots are populated, each gets an x8 link. AFAIK there is no bifurcation support.
The ASRock B650D4U supports PCIe bifurcation of the x16 slot to x8/x8, x8/x4/x4, or x4/x4/x4/x4.

If you need two x8 slots, or workstation features like more USB ports and audio output, get the Supermicro board. If you want to connect as many NVMe SSDs as possible without using a PCIe switch, get the ASRock Rack board.
If you go with ASRock, get the ASRock Rack B650D4U3 (the B650D4U seems to have some kind of bug).
 
  • Like
Reactions: nexox

Octopuss

Active Member
Jun 30, 2019
590
118
43
Czech republic
Supermicro just forgot to put the E there
OK, that explains it.

If you go with ASRock, get the ASRock Rack B650D4U3 (the B650D4U seems to have some kind of bug).
What kind of bug?
The D4U3 has a somewhat different PCIe configuration though.

If you want to connect as many NVMe SSDs as possible without using a PCIe switch, get the ASRock Rack board.
I would still need some sort of adapter, right? Something like the Asus M.2 card mentioned in earlier replies, I presume? Or is that a switch card too? I must admit I haven't quite gotten the difference between the two concepts.

But I have also realized I might have to go with the Supermicro, because I need a physical x8 slot for the 10Gbit card. Both the Intel X710 and the Mellanox (which I will eventually buy again when I upgrade my PC) are PCIe 3.0 only, but still x8 physical, which sucks, because a 5.0 x4 slot certainly has all the bandwidth needed and then some.
 

nexox

Well-Known Member
May 3, 2023
1,870
918
113
Or is that a switch card too? I must admit I haven't quite gotten the difference between the two concepts.
A PCIe switch is like a network switch, but with different-width (lane count) ports, so it would have an x8 port on the slot and several x4 ports for M.2 or other connectors, and it forwards PCIe packets (TLPs) between the host and the connected devices. That means if only one downstream device is active it can get full bandwidth, but if several are using lots of bandwidth at once they'll be limited to the upstream port bandwidth.

Bifurcation means the CPU can configure a single slot into multiple ports, which can be (nearly) directly connected to devices with passive components (the "nearly" part is that the single reference clock needs a small chip to split it into multiple clock signals, one for each downstream device).

The downsides of a switch are that they're more expensive and use more power, plus a tiny bit of added latency. You can usually identify one by the ~30 mm square heatsink it requires, as opposed to a passive adapter, which usually doesn't have many components on the PCB at all (though some have a small heatsink for voltage regulators or the like).
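
To put rough numbers on that upstream limit, here's a back-of-the-envelope sketch, assuming a hypothetical switch card with a Gen3 x8 uplink and four Gen3 x4 M.2 slots (PCIe 3.0 moves about 0.985 GB/s per lane after 128b/130b encoding):

```python
# Hypothetical switch card: four Gen3 x4 SSDs behind a Gen3 x8 uplink.
# PCIe 3.0 runs 8 GT/s per lane; 128b/130b encoding leaves ~0.985 GB/s.
lane = 0.985
uplink = 8 * lane           # ~7.9 GB/s shared by all downstream devices
one_ssd = 4 * lane          # ~3.9 GB/s when a single SSD has it to itself
all_busy = uplink / 4       # ~2.0 GB/s each if all four hammer it at once
print(f"uplink {uplink:.1f} GB/s, idle case {one_ssd:.1f} GB/s/SSD, "
      f"busy case {all_busy:.1f} GB/s/SSD")
```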
 

nilfisk_urd

Member
Feb 14, 2023
79
37
18
If I recall correctly, the B650D4U just died after a few months.

Difference: the positions of the x1 and the x4 slots are swapped. Normally these boards use open-ended x4 slots, so you can put in x8 cards, but in the product photo of the D4U3 the slot looks like a closed one (which sucks).
 

nexox

Well-Known Member
May 3, 2023
1,870
918
113
You can always modify the slot to open it up, or find a straight x4-to-x8 passive riser, because NICs are low profile and there's space above them in a full-height slot. Most x8 NICs will run just fine at x4; I get the full 25G out of my ConnectX-4 Lx in an x4 slot (well, a TB dock with 4 lanes), but some Intel NICs apparently have issues and perform really terribly with fewer than 8 lanes in some situations, so you'd have to test or do some research about that.
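
Checking what a card actually negotiated is easy on Linux; something like this sketch (run as root; 02:00.0 is a placeholder address) compares the width the NIC supports against what it got:

```python
import subprocess

# Sketch (Linux, run as root): compare the link width a NIC negotiated
# (LnkSta) with what it supports (LnkCap). "02:00.0" is a placeholder;
# find your card's address with `lspci | grep -i ethernet`.
out = subprocess.run(
    ["lspci", "-s", "02:00.0", "-vv"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    line = line.strip()
    if line.startswith(("LnkCap:", "LnkSta:")):
        print(line)  # e.g. "LnkSta: Speed 8GT/s, Width x4" on an x8 card
```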
 

Octopuss

Active Member
Jun 30, 2019
590
118
43
Czech republic
A PCIe switch is like a network switch, but with different-width (lane count) ports, so it would have an x8 port on the slot and several x4 ports for M.2 or other connectors, and it forwards PCIe packets (TLPs) between the host and the connected devices. That means if only one downstream device is active it can get full bandwidth, but if several are using lots of bandwidth at once they'll be limited to the upstream port bandwidth.

Bifurcation means the CPU can configure a single slot into multiple ports, which can be (nearly) directly connected to devices with passive components (the "nearly" part is that the single reference clock needs a small chip to split it into multiple clock signals, one for each downstream device).

The downsides of a switch are that they're more expensive and use more power, plus a tiny bit of added latency. You can usually identify one by the ~30 mm square heatsink it requires, as opposed to a passive adapter, which usually doesn't have many components on the PCB at all (though some have a small heatsink for voltage regulators or the like).
OK, I understand the concept, but then what exactly does the Asus card do, and what is it good for?
Is it like the passive type you mentioned?

You can always modify the slot to open it up, or find a straight x4-to-x8 passive riser, because NICs are low profile and there's space above them in a full-height slot. Most x8 NICs will run just fine at x4; I get the full 25G out of my ConnectX-4 Lx in an x4 slot (well, a TB dock with 4 lanes), but some Intel NICs apparently have issues and perform really terribly with fewer than 8 lanes in some situations, so you'd have to test or do some research about that.
How does the riser work though? I mean, I couldn't have the card sitting in the slot then, because I'm sure I can't move it up or down on the bracket that holds it in the case?
 

nexox

Well-Known Member
May 3, 2023
1,870
918
113
The Asus card looks passive, so it would require bifurcation support on the CPU/motherboard, but Asus really just wants you to use them with Asus boards and the documentation isn't great. I'm sure there's more information out there to make it clearer whether they'll work for you, but the photos definitely don't show a PCIe switch chip.


How does the riser work though? I mean, I couldn't have the card sitting in the slot then, because I'm sure I can't move it up or down on the bracket that holds it in the case?
Ideally it would lift the card up just enough to use the low-profile bracket, but I can't seem to find any that size anymore; the common/cheap models I see on Amazon and eBay are too short and would require some custom adapter work.

For reference, they look like this: [photo of a passive riser]
 
  • Like
Reactions: Octopuss

Octopuss

Active Member
Jun 30, 2019
590
118
43
Czech republic
So with a passive card the system wouldn't see the individual SSDs plugged into it, I guess?

FFS, if I didn't want a 10Gbit connection to the NAS, my life would be extremely simple when it comes to upgrading the damn thing. I could even use a mini-ITX board and make the entire server really small. FFS.
 

nilfisk_urd

Member
Feb 14, 2023
79
37
18
How does the riser work though? I mean, I couldn't have the card sitting in the slot then, because I'm sure I can't move it up or down on the bracket that holds it in the case?
You either use the original low-profile bracket of the network card (it somewhat fits on a normal full-height slot) or 3D print a custom bracket.
 
  • Like
Reactions: Octopuss

nexox

Well-Known Member
May 3, 2023
1,870
918
113
So with a passive card the system wouldn't see the individual SSDs plugged into it, I guess?
With the passive card and bifurcation it will look to the OS like each SSD is in its own x4 slot. With a switch card, the OS can tell the devices are connected through a switch, but they're still individual PCIe devices that otherwise act like they're plugged into their own slots.
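
If you want to see that difference for yourself, dumping the PCIe tree makes it obvious. A Linux sketch: with a switch the SSDs sit one bridge level deeper in the tree, while bifurcated SSDs hang straight off the root port.

```python
import subprocess

# Sketch (Linux): dump the PCIe topology. Behind a switch card the SSDs
# appear under an extra bridge level; bifurcated SSDs sit directly on
# the CPU's root ports.
print(subprocess.run(["lspci", "-tv"], capture_output=True, text=True).stdout)

# List just the NVMe controllers (PCI class 0108) for a quick sanity check.
print(subprocess.run(["lspci", "-d", "::0108"],
                     capture_output=True, text=True).stdout)
```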


FFS, if I didn't want a 10Gbit connection to the NAS, my life would be extremely simple when it comes to upgrading the damn thing. I could even use a mini-ITX board and make the entire server really small. FFS.
You know, 100G parts are getting cheap these days, no reason to stop at 10G.
 

Octopuss

Active Member
Jun 30, 2019
590
118
43
Czech republic
With the passive card and bifurcation it will look to the OS like each SSD is in its own x4 slot. With a switch card, the OS can tell the devices are connected through a switch, but they're still individual PCIe devices that otherwise act like they're plugged into their own slots.
I meant with the passive card and no bifurcation support on the motherboard.

You know, 100G parts are getting cheap these days, no reason to stop at 10G.
I can't tell whether you're serious or sarcastic, but regardless, I have no use for that. I have a brand new and mostly great switch with a pair of 10Gbit SFP+ ports that are just about all I need. I just wish the damn 10Gbit cards didn't use x8 physical slots.
Of course, I could look for a board with an integrated 10Gbit NIC (I think I saw some, probably from ASRock), but I've heard those RJ45 transceivers can get really hot and use a lot of power (well, relatively speaking).
 

nexox

Well-Known Member
May 3, 2023
1,870
918
113
I meant with the passive card and no bifurcation support on the motherboard
That combination will lead to only the first M.2 slot working; the rest won't do anything.

I just wish the damn 10Gbit cards didn't use x8 physical slots.
Of course, I could look for a board with an integrated 10Gbit NIC (I think I saw some, probably from ASRock), but I've heard those RJ45 transceivers can get really hot and use a lot of power (well, relatively speaking).
For the price of a ConnectX-4 Lx you could even just dremel the back half of the slot connector off; if you mess it up you're only out $25 or so (do not inhale whatever comes off the board when you cut it).

10GBase-T transceivers use a bit of power, but the newer ones with 80m or 100m cable length ratings run cooler. I still tend to add an external fan to my switches that use more than one.