> There are hardly any M.2 SSDs with PLP anyway (I can't google up any and those that exist are probably expensive AF) so it's pointless to obsess over that feature.

There are several in the 2TB range in M.2 22110, but they don't support ASPM, so you take a power consumption hit on the CPU.
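For reference, if you want to see what ASPM state a box is actually running, something along these lines works (a minimal sketch, assuming Linux: the global policy file under /sys/module/pcie_aspm is standard sysfs, while the per-device link/ attributes only show up on newer kernels):

```python
#!/usr/bin/env python3
# Sketch: report the kernel's global ASPM policy and any per-device ASPM toggles.
# Assumes Linux with sysfs mounted at /sys; some files may need root to read.
from pathlib import Path

# Global policy chosen by the kernel: default / performance / powersave / powersupersave
policy = Path("/sys/module/pcie_aspm/parameters/policy")
if policy.exists():
    print("ASPM policy:", policy.read_text().strip())

# Per-device ASPM state files (newer kernels expose them under .../link/)
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    link_dir = dev / "link"
    if link_dir.is_dir():
        states = {f.name: f.read_text().strip() for f in sorted(link_dir.iterdir()) if f.is_file()}
        print(dev.name, states)
```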
> I guess the EPYC 4004, being a server CPU, would idle pretty high as well?

EPYC 4000 are desktop Ryzen CPUs with official ECC certifications. Same idle power (and same overall specs).
> Not sure if there isn't a misunderstanding, I have 4x SATA SSDs (Samsung PM883) connected to a PCIe 3.0 HBA.

There could have been a misunderstanding from the start then… None of what has been discussed in this thread applies to M.2 SATA SSDs.
> the noted lack of any meaningful BIOS settings is kind of off-putting

What's lacking? Server boards are meant to run stable. Stock settings, no overclock.
> EPYC 4000 are desktop Ryzen CPUs with official ECC certifications. Same idle power (and same overall specs).

OH! I had no idea. So does that mean my current CPU is likely to idle at noticeably lower power consumption, judging by the earlier replies? I mean, a few watts wouldn't make an upgrade to PCIe 5.0 any less appealing...
Just like Xeon E/E3 are desktop Core CPUs with ECC.
> There could have been a misunderstanding from the start then… None of what has been discussed in this thread applies to M.2 SATA SSDs.

It does not, because those are the SSDs that are currently in the server. I was just correcting someone's assumption about what I'm upgrading from.
> Supermicro just forgot to put the E there

Ok, that explains it.
> If you go with ASRock, get the ASRock Rack B650D4U3 (the B650D4U seems to have some kind of bug).

What kind of bug?
> If you want to connect as many NVMe SSDs as possible without using a PCIe switch, get the ASRock Rack board.

I would still need some sort of adapter, right? I presume like the Asus M.2 card mentioned in earlier replies? Or is that a switch card too? I must admit I haven't quite grasped the difference between the two concepts.
> Or is that a switch card too? I must admit I haven't quite grasped the difference between the two concepts.

A PCIe switch is like a network switch, but with ports of different widths (lane counts), so it would have an x8 port on the slot side and several x4 ports for M.2 or other connectors, and it forwards PCIe packets (Transaction Layer Packets) between the host and the connected devices. That means if only one downstream device is active it can get the full bandwidth, but if several are using lots of bandwidth at once they'll be limited to the upstream port's bandwidth.
> A PCIe switch is like a network switch, but with ports of different widths (lane counts), so it would have an x8 port on the slot side and several x4 ports for M.2 or other connectors, and it forwards PCIe packets (Transaction Layer Packets) between the host and the connected devices. That means if only one downstream device is active it can get the full bandwidth, but if several are using lots of bandwidth at once they'll be limited to the upstream port's bandwidth.

Ok, I understand the concept, but then what exactly does the Asus card do, and what is it good for?
The Asus card is the passive kind: it relies on bifurcation. Bifurcation means the CPU can configure a single slot into multiple ports, which can then be (nearly) directly connected to devices using only passive components (the "nearly" is that the single reference clock needs a small buffer chip to split it into one clock signal per downstream device).
The downside of a switch is that it's more expensive and uses more power, plus a tiny bit of added latency. You can usually identify one by the roughly 30 mm square heatsink it requires, as opposed to a passive adapter, which usually doesn't have many components on the PCB at all (though some have a small heatsink for voltage regulators or similar).
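The difference also shows up from the OS side. As a rough illustration (a sketch, assuming Linux and that the controllers sit in the usual PCI domain 0000): with a passive bifurcation card each NVMe controller hangs directly off a CPU root port, while a switch card adds an upstream and a downstream bridge hop to the path.

```python
#!/usr/bin/env python3
# Sketch: print the PCIe path of every NVMe controller. A bifurcated/passive setup
# shows root port -> SSD; a switch adds extra bridge hops in between.
import os
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").iterdir()):
    # /sys/class/nvme/nvmeX/device points at the controller's PCI function
    pci_path = os.path.realpath(ctrl / "device")
    # The resolved sysfs path lists every bridge between the root complex and the SSD
    hops = [p for p in pci_path.split("/") if p.startswith("0000:")]
    print(ctrl.name, "->", " -> ".join(hops))
```

`lspci -tv` shows the same tree in one command.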
> You can always modify the slot to open it up or find a straight x4-to-x8 passive riser, because NICs are low profile and there's space in a full-height slot. Most x8 NICs will run at x4 just fine; I get the full 25G out of my ConnectX-4 Lx in an x4 slot (well, a TB dock with 4 lanes), but some Intel NICs apparently have issues and perform really terribly with fewer than 8 lanes in some situations, so you'd have to test or do some research on that.

How does the riser work, though? I mean, I couldn't have the card sitting in the slot then, because I'm sure I can't move it up or down on the bracket that holds it in a case?
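If the card does end up running on fewer lanes, the negotiated width is easy to confirm (a sketch, assuming Linux; the sysfs attributes are standard, but the interface name `enp1s0` is just a placeholder):

```python
#!/usr/bin/env python3
# Sketch: show negotiated vs. maximum PCIe link speed/width for a network interface.
import os
from pathlib import Path

iface = "enp1s0"  # placeholder: substitute your NIC's interface name
pci_dev = Path(os.path.realpath(f"/sys/class/net/{iface}/device"))

for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    f = pci_dev / attr
    if f.exists():
        print(attr, "=", f.read_text().strip())
```

The same information appears on the LnkSta line of `lspci -vv`.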
> How does the riser work, though? I mean, I couldn't have the card sitting in the slot then, because I'm sure I can't move it up or down on the bracket that holds it in a case?

Ideally it would lift the card up just enough to use the low-profile bracket, but I can't seem to find any that size any more; the common/cheap models I see on Amazon and eBay are too short and would require some custom adapter work.

> How does the riser work, though? I mean, I couldn't have the card sitting in the slot then, because I'm sure I can't move it up or down on the bracket that holds it in a case?

You either use the original low-profile bracket of the network card (it somewhat fits the normal full-height slot) or 3D print a custom bracket.
> So with a passive card the system wouldn't see the individual SSDs plugged into it, I guess?

With the passive card and bifurcation it will look to the OS like each SSD is in its own x4 slot; with a switch card the OS can tell the devices are connected through a switch, but they're still individual PCIe devices that otherwise act like they're plugged into their own slots.
> FFS, if I didn't want a 10Gbit connection to the NAS, my life would be extremely simple when it comes to upgrading the damn thing. I could even use a mini-ITX board and make the entire server really small. FFS.

You know, 100G parts are getting cheap these days, no reason to stop at 10G.
> With the passive card and bifurcation it will look to the OS like each SSD is in its own x4 slot; with a switch card the OS can tell the devices are connected through a switch, but they're still individual PCIe devices that otherwise act like they're plugged into their own slots.

I meant with the passive card and no bifurcation support on the motherboard.
> You know, 100G parts are getting cheap these days, no reason to stop at 10G.

I can't tell whether you're serious or sarcastic, but regardless, I have no use for that. I have a brand-new and mostly great switch with a pair of 10Gbit SFP+ slots that are just about all I need. I just wish the damn 10Gbit cards didn't use x8 physical slots.
> I meant with the passive card and no bifurcation support on the motherboard.

That combination will lead to only the first M.2 slot working; the rest won't do anything.
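That's easy to sanity-check after booting: with the slot bifurcated (e.g. x4x4x4x4 in the BIOS) all four controllers should enumerate, without it only the first one shows up. A minimal sketch, assuming Linux:

```python
#!/usr/bin/env python3
# Sketch: count the NVMe controllers the kernel actually enumerated.
from pathlib import Path

nvme_class = Path("/sys/class/nvme")
ctrls = sorted(p.name for p in nvme_class.iterdir()) if nvme_class.is_dir() else []
print(f"{len(ctrls)} NVMe controller(s) visible: {', '.join(ctrls) or 'none'}")
```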
> I just wish the damn 10Gbit cards didn't use x8 physical slots.

For the price of a ConnectX-4 Lx you could even just dremel the back half of the slot connector off; if you mess it up, you're only out $25 or so (do not inhale whatever comes off the board when you cut it).
Of course, I could look for a board that has a 10Gbit NIC integrated (I think I saw some, probably from ASRock), but I heard those RJ45 transceivers can get really hot and use a lot of power (well, relatively).