Building a server based on an AMD 5950X slammed into a Supermicro 2U chassis


OP_Reinfold

Member
Sep 8, 2023
So I've got a new project to do.
  • X570 motherboard
  • AMD 5950X
  • 128GB ECC RAM
  • 1 x 12TB Micron 9400 MAX NVMe
  • 3 x 7.68TB SATA PM8xx enterprise drives
  • 2 x 15.36TB PM1733 NVMe drives
  • 2 x 6.4TB CM6-V NVMe drives
  • Broadcom 9560 RAID card
  • 100G dual-port Mellanox CX6
  • Nvidia 4000 Ada GPU
  • Supermicro 2U chassis

Purpose: Going to serve a number of jobs via virtual guests, and also provide a virtual 'coding' workstation.

I'll need to fabricate a bracket for the GPU and also a couple of brackets for the PCIe switch cards.

For the motherboard, it's probably best to go with the Asus Pro WS X570-ACE because of its well-known expansion capability.

I'll probably go with an LFF Supermicro 2U chassis; since the build won't have more than 8 x 2.5" drives, I'll just mount them in 3.5"-to-2.5" adapters.

Won't be using a backplane, going commando, cables all the way.

Pictures to follow.
 
  • Like
Reactions: jode and pimposh

mattventura

Well-Known Member
Nov 9, 2022
A few questions/comments:
1. Why hardware RAID? It will waste a lot of performance potential.
2. Why a mix of drives? Do you already have the drives?
3. Why not a backplane? Where do you plan to put the drives otherwise?
4. Where are you planning on getting that many PCIe lanes on a consumer platform? It will bottleneck very badly. One of the x8 slots on the X570-ACE has to go through the chipset's x4 link back to the CPU, plus it's sharing that x4 with one of the M.2 slots, the U.2 port, the SATA ports, and more. AM4 gives you 24 lanes total (including the 4 to the chipset).

To be honest, your use case easily justifies a workstation/server platform. You're looking to use somewhere around 52 PCIe lanes.
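For anyone wanting to sanity-check that figure, here's a rough tally; the per-device link widths are assumptions based on the parts list, not numbers from this thread:

```python
# Rough PCIe lane tally for the proposed parts list (assumed link widths).
devices = {
    "Nvidia 4000 Ada GPU": 16,
    "Broadcom 9560 RAID card": 8,
    "Mellanox CX6 100G NIC": 8,
    "Micron 9400 MAX NVMe": 4,
    "2 x PM1733 NVMe (x4 each)": 8,
    "2 x CM6-V NVMe (x4 each)": 8,
}
wanted = sum(devices.values())
print(f"Lanes the devices would like: {wanted}")   # 52
print("AM4 provides 24 lanes total: 20 usable + 4 to the chipset")
```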
 

ca3y6

Active Member
Apr 3, 2021
Agree with mattventura. U.2 drive cables are a pain to work with; they are very thick and inflexible, and your connection to the drive will come loose all the time. And if you're going for a server-format chassis, there's really no good reason not to go for an actual server chassis, even used. Also, I'm not aware of any PCIe 4.0 switch cards; I've only seen PCIe 3.0.
 
  • Like
Reactions: OP_Reinfold

OP_Reinfold

Member
Sep 8, 2023
PCIe 4.0 switch cards are available. I get them from a friend at a HS supplier, but I believe there's also a guy on AliExpress who sells them if you're keen.

I've been using cables effortlessly in much more serious Epyc server builds with no problems at all. I prefer cables because then, on upgrade, I'm never tied to a backplane's configuration and limitations. This isn't a production server, so the flexibility gained by not being locked into a particular server's limits is much appreciated in the long term. Plus I'm not planning on hot-swapping drives, so once installed they're pretty much there until end of service life or the next upgrade, whichever comes first.

Regarding the x4 chipset uplink: yes, I'm aware of that. I won't be accessing all the drives at full sequential bandwidth at the same time, so there's no need for dedicated lanes to every drive. I'm perfectly happy with PCIe switches; the latency penalty is minimal and virtually unnoticeable even to benchmarking queens.

The mix of drive formats is purely down to energy saving as well as isolation. The 7.68TB SATA drives idle at under a watt each and barely crack 5W in use, whereas the U.2/U.3 drives idle at 5-8W and ramp up to anywhere between 15 and 25W. The SATA drives are more for warm storage; if 15TB/30TB SATA SSDs were available I'd enthusiastically buy them, but the market doesn't serve edge cases lol
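For a rough sense of the energy angle (back-of-envelope only; the wattages are the ballpark figures above, the drive count and hours are assumptions):

```python
# Back-of-envelope idle-power comparison behind the SATA-for-warm-storage choice.
# ~1 W is the quoted SATA idle, 6 W is the middle of the quoted 5-8 W U.2 idle range.
sata_idle_w, u2_idle_w = 1.0, 6.0
drives = 3
hours_per_year = 24 * 365
delta_kwh = (u2_idle_w - sata_idle_w) * drives * hours_per_year / 1000
print(f"Extra idle energy if those 3 drives were U.2: ~{delta_kwh:.0f} kWh/year")  # ~131 kWh
```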

Regarding the RAID card: I love them and have always used them. I'm just a fan of taking control away from the OS and handing it to hardware accelerators.

The build is spec'd to limit energy waste and to serve the jobs it will run at any given time, but also to be flexible enough to butcher into any configuration without being limited by things outside of my control. I'm a control freak, born and bred that way.

The thing here is that I like to isolate things. I've got roughly 400GB of free RAM left on one of my Epyc servers along with three spare x16 PCIe Gen 4 slots, and to be fair I could just have used that server for what I'm doing here. But isolation is something I've fallen in love with for its simplicity and easier ongoing maintenance; after a while in on-premise rack management you tend to think that way more and more, i.e. not having to bring down countless other services just to faff around with some hardware.
 
Last edited:

mattventura

Well-Known Member
Nov 9, 2022
The "hardware accelerator" in this case is more of a "hardware decelerator". It will effectively present them to the OS as a single SCSI device, rather than native NVMe devices, so all of the advancements that come out of the NVMe hardware and software stack are effectively gone. Especially if you plan to run Linux, software options are far better. I agree that if you use one of those, you *certainly* won't be maxing out the sequential speeds.

If you buy an 826, you aren't really limited by backplane upgrades, since they've been supporting that form factor for ages and likely will in the future. You can get anywhere from a SAS1 backplane to a gen5 NVMe + 24g SAS backplane. NVMe backplanes also can sometimes go a generation above what they were intended to handle, as long as they aren't one of the few switch-based backplanes.
 
  • Like
Reactions: nexox

OP_Reinfold

Member
Sep 8, 2023
The "hardware accelerator" in this case is more of a "hardware decelerator". It will effectively present them to the OS as a single SCSI device, rather than native NVMe devices, so all of the advancements that come out of the NVMe hardware and software stack are effectively gone. Especially if you plan to run Linux, software options are far better. I agree that if you use one of those, you *certainly* won't be maxing out the sequential speeds.

If you buy an 826, you aren't really limited by backplane upgrades, since they've been supporting that form factor for ages and likely will in the future. You can get anywhere from a SAS1 backplane to a gen5 NVMe + 24g SAS backplane. NVMe backplanes also can sometimes go a generation above what they were intended to handle, as long as they aren't one of the few switch-based backplanes.
I think this is kind of misleading. I've done back-to-back tests, and I notice nothing but greatness.

Let me explain.

If you're talking raw throughput and latency of an NVMe drive attached directly to CPU lanes versus going through anything in between, then yes, you're never going to get exactly the same numbers. But the variances are negligible, and the overall gains are impressive...

Some may ask "what gains?"... so here is the back-to-back testing. (It pays dividends when people actually test things themselves instead of following the internet's "hardware RAID is dead" bandwagon and all its arguments, which I've been frowning at for years.)

When you attach an SSD directly to CPU lanes, you're also taxing a core or two. Take the drives away from the CPU and put them behind a well-engineered accelerator/RAID card, and now you're offloading that hit on the cores to the discrete card.

So do a 1TB copy between two Gen 4 drives natively connected to CPU lanes, and log the CPU load in the background via a script (a rough sketch of such a script is below).

Now do the same with those NVMe drives on the RAID card: watch your CPU listening to crickets, while time-wise you only lose roughly a couple of percent.
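Something like this is all the logging script needs to be. A minimal sketch: the paths are placeholders, and it assumes the psutil package is installed.

```python
# Minimal sketch of the logging-while-copying test described above.
import shutil, threading, time
import psutil

SRC = "/mnt/nvme_a/testfile"   # hypothetical large test file
DST = "/mnt/nvme_b/testfile"

samples = []
done = threading.Event()

def sample_cpu():
    # Record whole-system CPU utilisation once per second until the copy finishes.
    while not done.is_set():
        samples.append(psutil.cpu_percent(interval=1))

t = threading.Thread(target=sample_cpu)
t.start()
start = time.time()
shutil.copyfile(SRC, DST)      # the copy under test
elapsed = time.time() - start
done.set()
t.join()

avg = sum(samples) / max(len(samples), 1)
print(f"copy took {elapsed:.1f}s, average CPU {avg:.1f}%")
```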

Point being, I'm building an energy-efficient virtual server where I want to minimise CPU tax. I'd rather the CPU was listening to crickets and servicing my 'workstation' guest with solid IPC than be forced into core pinning and other nightmare configuration just to get a crackle/lag-free experience.


Regarding the 826: it might not end up being that chassis, but regardless, I'm honestly anti-backplane for custom builds where I don't really know where my expansion needs will head.

Because even with manufacturer options, when you go to find a backplane that meets your needs you discover it's only available via xyz with a hefty scalper tax, and even then most of the time I never find what I need. If I did want to add more NVMe drives via switch chaining, or throw in an extra 8 SATA SSDs via a SATA expander chip off the back of one of the switches, a server manufacturer isn't going to suddenly come to my rescue and drop me a perfectly matched backplane for free, if one is even available at all...

Cabling is headache-free when you keep changing the foundations. I've had servers running for years now with cables, and the number of times I've switched things around with nothing but smiles, because there were no backplane limits, is a testament to that.

Now, with a noticeable industry move towards MCIO direct cabling of Gen 5 drives due to electrical tolerances, the joys of customisation are becoming a trend again.

PS. I'd like to add that I really only purchase Broadcom and Highpoint cables for direct connections to U.3/U.2 drives, and anyone who has used these official branded cables will tell you that they fit super-tight; even a little pull doesn't nudge them easily. I avoid all the mass-produced no-name stuff. If people's experience is based on mass-market junk, fair enough, but people really should stop drawing broad conclusions from such experiences. That's another pet hate of mine I've been watching silently on internet forums for years; sure, budgets are budgets, but my point stands. Plus I'm firmly anti WHEA errors (PCIe AER, which a hell of a lot of boards have disabled by default without even giving end users the option to enable it, so most people don't even realise their computer is doing constant retries on PCIe transfers; talk about being blind) lol...
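For anyone on Linux who wants to see whether their platform is quietly retrying: when AER is enabled, the kernel exposes per-device error counters in sysfs. A rough sketch (the aer_dev_* files only exist where the platform and firmware actually expose AER):

```python
# Read the corrected-error counters Linux exposes for PCIe AER in sysfs.
import glob, os

for path in glob.glob("/sys/bus/pci/devices/*/aer_dev_correctable"):
    dev = os.path.basename(os.path.dirname(path))
    with open(path) as f:
        # The file lists error types with counts; TOTAL_ERR_COR sums them.
        totals = [line for line in f if line.startswith("TOTAL_ERR_COR")]
    if totals:
        count = int(totals[0].split()[-1])
        if count:
            print(f"{dev}: {count} corrected PCIe errors (link retries etc.)")
```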

On another note, I've got 1-metre cables driving some Gen 4 drives, and yes, those cables cost a fortune, but they work beautifully. A friend has been lucky enough to be playing with some Gen 5 MCIO cabling, and those cables cost nearly as much as some NVMe drives! lol. Until the industry goes full optical, silver/copper cabling is only going to get pricier as we head towards Gen 6 and beyond.
 
Last edited:

OP_Reinfold

Member
Sep 8, 2023
Talking of storage accelerator cards, Kioxia has had a few in design and development over the past few years, though I believe they were destined for custom hyperscale integrations. If only they were in standard PCIe format, I would love to get my hands on their managed NVMe interposer accelerator cards, though I'm not sure what they actually have out there; a lot of that stuff is now a 'dark unicorn', it exists, but regular folks don't know anything about it beyond what Patrick and co have released via news articles. Oh, and from my reading, Nvidia even has its own storage acceleration going on, and again, it's only for those ultra-secretive AI centres... meanwhile the rest of us are left with direct CPU-attached storage, *grunt grunt*
 

OP_Reinfold

Member
Sep 8, 2023
Meant to upload this earlier: I managed to nab an X670E, will add a 9950X to it, and the 9560-16i card also arrived.

I think I'll still test out an X570 with a 5950X (already ordered, just taking a little longer to arrive) because it more than meets my single-thread IPC needs. What put me off the 9950X and X670E combo is that it burns a whole lot more watts at idle, which, with the X670E having two chipsets, 10G, etc., is totally understandable.

So I'll do a comparison, and if we're talking a whole lot of waste at idle, then I'll stick to the X570/5950X combo and just sell the X670E/9950X combo off. The only downside is that the third slot on the X670E is only x4 electrical, but I'll do some testing and see if I can fudge it into functioning the way I need.

20250425_210319.jpg
 

CyklonDX

Well-Known Member
Nov 8, 2022
You have something to say, say it... that's what a forum is for... :)

Last time people didn't speak up the world got injected with devil knows what lol



  • X570 motherboard
  • AMD 5950X
  • 128GB ECC RAM
  • 1 x 12TB Micron 9400 MAX NVMe
  • 3 x 7.68TB SATA PM8xx enterprise drives
  • 2 x 15.36TB PM1733 NVMe drives
  • 2 x 6.4TB CM6-V NVMe drives
  • Broadcom 9560 RAID card
  • 100G dual-port Mellanox CX6
  • Nvidia 4000 Ada GPU
  • Supermicro 2U chassis
You do know you are adding latency for every PCIe switch, and that you will be dealing with sun-style heat?
Additionally, you will have a hard time with dual 100G, and those U.2 drives with the RAID card and NV card over a single port.
 
  • Like
Reactions: nexox and pimposh

CyklonDX

Well-Known Member
Nov 8, 2022
Meant to upload this earlier: I managed to nab an X670E, will add a 9950X to it, and the 9560-16i card also arrived.
It's a decent mobo, but not great, since in reality it only offers you two PCIe slots.

I prefer the MSI X670E Ace; it's a better pony with three PCIe slots from the CPU, at the classic x8/x8/x4 that was typical on a 120 USD board a few years ago.

Nvidia even has its own storage acceleration going on, and again, it's only for those ultra-secretive AI centres
 

OP_Reinfold

Member
Sep 8, 2023
You do know you are adding latency for every PCIe switch, and that you will be dealing with sun-style heat?
Additionally, you will have a hard time with dual 100G, and those U.2 drives with the RAID card and NV card over a single port.
Yes, that of course is obvious, but we're talking nanoseconds: maximums typically around 150ns for the older chips and under a hundred for newer models. All my testing with other builds and countless switches over the years has shown it to be totally negligible; I see no performance drops in any of my workloads across both Broadcom and ASMedia bridges. If they were as bad as presumed, they'd have died out years ago, but in real workloads the impact is transparent and the advantages are fruity indeed. The PLX-based ones heat up a lot and kill the watts; the ASMedia ones are the cooler, more efficient ones, and even a low-noise trickle fan can keep them cool ;)
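To put the proportions in perspective (illustrative only; the 150ns figure is the per-hop maximum mentioned above, while the ~80µs read latency is an assumed ballpark for an enterprise NVMe SSD, not a measurement from this build):

```python
# Illustrative scale check: added switch latency vs a typical flash read.
switch_hop_ns = 150      # pessimistic per-hop latency for an older switch
nvme_read_us = 80.0      # assumed ballpark 4K random-read latency
added_us = 2 * switch_hop_ns / 1000   # request and completion each cross the switch
print(f"~{added_us:.2f} us added per read, i.e. ~{added_us / nvme_read_us * 100:.2f}%")
```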

The Gen 4 100G card is going into an x8 Gen 4 CPU slot. It is dual-ported, but I will never be using the maximum bandwidth of both ports at the same time; I'm probably looking at roughly 60G of usage on both ports under peak workloads. Indeed, one does not scope a build around using all the maximums at once; who actually does that outside of elaborate AI research labs? ;)
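Rough ceiling check on that slot (textbook PCIe 4.0 rates; the 60G-per-port load is my own estimate from above, and the overhead factor is a rough assumption):

```python
# Rough ceiling of a PCIe 4.0 x8 slot vs the NIC traffic described above.
lanes = 8
gt_per_lane = 16                 # PCIe 4.0: 16 GT/s per lane
encoding = 128 / 130             # 128b/130b line coding
raw_gbps = lanes * gt_per_lane * encoding    # ~126 Gb/s before protocol overhead
usable_gbps = raw_gbps * 0.9                 # very rough allowance for TLP/DLLP overhead
print(f"x8 Gen4 slot: ~{raw_gbps:.0f} Gb/s raw, ~{usable_gbps:.0f} Gb/s usable")
print(f"vs {2 * 60} Gb/s if both ports really did hit 60G at once")
```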

The Nvidia card will run off the switch hanging off the CPU's M.2 slot, which is more than ample for the 'workstation' needs; even gaming at x4 you get over 90% of the performance, so it's a non-issue. I've seen reviewers over the years throw out nonsense numbers, oh 80% or 75% or 60%, but I digress, because benchmarking is one thing; actually using the thing for real workloads throws up completely different numbers.

I wouldn't be doing this if it didn't work. I've done some crazy builds, and every time they work great. One just needs to think outside the box, look at the given workloads, and see how to fit them into a build.

LOL, I actually shoved a 3090 FE onto an ASMedia x4 Gen 3 PCIe-switch card, and the damn card played Cyberpunk with only a 5fps drop! The rest is history. :D

My principle, and I hope it becomes infectious: don't presume, don't assume, don't trust reviewers or the crowd or even a forum. Try it yourself; you'll be surprised at the outcomes once you start seeing the results first-hand. Half the enjoyment of this is proving that there is a lot of closed-minded mongering and terrible information floating around out there.

Real workloads nearly always tell a completely different story :)
 
Last edited:

OP_Reinfold

Member
Sep 8, 2023
Advice for others planning on using PCIe switches: if you're looking at 100G+ RDMA across fancy sub-microsecond Ethernet switches, don't put the NIC on a PCIe switch; that's the one thing I will always connect directly to CPU lanes. Some cards also don't behave properly with SR-IOV when sitting behind certain types of bridges. Like all things, YMMV according to your hardware config, but in my tests I've found some problems are down to the kernel and drivers more than the hardware. In the kernel/driver devs' defence, high-bandwidth NICs generally live in servers with plenty of PCIe lanes, so this edge case isn't high on the list of considerations, let alone something to allocate precious time to implementing.

However, if you're talking sub-25G with no RDMA or SR-IOV, all options are viable; everything works like clockwork with no discernible lag whatsoever. In one build I daisy-chained a 10G port across three switches and it worked beautifully.
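If you want to check what a given NIC exposes for SR-IOV before committing to a topology, the kernel shows it in sysfs. A quick sketch ("eth0" is a placeholder interface name, and the sriov_* files only exist when the driver/firmware expose the capability):

```python
# Check what a NIC reports for SR-IOV via sysfs.
from pathlib import Path

iface = "eth0"
dev = Path(f"/sys/class/net/{iface}/device")
total = dev / "sriov_totalvfs"

if total.exists():
    enabled = (dev / "sriov_numvfs").read_text().strip()
    print(f"{iface}: supports up to {total.read_text().strip()} VFs, {enabled} enabled")
else:
    print(f"{iface}: no SR-IOV capability exposed by the driver")
```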
 
Last edited:

CyklonDX

Well-Known Member
Nov 8, 2022
Are you able to show a spec sheet/manual for that PCIe switch?
*Typical PCIe Gen 4 switches have around 12-25ns per I/O.

I run from MSI, and I mean ruuunnnnnnnnnnn..... :cool:
Different needs; I run from Asus, ever since their quality started to drop. Never looked back, and very happy with that decision. Currently it's overpriced grade-B hardware.
 
Last edited:

OP_Reinfold

Member
Sep 8, 2023
I agree, problems across the spectrum of brands.

No different, I guess, to the days of spinners: some swear by Western Digital, others by Seagate; personally I was stung by both across the decades.

The most reliable spinners I ever had were the old black Fujitsu 2GB IDE drives lol

The good thing with Asus is ongoing, timely firmware support; MSI has a bad record of 'forgetting' its prior generations way too quickly.

No spec sheets; I managed to snag a few. The only problem I've found with this particular bespoke model is that you can't do ASPM with them; they fill the log up with countless errors. I was told that the IP owner also designed the same circuitry in the AM-series chipsets: high efficiency, low power. The ones a guy was selling on AliExpress a couple of months back were a different variety which apparently did support ASPM, so I'm guessing it comes down to the implementation along with the firmware. My parts were destined for hyperscale workloads, so I'm guessing they were never intended to sit idle.

The problem is that there's no demand from the general consumer audience, so retail just doesn't exist; everything is integrated at OEM level, and we would have to find a factory in Asia willing to do a run for general retail. If I could get in touch with that guy on Ali (vanished now, it seems; I couldn't find his shop), there might be discussions to be had about small production runs, because countless colleagues want to pinch what I have. Oh, and just in case anyone asks: mine were priced at 1.5k each, and I got them for a coffee and a fag break. The Ali ones were around 400, but mine do triplex signalling with full x2apic compatibility; I have no idea what that means, so I won't profess to know something I don't. To be honest, I never bothered researching it; that's the only blurb I managed to squeeze out of my friend.
 
Last edited:

CyklonDX

Well-Known Member
Nov 8, 2022
The good thing with Asus is ongoing, timely firmware support; MSI has a bad record of 'forgetting' its prior generations way too quickly.
I'd disagree, especially with AMD.

no demand
There is demand; the price is just not right. If the whole board with the chip costs them some 50 USD to make, they want to charge 400-800 USD. No one is into that anymore. At that price I could make it myself in the US.

At the very least, what's the chip name? You mentioned 'AM'.
 

OP_Reinfold

Member
Sep 8, 2023
Yes, no one is into that anymore, but the system will never change now; there are far greater powers at work, and everything we see and hear through the media is misdirection. Corporations have changed the way they operate and there's no going back. Grab what you can while you can; 10 years from now the show will be completely different, and you'll be glad you got what you have, because you won't be getting anything else ;)
 

OP_Reinfold

Member
Sep 8, 2023
I'd disagree, especially with AMD.
Naaa, I disagree. Gigabyte and MSI are the worst at firmware upgrades, fixes, and especially ongoing support after end-of-life... Name me an Asus mid-to-pro board model that got left behind too early?