Stable AM5 motherboard suggestions for server purposes

kriterio

New Member
Apr 14, 2024
I am planning to use 128GB of ECC RAM and an AMD 7950X3D or 7900. It can be ATX as well. I will put the server in a datacenter.
 

name stolen

Member
Feb 20, 2018
OK, I was gonna bite, since I do some home serving with X570 and B550. But my experience is with AM4, one chipset generation behind. It was perhaps easier in the X/B-500 generation because the chipsets were a little more straightforward. So here's my experience, and if you want to try to transpose it up a generation, go for it.

X570 - Asus ROG Crosshair VIII (Wi-Fi) - there's a dedicated CPU-connected x4 NVMe slot, plus 16 more CPU-connected PCIe Gen4 lanes. I have an x8 GPU and 3 x 2TB Gen4 SSDs occupying those 20 lanes. Both physical x16 slots can be bifurcated. Hanging off the chipset, which is connected to the CPU at Gen4 x4, are a 4TB QLC SSD and an Optane 905P. Neither is a bandwidth monster, so that works fine. The Realtek 2.5 GbE NIC is solid. This system is storage-heavy, obviously, even without using any of the 8 SATA ports. It's 10TB of Gen4 SSD and almost a TB of Optane, with LOTS of concurrent bandwidth. The board is solid as a rock - you set the BIOS, save and reboot, and things stay that way. Still, I don't know that I would want to ship this off to a datacenter without IPMI.

B550 - Gigabyte B550M Aorus Elite v1.2 - way cheaper than the above, but surprisingly rock solid. This board also bifurcates, but it only has one CPU-connected x16 slot, which can go to x8/x4/x4, so along with the onboard CPU-connected NVMe x4, you can potentially have 16 lanes of CPU-connectedness, all of which I'm using for SSDs and Optane, again. There are also 6 lanes of Gen3 PCIe behind the Gen3 x4 link to the B550 chipset. This board has been running Proxmox for me with 4x16GB = 64GB of Micron E-die 3200 OC'ed to 3466 (yeah, yeah, like I said, rock solid). I've been using a 5600G APU so that I have some display out, but this board has been SO surprisingly set-and-forget that I'm tempted to swap the APU for a CPU and install 64GB of Crucial 3200 ECC UDIMMs. The APU (unless it's the PRO model) can't make use of ECC, while the CPU can. I think I would consider setting this up somewhere without easy access and without IPMI, as long as someone could go hit the reset button if necessary.

Since the newer gen has a basic GPU on the CPU die, some of the fretting about CPU/APU +/- ECC can be eliminated, in exchange for X670 being a "weird" chipset, to put it nicely. I haven't mentioned them yet, but MSI has probably become my favorite UEFI over the past few months. I just got a cheap B760 board from them (sorta the functional equivalent of an AM5 board) with a super-discounted Alder Lake chip to use up some DDR4 lying around, with the intention for it to be an efficient but punchy home server or NAS foundation. I'm only a week into testing it, but it is reasonably flexible and rock solid. No bifurcation. This system is leaning towards the kind of stability one would want without IPMI, so it's probably totally fine if it stays at home, even in the garage or attic.

MSI also seems less likely to fsck the consumer at the lower price points, leaving in the 2.5 GbE NIC, Wi-Fi 6E, and the decent audio and USB that Gigabyte and Asus are ripping out at around $150.

And, there's always ASRock Rack, which lists 6 variations of board under AM5 server offerings.
 

kriterio

New Member
Apr 14, 2024
Thank you for sharing your experiences. I've tried to put all the parts together. It would be great to get your opinion:
 

name stolen

Member
Feb 20, 2018
True first thoughts:

Wow, that's a lot of power supply for the parts listed. 650-850W would still leave room for like 10 SSDs and 10 HDDs, a mid-grade GPU, and double the RAM. With lots of headroom.
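If you want to sanity-check that yourself, here's a rough back-of-envelope sketch in Python - every wattage in it is my own ballpark assumption, not a measured number for your exact parts:

```python
# Rough back-of-envelope PSU sizing. Every wattage below is a ballpark
# assumption, not a measured figure for any specific part.
parts_watts = {
    "65W-class CPU (allowing for boost)": 90,
    "Motherboard + chipset": 30,
    "4x DDR5 UDIMMs": 20,
    "2x NVMe SSDs": 16,
    "Fans": 10,
    # add rows for HDDs (~10 W each) or a GPU if the build grows
}

sustained = sum(parts_watts.values())
headroom = 1.5  # keep the PSU well under its rating at steady state

print(f"Estimated sustained load: ~{sustained} W")
print(f"PSU size with headroom:   ~{sustained * headroom:.0f} W")
```

Even with generous numbers that lands around 250 W, so a quality 650-750 W unit is already loafing.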

I like the 65W CPU choice, even if that number is just to help you pick a cooler and not a hard limit.

I use liquid cooling often. At home. No way would I ship a closed loop all-in-one cooler to a datacenter. I even have some 2012 and 2013 Corsair AIOs that still work and haven't leaked. In fact, I've never had an AIO leak. But I HAVE had them lose coolant over time - I guess slow evaporation through tubing, I don't know, but my 2015-2017 Corsair AIOs are almost unusable now due to bubbling and lack of fluid remaining.

Heatsinks never fail, and quality fans don't fail very often. Start here and size up as needed, if needed.

Being extremely picky, but pointing out areas that may have issues: the NICs. One is Intel 2.5 GbE, which has been problematic in the past, more so than Realtek 2.5 GbE. The other is Aquantia 10GbE. In MY experience, the Aquantia hardware and drivers work fine in macOS, work mostly fine in Linux (with some slowdowns and catch-ups but no egregious dropouts), and work like hell (or don't work at all) in Windows, with frequent dropouts that will make you pull your hair out. Personally, I think these ProArt X670E onboard NICs are an interesting set for a personal workstation, where if they don't work out as well as you want, you can try a different driver or firmware, or just install a new PCIe NIC. Not once you ship this off.

NICs are so simple, but could easily start a war. The X540 is old, PCIe 2.0, hot and power hungry, can't do multi-gig (2.5/5G) rates, and is frequently faked. The X550 is newer, PCIe 3.0, still kinda warm, sometimes needs hand-holding for multi-gig, and is also frequently faked. Also, if your firmware versions don't match the driver versions, you get LOTS of syslog entries unless you take preventative action. The AQC107 and AQC113C can and do have serious issues depending on the OS - it seems only Apple has fully figured them out. And despite the old-time forum hate for the manufacturer, the Realtek RTL8125B seems to be the winner of the 2.5 GbE generation. Intel's i226 seems better than the i225, although if I needed 2.5G, I'd be tempted to just go Realtek and forget about it.
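If you do ship a box off with whatever NICs are onboard, at least record which driver and firmware each one is running before it leaves the bench. A rough sketch of what I mean, for Linux with ethtool installed (nothing in it is specific to any one NIC):

```python
#!/usr/bin/env python3
"""Dump driver and firmware versions for each physical NIC on Linux.

Assumes `ethtool` is installed and that you can run it (root/sudo).
"""
import os
import subprocess

for iface in sorted(os.listdir("/sys/class/net")):
    # Skip virtual interfaces (lo, bridges, veth, ...); physical NICs
    # expose a `device` directory pointing at the PCI function.
    if not os.path.isdir(f"/sys/class/net/{iface}/device"):
        continue
    out = subprocess.run(["ethtool", "-i", iface],
                         capture_output=True, text=True).stdout
    info = dict(line.split(":", 1) for line in out.splitlines() if ":" in line)
    print(f"{iface}: driver={info.get('driver', '?').strip()}"
          f" version={info.get('version', '?').strip()}"
          f" firmware={info.get('firmware-version', '?').strip()}")
```

That way, when the remote box starts spamming syslog, you at least know which driver/firmware pairing you started from.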

Fractal Torrent looks like good airflow. I haven't used one - I've heard the front design leads to increased air turbulence noise, but that wouldn't matter in a DC.

I like the SSD - I have a few 1TB and 2TB Gold P31s. Platinum P41s are even better. Don't accidentally swap it for QLC, though.

RAM - you said you wanted ECC, but this isn't it. If you do want ECC, you may want to start shopping for ECC UDIMMs at Crucial and branch out from there, if necessary.

There are so many little gotchas when you have to button something up and send it off - that's why I'm nitpicking. :) If you're sending this off to run Linux and you know the Aquantia and the Intel NICs are fine, go for it. That hasn't been my own experience in my own home.
 

CyklonDX

Well-Known Member
Nov 8, 2022
I've been using an MSI MEG X570 Unify with 128GB of 3200 MHz ECC RAM and a 5900X boosting up to 4.95 GHz (stock). It's been very stable - never crashed yet.
(been online up to half a year at a time)
(also been stable with non-ECC RAM at 3600 MHz)
(been using it for KVM guests, with a Noctua air cooler)

(SAS 9400-8i8e, P41, Micron 3400, and one of the following GPUs at any given time: Radeon Pro VII / 7900 XTX / 3080 Ti)
 

Chriggel

Member
Mar 30, 2024
I'm running an Asus TUF X670E, but not as a server, so I can't comment on 24/7 operation. However, I'm going to second what name stolen already pointed out and expand on it:

DON'T use watercooling. No watercooling solution, AIO or custom loop, is truly maintenance free. And pumps can die. A conventional heatsink cannot fail. A fan on it could, but even if it dies, you're already set up for that with the Fractal Torrent. Chances are that you won't even notice that your fan failed, should it ever happen. If you really want watercooling, I suggest a custom loop and redundant pumps, but only if you can't avoid watercooling altogether for whatever reason.

My real suggestion would be a tower cooler with two fans. This is usually done to improve performance, but it also gives you a kind of redundancy. The Thermalright Peerless Assassin 120 would be an extremely solid choice in terms of performance and price.

Also, go for ECC and don't overclock. Your memory pick is non-ECC and overclocked - you don't want either.

And the PSU really is pretty beefy, to the point where it's basically wasted. Go for something smaller and insanely high quality - Seasonic comes to mind. The Prime TX-650 sounds fitting and provides plenty of power for your system. If you really need more power for something you haven't mentioned yet, or need a certain number of connectors, choose a larger Prime TX unit instead.

Another option would be redundant ATX power supplies. It's niche, but they do exist - off the top of my head, there's the FSP Twins Pro and the Silverstone Gemini, and maybe there are others. If the datacenter provides redundant power feeds, which it will, because it's a datacenter, then you could probably use them to your advantage.

With all that, you're about as well prepared as you can be for running a server 24/7 on mostly consumer parts.
 

kriterio

New Member
Apr 14, 2024
Huge thanks! I've now changed the PSU and cooler:

Should I use ECC RAM for my server? Does it differ that much? I mean, DDR5 has partial ECC bits AFAIK.
 

CyklonDX

Well-Known Member
Nov 8, 2022
I use a Seasonic Prime 1kW SSR-1000TR 80+ Titanium with a UPS.

As for DDR5's built-in ECC: it's not traditional ECC and should be treated as non-ECC.
(here's a decent vid)
 

MountainBofh

Beating my users into submission
Mar 9, 2024
Should I use ECC RAM for my server? Does it differ that much? I mean, DDR5 has partial ECC bits AFAIK.
Strongly depends on what it's doing. For ZFS servers, or compute servers performing scientific calculations that cannot risk a bit flip, ECC is warranted. For other stuff, I would argue ECC is not nearly as vital.
 

Chriggel

Member
Mar 30, 2024
Huge thanks! I've now changed the PSU and cooler:
That's the PA120 SE - there's a non-SE version too. The SE is ever so slightly smaller, I think to meet cooler-height restrictions in certain PC cases. But the Torrent will take the full-size PA120. Not a huge difference probably, but I wanted to point it out.

Should I use ECC RAM for my server? Does it differ that much? I mean, DDR5 has partial ECC bits AFAIK.
As the old fart that I am, I remember when ECC was supported by many desktop platforms. It was then gradually phased out and paywalled in some instances. There's no good reason for that and I feel that it was a huge mistake. Instead we should just have made it standard everywhere.

I wouldn't recommend building any system without ECC if it's possible. The only reason why I got an AM5 desktop is that it supports ECC. For a desktop, you could make the argument that you can live without it, even though I personally would only partially agree. For a server though, it's a no brainer. ECC all the way.

On-die ECC is not really ECC at all, and I think naming it that was a huge disservice to everyone. It basically replaced the internal error checking (CRC) that memory did previously and should be seen as a replacement for that - only the method changed. DDR5 didn't really gain any feature that wasn't there before, and it's not a replacement for proper ECC.
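If you do go with proper ECC UDIMMs, it's worth confirming the OS actually sees them once the system is up, rather than trusting the spec sheet. A rough sketch for Linux, assuming the platform's EDAC driver (amd64_edac on AMD) is loaded:

```python
#!/usr/bin/env python3
"""Check whether Linux's EDAC subsystem reports ECC memory controllers.

If /sys/devices/system/edac/mc has no mcN entries, ECC reporting is not
active, regardless of which DIMMs are installed.
"""
import glob
import pathlib

controllers = sorted(glob.glob("/sys/devices/system/edac/mc/mc*"))
if not controllers:
    print("No EDAC memory controllers found - ECC reporting is not active.")
for mc in controllers:
    mc_path = pathlib.Path(mc)
    name = (mc_path / "mc_name").read_text().strip()
    ce = (mc_path / "ce_count").read_text().strip()  # corrected errors
    ue = (mc_path / "ue_count").read_text().strip()  # uncorrected errors
    print(f"{mc_path.name}: {name}, corrected={ce}, uncorrected={ue}")
```

Pair that with something like rasdaemon and you'll actually notice corrected errors instead of silently eating them.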
 

kriterio

New Member
Apr 14, 2024
Thank you for pointing it out, because I was about to buy the wrong one. :) My build after updating the memory and cooler:
 

name stolen

Member
Feb 20, 2018
The 72zzwg updated list looks good to me - thanks for being so receptive to advice. I'll just go ahead and say the only thing that's bugging me a little - only one storage device? Any chance your budget allows for a second SSD? If your situation doesn't call for it, then disregard. Just in case you eat up one SSD faster than you think, there's already a standby device in your remote server. If your main working storage isn't local, then you're probably fine as is. Great looking build list - be sure to install Solidigm SSD software and update firmware before deploying.
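And whichever drive(s) you end up with, it's cheap insurance to snapshot SMART health before the box leaves and re-check it on a schedule. A rough sketch, assuming smartmontools is installed and the drives enumerate as /dev/nvme0, /dev/nvme1, and so on:

```python
#!/usr/bin/env python3
"""Log basic NVMe health before shipping and periodically afterwards.

Assumes smartmontools is installed and this runs as root.
"""
import glob
import subprocess

for dev in sorted(glob.glob("/dev/nvme[0-9]")):
    # -H prints the overall health assessment, -A the SMART attributes
    # (percentage used, media errors, power-on hours, and so on).
    result = subprocess.run(["smartctl", "-H", "-A", dev],
                            capture_output=True, text=True)
    print(f"=== {dev} ===")
    print(result.stdout)
```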
 

kriterio

New Member
Apr 14, 2024
No worries, I should have thanked you instead. :) Yes, I have a budget of $2000. Should I get the same SSD model? And what do you think about KVM - do I need to get a PiKVM or something like that?
 

Tech Junky

Active Member
Oct 26, 2023
@kriterio

I'm using the ASRock PG Lightning board - I picked one up on Amazon for $160 last August. They had a bunch of returns at the time, so I grabbed one. I figured out they were probably being returned due to some UEFI issues; I played around with 3 different UEFI versions to get it working the way I wanted, but I saved big on the cost.

There are some quirks with most boards, though, when it comes to populating all four RAM slots: sometimes all four work, sometimes only two do. Something to keep in mind, as I saw you picked four modules.

I also use the PA120 on my 7900X and it works well. I skipped paste and went with a graphite pad instead - it makes it easier to move things around as needed without dealing with the mess. Idle temps are about 40°C with the pad.

For drives, I run WD for the OS/backup (an SN850 and an SN770) and a Kioxia CD8 for storage, as I tried two Micron drives and they both died within a week. The Kioxia drive runs cooler, at around 40°C, as well.
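If you want to keep an eye on those temps once the box is headless, the hwmon sysfs interface is enough. A rough sketch - sensor names vary by board, but on AM4/AM5 the CPU usually shows up as k10temp and NVMe drives as nvme:

```python
#!/usr/bin/env python3
"""Print temperatures from Linux hwmon - handy for a headless box."""
import glob
import pathlib

for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
    hw = pathlib.Path(hwmon)
    chip = (hw / "name").read_text().strip()
    for temp_file in sorted(hw.glob("temp*_input")):
        millideg = int(temp_file.read_text().strip())  # millidegrees C
        label = temp_file.name.replace("_input", "")
        print(f"{chip:12s} {label}: {millideg / 1000:.1f} C")
```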
 

kriterio

New Member
Apr 14, 2024
Does the PG Lightning have ECC support?