Thank you for sharing your experiences. I've tried to put all the parts together. It would be great to get your opinion:

Ok, I was gonna bite, since I do some home serving with X570 and B550. But my experiences are AM4 and one chipset generation behind. It was perhaps easier in the X/B-500 generation because the chipsets were a little more straightforward. So here's my experience, and if you want to try to transpose it up a generation, go for it.
X570 - Asus ROG Crosshair VIII (Wifi) - there's the dedicated CPU-connected x4 NVMe slot, plus 16 more CPU-connected PCIe Gen4 lanes. I have an x8 GPU and 3 x 2TB Gen4 SSDs occupying those 20 lanes. Both physical x16 slots are bifurcatable. South of the chipset, which is connected to the CPU at Gen4 x4, sit a 4TB QLC SSD and an Optane 905P. Neither is a bandwidth monster, so that works fine. The Realtek 2.5 GbE NIC is solid. This system is storage-heavy, obviously, even without using any of the 8 SATA ports. It's 10TB of Gen4 SSD and almost a TB of Optane, with LOTS of concurrent bandwidth. The board is solid as a rock - you set BIOS, save and reboot, and things stay that way. Still, I don't know that I would want to ship this off to a datacenter without IPMI.
B550 - Gigabyte B550M Aorus Elite v1.2 - way cheaper than the above, but surprisingly rock solid. This board also bifurcates, but only has one CPU-connected x16, which can go to x8/x4/x4, so along with the onboard CPU-connected NVMe x4 you can potentially have 16 lanes of CPU-connectedness, all of which I'm using for SSDs and Optane, again. There are also 6 lanes of Gen3 PCIe behind the B550 chipset's Gen3 x4 uplink to the CPU. This board has been running Proxmox for me with 4x16GB = 64GB of Micron E-die 3200 OC'ed to 3466 (yeah, yeah, like I said, rock solid). I've been using a 5600G APU so that I have some display out, but this board has been SO surprisingly set-and-forget that I'm tempted to swap the APU for a CPU and install 64GB of Crucial 3200 ECC UDIMMs. The APU (unless it's the PRO model) can't make use of ECC, while the CPU can. I think I would consider setting this up somewhere without easy access and without IPMI, as long as someone could go hit the reset button if necessary.
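If you end up juggling lanes like this on either board, it's worth confirming from the OS that each device actually negotiated the width and speed you expect (x8 for the GPU, x4 per Gen4 SSD). Here's a minimal Python sketch, assuming a Linux host with pciutils installed and root privileges so lspci can read the link status registers:

```python
#!/usr/bin/env python3
# Print the negotiated PCIe link width/speed for every device, so you can
# spot a GPU or SSD that trained at less than expected. Assumes Linux with
# pciutils installed; run as root so lspci -vv can read the link status.
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True, check=True).stdout

device = None
for line in out.splitlines():
    if line and not line[0].isspace():
        # Non-indented lines start a new device block,
        # e.g. "01:00.0 VGA compatible controller: ..."
        device = line.strip()
    elif "LnkSta:" in line and device:
        # e.g. "LnkSta: Speed 16GT/s (ok), Width x4 (ok)"
        m = re.search(r"Speed ([^,]+), Width (x\d+)", line)
        if m:
            print(f"{device}\n    negotiated {m.group(2)} @ {m.group(1)}")
```

Anything showing a downgraded width (say, an SSD at x2) usually means a slot-sharing rule or a bifurcation setting got missed in the BIOS.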
Since the newer gen has a basic onboard GPU, some of the fretting about CPU/APU +/- ECC can be eliminated, in exchange for X670 being a "weird" chipset, to put it nicely. I haven't mentioned them yet, but MSI has probably become my favorite UEFI over the past few months. I just got a cheap B760 board from them (that is sorta functionally equivalent to this AM5 board) with a super-discounted Alder Lake to use up some DDR4 lying around, with the intention for it to be an efficient but punchy home server or NAS foundation. I'm only a week into testing it, but it is reasonably flexible and rock solid. No bifurcation. This system is leaning towards the kind of stability one would want without IPMI, so probably totally fine if it stays at home, even in the garage or attic.
MSI also seems less likely to fsck the consumer at the lower price points: they leave in the 2.5 GbE NIC, Wi-Fi 6E, and the decent audio and USB that Gigabyte and Asus are ripping out around $150.
And there's always ASRock Rack, which lists six board variants under its AM5 server offerings.
Fixed now. Sorry

I would be happy to have a look - however "This part list is private."
Huge thanks! I've now changed PSU and Cooler:

True first thoughts:
Wow, that's a lot of power supply for the parts listed. 650-850W would still leave room for like 10 SSDs and 10 HDDs, a mid-grade GPU, and double the RAM. With lots of headroom.
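To put a rough number on that, a back-of-the-envelope tally is usually enough to see how far the build sits from the PSU rating. Every wattage below is a loose assumption rather than a measured figure, so swap in numbers from your actual parts:

```python
# Back-of-the-envelope PSU sizing. Every number here is a rough assumption
# (substitute figures from your parts' spec sheets); the point is just that
# a 65W-class CPU plus a handful of drives doesn't get anywhere near 1000W.
budget_w = {
    "cpu_peak": 90,          # a 65W-TDP part can boost past TDP; assume ~90W package power
    "motherboard_ram": 40,   # board, chipset, fans, 4 DIMMs
    "nvme_ssds": 3 * 8,      # ~8W each under load
    "hdds_spinning": 4 * 10, # ~10W each while spinning (spin-up draws more, briefly)
    "nics_misc": 15,         # add-in NIC, USB devices, etc.
}
total = sum(budget_w.values())
headroom = 1.5               # target roughly 2/3 load for efficiency and aging margin
print(f"Estimated peak draw: ~{total} W")
print(f"Suggested PSU size:  ~{round(total * headroom, -1):.0f} W")
```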
I like the 65W CPU choice, even if that number is just to help you pick a cooler and not a hard limit.
I use liquid cooling often. At home. No way would I ship a closed loop all-in-one cooler to a datacenter. I even have some 2012 and 2013 Corsair AIOs that still work and haven't leaked. In fact, I've never had an AIO leak. But I HAVE had them lose coolant over time - I guess slow evaporation through tubing, I don't know, but my 2015-2017 Corsair AIOs are almost unusable now due to bubbling and lack of fluid remaining.
Heatsinks never fail, and quality fans don't fail very often. Start here and size up as needed, if needed.
Being extremely picky here, but pointing out areas that may have issues: the NICs. One is Intel 2.5 GbE. That has been problematic in the past, more so than Realtek 2.5 GbE. The other is Aquantia 10GbE. In MY experience, the Aquantia hardware and drivers work fine in macOS, work mostly fine in Linux (with some slowdowns and catch-ups but no egregious dropouts), and work like hell (or don't work) in Windows, with frequent dropouts that will make you pull your hair out. Personally, these ProArt X670E onboard NICs are an interesting set for a personal workstation, and if they don't work out as well as you want, you can try a different driver or firmware, or just install a new PCIe NIC. Not so once you ship this off.
NICs are so simple, but could easily start a war. X540 is old, PCIe 2.0, hot and power hungry, can't do multi-gig rates, and is frequently faked. X550 is newer, PCIe 3.0, still kinda warm, sometimes needs hand-holding for multi-gig, and is also frequently faked. Also, if your firmware versions don't match the driver versions, you get LOTS of syslog entries without preventative action. AQC107 and AQC113c can and do have serious issues depending on OS. It seems only Apple has fully figured them out. And despite the old-time forum hate for the manufacturer, Realtek RTL8125B seems to be the winner of the 2.5 GbE generation. Intel i226 seems better than i225, although if I needed 2.5G, I'd be tempted to just go Realtek and forget about it.
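If you want to keep an eye on that driver/firmware pairing once the box is on Linux, a small sketch like this (assuming ethtool is installed; interface names are read from sysfs) prints what each NIC is actually running:

```python
#!/usr/bin/env python3
# Print driver name, driver version, and NIC firmware version for each
# network interface, so driver/firmware mismatches can be spotted before
# they start spamming syslog. Assumes Linux with ethtool installed.
import os
import subprocess

for iface in sorted(os.listdir("/sys/class/net")):
    if iface == "lo":
        continue
    try:
        out = subprocess.run(["ethtool", "-i", iface],
                             capture_output=True, text=True, check=True).stdout
    except subprocess.CalledProcessError:
        continue  # some virtual interfaces expose no driver info
    info = dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)
    print(f"{iface}: driver={info.get('driver')} "
          f"version={info.get('version')} firmware={info.get('firmware-version')}")
```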
Fractal Torrent looks like good airflow. I haven't used one - I've heard the front design leads to increased air turbulence noise, but that wouldn't matter in a DC.
I like the SSD - I have a few 1TB and 2TB Gold P31s. Platinum P41s are even better. Don't accidentally swap it for QLC, though.
RAM - you said you wanted ECC, but this isn't it. If you do want ECC, you may want to start shopping for ECC UDIMMs at Crucial and branch out from there, if necessary.
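One way to double-check what actually got installed: ECC UDIMMs report a 72-bit total width against a 64-bit data width. A quick sketch, assuming a Linux host with dmidecode and root:

```python
#!/usr/bin/env python3
# Check whether the installed DIMMs are ECC parts: ECC UDIMMs report a
# total width of 72 bits vs. a data width of 64 bits. Assumes Linux with
# dmidecode installed; run as root.
import subprocess

out = subprocess.run(["dmidecode", "-t", "memory"],
                     capture_output=True, text=True, check=True).stdout

total_w = data_w = None
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Total Width:"):
        total_w = line.split(":", 1)[1].strip()
    elif line.startswith("Data Width:"):
        data_w = line.split(":", 1)[1].strip()
    elif line.startswith("Part Number:"):
        part = line.split(":", 1)[1].strip()
        if total_w and data_w and "Unknown" not in total_w:
            ecc = "ECC" if total_w != data_w else "non-ECC"
            print(f"{part}: total {total_w}, data {data_w} -> {ecc}")
        total_w = data_w = None
```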
There are so many little gotchas when you have to button something up and send it off - that's why I'm nitpicking.

If you're sending this off to run Linux and you know the Aquantia and the Intel NICs are fine, go for it. That hasn't been my own experience in my own home.
Strongly depends on what it's doing. For ZFS servers, or compute servers performing scientific calculations that cannot risk a bit flip, ECC is warranted. For other stuff, I would argue ECC is not nearly as vital.

Should I use ECC RAM for my server? Does it differ that much? I mean, DDR5 has partial ECC bits afaik.
That's the PA120 SE, there's a non-SE version too. The SE is ever so slightly smaller, I think to meet some restrictions regarding cooler height in some specific cases. But the Torrent will take the PA120 non-SE. Not a huge difference probably, but I wanted to point it out.

Huge thanks! I've now changed PSU and Cooler:
Thank you for pointing it out, because I was about to buy the wrong one.

That's the PA120 SE, there's a non-SE version too. The SE is ever so slightly smaller, I think to meet some restrictions regarding cooler height in some specific cases. But the Torrent will take the PA120 non-SE. Not a huge difference probably, but I wanted to point it out.
As the old fart that I am, I remember when ECC was supported by many desktop platforms. It was then gradually phased out and paywalled in some instances. There's no good reason for that and I feel that it was a huge mistake. Instead we should just have made it standard everywhere.
I wouldn't recommend building any system without ECC if it's possible. The only reason why I got an AM5 desktop is that it supports ECC. For a desktop, you could make the argument that you can live without it, even though I personally would only partially agree. For a server though, it's a no brainer. ECC all the way.
On-Die ECC is not ECC at all. I think naming it as such was a huge disservice to everyone. On-Die ECC basically replaced the CRC checking that memory did previously and should be seen as its replacement - it's still just internal error checking, and only the method changed. DDR5 memory didn't really gain any new feature that wasn't there previously, and it's not a replacement for proper ECC.
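If you want to see whether real side-band ECC ended up active on a Linux box, the kernel's EDAC counters are the tell - DDR5's on-die ECC never shows up there, which is rather the point. A minimal sketch, assuming the EDAC driver for your platform is loaded:

```python
#!/usr/bin/env python3
# Check whether side-band (real) ECC is active and reporting: if the kernel's
# EDAC driver bound to the memory controller, correctable and uncorrectable
# error counters appear under sysfs. Assumes Linux with a loaded EDAC driver;
# DDR5 on-die ECC is invisible here.
import glob
import os

controllers = glob.glob("/sys/devices/system/edac/mc/mc*")
if not controllers:
    print("No EDAC memory controller registered - ECC likely not active.")
for mc in sorted(controllers):
    with open(os.path.join(mc, "ce_count")) as f:
        ce = f.read().strip()
    with open(os.path.join(mc, "ue_count")) as f:
        ue = f.read().strip()
    print(f"{os.path.basename(mc)}: correctable={ce} uncorrectable={ue}")
```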
No worries, I should have thanked you instead.

The 72zzwg updated list looks good to me - thanks for being so receptive to advice. I'll just go ahead and say the only thing that's bugging me a little - only one storage device? Any chance your budget allows for a second SSD? If your situation doesn't call for it, then disregard. Just in case you eat up one SSD faster than you think, there's already a standby device in your remote server. If your main working storage isn't local, then you're probably fine as is. Great looking build list - be sure to install the Solidigm SSD software and update firmware before deploying.
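Whatever the vendor tool ends up looking like, it's worth recording the firmware revision before the box leaves the house so you can tell later whether an update was ever applied. A vendor-neutral way to grab it, assuming Linux with nvme-cli, root, and /dev/nvme0 as a placeholder device name:

```python
#!/usr/bin/env python3
# Record the NVMe model, serial, and firmware revision before the box ships,
# so you can tell later whether a vendor firmware update was ever applied.
# Uses nvme-cli on Linux (run as root); /dev/nvme0 is an assumed device name.
import subprocess

out = subprocess.run(["nvme", "id-ctrl", "/dev/nvme0"],
                     capture_output=True, text=True, check=True).stdout
fields = {}
for line in out.splitlines():
    if ":" in line:
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
# mn = model number, sn = serial number, fr = firmware revision
print(f"{fields.get('mn')}  serial {fields.get('sn')}  firmware {fields.get('fr')}")
```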
Does it have ECC support? @kriterio
I'm using the ASRock PG Lightning board and picked one up on Amazon for $160 last August. They had a bunch of returns at that time, so I grabbed one. Figured out they were probably being returned due to some UEFI issues. Played around with 3 different UEFI versions to get it working the way I wanted it to, but saved big on the cost.
There are some quirks with most boards, though, when it comes to populating all four RAM slots - sometimes they all work, sometimes only two do. Something to keep in mind, as I saw you picked four modules.
I also use the PA120 on my 7900X and it works well. I skipped paste and went with a graphite pad instead. Makes it easier to move things around as needed and not deal with the mess. Temps at idle are about 40°C with the pad.
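For anyone who wants to sanity-check numbers like that on Linux, the kernel already exposes every temperature sensor under sysfs - a quick sketch, no extra packages assumed:

```python
#!/usr/bin/env python3
# Dump every temperature sensor the kernel exposes (CPU Tctl, NVMe composite,
# chipset, etc.) so idle temps like the ~40°C above can be sanity-checked.
# Pure sysfs reads; Linux only, no extra packages required.
import glob
import os

for temp_file in sorted(glob.glob("/sys/class/hwmon/hwmon*/temp*_input")):
    hwmon_dir = os.path.dirname(temp_file)
    with open(os.path.join(hwmon_dir, "name")) as f:
        chip = f.read().strip()
    label = ""
    label_file = temp_file.replace("_input", "_label")
    if os.path.exists(label_file):
        with open(label_file) as f:
            label = f.read().strip()
    try:
        with open(temp_file) as f:
            millic = int(f.read().strip())
    except (OSError, ValueError):
        continue  # some sensors refuse to read; skip them
    print(f"{chip:12s} {label:12s} {millic / 1000:.1f} °C")
```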
For drives, I run WD SN850/770 for the OS/backup and a Kioxia CD8 for storage, as I tried Micron drives and they both died within a week. The Kioxia drive runs cooler, at about 40°C, as well.
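For keeping an eye on drives like these once they're deployed, a quick nvme-cli readout covers temperature, media errors, and endurance used. Assumes Linux, root, and /dev/nvme0 as a placeholder device name:

```python
#!/usr/bin/env python3
# Quick health check on an NVMe drive: temperature, media errors, critical
# warnings, and percentage of rated endurance used. Uses nvme-cli on Linux
# (run as root); /dev/nvme0 is an assumed device name.
import subprocess

out = subprocess.run(["nvme", "smart-log", "/dev/nvme0"],
                     capture_output=True, text=True, check=True).stdout
wanted = ("critical_warning", "temperature", "percentage_used", "media_errors")
for line in out.splitlines():
    key = line.split(":", 1)[0].strip()
    if key in wanted:
        print(line.strip())
```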
Then it would not be a good choice for a server. Thanks for the suggestion, anyway.