NAS Build: Xeon vs. EPYC 3000 vs. Ryzen


Bernhard

New Member
Dec 17, 2020
2
1
1
Dear fellow NAS-builders,

after sailing a sea of infinite combinations for days, I finally feel slightly more informed than confused. This is a simple "what should I go for" question.

Requirements:
- 8+ threads (a fair amount of stuff will be going on besides ZFS, such as CI/CD pipelines, a PostgreSQL database, etc.)
- IPMI (for IPMI itself, and because this requirement narrows the selection to server-grade mainboards)
- ECC support (not the unofficial, unvalidated Zen1-style support)
- 64GB RAM is good, 128GB is better
- Budget 1000€ for Mainboard / CPU (I have PSU, Case, RAM in spare)

Bonuses:
- Support for 288-pin DDR4 registered ECC RDIMMs (I have some lying around)
- 10G Ethernet

Stuff I don't need:
- iGPU / GPU for transcoding

What did I find? It pretty much boiled down to these three options (the performance/TDP ratios are explained in the short sketch after the list):

#1 ASRock Rack E3C246D4U2-2L2T + Intel Xeon E-2236 (not settled on CPU)
+ compatibility
- average performance/TDP ratio (178), though slightly better than the Intel Xeon E-2136 (168.825)

#2 ASRock Rack X570D4U-2L2T + AMD Ryzen 5 5600X (not settled on CPU, waiting for Zen3 Ryzen 3)
+ extensibility / long life (AMx sockets seem to support many CPU generations)
+ outstanding performance/TDP ratio (342)
- compatibility (the Ryzen platform is rare in server use cases; maybe someone has experience with it under TrueNAS that they want to share)

#3 Supermicro M11SDV-8C+-LN4F + AMD EPYC™ 3251 SoC Processor
+ good performance/TDP ratio (266)
- compatibility?
- No 10G Ethernet
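
For reference, the ratio above is simply a multi-thread benchmark score divided by the CPU's TDP. A tiny sketch (the scores are back-derived from the ratios above and the official TDPs; the actual numbers live in my sheet):

```python
# Performance/TDP ratio: multi-thread benchmark score divided by TDP.
# Scores below are back-derived from the ratios in this post (illustrative);
# TDPs are the official figures (E-2236: 80W, 5600X: 65W, EPYC 3251: 55W).
cpus = {
    "Intel Xeon E-2236": (14240, 80),
    "AMD Ryzen 5 5600X": (22230, 65),
    "AMD EPYC 3251":     (14630, 55),
}

for name, (score, tdp) in cpus.items():
    print(f"{name}: {score / tdp:.0f} points per watt")
```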

Any thought / input is highly appreciated!

PS: Details of my journey can be viewed in a Google Sheet here.
 

altano

Active Member
Sep 3, 2011
280
159
43
Los Angeles, CA
What do you need for your drives? Would an onboard HBA appeal to you?

For a NAS I don't think you can beat the Xeon-D 15XX series, STILL. It's lower power than the 21XX series, still more than powerful enough for any NAS build, cheaper second-hand, and unlike the Epyc 3000 it has 10GbE and HBAs onboard.
 
  • Like
Reactions: Bernhard and Marsh

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
When I replace one of my VM or storage servers, I always look for SuperMicro systems in the €3k range without disks (less than half of that for mainboard and CPU). For the last few years this was always a Xeon Silver system.

I now compared such a system from last summer with the very newest SuperMicro H12 Epyc system, which came to market last month in the same price range, to check whether this is what I will use for the next few years.

While I expected a performance improvement, I did not expect storage performance to nearly double with the same pools. The new system has twice the cores and RAM, but that does not seem to be the decisive factor: in a virtualized environment with the same cores and RAM as the Xeon (8 cores and 64GB RAM), the situation was much the same. As I want dual-use systems (VM + storage), I use the 16-core + 128GB RAM Epyc.

From the results:
- For a really high-performance filer, especially with encryption, raw CPU power really counts (see the sketch below)
- If price counts, Epyc systems are winners
- I have seen some compatibility problems on AMD with NVMe passthrough
- ZFS encryption + sync is quite slow even with a very fast server

- Lower-class systems (e.g. Xeon-D) are fast enough for a 1/10G filer without encryption. For a 10G+ system, with or without encryption, there is a huge difference (encryption is a mandatory demand of the European data protection rules, as they enforce data security at a state-of-the-art level)
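
A rough way to check the encryption point is to time AES-256-GCM (the cipher ZFS native encryption typically uses) on one core and compare against 10G line rate (~1.2 GB/s). A minimal sketch, assuming Python with the cryptography package installed:

```python
# Minimal single-core AES-256-GCM throughput check (illustrative sketch,
# not my actual benchmark). Requires: pip install cryptography
import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

aes = AESGCM(AESGCM.generate_key(bit_length=256))
block = os.urandom(1 << 20)   # 1 MiB of random data
nonce = os.urandom(12)        # nonce reuse is OK here: output is discarded

t0 = time.perf_counter()
total_mib = 256
for _ in range(total_mib):
    aes.encrypt(nonce, block, None)   # AEAD encrypt, auth tag included
elapsed = time.perf_counter() - t0

# 10GbE needs roughly 1200 MB/s end to end; a slow core cannot feed it.
print(f"{total_mib / elapsed:.0f} MiB/s per core")
```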

 
Last edited:
  • Like
Reactions: Bernhard

DedoBOT

Member
Dec 24, 2018
44
13
8
An Epyc system, because of the 8 memory channels. ZFS loves fast RAM. 10GbE NICs are around 100 bucks on eBay.
P.S.
I'd go with an Epyc mainboard with 8 RAM slots and at least 2 PCIe x16 slots: one for the 10GbE NIC, the second for the HBA.
 
Last edited:

itronin

Well-Known Member
Nov 24, 2018
1,240
801
113
Denver, Colorado
I'd go with an Epyc mainboard with 8 RAM slots and at least 2 PCIe x16 slots: one for the 10GbE NIC, the second for the HBA.
My comments apply to a physical x16 slot wired as x16.

So help me understand.

Why x16 for the 10GbE NIC and the HBA?

I can't recall seeing a single- or dual-port 10GbE NIC that was x16; even the quad-port 10GbE cards I've seen are x8.

Same for HBAs. I can't recall seeing an x16 HBA. Especially if you are using spinners, x8 seems more than enough for most use cases.

What value is there, then, in putting an x8 card in an x16 slot?

Seems to me you'd be leaving food on the table, so to speak, i.e. lanes better used for other purposes, like, I dunno, NVMe storage connections?

TIA.
 

DedoBOT

Member
Dec 24, 2018
44
13
8
My comments apply to a physical x16 slot wired as x16.

So help me understand.

Why x16 for the 10GbE NIC and the HBA?

I can't recall seeing a single- or dual-port 10GbE NIC that was x16; even the quad-port 10GbE cards I've seen are x8.

Same for HBAs. I can't recall seeing an x16 HBA. Especially if you are using spinners, x8 seems more than enough for most use cases.

What value is there, then, in putting an x8 card in an x16 slot?

Seems to me you'd be leaving food on the table, so to speak, i.e. lanes better used for other purposes, like, I dunno, NVMe storage connections?

TIA.
My fault. I meant physical x16; of course x8 is enough for a 10GbE NIC and an 8-port HBA. The kid is crawling on my back, so excuse me :)

By the way, I would not risk using the onboard SATA ports that run through the chipset. A dedicated HBA in a PCIe slot is the better approach. I prefer it even over built-in mainboard SAS controllers.
 
Last edited:

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
You need lanes mainly for future expansion with NVMe.
Count x4 per NVMe drive, so each x4 slot or M.2/OCuLink port can connect a single NVMe drive,
and an x8 (x16) slot can drive 2 (4) NVMe drives with an adapter card and bifurcation.

This is why, for example, a Supermicro H12SSL-C can drive 24x NVMe plus onboard 12G SAS and an additional 10/40/100G NIC (the H12SSL-CT has 10G onboard).
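
The lane arithmetic in a tiny sketch (slot widths are illustrative; check the board manual for the real wiring and bifurcation options):

```python
# x4 PCIe lanes per NVMe drive; an x8/x16 slot hosts 2/4 drives via a
# bifurcation adapter card. Slot widths below are just an example layout.
LANES_PER_NVME = 4

def nvme_capacity(slot_widths):
    """How many NVMe drives a set of slots/ports can host with bifurcation."""
    return sum(width // LANES_PER_NVME for width in slot_widths)

# Example: two x16 slots, one x8 slot, two M.2/OCuLink x4 ports
print(nvme_capacity([16, 16, 8, 4, 4]))   # -> 12 drives
```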
 
Last edited:
  • Like
Reactions: Bernhard

Bernhard

New Member
Dec 17, 2020
2
1
1
Thanks for all the replies! Really appreciated!

In chronological order:
1. @altano: I looked for Xeon-D 15XX boards, but I could not find used or new offers in my location (Germany).
2. @altano: As I understand it, an HBA is fine (no need for RAID when there is ZFS). Right?
3. @gea: Thanks for bringing up the issue of 10G in combination with encryption. Maybe 10G does not make sense after all.
4. @gea: You mentioned the awesome Epyc performance. Do you have experience regarding Epyc (or Zen* in general) compatibility with TrueNAS?
5. @DedoBOT: Do you have a specific recommendation (mainboard / CPU)?
6. @DedoBOT: Why do you think a dedicated HBA is the better option compared to onboard SATA? Do you have a recommendation for an HBA card?
7. @gea: The H12SSL-C is a beast! However, even the EPYC with the lowest power consumption has a 120W TDP.
 
  • Like
Reactions: altano

DedoBOT

Member
Dec 24, 2018
44
13
8
5. I'm not familiar with current Epyc motherboards. My latest build is an X11SPFW-TF; now I would definitely go with an Epyc Zen2 platform.
6. Flexibility when sh*t happens. With a spare HBA on the shelf you can swap out a dead one in minutes. If the motherboard fails, you can choose from a broad range of replacements, not just one specific model.
The LSI 9302-8i HBA is the obvious recommendation.
 
Last edited:
  • Like
Reactions: Bernhard

gea

Well-Known Member
Dec 31, 2010
3,161
1,195
113
DE
4. @gea: You mentioned the awesome Epyc performance. Do you have experience regarding Epyc (or Zen* in general) compatibility with TrueNAS?
It is not the management tool that counts regarding compatibility but the underlying operating system: FreeBSD, Linux, or Solaris, or a Solaris fork like OmniOS. Hardware support is mostly best on Linux. The Unix options, FreeBSD and Solarish, are not as widely supported but mostly offer better ZFS integration (Solaris with native ZFS) or, in the case of Solarish, often a faster SMB server than the usual SAMBA. In general it is mainly the NIC and HBA that are critical regarding driver support. Pass-through is another item that may produce problems.
 
  • Like
Reactions: itronin

bmorepanic

New Member
Oct 24, 2020
13
1
3
Baltimore, Hon
Just stuff to think about:
  • ASRock is real testy about RAM and about the RAM being on their qualified list. The actual specs required tend to be in the user guide, not the spec list.
  • Also do a careful read of the slots on ASRock boards without Epyc chipsets, like the X570. They tend not to implement everything on the board; instead, using some features eliminates the use of others. The lower-end chipsets don't supply enough PCIe lanes.
  • Agree that ZFS loves RAM. Because of that, and because Postgres also loves RAM for complex queries, I'm not sure I'd put both on the same computer IF ZFS is to serve files and there are real demands on Postgres. You can constrain each or both from being RAM pigs (see the sizing sketch after this list), but why would you want to?
  • If you have to encrypt, ask how exactly that works with ZFS on whatever flavor of Linux you'll use. Some setups work a lot faster with GPUs, even the non-high-priced ones.
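A minimal sizing sketch for the constraining point above (the split is my assumption, not a recommendation; zfs_arc_max is the Linux OpenZFS module parameter, shared_buffers the PostgreSQL setting):

```python
# Split RAM between the ZFS ARC and PostgreSQL so neither starves the other.
# The 64GB total and the reservations below are assumptions for illustration.
GiB = 1024**3

total_ram  = 64 * GiB
os_reserve = 4 * GiB     # headroom for the OS and other services
pg_buffers = 16 * GiB    # postgresql.conf: shared_buffers = 16GB
arc_max    = total_ram - os_reserve - pg_buffers

# Cap the ARC via the OpenZFS module parameter (value is in bytes):
print(f"options zfs zfs_arc_max={arc_max}")  # goes in /etc/modprobe.d/zfs.conf
```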
I went with an ASRock Rack X470 board with dual 10G. I needed a lot of storage and went HBA/SAS, so I had to choose between size and speed. Even so, I made sounds suspiciously like maniacal laughter when I ran my first networked shared-file tests, as it was easily the fastest server I have ever built for that task. I didn't encrypt, but I did use write caching; it worked a treat in my situation, where I need to store much more than read.

If I had had the budget, I would have gone Epyc for the PCIe lanes. I figured this build could be replaced in a couple of years when NVMe prices have dropped.
 

111alan

Active Member
Mar 11, 2019
291
109
43
Haerbing Institution of Technology
Do not use AMD CPUs for I/O-intensive or idle-power-sensitive workloads. Using IOmeter (and FIO in Linux) I observed a 20-40% IOPS decrease for high-end NVMe drives like the PM1725a and PBlaze5, and it doesn't scale nearly as well when you add more drives. The difference is smaller in less CPU-bound scenarios such as sequential reads, but AMD has no advantage there either, due to the high idle voltage caused by its LDO power-management design, and a much higher price (in China a 48-core EPYC2 costs 50% more than two 24-core Xeons of similar frequency).

Here's my test.
A quick test of NVMe SSD performance in different hardware and software environments, with some SSD testing suggestions (ssdfans.com)

I suggest you get a low-end Xeon like the Gold 6139 (18 threads) and save the money for an Optane cache. Or, even cheaper, get a desktop or E3-grade 6-core; both are super cheap and can handle 2 top-class NVMe drives with near-linear IOPS scaling.
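
To put "near-linear IOPS scaling" in numbers, a small sketch with made-up figures (not my measured results; those are in the linked test):

```python
# Scaling efficiency: measured aggregate IOPS vs. the ideal of N drives each
# delivering their single-drive IOPS. 1.0 means perfectly linear scaling.
def scaling_efficiency(measured_iops, single_drive_iops, drives):
    return measured_iops / (single_drive_iops * drives)

one = 750_000   # hypothetical 4K random-read IOPS for a single drive
print(scaling_efficiency(1_425_000, one, 2))  # 0.95 -> near-linear
print(scaling_efficiency(1_350_000, one, 3))  # 0.60 -> CPU-bound, poor scaling
```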
 
Last edited:
  • Like
Reactions: DedoBOT

aivxtla

New Member
Apr 19, 2020
10
6
3
I have the EPYC 3251 SuperMicro board. Coming from the SuperMicro X10SDV-TLN4F (D-1541) board, and comparing against a D-2141 as well, it has lower idle power draw than both, especially compared to the 2141. The 3251 draws less power than the D-1541 but in general performs closer to the 2141, as this site's own review showed. The 3251 board in my 1U chassis runs at around 38-40°C at idle with one 40mm fan at ~1,500-1,600 RPM (PUE2 mode), using the mylar shroud that came with the D-1541 system. When I had the D-1541, it idled at around 50°C with two 40mm fans at ~3,000 RPM.

For the EPYC 3251 there is also the ASRock Rack board with 2x 10GbE ports at ~$700 right now with a $100-off code at Newegg, but its stock heatsink seems inferior to the SuperMicro one, which has two heat pipes; someone I know who tested both said the ASRock model runs ~5-10°C hotter than the SuperMicro model.
 
Last edited:
  • Like
Reactions: nasi

111alan

Active Member
Mar 11, 2019
291
109
43
Haerbing Institution of Technology
I have the EPYC 3251 SuperMicro board. Coming from the SuperMicro X10SDV-TLN4F (D-1541) board, and comparing against a D-2141 as well, it has lower idle power draw than both, especially compared to the 2141. The 3251 draws less power than the D-1541 but in general performs closer to the 2141, as this site's own review showed. The 3251 board in my 1U chassis runs at around 38-40°C at idle with one 40mm fan at ~1,500-1,600 RPM (PUE2 mode), using the mylar shroud that came with the D-1541 system. When I had the D-1541, it idled at around 50°C with two 40mm fans at ~3,000 RPM.

For the EPYC 3251 there is also the ASRock Rack board with 2x 10GbE ports at ~$700 right now with a $100-off code at Newegg, but its stock heatsink seems inferior to the SuperMicro one, which has two heat pipes; someone I know who tested both said the ASRock model runs ~5-10°C hotter than the SuperMicro model.
I was mainly testing power consumption on Zen2. Here are some of the results I have. All results are calibrated with a clamp current meter. I haven't tested Zen1 yet, but since it lacks the huge cache, the integrated north bridge, and the LDO system, it should consume less power; still, I doubt it could beat Skylake, which can lower its frequency and voltage down to 0.8-1.2GHz and 0.6V. I suggest testing carefully again with C-states set to unlimited; nowadays even a good NVMe SSD can consume more than 20W, and they aren't nearly as hot.
[Attachment: PWR.png, power consumption chart]

And this kinda proves my point.
Frequency Ramp, Latency and Power - AMD’s New EPYC 7F52 Reviewed: The F is for ᴴᴵᴳᴴ Frequency (anandtech.com)
 

aivxtla

New Member
Apr 19, 2020
10
6
3
The D-2141 is a Skylake chip. Looking at the following review, at least, there was a huge idle power draw delta between the 3251 (19 watts) and the D-2141 (47 watts) embedded ASRock Rack boards.

 

111alan

Active Member
Mar 11, 2019
291
109
43
Haerbing Institution of Technology
The D-2141 is a Skylake chip. Looking at the following review, at least, there was a huge idle power draw delta between the 3251 (19 watts) and the D-2141 (47 watts) embedded boards.

I don't think a Xeon-D should consume more than twice the power of the 28-core Platinum 8280 that AnandTech tested (21W, and 18W for the 20-core 6230). The 47W number is even on par with my 24-core 8259L with some background activity running as I write this post. Maybe it's just another Intel-bashing review of the kind people tend to do these days.
 

aivxtla

New Member
Apr 19, 2020
10
6
3
Well, if I'm not mistaken, even this site's own review showed a pretty high idle power draw for a D-2141-based system, in that case a SuperMicro one. I think they just showed their own findings; it's simply not what you were expecting, and I don't think it has anything to do with Intel bashing. Of course, that doesn't mean the D-2141 doesn't have its own advantages, like QAT and AVX-512.


 
Last edited:

111alan

Active Member
Mar 11, 2019
291
109
43
Haerbing Institution of Technology
Well, if I'm not mistaken, even this site's own review showed a pretty high idle power draw for a D-2141-based system, in that case a SuperMicro one. I think they just showed their own findings; it's simply not what you were expecting, and I don't think it has anything to do with Intel bashing. Of course, that doesn't mean the D-2141 doesn't have its own advantages, like QAT and AVX-512.


This is what I have: around 30W CPU and 33W at the +12V 8-pin input, with several browsers and a BT client active. And the 8259L has 3 times the cores of the 2141. If I have to guess, some power-saving options may be intentionally disabled in review-sample BIOSes, as with the 6226R below.
[Attachments: idle.JPG, 1608539883139.png]
 

aivxtla

New Member
Apr 19, 2020
10
6
3
I don't think you can simply extrapolate your Platinum and EPYC 7XXX results onto the embedded platforms. For example, the 16-core 7F52 literally has 1 core enabled per CCX to reach a very large 256MB L3 cache (16MB of L3 per core), which probably also means a lot more power draw from all the Infinity Fabric links between CCXs and dies compared to a dual-die 16-core Zen chip; not surprisingly, it draws more power at idle than even the 48-core EPYC model with 128MB of L3 cache. The 3251 is a single-die model with 2 CCXs and 16MB of L3 cache. Maybe those differences account for the 3251, and also the Ryzen 9 in the chart you posted, having much lower idle power draw?

An acquaintance of mine had results with the 2141 similar to the two reviews I mentioned previously, and no power-saving features were disabled... and his board was on the latest BIOS as of January this year, when he last checked.

The 3251 idle results are closer to the Ryzen 9 3950X results in that chart you posted... Just want you to know I appreciate your findings as well; the more I learn the better :).
 
Last edited:

111alan

Active Member
Mar 11, 2019
291
109
43
Haerbing Institution of Technology
I don't think you can simply extrapolate your Platinum and EPYC 7XXX results onto the embedded platforms. For example, the 16-core 7F52 literally has 1 core enabled per CCX to reach a very large 256MB L3 cache (16MB of L3 per core), which probably also means a lot more power draw from all the Infinity Fabric links between CCXs and dies compared to a dual-die 16-core Zen chip; not surprisingly, it draws more power at idle than even the 48-core EPYC model with 128MB of L3 cache. The 3251 is a single-die model with 2 CCXs and 16MB of L3 cache. Maybe those differences account for the 3251, and also the Ryzen 9 in the chart you posted, having much lower idle power draw?

An acquaintance of mine had results with the 2141 similar to the two reviews I mentioned previously, and no power-saving features were disabled... and his board was on the latest BIOS as of January this year, when he last checked.

The 3251 idle results are probably closer to the Ryzen 9 3950X results in that chart you posted... Just want you to know I appreciate your findings as well; the more I learn the better :).
Well, I talked about this before: the absence of LDO power regulators and of a separate north bridge, plus less L3 cache, could mean lower idle power for Zen1 than Zen2. My problem is that an 8-core Intel Xeon-D shouldn't consume far more power at idle than a 24- or 28-core Platinum, given they share almost the same architecture. I estimate that, if its power saving functions correctly, its idle power should be a little lower than the 9900KS in that chart, due to its FIVR design.