EPYC Zen 1 Naples vs. Intel Alder Lake i7-12700F for a home server running the usual stuff?


Hauzer

New Member
Feb 22, 2022
I'm building a new server to replace my old hodgepodge of machines that has served me well for a decade, including boxes for NAS, routing, firewall, a Plex server, backups from home devices, etc. I want to consolidate all of this onto one Proxmox machine running various VMs and Docker containers where applicable, and I'd like to build something that can last another 10 years with appropriate upgrades.

I've seen some great deals on 32-core/64-thread Naples CPUs on the Bay, but I see from PassMark that the 8 P-core/16-thread + 4 E-core i7-12700F scores higher than any Naples processor while using 65W, and at a great price/performance point. However, Alder Lake is limited to 128GB of RAM, which is certainly plenty for me right now, but if I decide to migrate to ZFS or something later on then memory could get tight (I'm planning to grow into a ~100TB SnapRAID array at present).

I'm interested to hear any opinions on why Alder Lake over Naples would be a bad choice, as there may be hidden gotchas I haven't considered (for example, I know Alder Lake doesn't work well on Linux kernels before 5.16 and Proxmox is still on 5.13, but I can disable the problematic E-cores until Proxmox catches up). Thanks guys, and glad to be a part of the community at last!
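In case the board's BIOS doesn't expose an E-core toggle, here is a rough sketch of offlining them from Linux instead. The CPU numbering is an assumption for a 12700F with hyper-threading on (P-cores and their HT siblings usually enumerate as logical CPUs 0-15 and the E-cores as 16-19), so check `lscpu --all --extended` before trusting it.

Code:
#!/usr/bin/env python3
# Rough sketch: offline the Alder Lake E-cores from a running kernel.
# Assumes the 12700F's usual enumeration (E-cores as logical CPUs 16-19);
# verify with `lscpu --all --extended` first. Needs root. Writing "1"
# back to the same files brings the cores online again.
from pathlib import Path

E_CORES = range(16, 20)  # assumed E-core logical CPU numbers

for cpu in E_CORES:
    ctl = Path(f"/sys/devices/system/cpu/cpu{cpu}/online")
    if ctl.exists():
        ctl.write_text("0")
        print(f"cpu{cpu} offlined")
    else:
        print(f"cpu{cpu}: no online control found, skipping")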
 

Spartus

Active Member
Mar 28, 2012
Toronto, Canada
I would look at two main factors: what are your single-threaded performance needs (they seem low), and what do you prefer in terms of platform (and what does it cost)?

A server platform with IPMI and gobs of PCIe might well be nice for this application. Slower server RAM can be had cheap, which might help offset the cost of the motherboard.
 

Hauzer

New Member
Feb 22, 2022
Thanks @Spartus. I think my single-thread needs are quite low: I don't do any Plex transcoding, my file server is SnapRAID/MergerFS so it's typically only using one slow mechanical drive at a time, and pfSense seems to run pretty light (although I'm trying to upgrade to dual gigabit WAN, which may change things).

Right now I certainly run a lot of lightweight services rather than a few hot ones that might be better served by fast cores, and I don't expect that to change. However, that Intel PassMark score caught my eye, since as far as I know PassMark is a fairly server-oriented test rather than just loading up each core with a single job (the PassMark scores for EPYC/Xeon server CPUs certainly reflect the capabilities of those chips).

I guess I don't have a sense for the overhead of Proxmox/KVM context switching with 8 cores versus 32 cores, where there likely wouldn't be any switching at all, or whether that's reflected in the PassMark score.
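One way to at least put a number on the current situation: a quick sketch that samples the kernel's cumulative context-switch counter out of /proc/stat on whatever box is running the services today. It's just a baseline measurement, nothing Alder Lake or Naples specific.

Code:
#!/usr/bin/env python3
# Quick-and-dirty context-switch rate sampler: reads the cumulative
# "ctxt" counter from /proc/stat twice and prints switches per second.
import time

def read_ctxt() -> int:
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line found in /proc/stat")

INTERVAL = 5  # seconds to sample over
before = read_ctxt()
time.sleep(INTERVAL)
after = read_ctxt()
print(f"{(after - before) / INTERVAL:,.0f} context switches per second")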
 

Hauzer

New Member
Feb 22, 2022
@Wasmachineman_NL are there practical problems presented by Naples' complicated NUMA configuration for home users, e.g. size limits on VMs or something else? I'm not too familiar with the practical implications of chip architecture at this level - should have stayed in school!

One nice thing about Naples is that some mobos support both Naples and Rome, so down the line I could make the upgrade to Rome once Zen4 is out and Zen2 prices hopefully come down a bit (a lot).
 

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
Some programs hate NUMA/the topology of Zen 1 Epyc.
 

Brandon_K

New Member
Jan 17, 2021
Pittsburgh, PA
Keep in mind, as of right now the 600-series chipset lineup (anything currently for LGA 1700) has trash PCIe expansion. I just ran into this problem.

You'll get one x16 slot, either PCIe 4.0 or 5.0. Every other slot, regardless of whether it's physically an x16 slot, will be limited to PCIe 3.0 x1 or x4.

In my case, I can't max out my HBAs or 10GbE card because of this. The M5110 (9217-8i) for the fast internal chassis drives gets the x16 slot, the 9205-8e for the slow SMR drives in the drive shelf gets the x1 (x16 physical), and the dual-port 10GbE X520 gets the x4 slot (also x16 physical). All of these cards should be in x8 slots, which means the X520 won't be able to saturate both links and the drive shelves certainly won't be able to operate at full speed.
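If anyone wants to check how their own cards landed, here's a quick sketch that reads the negotiated link width and speed straight out of sysfs. These are standard attributes on any reasonably recent kernel; devices without a PCIe link report are simply skipped.

Code:
#!/usr/bin/env python3
# Print negotiated vs. maximum PCIe link speed/width for every device
# that exposes them in sysfs, to spot cards choked by an x1/x4 slot.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_speed = (dev / "current_link_speed").read_text().strip()
        cur_width = (dev / "current_link_width").read_text().strip()
        max_speed = (dev / "max_link_speed").read_text().strip()
        max_width = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # bridges or devices without a PCIe link report
    print(f"{dev.name}: running x{cur_width} @ {cur_speed} "
          f"(device supports x{max_width} @ {max_speed})")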
 

Spartus

Active Member
Mar 28, 2012
Toronto, Canada
Yeah, that sounds like an awful server platform. I'd rather deal with NUMA than useless PCIe lanes on a server, unless it's literally just for services.
Also, the Alder Lake CPUs seem to have really bad power efficiency (with those P-cores). Not that Naples is great, being that it's older, but you might be better served by Ryzen 3000/5000 for slightly better PCIe flexibility and way better efficiency.
 

bayleyw

Active Member
Jan 8, 2014
My vote goes to Naples. The weird topology doesn't affect virtualization as much as other applications because you're not supposed to allocate VMs the size of the entire node anyway, and you can go up to 8 cores and still stay on a single die (one NUMA node). The single-thread performance is really bad, though; you're looking at performance comparable to a laptop from 2014.
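For anyone curious what that boundary looks like on a live box, here's a small sketch that dumps the node layout the kernel sees from sysfs. On a 32-core Naples part you'd expect four nodes, each with 8 cores/16 threads and (with evenly populated DIMMs) roughly a quarter of the RAM; keeping a VM's vCPUs and memory inside one of those is the goal.

Code:
#!/usr/bin/env python3
# Dump the NUMA layout the kernel sees: CPUs and memory per node.
# VMs sized to fit inside a single node avoid cross-die traffic.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpulist = (node / "cpulist").read_text().strip()
    mem_kb = 0
    for line in (node / "meminfo").read_text().splitlines():
        if "MemTotal" in line:
            mem_kb = int(line.split()[-2])  # "... MemTotal: <n> kB"
            break
    print(f"{node.name}: CPUs {cpulist}, {mem_kb / 2**20:.1f} GiB RAM")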
 

Hauzer

New Member
Feb 22, 2022
Keep in mind, as of right now the 600-series chipset lineup (anything currently for LGA 1700) has trash PCIe expansion. I just ran into this problem.

You'll get one x16 slot, either PCIe 4.0 or 5.0. Every other slot, regardless of whether it's physically an x16 slot, will be limited to PCIe 3.0 x1 or x4.
To be fair, the Naples/Rome compatible mobos I've been looking at are strictly PCIe 3.0. As for Alder Lake, they're definitely not server boards, but a mobo like this has a PCIe 5.0 x16, a PCIe 4.0 x16, two PCIe 3.0 x16 slots and a PCIe 3.0 x1, which offers pretty good expansion for a home server, especially if you spend an extra $100 for a CPU with onboard graphics.


The single-thread performance is really bad, though; you're looking at performance comparable to a laptop from 2014.
It's definitely bad, but I see that the hottest new Milan parts' single-thread performance is only about 50% higher than the Naples 7601 (for 10x the price), and still only about 60% of Alder Lake's single-thread performance. Of course, with all those Naples cores there may be zero context switching required versus Alder Lake, meaning a real-world performance boost that wouldn't necessarily show up in the PassMark CPU score - I keep coming back to this question! :D
 

Brandon_K

New Member
Jan 17, 2021
Pittsburgh, PA
To be fair, the Naples/Rome compatible mobos I've been looking at are strictly PCIe 3.0. As for Alder Lake, they're definitely not server boards, but a mobo like this has a PCIe 5.0 x16, a PCIe 4.0 x16, two PCIe 3.0 x16 slots and a PCIe 3.0 x1, which offers pretty good expansion for a home server, especially if you spend an extra $100 for a CPU with onboard graphics.

As I said, while the *physical* x16 slots may be there, they do not operate at the speed their physical size might lead you to believe.

That motherboard is a perfect example; this is a straight copy and paste from the link you provided.

Code:
Intel® 12th Gen Processors*
1 x PCIe 5.0/4.0/3.0 x16 slot
Intel® Z690 Chipset**
1 x PCIe 4.0/3.0 x16 slot (supports x4 mode)
2 x PCIe 3.0 x16 slots (support x4 mode)
1 x PCIe 3.0 x1 slot
* Please check PCIe bifurcation table in support site.
** Supports Intel® Optane Memory H Series on PCH-attached PCIe slot.
Like I mentioned above: one PCIe 5.0 x16 slot, one PCIe 4.0 and two PCIe 3.0 x16 slots that only run at x4, and an x1 slot that runs at x1.

For a home server, an actual server where you would want high-bandwidth HBAs and NICs, it's extremely limiting. Even assuming you're using PCIe 3.0 HBAs (quite a few of the ones that get purchased for home use are LSI 2008-based, which is PCIe 2.0), you're still limited to about 32Gbps in an x4 3.0 slot, or worse, 16Gbps if it's a PCIe 2.0 device.

I have a few X520-SR2 dual-port 10GbE NICs. They're x8 PCIe 2.0, which means that in any slot on the motherboard you linked to other than the single PCIe 5.0 slot, the maximum bandwidth that card is ever going to see is 16Gbps. Not ideal for a card that can do 20Gbps over its two 10GbE interfaces.

Same story with the HBAs. I have a basic 9217-8i and a 9205-8e. Both are x8 PCIe 3.0 cards, and both have eight SAS-2 (6Gbps) ports, a total of 48Gbps. Slotted into any of those PCIe 4.0/3.0 x16 slots, I just lost 16Gbps, as the slot is limited to about 32Gbps.
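To put rough numbers on all of this, here's a throwaway sketch using the usual post-encoding per-lane figures (PCIe 2.0 at about 4Gbps/lane, 3.0 at about 7.9Gbps/lane, before protocol overhead), with the card and slot pairings loosely following the examples above.

Code:
#!/usr/bin/env python3
# Back-of-the-envelope PCIe bandwidth vs. what a card could usefully push.
# Per-lane rates are the usual post-encoding approximations and ignore
# packet/protocol overhead, so treat the output as ballpark only.

GBPS_PER_LANE = {"2.0": 4.0, "3.0": 7.88, "4.0": 15.75, "5.0": 31.5}

# (card, PCIe generation the card runs at, lanes the slot actually wires,
#  rough bandwidth in Gbps the card could consume flat out)
cards = [
    ("X520-SR2 dual 10GbE (x8 PCIe 2.0)", "2.0", 4, 20),
    ("9205-8e HBA, 8x SAS-2 (x8 PCIe 3.0)", "3.0", 1, 48),
    ("9217-8i HBA, 8x SAS-2 (x8 PCIe 3.0)", "3.0", 4, 48),
]

for name, gen, lanes, need in cards:
    have = GBPS_PER_LANE[gen] * lanes
    verdict = "fine" if have >= need else f"bottlenecked ({have / need:.0%} of need)"
    print(f"{name}: slot delivers ~{have:.0f}Gbps, card wants ~{need}Gbps -> {verdict}")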

Much of this is due to the number of M.2 NVMe ports they're putting on motherboards now. That particular ASUS has three M.2 slots on it, which is a total of 12 PCIe lanes that would typically have gone to the expansion slots.
 