Thoughts on the upcoming Core Ultra 2 ECC mainboards for home servers?


niekbergboer

Active Member
Jun 21, 2016
Switzerland
Browsing the Supermicro site, I stumbled across the upcoming X14SAZ-F. On paper, that does not look bad: CPUs up to the Intel Core Ultra 9 285K (CPU Bench of 67871), up to 192 GB of ECC memory, IPMI, VT-d. This looks interesting.

Do we already have idle-power stats from comparable consumer boards?
 

unwind-protect

Active Member
Mar 7, 2016
Boston
Looks nice. Offhand, the only nitpick is that it has only one M.2 slot.

Will we need Xeons for this or are some of the regular processors ECC-capable again?
 

nabsltd

Well-Known Member
Jan 26, 2022
It also only has 24 PCIe lanes connected to the processor, which are used by the x16 slot, one of the x4-in-x8 slots, and the M.2 slot. This means the MCIO (a place to plug in two more NVMe drives) is connected to the PCH.

You can't really use it as a storage platform, as the only way to add an HBA/RAID/NIC is the x16 slot (unless you can live with x4 from one of the other slots). It could make a decent compute node, but you'd still need to use that x16 slot for a NIC to connect to storage (and other compute nodes). I would rather have a pair of PCIe 5.0 x8 slots instead of an x16 and a PCIe 4.0 x4 from the PCH.

OTOH, this is the kind of motherboard where somebody could make a fortune from just home lab users by making a PCIe 5.0 x16 card with a PCIe switch and 16 NVMe connectors. You could then run 16 PCIe 3.0 NVMe drives at full speed.
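For anyone who wants to sanity-check that claim, here's a rough sketch of the bandwidth math (approximate per-lane throughput after encoding overhead; a toy illustration, not a product spec):

```python
# Why a PCIe 5.0 x16 switch uplink can feed 16 PCIe 3.0 x4 NVMe drives:
# the aggregate downstream demand roughly equals the upstream capacity.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # PCIe gen -> ~GB/s per lane

uplink = 16 * GBPS_PER_LANE[5]       # Gen5 x16 uplink to the CPU
drives = 16 * 4 * GBPS_PER_LANE[3]   # 16 drives at Gen3 x4 each

print(f"uplink capacity : {uplink:.0f} GB/s")   # ~63 GB/s
print(f"drive demand    : {drives:.0f} GB/s")   # ~63 GB/s -> full speed for all 16
```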
 

i386

Well-Known Member
Mar 18, 2016
Germany
Epyc/Threadripper Pro has spoiled me; I need at least 48 usable PCIe lanes, with 2x x16 and the others in at least x8 configurations :D
 

BlueLineSwinger

Active Member
Mar 11, 2013
Looks like they have a few options incoming:

There's also the X14SAZ-TLN4F, which drops the SATA and MCIO for 2x 10 Gbit copper ethernet.

There are three X14SAV boards, which are smaller ITX/FlexATX units in various configs (FYI the comparison is obviously wrong about there being no onboard ethernet).

And also the X14SAE(-F), which is full ATX and has additional SATA, as well as a Thunderbolt port.

All fairly uninspiring from a home server standpoint, IMHO. These are all aimed at desktops/low-end workstations, with connectivity/expansion configured accordingly. It seems Supermicro hasn't really been interested in the low-end server space in quite some time.
 
  • Like
Reactions: Zerokwel

Laugh|nGMan

Member
Nov 27, 2012
...then better to go with the X13SEM-TF. The price difference will probably be small, and Sapphire Rapids/Emerald Rapids should be the most thoroughly debugged CPUs by now, supported for many years and usable for an extended period.
But maybe I'm wrong.
 

sam55todd

Active Member
May 11, 2023
...then better to go with the X13SEM-TF. The price difference will probably be small, and Sapphire Rapids/Emerald Rapids should be the most thoroughly debugged CPUs by now, supported for many years and usable for an extended period.
But maybe I'm wrong.
The topic starter is also asking about idle power, which is understandable for home-server builds. (Although I am kind of puzzled about the point of investing $$$ and then trying to save $ on energy instead of using the system to its full potential; normally the best way to extract maximum return on investment is to use it fully. Then again, some mission-critical scenarios do provide the highest value by handling peak workload bottlenecks {keeping reserve capacity to ensure service reliability}, so it really depends.)
SPR/EMR platforms aren't very good at keeping idle power low (like most data-center-grade enterprise platforms).
 
  • Like
Reactions: mrpasc

Alterra

New Member
Feb 26, 2023
W880-based boards seem pretty expensive indeed (not to mention that ECC DDR5 DIMM prices are sky-high right now), but there are plenty of features and connectivity. Many Arrow Lake reviews indicate that idle power has increased slightly (by some 5 W) compared to Raptor Lake, but as they are focused on Windows gaming, I would not put much weight on that. Idle on a Windows gaming PC may not be that idle, either.

What may be more important is that I don't see any EDAC support in the latest Linux kernel. Please tell me I am wrong. Since the topic starter mentioned ECC, I assume there is interest in ECC reporting, but that will probably be crappy or nonexistent without EDAC. There is an in-band ECC EDAC driver (igen6) with support for Arrow Lake, but I have no idea whether any of these boards actually supports IBECC (or whether anyone here cares). Windows might be better in this regard.
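If anyone wants to check their own box, here's a minimal sketch that reads the kernel's standard EDAC sysfs interface (assuming some EDAC driver, e.g. igen6, has bound; if nothing is registered, ECC error reporting almost certainly isn't working):

```python
# List EDAC memory controllers and their corrected/uncorrected error counts.
# Uses the documented sysfs layout under /sys/devices/system/edac/mc.
from pathlib import Path

EDAC = Path("/sys/devices/system/edac/mc")

controllers = sorted(EDAC.glob("mc*")) if EDAC.exists() else []
if not controllers:
    print("No EDAC memory controllers registered (no driver loaded?)")

for mc in controllers:
    name = (mc / "mc_name").read_text().strip()
    ce = (mc / "ce_count").read_text().strip()  # corrected (single-bit) errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
    print(f"{mc.name}: {name} corrected={ce} uncorrected={ue}")
```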

Also, I can only see x8/x8 PCIe bifurcation on the X14SAZ for some reason; I thought Arrow Lake was capable of x8/x4/x4 as well, and at least the X14SAE can do it. Maybe not a big deal, but I wonder why it was omitted.

Anyway, the Supermicro boards look moderately interesting. I am particularly eyeing the X14SAE, which seems to have finally dropped the legacy PCI slot from the SAE series. I guess I need some other way to connect my SCSI scanner now!
 

etorix

Active Member
Sep 28, 2021
All fairly uninspiring from a home server standpoint, IMHO. These are all aimed at desktops/low-end workstations, with connectivity/expansion configured accordingly. It seems Supermicro hasn't really been interested in the low-end server space in quite some time.
AsRock Rack has been quite interested in bringing Ryzen CPUs into low-end servers, and the introduction of EPYC 4004 proves their point.
With Intel even dropping its Xeon E brand in favour of rebadging some old Raptor Lake parts as "Xeon 6300" alongside Granite Rapids/Sierra Forest "6500/6700/6900" (and thus NOT introducing Core Ultra to servers…), I guess it's time to take a deep look into EPYC 4004.
 

niekbergboer

Active Member
Jun 21, 2016
Switzerland
AsRock Rack has been quite interested in bringing Ryzen CPUs into low-end servers, and the introduction of EPYC 4004 proves their point.
With Intel even dropping its Xeon E brand in favour of rebadging some old Raptor Lake parts as "Xeon 6300" alongside Granite Rapids/Sierra Forest "6500/6700/6900" (and thus NOT introducing Core Ultra to servers…), I guess it's time to take a deep look into EPYC 4004.
I have got to say that, indeed, Intel's Xeon 6300-series launch yesterday left me very underwhelmed: 8 cores max, no AVX-512, in 2025, while EPYC 4004 exists. Those 4004 mainboards are not cheap, though (I looked at Supermicro's H13SAE-F).
 
  • Like
Reactions: nexox

Zerokwel

New Member
Oct 21, 2022
I'm intrigued by the Supermicro X14SAZ-F for a home-based server running Unraid and Plex with a dozen or more drives. I like that it uses unbuffered ECC memory and offers IPMI, plus the affordable option of an Ultra processor with on-chip graphics. While I know there are more budget-friendly options, I'm looking for ruggedness and upgradability.

I see above in post #3 where user "nabsltd" has some concerns about connecting storage; candidly, his comments are a bit over my head. For an Arrow Lake Ultra motherboard, are there any major red flags or downsides that should keep me away?
 

John McClane

New Member
Feb 21, 2025
I agree with @etorix: if you're looking at this class of processor (consumer-based, low PCIe lane count), EPYC 4004 seems like the way to go. The AsRock Rack boards are nice and relatively cheap. EPYC 4004 can scale to 16 homogeneous cores. If it's just a Plex server and the board has IPMI for headless operation, on-chip graphics won't be critical. Quicksync might be the only thing Intel has going for it.

I don't really understand the storage comments. You'll be limited to SATA, but there are plenty of expansion options. If you had to have all-NVMe storage and 100G NICs, you wouldn't be looking at the X14SAZ-F in the first place.
 

etorix

Active Member
Sep 28, 2021
You'll be limited to SATA
…exactly FOUR SATA ports in this case, which isn't much for storage. You can add an HBA, but that's an x8 device: in an x16 slot, it wastes 8 lanes. And if you have sizeable storage, you probably want faster-than-Gigabit networking to go with it. 100G is ridiculous in a home server, but 10G is not unreasonable: if it's not provided on-board, that's typically a second x8 device. (Aquantia NICs are not the most suitable choice for servers.)
x8 + x8 is a lot more useful than x16 when PCIe lanes are scarce to begin with.
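To make the trade-off concrete, here's a toy sketch (hypothetical x8 HBA plus x8 10G NIC; not any specific board's topology) of how many CPU lanes end up doing useful work under each slot layout:

```python
# Greedily place cards into slots and count the lanes actually carrying traffic.
cards = {"HBA": 8, "10G NIC": 8}  # lanes each (hypothetical) card wants

def lanes_used(slot_widths, cards):
    free = sorted(slot_widths, reverse=True)
    used = 0
    for width in cards.values():
        for i, w in enumerate(free):
            if w >= width:
                used += width
                free.pop(i)   # slot is now occupied
                break
    return used

print("single x16 :", lanes_used([16], cards), "of 16 lanes used")    # 8 (NIC left out)
print("x8 + x8    :", lanes_used([8, 8], cards), "of 16 lanes used")  # 16 (both fit)
```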

Quicksync might be the only thing Intel has going for it.
If Quicksync is the only concern, adding a cheap, single-slot Arc dGPU to an AMD (or Atom, or Xeon-D) server is a solution.
 

nabsltd

Well-Known Member
Jan 26, 2022
All fairly uninspiring from a home server standpoint, IMHO. These are all aimed at desktops/low-end workstations, and the connectivity/expansion configured for such.
Especially considering that the X14SBI-F seems to be in the same range for new retail price ($550-650), and it has 3x PCIe 5.0 x16 and 3x PCIe 5.0 x8 slots, plus 6x PCIe 5.0 x8 via MCIO, and 2x M.2 slots (PCIe 5.0 x2, the only place this board cheaps out). With the Xeon 6511P (16 P-cores/32 threads, and 136 PCIe 5.0 lanes) selling for $850, the top-of-the-line Ultra 9 285 with 8 P-cores/16 E-cores/24 threads (and a measly 24 PCIe 5.0 lanes) selling for $550 doesn't look too good. That extra $300 lets you connect 28 NVMe drives at full PCIe 5.0 speed.
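As a rough sanity check on that drive count, a quick tally of the lane budget using the slot counts quoted above (a sketch only; it ignores how lanes are actually wired on the board):

```python
# CPU-attached PCIe 5.0 lanes on the X14SBI-F, per the counts quoted above,
# divided by the x4 link a full-speed NVMe drive wants.
slot_lanes = 3 * 16 + 3 * 8  # three x16 plus three x8 slots -> 72 lanes
mcio_lanes = 6 * 8           # six x8 worth of MCIO          -> 48 lanes

usable = slot_lanes + mcio_lanes
print(f"lanes in slots + MCIO : {usable}")        # 120
print(f"full-speed x4 drives  : {usable // 4}")   # 30 -> 28 fits comfortably
```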

Then, too, the Core Ultras top out at 192 GB of RAM, while the Xeon 65xx/67xx support up to 2 TB.
 
  • Like
Reactions: jode and etorix

Alterra

New Member
Feb 26, 2023
X14SBI-F is in a different league for sure, but the idle power is probably much higher, too. Still, that Arrow Lake pricing looks bad.

Then, too, the Core Ultras top out at 192 GB of RAM, while the Xeon 65xx/67xx support up to 2 TB.
I wonder: 64 GB UDIMMs seem to work on Z890, so I am sure they could technically work on W880, too. Maybe with some future BIOS update. Maybe if ECC 64 GB UDIMMs ever become available.
 

etorix

Active Member
Sep 28, 2021
X14SBI-F is in a different league for sure, but the idle power is probably much higher, too.
Different performance league, but the cost of manufacturing for PCIe 5.0 means the motherboards end up in the same price league…
At which point, indeed, idle power is the only factor in favour of Xeon E/Xeon 6300/EPYC 4004 over Xeon 6500/EPYC 8004. But if we're talking about home servers, that's a big point.
 

Alterra

New Member
Feb 26, 2023
There is also the question of CPU price. While the 6511P might be only moderately more expensive than the 285K, it is a lot more expensive than, for example, the 235. Yes, the 285K was mentioned by the topic starter, but such cheap CPUs are not an option with the X14SBI-F. Same for the iGPU, if it is needed for any reason. With all their shortcomings, I do like the Xeon E3/E line for light-duty server and workstation work.

I'd also question the value of IPMI for home use. The motherboard will cost some $50 more and consume maybe 5 to 10W more power, idle or not. But maybe you guys have bigger homes with more servers. :)
 

nabsltd

Well-Known Member
Jan 26, 2022
X14SBI-F is in a different league for sure, but the idle power is probably much higher, too.
ATX server motherboards do draw a bit more power than a mini-server board specifically targeted at low power. And historically, Xeons seem to be much worse than their desktop counterparts; in particular, they don't seem to have the super-low-power C-states available. Add in the fact that most add-in cards designed for servers (like 100Gbit+ NICs) don't seem to care much about dropping to a lower power state, and you end up with a lot more power used.

My take is that a server won't ever be truly idle long enough to take advantage of a lot of the power-saving tricks that work on a desktop machine.
 
  • Like
Reactions: nexox