LGA 1851 Arrow Lake "Servers"


Vidmo

Member
Feb 3, 2017
74
35
18
ok, I understand it's important to sort things out.
bad cpu?
- The system is completely stable even when running high-load tasks, so I do not think the CPU is the issue.

The motherboard locks up when any of these tools are used: CPU-Z, HWiNFO or HWMonitor.
 

RolloZ170

Well-Known Member
Apr 24, 2016
9,362
2,997
113
germany
The motherboard locks up when any of these tools are used: CPU-Z, HWiNFO or HWMonitor.
i already told you W11 runs all apps in a VM/sandbox by default.
CPU-Z, HWiNFO and HWMonitor want to write to protected things.
i would disable VBS and check what happens then.
 

Vidmo

Member
Feb 3, 2017
74
35
18
Supermicro has confirmed the defect. Until they can fix the BIOS and IPMI, they suggest disabling the Network Feature Editing (NFE) in the IPMI. They provided an ipmitool raw command to do so.
 
  • Like
Reactions: Stovar

Zervun

Member
Feb 2, 2019
66
16
8
Oregon
Has anyone found these with 10G on them w/ ECC? Upgrading my Unraid server after years (have a gutted X9 Supermicro dual-proc v2 system: 256 GB of RAM, a Mellanox doing 40G to my Brocade 6610, ~300 TB of spinners (mostly 28/26TB now), a 9207, and an Nvidia card in it). The power bill is not fun, but back in the day the build was cheap and electrical rates were way better.

Been tossing around getting one of these w/ a 265K, eliminating the Nvidia card; was hoping for 10G so I didn't have to use the Mellanox. I have a 9400 card for the backplane. Know I'm going to have to pucker on the ECC RAM.

The Asus seems like good bang for the buck, but it's 2.5G, and I need the 4x NVMe, which the Supermicro doesn't have (and also no 10G).

The Gigabyte AI TOP W880 is perfect layout-wise but supposedly not available in the US (found it on Provantage, but it's only "special order").

Not sure I could bring myself to go non-ECC. Have considered going AMD, but idle power + having to have an Intel graphics card in it isn't ideal.
 
  • Like
Reactions: zzz111

autoturk

Well-Known Member
Sep 1, 2022
290
271
63
Has anyone found these with 10G on them w/ ECC? Upgrading my Unraid server after years (have a gutted X9 Supermicro dual-proc v2 system: 256 GB of RAM, a Mellanox doing 40G to my Brocade 6610, ~300 TB of spinners (mostly 28/26TB now), a 9207, and an Nvidia card in it). The power bill is not fun, but back in the day the build was cheap and electrical rates were way better.

Been tossing around getting one of these w/ a 265K, eliminating the Nvidia card; was hoping for 10G so I didn't have to use the Mellanox. I have a 9400 card for the backplane. Know I'm going to have to pucker on the ECC RAM.

The Asus seems like good bang for the buck, but it's 2.5G, and I need the 4x NVMe, which the Supermicro doesn't have (and also no 10G).

The Gigabyte AI TOP W880 is perfect layout-wise but supposedly not available in the US (found it on Provantage, but it's only "special order").

Not sure I could bring myself to go non-ECC. Have considered going AMD, but idle power + having to have an Intel graphics card in it isn't ideal.
Regarding the Asus, I'm not sure you'll gain much from a power-efficiency standpoint with an integrated 10Gb port, and you have more than enough PCIe slots to put both your 9400 and a 10Gb card on the board. The latter can go in the last PCIe x4 slot without losing any of the M.2 slots.
 

Vidmo

Member
Feb 3, 2017
74
35
18
Has anyone found these with 10G on them w/ ECC?
None that I'm aware of. I use an LSI 9560-16i in the primary PCIe 5.0 x16 slot and an Intel X710-T2L in the other. Works well. The nice thing about the Intel Core Ultra CPUs is that you get more PCIe lanes than with the previous generation.
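As a rough sketch of that lane budget: the counts below are my assumption for an Arrow Lake-S desktop part (16 Gen5 lanes for the x16 slot, 4 Gen5 for an M.2, 4 Gen4 extra), and the device loadout is hypothetical, loosely matching the post. Verify against Intel ARK for your exact SKU.

```python
# Assumed CPU-attached lane pools for an Arrow Lake-S desktop part.
CPU_LANES = {"pcie5_x16_slot": 16, "pcie5_m2": 4, "pcie4_x4": 4}

# Hypothetical loadout similar to the post: both cards are x8 electrically.
devices = {
    "LSI 9560-16i": 8,
    "Intel X710-T2L": 8,
    "boot NVMe": 4,
}

used = sum(devices.values())
total = sum(CPU_LANES.values())
print(f"used {used} of {total} CPU-attached lanes")
```

The point is simply that this build still fits inside the CPU-attached lanes, leaving the chipset lanes free for extra NVMe.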
 
  • Like
Reactions: Stovar

Zervun

Member
Feb 2, 2019
66
16
8
Oregon
Regarding the Asus, I'm not sure you'll gain much from a power-efficiency standpoint with an integrated 10Gb port, and you have more than enough PCIe slots to put both your 9400 and a 10Gb card on the board. The latter can go in the last PCIe x4 slot without losing any of the M.2 slots.
Thanks, and valid point - I thought I read somewhere that the Mellanox cards are power hungry, but I guess mine is on DAC anyway. I have like 5 darn Mellanox cards I've flashed (I have an ESXi box and also a FreeNAS 4U Supermicro). Power is not my friend.
 

Zervun

Member
Feb 2, 2019
66
16
8
Oregon
None that I'm aware of. I use an LSI 9560-16i in the primary PCIe 5.0 x16 slot and an Intel X710-T2L in the other. Works well. The nice thing about the Intel Core Ultra CPUs is that you get more PCIe lanes than with the previous generation.
Thanks, yeah - that's what appeals to me about the W880. I've always hated the PCIe bus dance. My last "gaming/main/music production" computer was a Threadripper 3960X, which I went with pretty much just for the lanes, with a bunch of M.2s and an audio card, and I refuse to give up my Intel P900 OS drive. Now running an X670E Godlike, which I had to get because I still wanted to keep the Optane; it doesn't share the last slot bus-wise and still runs graphics at x16.

I thought about repurposing the mobo (Gigabyte TRX40 Extreme) and the 3960X, but supposedly it idles at stupid wattage. I haven't personally measured it, but supposedly 100+ W. That, and getting older ECC RAM. Not to mention needing to get something like an Intel graphics card as well.
 

Zerokwel

New Member
Oct 21, 2022
12
9
3
Newegg just started selling the ASRock W880D4ID-2Q, which has dual 25GbE ports.

Albeit with an eye-watering price of $559, and it's best to make sure the deep board design and fewer expansion options fit your needs.

I'm personally leaning towards the ASRock W880D4U, as I cannot justify the close-to-$200 price difference. I'm still considering the Supermicro X14SAE or the Asus Pro WS W880-ACE SE, but again, both are an unjustifiable $200 more.

I'm looking for feedback on how to justify the additional $200...
 
  • Like
Reactions: Stovar

Zervun

Member
Feb 2, 2019
66
16
8
Oregon
Newegg just started selling the ASRock W880D4ID-2Q, which has dual 25GbE ports.

Albeit with an eye-watering price of $559, and it's best to make sure the deep board design and fewer expansion options fit your needs.

I'm personally leaning towards the ASRock W880D4U, as I cannot justify the close-to-$200 price difference. I'm still considering the Supermicro X14SAE or the Asus Pro WS W880-ACE SE, but again, both are an unjustifiable $200 more.

I'm looking for feedback on how to justify the additional $200...
I've seen those. Kinda cool/interesting little boards; my only issues with them are only 2x M.2 and maybe cooling? With my Unraid box I've got multiple cache pools of 4 drives apiece (reducing that down to just 2x 2-NVMe pools).

Also not quite sure about ATX power on them; it says it's supported, and maybe it just needs an ATX splitter? I'm still using my Supermicro 4U SQ dual power supplies going to two different APCs (I replaced the fan wall with 120s). Also unsure if the 4U has micro/mini ATX standoffs, although I could tap some.

I am currently running both Unraid and FreeNAS (the primary NAS, because ZFS), and I have a Synology 1815+ collecting dust because it's going EOL at some point (and, well, questionable decisions on Synology's part). I was considering just two Unraid boxes, since it supports ZFS now, which I want for the NAS part (primary and backup, and I have a tape library) - one big, one small as a backup. I might consider one of the ASRock boards for the backup box.

The Asus has 4x M.2s and can add more via a bifurcated card, I think. I'm used to the Supermicro IPMI stuff though - fan scripts, etc. I'm sure the Asus IPMI is pretty lame, but I have a GL.iNet KVM setup.

I really wanted the Gigabyte one, https://www.gigabyte.com/Motherboard/W880-AI-TOP (not available in the US).

I currently have 3x 4U fan-wall-replaced Supermicro servers: 2x (Unraid/FreeNAS) with X9 dual procs, and a 3rd that I should have thought about power on a few years back before I built it (a dual-proc X10DRH-CT), which was going to be an ESXi server. I never could get passthrough right with my off-brand flashed LSI card and ended up not fiddling with it due to power - all servers, drives spinning, and network switches were sucking down over 900 watts per my back-of-napkin and IPMI calculations. Power usage with everything up, plus my Brocade 6650 and 6610, is like having a colonoscopy.
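For anyone curious what 900 W continuous actually costs, a quick sketch of the napkin math (the electricity rate is a placeholder - plug in your own tariff):

```python
# Reproducing the back-of-napkin power math above.
WATTS = 900           # continuous draw from the IPMI/napkin estimate
RATE_PER_KWH = 0.15   # assumed USD per kWh, purely illustrative

kwh_per_month = WATTS / 1000 * 24 * 30
cost_per_month = kwh_per_month * RATE_PER_KWH
print(f"{kwh_per_month:.0f} kWh/month, about ${cost_per_month:.0f}/month")
```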

Have thought about doing Proxmox as well, but unsure if I want virtualization in front of the main critical NAS. I've only done napp-it (way back in the day) and ESXi, and I don't want those in front of the file share either.
 
Last edited:

nexox

Well-Known Member
May 3, 2023
1,959
975
113
Kinda cool/interesting little things, my only issue with them is only 2x M.2
Three 4.0x4 OCuLink ports are much better than M.2 slots as far as I'm concerned (unless you need to use some for SATA); an M.2 adapter is pretty cheap, or you have the option of using U.2 drives.
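To see why an OCuLink port gives up nothing versus an M.2 slot: both a 4.0x4 OCuLink port and a Gen4 M.2 slot are four PCIe 4.0 lanes, so they share the same raw throughput ceiling. A quick check of that ceiling:

```python
# Raw throughput ceiling of four PCIe 4.0 lanes (OCuLink 4.0x4 or Gen4 M.2).
TRANSFER_RATE = 16e9   # PCIe 4.0: 16 GT/s per lane
ENCODING = 128 / 130   # 128b/130b line encoding (PCIe 3.0 and later)
LANES = 4

bytes_per_sec = TRANSFER_RATE * ENCODING / 8 * LANES
print(f"PCIe 4.0 x4 ceiling: {bytes_per_sec / 1e9:.2f} GB/s")
```

Real drives land below this due to protocol overhead, but the slot type itself isn't the bottleneck.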
 

Zervun

Member
Feb 2, 2019
66
16
8
Oregon
Three 4.0x4 OCuLink ports are much better than M.2 slots as far as I'm concerned (unless you need to use some for SATA); an M.2 adapter is pretty cheap, or you have the option of using U.2 drives.
Thanks for the tip - I've never even heard of it. Off to do some searching. Currently I have Supermicro backplanes/HBA with spinners and SSDs (the Unraid server has 220 TB total: 12x spinners and 2x 4x 1TB SSD RAID 10 pools of 2.5" SSDs, all on the backplane) - I was going to move those SSDs to NVMes on the mobo. Wonder how OCuLink could fit into the setup - I'd have to find someplace to mount the U.2 drives. Or would it go OCuLink -> external M.2 or U.2 case thingy?
 

Zervun

Member
Feb 2, 2019
66
16
8
Oregon
Three 4.0x4 OCuLink ports are much better than M.2 slots as far as I'm concerned (unless you need to use some for SATA); an M.2 adapter is pretty cheap, or you have the option of using U.2 drives.
Well, I partially answered my own question with some searching - can't believe I've never heard of this before.

On the downside, this throws up a ton of new design options and the ability to use some used enterprise SSDs, which I wanted to do anyway. Jeez, thanks man for sending me down a new rabbit hole ;)
 
  • Like
Reactions: nexox

Zervun

Member
Feb 2, 2019
66
16
8
Oregon
Anyone got any tips on where to find decent 2x 32GB or 2x 48GB ECC UDIMM sticks for these? Not many options, and out of stock everywhere at fairly insane prices.

The Asus W880's compatibility list has very limited ECC options at 32GB or greater. I found slight availability for Kingston sticks, but they're not on the list (not that that has stopped me before).
 

Vidmo

Member
Feb 3, 2017
74
35
18
Anyone got any tips on where to find decent 2x 32GB or 2x 48GB ECC UDIMM sticks for these? Not many options, and out of stock everywhere at fairly insane prices.
I'm using 4 x Kingston DDR5 32GB 4800MHz ECC KSM48E40BD8KM-32HM

I originally had two of those in my X13SAE-F and just brought them over to the X14SAE-F as they can now run at full rate on that platform. Then bought two more off eBay a few months ago.
 
  • Like
Reactions: Zervun

Zervun

Member
Feb 2, 2019
66
16
8
Oregon
I'm using 4 x Kingston DDR5 32GB 4800MHz ECC KSM48E40BD8KM-32HM

I originally had two of those in my X13SAE-F and just brought them over to the X14SAE-F as they can now run at full rate on that platform. Then bought two more off eBay a few months ago.
Thanks - interesting that the mobo manual says that with 4 sticks it only supports 1R, and these are 2R. Guess if it works, it works ;)

Are you using any U.2 drives with this?
 

Vidmo

Member
Feb 3, 2017
74
35
18
The manual does state that it will support 4400 MT/s (2DPC, 2R DIMM), which is what these are running at. Still a higher rate than the X13 with this same memory.

No U.2 drives here. Storage consists of a Gen5 NVMe (boot drive) and internal RAID via an LSI MegaRAID 9560-16i. I also have an external RAID device connected via Thunderbolt. That uses up all of the PCIe lanes coming from the Intel CPU. If I need to, I can still use the PCIe lanes on the chipset for more NVMe drives running at x4 speed.
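For reference, a quick sketch of what the 4800 vs. 4400 MT/s difference means for peak bandwidth, assuming the usual 64 bits (8 bytes) per DDR5 channel and two channels:

```python
# Peak-bandwidth comparison for the DDR5 speeds discussed above.
def ddr5_peak_gbs(mt_per_s: int, channels: int = 2) -> float:
    # MT/s x 8 bytes per transfer per channel x number of channels
    return mt_per_s * 1e6 * 8 * channels / 1e9

for speed in (4400, 4800):
    print(f"DDR5-{speed}: {ddr5_peak_gbs(speed):.1f} GB/s peak")
```

The 2DPC/2R downclock costs roughly 8% of peak bandwidth, which rarely matters for a NAS workload.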
 
  • Like
Reactions: Zervun