LGA 1700 Alder Lake "Servers"

On the Supermicro vs. Asus: it doesn't surprise me that the Asus board outperforms the Supermicro board, because Supermicro tends to be very conservative with their timings in favor of stability. They also strictly enforce the default PL1 and PL2 power limits, whereas Asus may not. You need to check those to ensure the CPUs are operating identically.
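For reference, on Linux you can read the PL1/PL2 values the firmware actually applied from the RAPL powercap interface; a minimal sketch (assumes the standard intel-rapl sysfs layout, may need root):

    # Read the active PL1/PL2 package power limits (reported in microwatts).
    base = "/sys/class/powercap/intel-rapl:0"  # CPU package 0

    def read_uw(path):
        with open(path) as f:
            return int(f.read().strip())

    pl1 = read_uw(f"{base}/constraint_0_power_limit_uw")  # long-term limit (PL1)
    pl2 = read_uw(f"{base}/constraint_1_power_limit_uw")  # short-term limit (PL2)
    print(f"PL1 = {pl1 / 1e6:.0f} W, PL2 = {pl2 / 1e6:.0f} W")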
 
Also I have been running my S13SAE with a PL1 of 225W for several months now. Rock solid. You can use the PL overrides in the BIOS to do this (overriding the default). Just make sure you have proper cooling in place.
 

0verflowx101

New Member
Also I have been running my S13SAE with a PL1 of 225W for several months now. Rock solid. You can use the PL overrides in the BIOS to do this (overriding the default). Just make sure you have proper cooling in place.
Does the S13SAE support a wide range of CPU coolers?

I couldn't find anything on the official page regarding compatibility.
 
Does the S13SAE support a wide range of CPU coolers?

I couldn't find anything on the official page regarding compatibility.
The socket dimensions are standard and the clearances appear to be just as good as with gaming boards. I am using the Noctua NH-U12A cooler.
 

infuriatedream

New Member
I built an ASUS Pro WS W680-ACE system last night with 2x KSM48E40BD8KM-32HM 32GB ECC and a 13700K.
I installed Windows to update the ME firmware and checked ECC; Windows reported ECC status "5", which is good AFAIK. And of course I updated the BIOS to 2305. After the BIOS update and settings reset, memory was at 4800 MHz, and >1h of MemTest86 10.3 reported no problems.

Then I tried to install Proxmox 7.4, which failed because the X server would not work with the iGPU, something I was unable to solve. So I put a semi-old Radeon in the system and removed it right after installing Proxmox (removing the GPU reordered the network ports in Proxmox! Weird, but this was something I knew how to fix).

Everything is fine now; I'm very happy with the ASUS W680 system. So glad I held off and avoided the utter horror that is the MW34SP0.
The board was a pleasure to work with; the Thermalright contact frame & Dark Rock Pro 4 cooler fit well. The board has no RGB header(s), btw :)
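For context, the "5" is the SMBIOS/WMI MemoryErrorCorrection code; a quick sketch of the mapping as I understand it from the Win32_PhysicalMemoryArray documentation (worth double-checking):

    # SMBIOS / Win32_PhysicalMemoryArray.MemoryErrorCorrection code values
    ECC_CODES = {
        0: "Reserved",
        1: "Other",
        2: "Unknown",
        3: "None",
        4: "Parity",
        5: "Single-bit ECC",  # the value the W680 board reported here
        6: "Multi-bit ECC",
        7: "CRC",
    }
    print(ECC_CODES[5])  # -> Single-bit ECC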

My BIOS setting recommendations for low power usage as a virtualization host (a sketch for verifying the ASPM/C-state settings from Linux follows the list):
  • Advanced ⇒
    • Platform Misc ⇒ Native ASPM: Enabled
    • CPU Configuration ⇒ Active Efficiency Cores: 0
    • CPU Configuration ⇒ CPU Power Management Control ⇒
      • C-States: Enabled
      • Enhanced C-States: Enabled
      • Package C-State Limit: C10
    • System Agent ⇒ VMD Setup ⇒ Enable VMD: Disabled
    • PCH Storage Configuration:
      • Aggressive LPM: Enabled
    • Thunderbolt Configuration ⇒ PCIe Tunneling over USB4: Disabled
    • APM Configuration:
      • Restore Power Loss: Power On
      • Power On by PCI-E: Enabled
    • Onboard Devices:
      • HD Audio: Disabled
      • Connectivity Mode: Disabled (this board has no WiFi/BT, so the setting seems moot)
      • Q-Code LED Function: Auto (after POST, the Q-Code displays the CPU temperature!)
      • Serial Port: Disabled / Parallel Port: Disabled
  • Boot ⇒ Boot Configuration ⇒ Fast Boot: Disabled
  • Tool ⇒ Armoury Crate: Disabled
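To check from Linux whether ASPM and the deep package C-states actually took effect, something like this works (standard sysfs paths assumed, adjust to taste):

    # Show the active ASPM policy and the idle states the BIOS exposes to the OS.
    import glob

    def show(path, label):
        try:
            with open(path) as f:
                print(f"{label}: {f.read().strip()}")
        except OSError as e:
            print(f"{label}: unavailable ({e})")

    # "[default]" selected here generally means the BIOS-programmed ASPM is in use.
    show("/sys/module/pcie_aspm/parameters/policy", "ASPM policy")

    # C10 should show up among cpu0's idle states if the BIOS allows it.
    for d in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")):
        show(f"{d}/name", d.rsplit("/", 1)[-1])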

With Proxmox installed, no further optimizations (the CPU governor is still 'performance'), and 1 semi-idle Windows 10 VM running, the system is currently using ~24-25W (with one 980 Pro 'OEM Edition' M.2 1TB installed so far). I'm sure further optimizations are possible, both in the BIOS and in Proxmox, to reduce power draw even a little further.
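For anyone who wants to try the governor change mentioned above, it is just a sysfs write; a minimal sketch (requires root; with intel_pstate the choices are 'performance' and 'powersave'):

    # Switch every CPU to the 'powersave' cpufreq governor; on intel_pstate
    # this still boosts under load but idles noticeably lower.
    import glob

    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
        with open(path, "w") as f:  # needs root
            f.write("powersave")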
 

unwind-protect

Active Member
My BIOS setting recommendations for low power usage as a virtualization host: … CPU Configuration ⇒ Active Efficiency Cores: 0 … [full post quoted above]
Wait, you turn the efficiency cores off for this?
 

ddr5ecc

New Member
I opened a support case about the total width being misreported as 80-bit (it should report a 72-bit total width).

I'm interested to see whether they fix this.
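For anyone who wants to check what their own board reports, the width is visible in the SMBIOS tables; a sketch that shells out to dmidecode (requires root):

    # Print the SMBIOS-reported total vs. data width for each DIMM.
    # ECC is present when Total Width > Data Width (e.g. 72 or 80 vs. 64 bits).
    import subprocess

    out = subprocess.run(["dmidecode", "--type", "memory"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        line = line.strip()
        if line.startswith(("Total Width:", "Data Width:", "Part Number:")):
            print(line)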
 

Alex15326

New Member
With Proxmox installed and no further optimizations … the system is currently using ~24-25W … [full post quoted above]
Hello,

Is this power consumption (24-25W) with the IPMI card or without it? I am asking since I'm in the same situation as you (leaning toward not buying the Gigabyte motherboard), but I don't know if the power consumption is comparable. If this is with the IPMI card, it could mean that without it, and with a LAN port (or two) disabled, it could go down another 10W or more.


Another question, to you and to the other members:

Has anyone installed TrueNAS Scale on the ASUS board and confirmed that it supports iGPU passthrough? I have read through the whole thread and some other forums, but can't seem to find information about that.

I want to be able to use the iGPU for hardware decoding and such without a dedicated GPU (I'm planning to buy an i5-13600K), but your trouble installing Proxmox has me somewhat worried about whether that was a software problem (of Proxmox) or something wrong with the motherboard.
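Not TrueNAS-specific, but one generic sanity check for passthrough is whether the iGPU sits in its own IOMMU group; a sketch, assuming the IOMMU is enabled in the BIOS and on the kernel command line:

    # List IOMMU groups and their PCI devices; for clean passthrough the
    # iGPU (usually 0000:00:02.0) wants a group to itself.
    import glob, os

    for grp in sorted(glob.glob("/sys/kernel/iommu_groups/*"),
                      key=lambda p: int(os.path.basename(p))):
        devs = os.listdir(os.path.join(grp, "devices"))
        print(f"group {os.path.basename(grp)}: {', '.join(devs)}")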
 

ddr5ecc

New Member
Where? Intel, Supermicro, Asus or Kingston?
Asus.

Supermicro reported it correctly.
It is not a problem with Intel or Kingston.

Asus doesn't have a single ECC module recommendation on the QVL, but says on their homepage that they support ECC.

This board's price tag is very high, so I want them to deliver, and ECC is the only differentiating factor compared to a Z790 (I have the non-IPMI version).

I think they can fix this in the next BIOS version.
 

twin_savage

Member
Maybe the slots are only wired as 2x 36-bit?
We will see.
It sounds like a BIOS issue on the Supermicro motherboard if the same 80b modules were swapped between them.
It was my understanding that DDR5 ECC DIMMs came in 80b widths as a kind of gold standard, plus 72b varieties that the hyperscalers insisted on to save on price at the cost of reduced functionality; that reduced functionality isn't 1-bit vs. 2-bit detection, it is more along the lines of how errors are handled depending on where they occur in the memory chain.
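The two widths fall straight out of DDR5's split-bus layout; a back-of-the-envelope sketch:

    # DDR5 splits each DIMM into two independent 32-bit subchannels, and the
    # ECC variants add either 8 or 4 check bits per subchannel.
    subchannel_data = 32
    print(2 * (subchannel_data + 8))  # 80: the "gold standard" 2x 40-bit layout
    print(2 * (subchannel_data + 4))  # 72: the hyperscaler cost-down 2x 36-bit layout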
 

RolloZ170

Well-Known Member
Perhaps the width listed under the features in the spec is wrong, because that same data sheet says that the module is made up of "twenty 2G x 8-bit FBGA components," which would indicate 80b.
I thought similarly, but Micron sells RDIMMs with both 80-bit AND 72-bit widths.
Because that same data sheet says that the module is made up of "twenty 2G x 8-bit FBGA components," which would indicate 80b
Eight 8-bit chips and two 4-bit chips will not work together, because the chip timings are different.
 

twin_savage

Member
thought similar. but micron sells RDIMM with 80bit AND 72bit width.
Yes, there are definitely both of those kinds of RDIMMs in the wild. I had assumed all (or at least the vast majority) of the UDIMMs were 80b, since the hyperscalers don't use UDIMMs and 72b was mostly requested by the hyperscalers.


Edit:
I looked around for a reference to the hyperscalers requesting 72b DIMMs, and a Genoa launch analysis was the only thing I could find:

"The support for 72-bit and 80-bit DIMMs is noteworthy. Most servers will use 80-bit ECC, but some hyperscalers want to cut down to 72-bit. There are still some ECC capabilities relative to the 64-bit that non-ECC memory has, but less than the mission-critical 80-bit that is widely used. The advantage here is that there is 1 less DRAM die for parity checks. The "Bounded Fault" capability also assists with this because if errors are detected in the memory devices, these issues can be mapped."
 

infuriatedream

New Member
Wait, you turn the efficiency cores off for this?
Only because of virtualization. I'm pretty sure even the latest VMware ESXi version refuses to boot on ADL/RPL CPUs if heterogeneous cores are present.
I'm using Proxmox, which is more tolerant as far as I know, but I'm really skeptical that it is capable of intelligently assigning the right cores to the right VMs depending on workload. Since 8 very-high-performance cores are plenty for now, I decided to steer clear of potential problems; I will maybe enable them in the future.
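If the E-cores ever get re-enabled, plain CPU affinity is one way to keep a latency-sensitive workload on the P-cores; a hedged sketch using Linux scheduler affinity (the core numbering is an assumption: on a 13700K the 8 P-cores typically enumerate as logical CPUs 0-15, with E-cores after; verify with lscpu --extended):

    # Pin the current process to the P-core logical CPUs only.
    # ASSUMPTION: P-cores enumerate first (logical CPUs 0-15 on a 13700K).
    import os

    P_CORE_CPUS = set(range(16))          # 8 P-cores x 2 threads (hypothetical layout)
    os.sched_setaffinity(0, P_CORE_CPUS)  # pid 0 = this process
    print(os.sched_getaffinity(0))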

Is this power consumption (24-25W) with the IPMI card or without it? […]
I have the version without IPMI.

The power consumption could potentially be lower with further BIOS settings (there are additional ASPM settings which I have not changed so far, also maybe disabling SATA completely might help).

And the (high quality!) power supply I'm using is not one known for excellent efficiency at low loads. Swapping it for one that is (the Corsair RM550X 2021 is supposedly excellent but currently hard to get) will almost certainly shave off an additional 5W. And within Linux there are things like powertop and choosing a more efficient CPU governor that I have not done yet. Given the very basic optimizations so far, I'd say the idle power draw has the potential to be pretty good. Which is, by the way, one of the main reasons why I didn't go for a 7950X-based system (which would likely be more efficient under load, but certainly less efficient at idle).

... confirm that it supports iGPU passthrough ...
... your problem with installing Proxmox has me somewhat worried whether this is a software problem (of Proxmox) ...
I really have no clue at all about iGPU passthrough, but I'm sure my slight installation woes were purely because the X server graphics driver in Proxmox 7.4 is just a little too old to recognize the Raptor Lake iGPUs.
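One quick way to tell a too-old driver from a board problem is to check whether the kernel's i915 driver bound to the iGPU at all; a sketch (0000:00:02.0 as the iGPU's PCI address is an assumption, check lspci):

    # Report which kernel driver, if any, claimed the integrated GPU.
    import os

    dev = "/sys/bus/pci/devices/0000:00:02.0"  # usual iGPU address (assumption)
    drv = os.path.join(dev, "driver")
    if os.path.islink(drv):
        print("bound to:", os.path.basename(os.readlink(drv)))
    else:
        print("no driver bound - kernel/driver likely too old for this iGPU")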
 

Alex15326

New Member
I have the version without IPMI. … [full reply quoted above]
Thank you for the information! I also read about that Corsair PSU and could have bought it, but decided on a Corsair HX750 Platinum instead. It is one of the few good PSUs on the LTT PSU list that still has 25A on each of the 5V and 3.3V rails, with a total of 150W shared between them (most other PSUs are around 20A), which I can use for an SSD-only NAS build. The 12V rail is pretty much useless if you don't have a hungry GPU or HDDs, yet it is commonly the only rail on which you can draw the full PSU wattage.
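For what it's worth, the shared cap is the binding limit, not the per-rail amperage; rough numbers using the figures above:

    # Why the 150 W shared 5V/3.3V cap matters for an SSD-only build:
    w_5v  = 25 * 5.0     # 125.0 W if the 5V rail alone were maxed out
    w_3v3 = 25 * 3.3     #  82.5 W if the 3.3V rail alone were maxed out
    print(w_5v + w_3v3)  # 207.5 W nominal sum, but only 150 W is allowed combined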

I really have no clue at all about iGPU passthrough, but I'm sure my slight installation woes were purely because the X server graphics driver in Proxmox 7.4 is just a little too old to recognize the Raptor Lake iGPUs.
Thank you for the clarification! I decided to do a bit of research while waiting for your answer, and it seems there is an opt-in kernel update channel that can help with the iGPU: a Linux kernel >= 5.16 supposedly adds support for 13th-gen iGPUs, and 5.19 and 6.1/6.2 kernels are available for Proxmox 7.x via apt-get.
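A quick way to confirm the running kernel clears the >= 5.16 threshold mentioned above:

    # Check whether the running kernel meets the iGPU-support threshold.
    import platform

    major, minor = (int(x) for x in platform.release().split(".")[:2])
    ok = (major, minor) >= (5, 16)
    print(platform.release(), "->", "ok" if ok else "consider the opt-in newer kernel")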

Also, it seems TrueNAS Scale won't support this until they update their kernel to 6.1 LTS for their Cobia release, allegedly later this year. Until then you can run TrueNAS in a VM, which is frowned upon.