LGA 1700 Alder Lake "Servers"


collinmcguire

New Member
May 31, 2023
I have two Asus Pro W680-ACE IPMI builds, one with i9-13900k, the other with i7-13700k. Both are servers that get run pretty hard. Both work well.

I do not remember what memory I used, other than that I selected it from the list of memory Supermicro lists as compatible with their W680-based X13SCA-F-O. I've always used Supermicro boards in my server builds, but the X13SCA-F-O was a bit hard to come by when I did these, so I took a chance on the Asus.
Do you regret not going with a Supermicro board? I am actually looking to do the same style of server build with the Asus board, because it is $400 new on Newegg, and the X13SAE is $350 open box but $500+ new. They seem to have very similar features, except that the PCIe 3.0 slots on the Asus board are full x16, which gives you a little more flexibility.

If you don't mind me asking, what do you run on your LGA 1700 server? My desktop is Alder Lake, but I haven't run Proxmox on it, and that is what I intend to do with the server build. My main reason for asking is that the P+E cores used to be a bit of an issue for the Linux kernel scheduler; that has supposedly been fixed in newer kernels, and Proxmox's kernel can be updated. I haven't come across anyone yet who is doing an LGA 1700 server.

I also searched for an X13SAC board but couldn't find one. Do you know where I would be able to find it?
 

jas67

New Member
Jun 15, 2023
My only regret so far in not going with Supermicro is that the IPMI is an add-in PCI-e board with some extra small wires to connect for power and reset, versus being integrated into the board as it is on the Supermicro.

I've not had any experience with the Supermicro X13 boards, but I will say the X11SCA-F-O that I replaced with one of these was a great board, BUT... after I upgraded the IPMI firmware to the latest, I could no longer access the HTML5 KVM console. Thankfully, I still had the firmware files from a previous upgrade and was able to roll back to that version. I don't know if Supermicro started charging for HTML5 KVM access or not, but I do know that they have a history of making some IPMI features paid upgrades.

Supermicro boards are second to none with respect to quality. If the HTML5 KVM is a free feature with the X13, you cannot go wrong with the X13SAE-F-O.

That said, I've been very happy with the Asus boards, though on the one with the i9-13900k, I was unable to use the X1 PCIe slot closest to the CPU for the IPMI board due to the size of the D15 cooler that I used on it. I had to instead sacrifice one of the four X16 slots (IIRC, the two furthest from the CPU are X4 electrically anyway). No big deal on that one, as I have no other PCI-e boards in that server, just 3 M.2 SSDs.

If you have any legacy PCI cards that you want to use, the X13SAE-F-O interestingly enough has one of those slots. Both boards have a pair of x16 PCI-e 5.0 slots. The X13SAE-F-O has two x4 PCI-e 3.0 slots, where the Asus also has two x4 PCI-e 3.0 slots but with x16 connectors, at least allowing x8 or x16 boards, albeit at a slower speed. The Asus does have a single x1 slot closest to the CPU that would be populated with the IPMI card if you're not using a huge cooler.

Both boards have a total of 16 lanes of PCI-e 5.0 available, either all allocated to one slot, or split x8/x8 between the two. Both support PCI-e bifurcation for risers.

So, I/O wise, the two boards are pretty equivalent except for the PCI slot on the Supermicro.

The Asus has a Slim-SAS connector that can be used in either SATA or PCI-e 4.0 mode. Supermicro does not list this on their info page.
There are adapters that can be used to add a fourth M.2 SSD connected via this connector.

The Asus boards have been rock-solid in these servers.

Both servers are running Ubuntu 22.04 LTS with the HWE kernel (5.19-something) to get Thread Director support for the P/E-core architecture. Both run Docker containers for various web services; the i9-13900k one additionally has VMware Workstation running on it. I have added a systemd service for VMware Workstation to allow automatically starting some VMs, as earlier versions of Workstation supported for more of a server use case. I also run several VMs with remote consoles via VNC for SW development, as this system is way faster than my work laptop.
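For anyone wanting to do the same, a minimal sketch of that kind of systemd unit, using VMware's `vmrun` command-line tool (the unit name and the .vmx path below are placeholders, not my exact setup):

```ini
# /etc/systemd/system/vmware-autostart.service  (name and VM path are placeholders)
[Unit]
Description=Autostart selected VMware Workstation VMs
After=vmware.service network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# "-T ws" selects Workstation; "nogui" starts the VM headless
ExecStart=/usr/bin/vmrun -T ws start /srv/vms/build/build.vmx nogui
ExecStop=/usr/bin/vmrun -T ws stop /srv/vms/build/build.vmx soft

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable vmware-autostart` makes the VMs come up with the server.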

In addition to VMWare, the i9-13900k is running the following applications in Docker containers:
  • GitLab
  • GitLab runner for CI/CD builds
  • Bugzilla
  • 4 MediaWiki instances
  • Docker Registry
  • Several proprietary web services

The i7-13700k server is running the following applications in Docker containers:
  • GitLab runner for CI/CD builds
  • Owncloud
  • OS-Ticket
  • Docker Registry
  • Several proprietary web services
Both are also running Nginx and Apache.

Both also have users interactively logging in and doing SW builds in Docker containers.
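For anyone starting from scratch, a setup like the above is easy to capture in a compose file; a minimal sketch for two of the services (image tags, ports, and volume paths here are illustrative, not my exact config):

```yaml
# docker-compose.yml (sketch; pin image versions for a real deployment)
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    volumes:
      - ./registry-data:/var/lib/registry
    restart: unless-stopped

  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    volumes:
      - ./runner-config:/etc/gitlab-runner
      # lets the runner launch sibling containers for CI jobs
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```

`docker compose up -d` with `restart: unless-stopped` also gets you the "come back after reboot" behavior for free.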

Performance is very good on both. I will say, at least for our use cases, the i9-13900k doesn't provide all that much more performance than the i7-13700k. Both have 8 performance cores. The i9 adds 8 additional efficiency cores.

When I get the time, I will be putting up an Ansys compute server instance on the i9 server. That will definitely put a load on it.


As for where to get an X13SAC, I wouldn't know. I usually just do Google searches in addition to checking the usual suspects, Newegg and Amazon.
I actually got the two Asus W680 boards through B&H Photo Video, as they were the only ones that had them in stock at the time.

EDIT:

More info: storage in the i9-13900k server is 3x 2TB Samsung 970 Evo Plus M.2 NVMe (left over from the previous motherboard; if buying new, I'd have gone with the 980 Pro).

Storage in the i7-13700k server is one 2TB Samsung 970 Evo Plus NVMe (again, left over from a previous server), plus an older RAID 6 array connected via an LSI 9361-8i SAS RAID controller (x8 PCIe).

Both servers are running 64 GB (2x32GB) DDR5 4800 ECC UDIMM memory.
 
May 20, 2020
To answer the question on what folks are running: I have two of the SM boards, one 12900K, one 13900K. I had some issues getting Windows 11 to be happy on the 13900K, but that was really early in the CPU's release.

Both have E-cores disabled and are number crunchers for CFD simulation. Both are watercooled, with wide-open PL1/PL2 limits. Both have been rock stable.
 

collinmcguire

New Member
May 31, 2023
Thank you both for the great information in your replies. It definitely helps to clarify a few things for me.

I know IPMI is the standard with servers, but do you use it often enough to make it worth it? The reason I ask is that the non-IPMI version of the Asus board is only $320.

Since there is an iGPU available for troubleshooting, and I do not have other servers (or plan to add more), there will be no out-of-band network, so I can't really see the use. I am definitely new to the server world, so is there something I'm missing, other than the comfort of being able to set the server up from my couch?

edit: how’s the power consumption at idle/low usage workloads if you have them?
 

Stankyjawnz

Member
Aug 2, 2017
The IPMI is definitely convenient; agreed, it's not essential for most home users. Having the BIOS set to power on after power restoration would probably solve 99% of my issues. On my last system, I'd occasionally have it hang at the BIOS "press F1 to continue" prompt, or hang on shutdown. In those cases the remote KVM or a remote power cycle can get you out. I do think a lot of people use it for initial configuration and then it doesn't do much. I will probably use it for fan control.

With the IPMI card and an HBA, mine idles around 53 W running a few Docker containers, including some 24/7 loads (security cameras). Without the add-on cards, it was idling around 33 W with a lot of stuff turned off, including the SATA controller. I'd be curious if anyone else has numbers to share.
 

collinmcguire

New Member
May 31, 2023
That seems on par for LGA 1700, and not terrible at all for the options you get with this setup. My 12600K idles around 15-20 W in Windows with nothing running besides some background apps that run at startup, with no tuning done besides enabling C-states.
 

drampelt

New Member
May 8, 2023
To add to the feedback that the Asus W680 seems to work with ECC for processors other than the 13900k, I tried the new memtest86 version 10.5 as mentioned by @drampelt

Asus w680 bios 2403
i5-12600k
2x MEM-DR532MD-EU48

Everything looks OK to me. Hopefully this revision of memtest helps remove confusion about whether ECC is working with DDR5 ECC memory.
Yep, with the new memtest version everything looks correct now on my system!
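For keeping an eye on ECC at runtime rather than rebooting into memtest, Linux's EDAC subsystem exposes corrected/uncorrected error counters in sysfs; a small sketch (it assumes whichever EDAC driver matches your memory controller is loaded, otherwise it just returns nothing):

```python
from pathlib import Path

def edac_error_counts(root: str = "/sys/devices/system/edac/mc") -> dict:
    """Return {controller: (corrected, uncorrected)} error counts from EDAC sysfs.

    Returns {} if no EDAC memory-controller driver is loaded.
    """
    counts = {}
    base = Path(root)
    if not base.is_dir():
        return counts
    for mc in sorted(base.glob("mc*")):
        ce, ue = mc / "ce_count", mc / "ue_count"
        if ce.is_file() and ue.is_file():
            counts[mc.name] = (int(ce.read_text()), int(ue.read_text()))
    return counts

if __name__ == "__main__":
    print(edac_error_counts() or "no EDAC memory controllers found")
```

A nonzero corrected count over time is actually good news here: it proves ECC is active and working.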

[attachment: memtest 10.5.jpg — memtest86 v10.5 results showing ECC working]
 

jas67

New Member
Jun 15, 2023
Thank you both for the great information in your replies. It definitely helps to clarify a few things for me.

I know IPMI is the standard with servers, but do you use it often enough to make it worth it? The reason I ask is that the non-IPMI version of the Asus board is only $320.

IPMI is a requirement for my use case, as I manage these servers remotely. It is very handy to be able to do a hard reset remotely if something hangs (though that rarely, almost never, happens), or to troubleshoot a network connection, such as when multiple network interfaces enumerate in a different order after a kernel upgrade (which happened to me once).
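For what it's worth, that enumeration-order problem can be headed off by pinning interface names; a sketch using a systemd .link file keyed on the NIC's MAC address (the file name, MAC, and chosen name are placeholders):

```ini
# /etc/systemd/network/10-lan0.link  (substitute your NIC's real MAC)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
```

With each interface matched by MAC, a kernel upgrade can reorder PCIe enumeration all it likes and your firewall/bridge configs keep working.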

I agree, IPMI is not strictly necessary for home systems, or systems that you are physically near all the time.
 

Alex15326

New Member
Apr 5, 2023
To add to the information about power usage: I see around 70 W idle with a 13600K CPU, 64GB ECC RAM, 14 SATA SSDs, one HBA, one 10Gb NIC, the IPMI card, two Coral TPUs, and fans (case, CPU, plus small fans added to the HBA and NIC, since the HBA was overheating in my PC case). I run TrueNAS with a Linux VM and will soon add a Windows VM (I haven't set everything up yet, so power usage may still go up).

One thing to note about power usage and C-states is that C-state residency is tied to PCIe devices. If you have even one PCIe device without ASPM support, you will never see the deeper (higher-numbered) CPU C-states, even with them enabled, because that device doesn't allow the CPU to drop to lower power states. In my case, even some onboard motherboard devices lack ASPM support; the IPMI card doesn't seem to support it either, almost no HBA card has it, and only specific NICs have ASPM support.

So if you want low power usage, your best bet is to keep NAS storage on a separate machine (if you need one) and run this board with as few PCIe devices as possible, all of them with ASPM support (no IPMI card, no HBA, and a NIC that supports ASPM). Otherwise it doesn't matter what else you tune, because the power is already being "wasted", and you can only shave around 5 W with some BIOS settings and powertop.
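If you want to find out which devices are holding the package out of deeper C-states, the ASPM status is visible in `lspci -vv`; here's a small Python sketch that flags devices whose link control reports ASPM disabled (the parsing is simplified and assumes lspci's usual output layout):

```python
import subprocess

def devices_without_aspm(lspci_vv: str) -> list[str]:
    """Return headers of devices whose LnkCtl line shows 'ASPM Disabled'."""
    flagged = []
    device = None
    for line in lspci_vv.splitlines():
        if line and not line[0].isspace():
            device = line.strip()  # new device header, e.g. "00:17.0 SATA controller: ..."
        elif "LnkCtl:" in line and "ASPM Disabled" in line:
            if device and device not in flagged:
                flagged.append(device)
    return flagged

if __name__ == "__main__":
    try:
        out = subprocess.run(["lspci", "-vv"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # lspci (pciutils) not installed
    for dev in devices_without_aspm(out):
        print(dev)
```

Run it as root for full LnkCtl detail; any device it prints is a candidate for keeping the package out of the deeper C-states.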

About the IPMI card: I too used it mostly for initial setup, since I didn't have a spare monitor. The only complaint I have is that its sensor readings aren't correct, which means its fan control isn't reliable (the temperature sources are also limited), and you need software-defined sources and curves instead (I haven't done this yet, since the current TrueNAS kernel doesn't support this IPMI model). Also, it can only control 4-pin PWM fans, and since most case and add-on fans are 3-pin, you need a separate fan hub or they will run at full speed.
 
May 20, 2020
Thank you both for the great information in your replies. It definitely helps to clarify a few things for me.

I know IPMI is the standard with servers, but do you use it often enough to make it worth it? The reason I ask is that the non-IPMI version of the Asus board is only $320.

Since there is an iGPU available for troubleshooting, and I do not have other servers (or plan to add more), there will be no out-of-band network, so I can't really see the use. I am definitely new to the server world, so is there something I'm missing, other than the comfort of being able to set the server up from my couch?

edit: how’s the power consumption at idle/low usage workloads if you have them?
As others have noted, IPMI's value all depends on your environment. For me, it pays for itself after one or two unscheduled uses.

I'll need to check my notes for idle power. I think it was around 50-60 W, with 2 NVMe drives, 4 system fans, and an AIO pump; no add-in cards and 2x32 GB.
 

Serverofhome

New Member
Jul 18, 2023
Been using an ASRock IMB-X1314 for a few months on Unraid; great board. Unraid identifies the RAM as ECC (MTA18ASF4G72AZ-3G2R). It didn't have SR-IOV in the BIOS, which I wanted, but ASRock support created a custom BIOS based on 1.50 with it enabled, along with a couple of other toggles. Amazing support! They also tested bifurcation using an ASUS Hyper M.2 x16 card, which successfully identifies two NVMe drives. The W680 chipset doesn't support x4/x4/x4/x4, so unfortunately you can't use all four slots on the Hyper M.2.

I haven't tested SR-IOV yet, but with it enabled plus this Unraid app, it should allow splitting the integrated graphics across multiple VMs and the host. Send a PM if anyone would like a copy and more info.
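For anyone who hasn't played with SR-IOV before, the generic kernel interface is just sysfs; a sketch (0000:00:02.0 is where the iGPU usually sits, but verify with lspci; the VF count is illustrative, and the device's driver must actually support SR-IOV):

```shell
# How many virtual functions the device offers (0 means no SR-IOV support)
cat /sys/bus/pci/devices/0000:00:02.0/sriov_totalvfs

# Create 2 VFs; each shows up as its own PCI device you can pass to a VM
echo 2 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs

# The new virtual functions should now be listed
lspci | grep -i vga
```

Writing 0 back to `sriov_numvfs` tears the VFs down again.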
 

Jaddie

New Member
Mar 1, 2023
I know IPMI is the standard with servers but do you use it all that often to make it worth it? the reason I ask is the non IPMI version of the Asus board is only $320.
Amazon puts these on sale. I paid $329 for mine with IPMI in early June; it shipped in late June. Set an alert on Keepa or Camelx3 to be notified when the price drops.

(Also, for what it's worth, Supermicro is out of 32GB ECC UDIMMs, so I ordered from Crucial for $228.60 plus tax.)

I plan to buy a 13500 or 13600K unless y'all advise differently (use case is first Unraid server whose primary purpose is storage, Plex, 'aars, and Sabnzbd).
 

Stankyjawnz

Member
Aug 2, 2017
Amazon puts these on sale. I paid $329 for mine with IPMI in early June; it shipped in late June. Set an alert on Keepa or Camelx3 to be notified when the price drops.

(Also, for what it's worth, Supermicro is out of 32GB ECC UDIMMs, so I ordered from Crucial for $228.60 plus tax.)

I plan to buy a 13500 or 13600K unless y'all advise differently (use case is first Unraid server whose primary purpose is storage, Plex, 'aars, and Sabnzbd).
Here is my $0.02 with an i5-12600K:

If they made a 12th/13th-gen i3 with ECC support, I probably would have gone with that. The i5-12600K is already overkill for Plex/NAS/NVR. I think the best values right now are the i5-12600K and i7-12700K, at about $200 and $250 respectively. If you set reasonable PL1 and PL2 wattage limits, the K processors will not be terribly power-inefficient. Intel gives the 13500 a lower TDP because of its lower base clock, but to me that doesn't make a ton of sense, because the CPU should either be idling in C-states (~800 MHz or less) or turboing. The 12700K would get you two more P-cores rather than the 13500's four extra E-cores, and should have higher multithreaded performance. There's a chance 12th/13th-gen Intel prices drop further when the Raptor Lake refresh releases later this year.
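If you'd rather experiment with limits before committing them in the BIOS, Linux exposes PL1/PL2 through the RAPL powercap sysfs interface; a sketch (paths are the standard layout, and the 125 W value is just illustrative — check the `*_name` files, since constraint indices can vary by platform):

```shell
# Which constraint is which (typically 0 = long_term/PL1, 1 = short_term/PL2)
grep . /sys/class/powercap/intel-rapl:0/constraint_*_name

# Current limits, in microwatts
cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw   # PL1
cat /sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw   # PL2

# Example: cap PL1 at 125 W (takes effect immediately, resets on reboot)
echo 125000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
```

Since these resets on reboot, it's a low-risk way to find the sweet spot before setting the same numbers in the BIOS.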
 

jas67

New Member
Jun 15, 2023
Amazon puts these on sale. I paid $329 for mine with IPMI in early June; it shipped in late June. Set an alert on Keepa or Camelx3 to be notified when the price drops.

(Also, for what it's worth, Supermicro is out of 32GB ECC UDIMMs, so I ordered from Crucial for $228.60 plus tax.)

I plan to buy a 13500 or 13600K unless y'all advise differently (use case is first Unraid server whose primary purpose is storage, Plex, 'aars, and Sabnzbd).
The 13500 is a great bang-for-the-buck and power-efficient processor. Benchmark numbers are similar to, or slightly lower than, the 12700K's, but power consumption is also lower.
 

vamega

Member
Nov 8, 2022
Has anyone here looked at either of these motherboards?
  1. GigaIPC mITX-Q67EB
  2. ASRock Industrial IMB-1231
Both are on the Q670 chipset and support vPro. When paired with a vPro-capable CPU like the i5-13500, vPro AMT should work.
Seems like a great board to build a multi-function NAS/Router/Plex Host with.

Any reason to prefer one over the other? They look pretty comparable, but the GigaIPC one is $50 cheaper.
Differences I can see:
  1. The GigaIPC has one 2.5G Ethernet port (i226V) and one 1G Ethernet port (i219LM), with vPro on the 1G port.
  2. The ASRock IMB-1231 has two 2.5G Ethernet ports, but both are i225-based.
I worry a little about the i225 (even the V3 stepping), since I've read it was plagued with problems, which is what prompted Intel to create the i226. Is that truly the case?

The ASRock website has a listing for the IMB-1238, which claims to support PCIe 5.0 and DDR5 RAM and uses i226 Ethernet controllers, swapping one 2.5G port for a 1G port.
Haven't seen any listings for it though, and my inquiry to ASRock seems to have gone into a black hole.
 

vamega

Member
Nov 8, 2022
Found two more mini-itx boards that look very interesting.

Asus R680EI-IM-A - PCIe 5.0, ECC RAM, i210 1G Ethernet, i226 2.5G Ethernet, 4 SATA ports. R680E chipset.
AValue EMX-R680P - four(!) i226LM/V 2.5G ports, 2 SATA ports, R680E chipset (but no ECC according to the specs). 2x M.2 PCIe Gen 4 x4 connections and a PCIe Gen 4 x2 E-key.

That AValue one looks incredible. Have not been able to find any pricing information on it though.
 

vamega

Member
Nov 8, 2022
Got some pricing information on the previous two boards.


Asus R680EI-IM-A - ~$290 (on provantage.com)
AValue EMX-R680P - got a quote for $380 from a representative. It would work well if I were going to use this as a router as well, but I think I'm going to keep my router on a separate physical machine.
 

vamega

Member
Nov 8, 2022
Found another ASUS board, this one based on the Q670 chipset.

Asus Q670-IM-A - ~$290. No ECC. i210AT and i225LM for 1G and 2.5G Ethernet. 4 SATA ports. PCIe 5.0 x16 slot.


I might just wait for the IMB-1238. The GigaIPC is pretty good and is priced very well, but the DDR4 memory and PCIe 4.0 slot make me hesitant.
 

Hazily2019

Active Member
Jan 10, 2023
There's a company called icc-usa.com; they routinely sell 12900K, 13900K, and Ryzen 9 5950X servers to high-frequency trading firms. Keep in mind that these servers are generally expensive.