My only regret so far in not going with Supermicro is that the IPMI is an add-in PCIe card with a few extra small wires to connect for power and reset, rather than being integrated into the board as it is on the Supermicro.
I've not had any experience with the Supermicro X13 boards, but I will say the X11SCA-F-O that I replaced with one of these was a great board, BUT... after I upgraded the IPMI firmware to the latest version, I could no longer access the HTML5 KVM console. Thankfully, I still had the firmware files from a previous upgrade and was able to roll back to that version. I don't know whether Supermicro started charging for HTML5 KVM access, but I do know that they have a history of making some IPMI features paid upgrades.
Supermicro boards are second to none with respect to quality. If the HTML5 KVM is a free feature on the X13, you cannot go wrong with the X13SAE-F-O.
That said, I've been very happy with the Asus boards, though on the one with the i9-13900K, I was unable to use the x1 PCIe slot closest to the CPU for the IPMI card due to the size of the D15 cooler I used on it. I instead had to sacrifice one of the four x16 slots (IIRC, the two furthest from the CPU are x4 electrically anyway). No big deal on that one, as I have no other PCIe cards in that server, just 3 M.2 SSDs.
Interestingly enough, if you have any legacy PCI cards you want to use, the X13SAE-F-O has one of those slots. Both boards have a pair of x16 PCIe 5.0 slots. The X13SAE-F-O has two x4 PCIe 3.0 slots, while the Asus also has two x4 PCIe 3.0 slots but with x16 connectors, allowing x8 or x16 cards to be fitted, albeit at a slower speed. The Asus also has a single x1 slot closest to the CPU that would be populated with the IPMI card if you're not using a huge cooler.
Both boards have a total of 16 lanes of PCIe 5.0 available, either all allocated to one slot or split x8/x8 between the two. Both support PCIe bifurcation for risers.
So, I/O-wise, the two boards are pretty equivalent except for the legacy PCI slot on the Supermicro.
The Asus has a SlimSAS connector that can be used in either SATA or PCIe 4.0 mode. Supermicro does not list an equivalent on their info page.
There are adapters that can be used to add a fourth M.2 SSD connected via this connector.
The Asus boards have been rock-solid in these servers.
Both servers are running Ubuntu 22.04 LTS with the HWE kernel (5.19.x, from the linux-generic-hwe-22.04 package) to get "Thread Director" support for the P/E-core architecture. Both run Docker containers for various web services; the i9-13900K one additionally has VMware Workstation running on it. I added a systemd service for VMware Workstation to automatically start some VMs at boot, as earlier versions of Workstation supported for more of a server use case (a sketch is below). I also run several VMs with remote consoles via VNC for SW development, as this system is way faster than my work laptop.
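For reference, here's a minimal sketch of the kind of unit I mean, assuming vmrun (which ships with Workstation) is at /usr/bin/vmrun; the unit name and VM path are placeholders, not my actual setup:

```
# /etc/systemd/system/vmware-autostart.service (hypothetical name and VM path)
[Unit]
Description=Autostart selected VMware Workstation VMs
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# "nogui" starts the VM headless; "soft" asks the guest to shut down cleanly
ExecStart=/usr/bin/vmrun start /srv/vms/dev/dev.vmx nogui
ExecStop=/usr/bin/vmrun stop /srv/vms/dev/dev.vmx soft

[Install]
WantedBy=multi-user.target
```

Enable it with "systemctl enable --now vmware-autostart.service" and the VM comes up with the host.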
In addition to VMware, the i9-13900K server is running the following applications in Docker containers (a stripped-down compose sketch follows the list):
- GitLab
- GitLab runner for CI/CD builds
- Bugzilla
- 4 MediaWiki instances
- Docker Registry
- Several proprietary web services
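For anyone curious, a compose file for a couple of these would look something like the following; the images are the public ones, but the ports and volumes here are placeholders rather than my actual config:

```
# docker-compose.yml (illustrative subset; ports/volumes are placeholders)
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    ports:
      - "8080:80"        # GitLab web UI on host port 8080
    volumes:
      - gitlab-data:/var/opt/gitlab
    restart: unless-stopped

  registry:
    image: registry:2
    ports:
      - "5000:5000"      # Docker Registry API
    volumes:
      - registry-data:/var/lib/registry
    restart: unless-stopped

volumes:
  gitlab-data:
  registry-data:
```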
The i7-13700K server is running the following applications in Docker containers:
- GitLab runner for CI/CD builds
- Owncloud
- OS-Ticket
- Docker Registry
- Several proprietary web services
Both are also running Nginx and Apache.
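As an illustration of how a containerized service gets fronted, a minimal Nginx vhost might look like this; the server name and upstream port are placeholders, not my actual config:

```
# /etc/nginx/conf.d/gitlab.conf (hypothetical hostname and port)
server {
    listen 80;
    server_name gitlab.example.com;

    location / {
        # Forward to the container's published port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```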
Both also have users interactively logging in and doing SW builds in Docker containers.
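As a hypothetical example of that workflow (the image and build command depend entirely on the project):

```
# Run a throwaway build container against the current source tree
docker run -it --rm \
  -v "$PWD":/src -w /src \
  gcc:13 \
  make -j"$(nproc)"
```

The --rm keeps things tidy, and the bind mount means build artifacts land back in the user's source tree.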
Performance is very good on both. I will say, at least for our use cases, the i9-13900K doesn't provide all that much more performance than the i7-13700K. Both have 8 performance cores; the i9 adds 8 additional efficiency cores (16 vs. 8).
When I get the time, I will be putting up an Ansys compute server instance on the i9 server. That will definitely put a load on it.
As for where to get an X13SAC, I wouldn't know. I usually just do Google searches in addition to checking the usual suspects, Newegg and Amazon.
I actually got the two Asus W680 boards through B&H Photo Video, as they were the only ones that had them in stock at the time.
EDIT:
More info: storage in the i9-13900K server is three 2 TB Samsung 970 EVO Plus NVMe M.2 drives (left over from the previous motherboard; if buying new, I'd have gone with the 980 Pro).
Storage in the i7-13700K server is one 2 TB Samsung 970 EVO Plus NVMe drive (again, left over from the previous server), plus an older RAID 6 array connected via an LSI 9361-8i SAS RAID controller (x8 PCIe).
Both servers are running 64 GB (2x32 GB) of DDR5-4800 ECC UDIMM memory.