UPDATE: see below on FAN control
UPDATE#2: see section on GPU passthrough
UPDATE#3: SlimSAS 8i to 2 * U.2 cable now available from Tyan
UPDATE#4: Solution for blank BIOS
UPDATE#5: PSA for issues caused by the board having 8 instead of 9 mounting screws
UPDATE#6: How to control onboard fans via IPMI
UPDATE#7: PSA for issues caused by memory - BIOS reports error codes of DEAD or BAAD
Update: if you're building with this motherboard, make sure your case does not have a standoff that will short circuit the memory: https://forums.servethehome.com/index.php?threads/tyan-s8030gm2ne.28914/page-4#post-271548
I've started on my home-lab, hyperconverged build to replace my old NAS server, gaming PC and workstation. I decided to go AMD and initially planned on Ryzen or Threadripper - but thanks to several helpful forum members, e.g. @TXAG26, @zer0sum and others, I was able to snag parts cheap on eBay and other retailers and go with EPYC and a proper server build with IPMI.
Here's my build:
- Motherboard: Tyan S8030GM2NE
- CPU: 16-core EPYC 7302P (using the HP upgrade kit from Provantage)
- RAM: 128GB as 8 x 16GB DDR4-3200 RDIMMs (Samsung M393A2K43DB3)
- COOLER: Supermicro 4U AMD heatsink (SNK-P0064AP4)
- GPU: 8GB GTX 1080 Mini (from a Gigabyte eGPU Gaming Box)
- SSD: Samsung 970 EVO 1TB
- NET: ConnectX-4 LX (replacing the HP 544/Mellanox ConnectX-3)
- HBA: LSI 2308
- PSU: Seasonic Focus SSR-850PX (replacing the Corsair AX1200i)
- CASE: RPC-4308 (a short-depth 4U chassis with removable HDD bays)
ESXi hosts my NAS, a Windows server VM with 4 x 4TB HDDs (via the LSI 2308 HBA), as well as a Windows 10 VM that uses the GTX 1080 in passthrough mode for gaming sessions. And of course my workstation - formerly a quad-core Intel NUC with 32GB - becomes just an Ubuntu VM on the server.
The Tyan motherboard:

The S8030GM2NE is probably the cheapest PCIe 4.0 EPYC motherboard around. It doesn't have the dual 10GbE LAN ports of its sibling (S8030GM4TNE) and has only 5 PCIe slots. However, it does have SAS Mini-HD and SlimSAS connectors (instead of OCuLink) and IPMI powered by the usual ASPEED AST2500 chip. It also has two M.2 NVMe slots - though if you use a video card with an oversized fan (the Gigabyte GPU has a 130mm fan and barely clears the components next to it), one of those M.2 slots is effectively useless.
The Tyan BIOS is basic. The IPMI is similarly basic and allows you to mount virtual images from the network - provided you use the Java KVM utility; the HTML5 KVM webapp doesn't have this functionality. The sensors are also basic - although I don't know if that is a Tyan thing or an EPYC thing. If you're used to Dell's BIOS and IPMI or even Supermicro ... Tyan is just a lot more basic. For example, here are all the temperature sensors the board can display:

The good thing is that the board layout keeps the MOSFETs relatively cool. I tested the board and RAM with Memtest86: the memory MOSFETs didn't cross 52°C, nor did the CPU exceed 50°C.

Memtest86 reports the memory bandwidth as 13.98 GB/sec and the latency as 42.835ns. I also set the cTDP and the Package Power Limit to 180W, and set Determinism to Power.

For installing ESXi, I made sure that the IOMMU was enabled
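If you want to sanity-check from the ESXi shell that the devices you intend to pass through are visible, something along these lines works (a sketch - it assumes SSH is enabled on the host, and the grep patterns are just examples for my NVIDIA and LSI devices):
Code:
esxcli hardware pci list | grep -i -B2 -A6 nvidia
esxcli hardware pci list | grep -i -B2 -A6 lsi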

The board does support PCIe bifurcation on all slots, if you need it.

The most disappointing aspect of the board is the fan control - it can be set to one of only two modes, MANUAL and FULL SPEED. That's it. All the temps you see above were with the fan duty cycle set to the default of 30%. I am going to create a support ticket and ask Tyan how to make the fans vary with CPU or system temps.

UPDATE: thanks to @jpmomo for pointing out that the fan control works - in a fashion - I set the duty cycle to 15% and load tested the motherboard. The winner here is clearly the Supermicro SNK-P0064AP4 heatsink. At the default 30% duty cycle it does 1600 RPM and is super quiet, and the heatsink ensures that the CPU doesn't cross 68°C under load. That is an amazing result. Pushing the duty cycle down to 15% makes the CPU fan spin at 1300 RPM, and it will spin up once the CPU crosses 75°C. I can't believe how good this heatsink is. Of course the 7302P is "only" a 180W processor, so there's that.
The other disappointment: supposedly one can access the BIOS settings via the IPMI WebUI (without the KVM) - there's a BIOS icon - but all it does is show a blank page, even after the proper credentials have been entered
(see this post)


Here's the build (not yet in the case):
I tried to flash the LSI 2308 card from IR mode to IT mode, but booting the board into FreeDOS and using the DOS sas2flash utility didn't work. I had to use sas2flash.efi, along with an older EFI shell that still supports the sas2flash utility.
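For anyone repeating this, the sequence from the EFI shell looked roughly like this (a sketch only - '2308IT.bin' and the SAS address are placeholders, not my actual values, and '-o -e 6' erases the existing flash, so have the IT firmware ready on the same USB stick):
Code:
sas2flash.efi -listall
sas2flash.efi -o -e 6
sas2flash.efi -f 2308IT.bin
sas2flash.efi -o -sasadd 500605bxxxxxxxxx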
Another problem: installing ESXi 7.0 didn't work. I'm going to test with more settings, but I got 'decompression' errors despite using a USB drive with the image rather than the KVM virtual image. Luckily, ESXi 6.7U3 installed like a champ and I was able to create a Win10 and a Server 2019 VM.
For passing the GPU and the LSI controller through to the VMs, I was able to set that in the ESXi console

and set the following values in the Win10 VM's VMX file:
Code:
hypervisor.cpuid.v0 = "FALSE"
pciPassthru1.msiEnabled = "FALSE"
UPDATE: also added this line, it's now rock solid:
Code:
pciPassthru.use64bitMMIO = "TRUE"
The GPU is actually two devices passed through - one is the GPU itself and the other is the HDMI audio controller that is part of the GPU (dev_id 10f0 is the HDMI audio device). I simply set pciPassthruX.msiEnabled = "FALSE" for every device associated with the GPU except the GPU itself, and then repeated that in /etc/vmware/passthru.map - after commenting out the default NVIDIA entry (the one that reads '10de ffff bridge false'):
Code:
# <ven_id> <dev_id> <reset_method> <setting>
10de 10f0 d3d0 false
With these modifications I was able to start the Windows VM and install Windows 10 2004 and the NVIDIA GPU drivers. After (re)booting I lost the ability to view the guest console in the ESXi WebUI, but VNC and Remote Desktop work fine. Graphics acceleration works and I can run 3DMark without problems (well ... almost).
3DMark has a problem with Ryzen and EPYC sensors; to prevent it from hanging when it starts collecting system info, you need to install an OLDER version of the Futuremark SystemInfo component (ver 5.2, link here). Once that was installed, 3DMark completed the benchmarks successfully and gave me a Time Spy score of 7158 for graphics and 7670 for the CPU. I was also able to install the Aorus Engine app from Gigabyte and OC the graphics card.


For the Windows Server 2019 VM I was able to pass through the LSI controller without problems. I attached a SATA SSD (ADATA) to it and ran CrystalDiskMark just for fun - it gave me the expected SATA speeds.

That's all I have time for today, but I will be updating this post with more benchmark results - especially memory benchmarks, both native and in the VMs - plus more installation experiences (Ubuntu 20.04 LTS, Percona and Jupyter for sure). I will also get a power meter and measure the actual watts consumed. And I need to mod some PSU cables and connectors so that the motherboard fits nicely into the chassis without PCIe and EPS cables all over the place. Never modded PSU cables before - so would love input!
I also noticed that the SAS controller runs hot! Hence I will be trying to shoehorn three fans into the case so there's airflow over the SAS controller, the Mellanox adapter and the NVMe drive. The RPC-4308 already has two fans at the back of the case to extract air outwards - wondering if that is enough to ensure airflow over the DIMMs. The SNK-P0064AP4 heatsink and five 80mm Arctic Cooling fans keep the DIMMs and CPU running cool: CPU temps are 32°C idle and DIMMs are 33°C to 37°C.
Update 3#: The SlimSAS 8i to 2 x U.2/NVMe cable is now available - I got one for $75. It's good quality and comes with a Molex connector rather than the usual SATA power connector. Now to find a decent low-TDP NVMe drive.

Update 4#: if you have a blank BIOS screen and want to change the BIOS settings remotely, here's how: https://forums.servethehome.com/index.php?threads/tyan-s8030gm2ne.28914/post-318397
Update 5#: This particular motherboard has 8 instead of the usual 9 standoff screws, which means most cases will have a standoff right below the memory channels; one or more channels will disappear when it touches the solder contacts. Check and remove any such standoffs before installing the motherboard. See https://forums.servethehome.com/index.php?threads/tyan-s8030gm2ne.28914/post-271548
Update 6#: Want to control the onboard fans via IPMI? Thanks to @dante4, here's how: https://forums.servethehome.com/index.php?threads/tyan-s8030gm2ne.28914/post-374420
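The board-specific raw bytes for setting the duty cycle are in the linked post; before diving into those, you can at least confirm the BMC is reachable and read the fan sensors with stock ipmitool (the host address and credentials below are placeholders for your own):
Code:
ipmitool -I lanplus -H 192.168.1.20 -U admin -P yourpassword sensor list | grep -i fan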
Update 7#: Is your BIOS showing DEAD or BAAD when booting? Thanks to @bateman and @mtg - we know it's memory related. More details: https://forums.servethehome.com/index.php?threads/tyan-s8030gm2ne.28914/post-392230 and https://forums.servethehome.com/index.php?threads/tyan-s8030gm2ne.28914/post-392857