1U Supermicro Server 6x 10GBE RJ45 X10SLH-LN6TF LGA 1150 H3 X10SLH-N6-ST031


Fritz

Well-Known Member
Apr 6, 2015
3,371
1,375
113
69
Glad I avoided this weirdo. Reminds me there is some strange HW out there that will cause you grief. Much of it finds itself on this forum. :p
 

EasyRhino

Well-Known Member
Aug 6, 2019
499
370
63
I was able to fit it in an old HTPC case with unusually large front clearance. See post #188

There may be other cases like that. I think there was a Cooler Master mATX candidate... but only one of their cases.
 

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
Hooked up my age-old Kill-a-Watt. Here are some accurate numbers for everyone, since I never trust these Supermicro PDCs.

Setup:
  • Supermicro 1U chassis that the X10SLH-LN6TF comes in, stock PWS-341P-1H 340W PSU
  • E3-1270 v3, passive heatsink, 32 GB DDR3-1600
  • Standard 4 counter-rotating fans (the two dummies are empty, as usual for this chassis)
  • 79°F ambient temp
  • No HDDs, no SATADOM, No PCIe card in Riser (Riser present though)
  • No RJ45 cables connected, not even IPMI
  • Supermicro TPM Module installed
  • BIOS Reset to Optimum Defaults (default power savings settings)
  • BMC Enabled
  • Dell 1000W rack UPS, load output stable @ 115 VAC
The X540 NICs were hardware-disabled/powered off using the JPL1/JPL2/JPL3 jumpers during those test runs.

I chose the Ubuntu installer because the default Linux kernel has reasonable C-state power savings, whereas FreeBSD (pfSense) does not out of the box (it needs a package). However, both OSes could be optimized further.
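If you want to sanity-check the C-states on your own box, roughly these commands do it (from memory, so treat them as a sketch):

Code:
# Linux: which idle driver and C-states the kernel is actually using
cat /sys/devices/system/cpu/cpuidle/current_driver
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name

# FreeBSD/pfSense: what the CPU advertises vs. the lowest C-state allowed
sysctl dev.cpu.0.cx_supported
sysctl hw.acpi.cpu.cx_lowest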

IPMI Plugged in, system powered off:
  • Spikes to 25 W (BMC booting?)
  • 10.0 W idle
  • 13 W with Link/Cable
All 3 x X540 NICs enabled:
  • First boot/waiting on BMC: 75.8 W
  • BIOS Tests: 93.4 W
  • Post-Boot, Fans Calm Down, idle Built-in EFI Shell: 72.1 W (dips to 66.7 W at times) for 5 minutes
  • Boot to Ubuntu 21.04 Installer: 64.1 W - 71 W for 5 minutes
Just 1 x X540 NIC enabled:
  • First boot/waiting on BMC: 70.8 W
  • BIOS Tests: 89.9 W
  • Post-Boot, Fans Calm Down, idle Built-in EFI Shell: 65.4 W (dips to 62.9 W at times) for 5 minutes
  • Boot to Ubuntu 21.04 Installer: 59.9 W - 62.9 W for 5 minutes
All 3 x X540 NICs disabled:
  • First boot/waiting on BMC: 68.2 W
  • BIOS Tests: 86.7 W
  • Post-Boot, Fans Calm Down, idle Built-in EFI Shell: 61.2 W (dips to 60.5 W at times) for 5 minutes
  • Boot to Ubuntu 21.04 Installer: 57.0 W - 60.5 W for 5 minutes
There were various spikes; for example, while waiting on the BMC it spikes 3-5 W higher at times. Also, with all 3 x X540 NICs enabled, it sometimes hit over 105 W during Ubuntu's boot sequence.

Correction: updated the BMC idle number for the powered-off state; with a link connected it idles at 13 W all the time.
 
Last edited:
  • Like
Reactions: Bert and Marsh

Bert

Well-Known Member
Mar 31, 2018
819
383
63
45
Looks like power usage is high even when the NICs are disabled. I remember this was also reported before. Can we assume that disabling the NICs with the jumpers does not power them down?
 

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
Looks like power usage is high even when the NICs are disabled. I remember this was also reported before. Can we assume that disabling the NICs with the jumpers does not power them down?
It's not that high.

The test results above show power dropping about 3-4 W per NIC disabled via the jumpers.
  • 71 W - All 3 NICs enabled
  • 63 W - Only 1 NIC enabled
  • 60 W - No NICs enabled
So, if you also take away the 13 W for the BMC (and the ~2 W from the always-on PDU/PSU state of Supermicro/server systems), you are left with a system that would idle around 45 W - pretty much the same as a desktop version of this CPU series with 4 sticks of RAM.

All in all, this is much more power efficient than sticking X540 PCIe cards in a board - each PCIe Intel X540 card pulls around 7-11 W, largely because of its own power regulators (which the integrated NICs get for free from the motherboard's existing components).

It's not the board or the NICs that are power hungry - it's the BMC, and the Supermicro (and most other vendors') PSUs and PDUs in server chassis.

If you want power draw down near desktop levels, then disable the BMC, connect the machine to a KVM (which uses power itself), and use a normal PSU.

If you want BMC, then you'll need to pay for it in watts. If you want the pretty 1U supermicro chassis, you'll need to pay for it in watts.

---

Just to add to that, my SC846 with a single LGA2011-3 Xeon v3 CPU on an X10 board, four DDR4 sticks, and 1 x SSD idles at 68 W with a single PSU/PDU and the BMC. Disabling the BMC (12 W) and using a straight ATX Platinum PSU, I see about 52 W. However, once you connect the backplane (9 W) and an LSI 9211-8i (11 W), you are at 84 W with the SM PSU/PDU + BMC. That's before HDDs.
 
Last edited:
  • Like
Reactions: abq, Bert and Fritz

GhettoSuperstar

New Member
Mar 12, 2020
17
9
3
I have pfSense 2.5.2 installed and was able to unlock the BIOS, using a BIOS image posted here in the thread.
With my Kill-a-Watt it averages 60 watts. It fits fine in a Dell Vostro 200 case; you just have to modify the case by removing the drive cages and modding the backplate area. CPU usage never goes above 5%. I also put 3 Noctua 40x20mm fans on the X540 heatsinks and one 40x20mm fan on the Mellanox ConnectX-3 controller.

Setup:
  • Dell Vostro 200 case, with SilverStone Technology 300 Watt TFX PSU 80 Plus Bronze SST-TX300-USA
  • Intel(R) Xeon(R) CPU E3-1265L v3 @ 2.50GHz, passive heatsink, with a 120mm fan on top
  • 8 GB DDR3-1600 Ballistix (2 x 4 GB)
  • IBM Mellanox ConnectX-3 VPI IB/E controller with 10Gbit QSFP fiber connection
  • 42°C core temp
  • 1 x 240 GB SATA SSD
  • BMC Enabled
  • 4 Noctua NF-A4x20 PWM, Premium Quiet Fan
 

penguinslovebananas

New Member
Sep 25, 2020
7
1
3
I noticed reading through the thread that someone updated the BMC firmware with the one posted for the X10SLH-F board on Supermicro's site. Has anyone tested this and found it safe to do? If you need them, my specs are:

pfSense 2.5.2
E3-1230v3
4x - samsung 8gb pc3l-12800 ecc udimm
intel i350-t4 quad port gig nic
silicom i350am2 six port gig nic
2x - 64gb supermicro satadom boot drive (RAID 1)

I appreciate you taking the time to read my message and provide help.
 

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
I noticed reading through the thread that someone updated the BMC firmware with the one posted for the X10SLH-F board on Supermicro's site. Has anyone tested this and found it safe to do? If you need them, my specs are:

pfSense 2.5.2
E3-1230v3
4x - samsung 8gb pc3l-12800 ecc udimm
intel i350-t4 quad port gig nic
silicom i350am2 six port gig nic
2x - 64gb supermicro satadom boot drive (RAID 1)

I appreciate you taking the time to read my message and provide help.
IMO, the retail version of the X10SLH-F does not come with the 3x Intel X540-T2 10GBaseT NICs - and therefore no way to control (initialize?) them.

The custom BIOS we have on the X10SLH-LN6TF does have the BIOS controls for them.
 

penguinslovebananas

New Member
Sep 25, 2020
7
1
3
IMO, the retail version of the X10SLH-F does not come with the 3x Intel X540-T2 10GBaseT NICs - and therefore no way to control (initialize?) them.

The custom BIOS we have on the X10SLH-LN6TF does have the BIOS controls for them.
I have the BIOS; I was looking at updating the BMC firmware with the one on the X10SLH-F page.
 

Bert

Well-Known Member
Mar 31, 2018
819
383
63
45
I originally bought this motherboard to use as a 6-port 10Gb unmanaged switch that can also serve as a host for a file server/VMs/firewall, etc. After spending several weeks on it, I found that the easiest way to get there is simply installing Linux (mine is Debian) and enabling a bridge:
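Roughly like this on Debian with ifupdown and the bridge-utils package - the interface names and address below are placeholders, so adjust them for your six ports:

Code:
# /etc/network/interfaces
auto br0
iface br0 inet static
    address 192.168.1.2/24
    # enslave all six onboard ports to the software switch
    bridge_ports eno1 eno2 eno3 eno4 eno5 eno6
    bridge_stp off
    bridge_fd 0

A plain "ifup br0" (or a networking restart) brings it up.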


Then everything else works out of the box at a full line rate of 9.41 Gbit/s as measured with iperf. I was not able to measure total aggregate speed, but this is already more than good enough for me. It is amazing how performant the bridge implementation in Linux is.
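For anyone repeating the measurement, the usual iperf3 server/client pair is all it takes (the IP is a placeholder for a host on the other side of the bridge):

Code:
# on one machine
iperf3 -s
# on a machine plugged into another bridged port
iperf3 -c 192.168.1.50 -t 30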

Just wanted to share what I tried so far:
- pfSense on Hyper-V: I spent 2 weeks with online guides trying to configure a bridge on pfSense, but it is too complicated and I couldn't figure it out. pfSense, being a firewall, is not designed to act as a switch.
- Windows bridge on Windows Server 2016: this is built-in bridge functionality that has been in Windows for decades, but it looks like MS never updated the implementation and its performance is horrible.
- VyOS on Hyper-V: this also took me several weeks to get working, with lots of help from @Marsh. VyOS was overkill for switching purposes, as it is designed to be a router, and I was missing some crucial steps in the Hyper-V configuration that kept VyOS from running at first. After all that, I must say VyOS didn't perform well either - I was limited to ~5-6 Gbit/s. My guess is that this is due to the inefficiency of running VyOS on Hyper-V.

I think a better setup would be Proxmox instead of Debian, but I am fine with using VirtualBox on Debian.
 
  • Like
Reactions: techtoys and Marsh

eduncan911

The New James Dean
Jul 27, 2015
648
506
93
eduncan911.com
+100 for Proxmox. Once you install it and figure out the odd/different networking (it makes a lot of sense once you "get it"), you'll never go back to anything else. Doubly so when you learn about Proxmox's native OVS support, built right in as a first-class citizen (Open vSwitch).
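For example, once the openvswitch-switch package is installed, an OVS bridge in /etc/network/interfaces looks roughly like this (the port name and address are placeholders):

Code:
auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    ovs_type OVSBridge
    ovs_ports eno1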

Especially if you set up a NAS with shared storage on that 10G network. Then you can "move" VMs around from machine to machine through a simple interface (granted, IP/DNS changes have to be resolved first).

Also, if you have more than one server, they can all be joined into a cluster and managed from a single interface (like vSphere).
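The cluster join and a live migration are just a couple of commands - roughly this, with node names and IPs as placeholders:

Code:
# on the first node
pvecm create mycluster
# on each additional node, pointing at the first node's IP
pvecm add 192.168.1.10
# live-migrate VM 101 to another node (quick when the disks sit on shared storage)
qm migrate 101 node2 --online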

I have Proxmox installed on SoC devices with as little as 4 GB of RAM, running lightweight containers (and a VM or two). You basically disable the heavy Proxmox services to bring it down to around 1 GB of usage (roughly the ones sketched below).
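On a standalone node, the heavy bits I'd look at are roughly these - an assumption on my part, so make sure you don't need HA, replication, clustering or SPICE first:

Code:
# HA stack, storage replication and the SPICE proxy
systemctl disable --now pve-ha-lrm pve-ha-crm pvesr.timer spiceproxy
# cluster stack, if the node will never join a cluster
systemctl disable --now corosync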

It really irks me that I don't have the same control over all of my RPi devices that I have over a Proxmox cluster.
 
Last edited:

EasyRhino

Well-Known Member
Aug 6, 2019
499
370
63
I tried setting up both opnsense and pfsense with both proxmox and esxi.

I couldn't get the *sense to successfully bridge the Ethernet ports on either.

With Proxmox I was able to set up a Linux bridge and pass it to *sense, so that worked, but I couldn't get GPU passthrough to work, which was kind of the point of the other VMs.
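For what it's worth, the usual Proxmox passthrough prerequisites as I understand them are roughly the ones below - the PCI ID is a placeholder, and this is a sketch rather than what I actually ran:

Code:
# enable the IOMMU on the kernel command line (Intel; amd_iommu=on for AMD), then:
#   /etc/default/grub -> GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub

# load the vfio modules at boot
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules

# then hand the GPU to the VM in /etc/pve/qemu-server/<vmid>.conf
# (pcie=1 needs the q35 machine type)
#   hostpci0: 01:00.0,pcie=1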

So it's just bare-metal OPNsense now, and it works fine.
 
  • Like
Reactions: Bert

discoeels

Member
May 8, 2013
40
7
8
I recently got this board off eBay. Flashed the new BIOS from Ez and then reset the IPMI with ipmitool. It's a cool board. I'm a huge fan of oddballs, but I decided to go with something that I can put in my CSE-731I-300B chassis (X10-SLF). The BIOS fan control completely freaked out when I mixed an 80mm Noctua with 120mm fans, so YMMV depending on your setup. A fun project, but I need "just works" for labs/self-study.
 
  • Like
Reactions: Samir

Marsh

Moderator
May 12, 2013
2,642
1,496
113
I transplanted the system board from the 1U chassis into an iStar 2U mATX case.

Fan profile set to Optimal speed, so there's no need to run ipmitool to set thresholds.
The 2 front fans run at 1100 RPM, and the CPU fan is at 1600 RPM.
Nice and quiet, just a low hum.
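For anyone whose slower fans do trip the default thresholds (the usual cause of the fan freak-outs mentioned above), lowering them with ipmitool is roughly this - the sensor name and values are examples for a quiet 120mm fan:

Code:
# show current fan readings and thresholds
ipmitool sensor | grep -i fan
# set the lower non-recoverable / critical / non-critical thresholds for FAN1
ipmitool sensor thresh FAN1 lower 100 200 300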
 

EasyRhino

Well-Known Member
Aug 6, 2019
499
370
63
Yeah, in my home router setup I just screwed a 120mm case fan on top of the built-in heatsink. It spins at minimal speed and the Xeon 1225 v3 stays cool under the modest load. There's actually a very small 'click' sound from the power supply fan, but it's only noticeable if you stick your head right up to the side.
 
  • Like
Reactions: Samir
Jan 24, 2020
69
24
8
If you put a good passive heatsink on the CPU and a fan over the X540 heatsinks, then it's near silent (even with the side off my case).

I am not using mine much anymore, so I may look to sell it soon (UK only).
 
  • Like
Reactions: Samir and ullbeking