Does anyone have Supermicro H12SSL series motherboard experience?


mattlach

Active Member
Aug 1, 2014
344
97
28
Doing some mprime (Linux version of Prime95) stability testing before blessing it as stable, but I have a good feeling about this thing now.

It did give me a scare at first though. It would run for a few seconds and then kill all the threads.

Not quite sure what is going on, but I googled it and lots of people are having the same issue with mprime.

When run from the configuration menu, it gets killed after a short while. But if you unpack the download afresh and run it with "./mprime -t" to immediately start an all-core stress test, it works just fine.

Seems more like some kind of bug than a hardware issue, since the "mprime -t" method seems to be stable.

64.png

Looks like the System Monitor GUI app in Linux Mint has some trouble with large amounts of memory. It is totally tallying that wrong.


The "free" command from the command line gets it right though.
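For anyone following along, this is roughly the check (a sketch; `free` ships with procps on most distros, and the exact totals will obviously differ per machine):

```shell
# Human-readable summary; the "Mem:" row is what the kernel actually sees.
free -h

# For scripting, byte-exact output is easier to parse:
free -b | awk '/^Mem:/ {print $2}'   # total physical RAM in bytes
```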


Doing all of this from my desktop via the IPMI/BMC's console passthrough. Pretty convenient. No need to hook up monitors and keyboards.


With this all-core load the cores seem to be clocking at 2771-2772 MHz, which is below the advertised base clock of 2.8 GHz, but not by much.


Still that is a tiny bit disappointing, but probably not indicative of a problem.


Core temp is about 63C, and the CPU fan is at about 67% speed.


Might just be Supermicro doing their normal hyper-conservative thing.


I wonder if it is just bouncing off the TDP limiter (I should probably check what it is set to in the BIOS). It is pulling about 295 W from the wall with all cores at full load, according to my Kill-A-Watt.

For shits and giggles I did a single-thread test. Core clocks up to 3676 MHz. So again, the same few MHz short of the max boost clock. I'm guessing there are some conservative Supermicro clock settings preventing it from hitting max clocks.
 

sam55todd

Active Member
May 11, 2023
115
28
28
Congrats on successful build results.
Does this 57h:30m have Pass 4/4 (like on the prev. screenshot) under the "result" popup, or is there some other number?
Just to estimate how long it would take for 256GB/128GB etc. (assuming the same RAM speed / channel configs, since the CPU is unlikely to be the bottleneck here).
 

mattlach

Active Member
Aug 1, 2014
344
97
28
Congrats on successful build results.
Does this 57h:30m have Pass 4/4 (like on the prev. screenshot) under the "result" popup, or is there some other number?
Just to estimate how long it would take for 256GB/128GB etc. (assuming the same RAM speed / channel configs, since the CPU is unlikely to be the bottleneck here).
I'm not 100% sure I understand your question, but I also took this screen shot:

memtest stats.png

Presuming the same RAM bandwidth and speed which in this case means:
- Registered ECC RAM (this slows things down a little in exchange for error correction and a buffer)
- Same clock (3200MT/s)
- 8 Channel RAM

It should scale linearly. So I would expect 256GB of the same speed Registered ECC RAM in an 8-channel configuration to complete this test in about half the time of my 512GB build, and in about a quarter of the time with 128GB.

Not going to know 100% for sure unless I test it though.
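That linear-scaling estimate is easy to sanity check with a few lines (a sketch; the ~57.5-hour, 4-pass baseline is read off the screenshot above, and perfect linear scaling with capacity is an assumption):

```python
# Estimate memtest duration, assuming runtime scales linearly with RAM capacity
# (same 8-channel registered ECC config at 3200MT/s).
BASELINE_GB = 512
BASELINE_HOURS = 57.5  # ~57h:30m for 4 passes on the 512GB build

def estimated_hours(capacity_gb: float) -> float:
    """Linear extrapolation: half the RAM should take about half the time."""
    return BASELINE_HOURS * capacity_gb / BASELINE_GB

for gb in (512, 256, 128):
    print(f"{gb:4d} GB -> ~{estimated_hours(gb):.2f} h")
# 256GB comes out to about 28.75 h, 128GB to about 14.4 h
```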
 

sam55todd

Active Member
May 11, 2023
115
28
28
Thank you for posting this; the number of passes was the missing variable for linear approximations, but I suspect it always equals 4.
1702850420943.png

Moreover, your last screenshot shows a very important detail, the temperature ranges:
1702850495730.png
A stress-test max of 59C for DDR4 RDIMMs seems like a good target.
 

mattlach

Active Member
Aug 1, 2014
344
97
28
Thank you for posting this; the number of passes was the missing variable for linear approximations, but I suspect it always equals 4.
View attachment 33264

Moreover, your last screenshot shows a very important detail, the temperature ranges:
View attachment 33265
A stress-test max of 59C for DDR4 RDIMMs seems like a good target.
Yeah, it's pretty good, though my test was open air on top of my desk with some fans pointed at it, so it's not reflective of the actual airflow in the 4U case where it is going. I think the airflow will be better in its final configuration, since the fan wall in the Supermicro SC846 case moves a lot of air, especially modified as mine is with three 120mm Noctua Industrial 3000rpm fans.
 

mattlach

Active Member
Aug 1, 2014
344
97
28
Alright. Just passed 48+ hours of prime95 mixed (well, mprime, but same thing) late last night.

I'm ready to call this thing stable, and leave my positive reviews on eBay.

Not going to lie, I was concerned about buying a high-ticket item from a Chinese seller, but this tugm4470 store really came through.

Looks like they are a major used parts broker and integrator, not just some guy, but the experience was good, and I'd definitely buy there again based on this experience.

atechcomponents, where I got the RAM, was also stellar.

Both of them are quality sellers.
 

mattlach

Active Member
Aug 1, 2014
344
97
28
It's crazy how small the H12SSL series motherboards are in the case compared to the monster X9DRI-F.

The end result of this is that one of the two 12V 8-pin EPS cables is like 2mm too short to reach the closest EPS connector.

PXL_20231227_060327767-sml.jpg

Luckily I live near a Microcenter, and they have 8" extensions in stock. I don't trust extensions (I've literally had one catch fire in the past), but it looks like I don't have much of a choice.
 

hmw

Active Member
Apr 29, 2019
582
231
43
It's crazy how small the H12SSL series motherboards are in the case compared to the monster X9DRI-F.
The end result of this is that one of the two 12V 8-pin EPS cables is like 2mm too short to reach the closest EPS connector.
I had the exact same problem with the CSE-826 case. In the end, I cut the 10 zip ties that Supermicro puts on the cables, actually unscrewed and opened the metal covers over the distribution board, and gently flattened and pulled till I got the extra 1-2 cm needed. Wear gloves so you don't scratch yourself. If the cables are going over other cables, that's your 1-2 cm of slack right there.
 

mattlach

Active Member
Aug 1, 2014
344
97
28
I had the exact same problem with the CSE-826 case. In the end, I cut the 10 zip ties that Supermicro puts on the cables, actually unscrewed and opened the metal covers over the distribution board, and gently flattened and pulled till I got the extra 1-2 cm needed. Wear gloves so you don't scratch yourself. If the cables are going over other cables, that's your 1-2 cm of slack right there.
Thanks for the reply.

I did the same thing and cut the zip ties. That's how I got the far EPS connector (near the I/O shield) and the ATX connector to reach, but the short one was shy by just a couple of millimeters even with all the zip ties cut.

I also opened the cover to try to get a few more millimeters, but the cables were soldered straight to a board in there, and I could not get any more length out of it. At least not without tugging to the point where I was worried I might damage something.

So, I'll be getting an EPS extension cable, and hoping for the best.

I had an extension cable like that catch fire once in my old X79 system, but that was with an overvolted and overclocked i7-3930K at 4.8 GHz and 1.45 V under some rather high-end cooling, while running a HandBrake transcode on a huge 4K file.

196494_IMG_20190223_165838.jpg

196496_IMG_20190223_165913.jpg

I let the magic smoke out, but I was running things significantly out of spec, which I won't be doing with my server, so hopefully with two EPS connectors and in-spec power loads something like this won't happen.
 

hmw

Active Member
Apr 29, 2019
582
231
43
If you're OCD like me and most of the forum - just grab some Mini-Fit Jr connectors and terminals and crimp them onto some really short 16AWG wire. Or buy something like this ready-made; they usually go for $8

1703730312690.png
 

Darkang

New Member
Dec 14, 2022
7
2
3
Hi,
It has been a few months since you got your nice server running, congratulations. Can you please share whether you are still happy with its performance? What type of load are you putting on it?
Thank you
 

mattlach

Active Member
Aug 1, 2014
344
97
28
Hi,
It has been a few months since you got your nice server running, congratulations. Can you please share whether you are still happy with its performance? What type of load are you putting on it?
Thank you
Hey.

Sorry just saw this.

I am very happy with it.

It is running a Proxmox install with KVM and LXC as the primary of three nodes in my little mini cluster.

Quite frankly, it is overwhelmingly overkill for my applications. I knew I was going a bit overkill, but this was more than I expected. I completely underestimated how much more capable this machine would be than my old dual-socket Ivy Bridge Xeon E5-2650 v2s, once I was freed from the performance-sucking Meltdown/Spectre mitigations.

That said, I can live with that. I'd rather have overkill and room to grow, than err on the side of being short and not having the capacity I need.

It is kind of like a home lab, but it is for more than just testing. I jokingly refer to it as my "home production server." The server runs a bunch of VMs and Linux containers that support various things I do around the house, some of which other household members would not like to see go down. The biggest two are massive file storage on the network (192TB using ZFS) and the container I have dedicated to a MythTV PVR backend for TV content. Between those two features it covers all the media library, live TV, and PVR needs of the house.

There are also many other guests, including dedicated servers (kept separate for segregation) for WiFi control and an SFTP file server for sharing files with my friends, plus a ton of little clients and servers I like to have up and running all of the time so I don't have to worry about rebooting my desktop, etc.

If I were to do it all over again, I'd probably get a lower-end EPYC CPU and save some money, as I am not even close to putting this thing through its paces, except for the occasional high-load activity, which is pretty rare.

Since putting the server in place and getting all the VMs up and running, the highest I have loaded the CPU has been about 25%, with a load average of about 29, but that was a pretty narrow peak. I forget what I was doing right then; it might have been using lbzip2 to compress a huge tarball in multithreaded mode while simultaneously running the normal VM/container load. It hasn't happened in over a month though.
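As a rough rule of thumb, load average maps to CPU percentage via the hardware thread count (a sketch; the 64-core/128-thread figure is an assumption about this particular EPYC SKU, and the conversion ignores I/O wait, which also inflates load):

```python
# Convert a 1-minute load average to approximate CPU utilization,
# assuming the load consists of CPU-bound runnable threads.
def approx_cpu_percent(load_avg: float, hw_threads: int) -> float:
    return 100.0 * load_avg / hw_threads

# A load of ~29 on a hypothetical 128-thread EPYC:
print(f"{approx_cpu_percent(29, 128):.1f}%")  # in the same ballpark as the ~25% observed
```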

I have also loaded it up to about 35 Gbit/s of network traffic in narrow peaks, where big file transfers from multiple clients coincided, but that is also extremely rare and hasn't happened in over a month.

I appreciate the capacity when I have it, as it makes certain things quicker and more effortless, but if I am honest about it, as previously stated, this machine is leaps and bounds more than what I need. Here is what more typical long-term loads look like:


1712866147929.png


So yeah, absolutely massive overkill. But I love it.
 

name stolen

Member
Feb 20, 2018
50
17
8
With this all-core load the cores seem to be clocking at 2771-2772 MHz, which is below the advertised base clock of 2.8 GHz, but not by much.

...... ......

For shits and giggles I did a single-thread test. Core clocks up to 3676 MHz. So again, the same few MHz short of the max boost clock. I'm guessing there are some conservative Supermicro clock settings preventing it from hitting max clocks.
Especially for the first one, this looks like bus speed misreporting, or maybe it's actually a percent off.

100 * 28 = 2800, so 99 * 28 = 2772. I'm guessing hwinfo64 or hardinfo2 (for instance) will be showing the bus speed as 99 MHz in these instances, and I don't know if it actually is, but that's where the calculation comes from.
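The arithmetic behind that guess, spelled out (hypothetical BCLK values; monitoring tools typically report core clock as bus clock times multiplier):

```python
# Reported core clock = bus clock (BCLK) x multiplier.
def effective_mhz(bclk_mhz: float, multiplier: int) -> float:
    return bclk_mhz * multiplier

print(effective_mhz(100.0, 28))  # nominal base clock: 2800.0 MHz
print(effective_mhz(99.0, 28))   # a 99 MHz BCLK reading: 2772.0 MHz, matching the observed clocks
```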

I'm less certain about the single-threaded instance you cited, since the numbers are a little less clean arithmetically, but I bet it stems from the same situation.

Nice system :)
 

RolloZ170

Well-Known Member
Apr 24, 2016
5,426
1,641
113
I'm guessing hwinfo64 or hardinfo2 (for instance) will be showing the bus speed as 99 MHz in these instances, and I don't know if it actually is, but that's where the calculation comes from.
It should be 100 MHz, but it is not exact; it's measured by a function with the help of a motherboard timer.
Of course some motherboards can run at 99.0 MHz; they should not, but we have already seen horses puking right in front of the pharmacy.
edit:
the timer used as the reference can also be wrong.
 

name stolen

Member
Feb 20, 2018
50
17
8
Thanks, RolloZ170. Specifically, @mattlach had previously wondered, and maybe even worried a little, about that clock speed discrepancy. I just wanted to let them know that they probably can't get those 28 MHz back, because they're not really missing; it's just a calculation. No need to worry about cooling, or your VRMs not getting enough juice, or any of that. Just enjoy the power :)