AMD EPYC 7302p + Supermicro H11SSL-i version 2


i386

Well-Known Member
Mar 18, 2016
4,217
1,540
113
34
Germany
but of course 'usable' may mean "the stock fans are running at full speed all the time".
It depends on the chassis revision: chassis for Purley or newer platforms have 10.5k rpm fans in the midwall (FAN-0185L4) and would probably run at 50%, while older chassis have 7k rpm (FAN-0126L4), 6.3k rpm (FAN-0094L4 and FAN-0095L4) or even 5k rpm (FAN-0062L4) fans and could run at 70% or even 100% with a passive heatsink...

I would say get the SNK-P0063AP4 or another 2U active heatsink to have extra cooling capacity and keep the midwall fans at a "low" rpm ._.
 

mach3.2

Active Member
Feb 7, 2022
128
84
28
I would assume it's something from Supermicro or a vendor code.

EPYCs are often locked to brands, and info about that is usually on the spreader or socket depending on how it's sold.
I would suspect it's just the seller's stamp to identify the CPU as his product in the event it's returned to him for a swap/refund.
 

Cruzader

Well-Known Member
Jan 1, 2021
539
544
93
I would suspect it's just the seller's stamp to identify the CPU as his product in the event it's returned to him for a swap/refund.
I don't think I've seen that done with anything but invisible-ink-type markings on components for a decade.
A visible stamp is not hard to transfer/replicate for anybody aiming to defraud.
 

kpfleming

Active Member
Dec 28, 2021
383
205
43
Pelham NY USA
It depends on the chassis revision: chassis for Purley or newer platforms have 10.5k rpm fans in the midwall (FAN-0185L4) and would probably run at 50%, while older chassis have 7k rpm (FAN-0126L4), 6.3k rpm (FAN-0094L4 and FAN-0095L4) or even 5k rpm (FAN-0062L4) fans and could run at 70% or even 100% with a passive heatsink...

I would say get the SNK-P0063AP4 or another 2U active heatsink to have extra cooling capacity and keep the midwall fans at a "low" rpm ._.
Ahh... I miswrote what I intended to say :) I'm looking to combine a 7232p (8-core 120W TDP) with the H11SSL-i, not a 7302p (16-core 155W TDP).
 

kpfleming

Active Member
Dec 28, 2021
383
205
43
Pelham NY USA
I'm a bit confused by that reply; I said 155W, you replied 155W or 180W, then linked to the page which says 155W :)

Anyway, thanks to this thread I've ordered a combo with a 7232p; now I'll need to get a SM CPU cooler and a PCIe board for my M.2 drives (since I have two on the current board and the H11SSL-i only has one M.2 socket). Will be a fun Thanksgiving-weekend upgrade project!
 

osrk

Member
Sep 2, 2019
36
31
18
I've ordered from him in the past. The CPU came quickly and without problems. I would order from him again. CPU prices are great; motherboards are a little more, but still very decent. I think these are decommissioned parts from Chinese datacenters.
 

Sacrilego

Retro Gamer
Jun 23, 2016
135
159
43
48
I just got my package today. Much faster than I anticipated.
Decent packaging, everything in mint condition.
This is my first EPYC build. I'm quite excited. It's replacing a dual E5-2680 v2 setup, just in time for vSphere 8.
 

Sacrilego

Retro Gamer
Jun 23, 2016
135
159
43
48
Can I see the output of `lscpu -e` from a Linux boot? Thanks in advance.
Here you go:
Code:
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE    MAXMHZ    MINMHZ
  0    0      0    0 0:0:0:0          yes 3000.0000 1500.0000
  1    0      0    1 1:1:1:0          yes 3000.0000 1500.0000
  2    0      0    2 2:2:2:1          yes 3000.0000 1500.0000
  3    0      0    3 3:3:3:1          yes 3000.0000 1500.0000
  4    0      0    4 4:4:4:2          yes 3000.0000 1500.0000
  5    0      0    5 5:5:5:2          yes 3000.0000 1500.0000
  6    0      0    6 6:6:6:3          yes 3000.0000 1500.0000
  7    0      0    7 7:7:7:3          yes 3000.0000 1500.0000
  8    0      0    8 8:8:8:4          yes 3000.0000 1500.0000
  9    0      0    9 9:9:9:4          yes 3000.0000 1500.0000
 10    0      0   10 10:10:10:5       yes 3000.0000 1500.0000
 11    0      0   11 11:11:11:5       yes 3000.0000 1500.0000
 12    0      0   12 12:12:12:6       yes 3000.0000 1500.0000
 13    0      0   13 13:13:13:6       yes 3000.0000 1500.0000
 14    0      0   14 14:14:14:7       yes 3000.0000 1500.0000
 15    0      0   15 15:15:15:7       yes 3000.0000 1500.0000
 16    0      0    0 0:0:0:0          yes 3000.0000 1500.0000
 17    0      0    1 1:1:1:0          yes 3000.0000 1500.0000
 18    0      0    2 2:2:2:1          yes 3000.0000 1500.0000
 19    0      0    3 3:3:3:1          yes 3000.0000 1500.0000
 20    0      0    4 4:4:4:2          yes 3000.0000 1500.0000
 21    0      0    5 5:5:5:2          yes 3000.0000 1500.0000
 22    0      0    6 6:6:6:3          yes 3000.0000 1500.0000
 23    0      0    7 7:7:7:3          yes 3000.0000 1500.0000
 24    0      0    8 8:8:8:4          yes 3000.0000 1500.0000
 25    0      0    9 9:9:9:4          yes 3000.0000 1500.0000
 26    0      0   10 10:10:10:5       yes 3000.0000 1500.0000
 27    0      0   11 11:11:11:5       yes 3000.0000 1500.0000
 28    0      0   12 12:12:12:6       yes 3000.0000 1500.0000
 29    0      0   13 13:13:13:6       yes 3000.0000 1500.0000
 30    0      0   14 14:14:14:7       yes 3000.0000 1500.0000
 31    0      0   15 15:15:15:7       yes 3000.0000 1500.0000
 

Sacrilego

Retro Gamer
Jun 23, 2016
135
159
43
48
Thank you! That confirms what I wrote here:

Anyway, that's quite the setup. Thanks for sharing. Enjoy!
Thanks, I'm having a lot of fun with this setup so far. I've never had an EPYC setup, so there's a lot for me to learn about what it can do, along with its quirks.

Every PCIe slot can be bifurcated, which is awesome for me since I have two ASUS Hyper M.2 cards ready (quick check below).
I read earlier that this board can be fussy with certain GPUs. Tested with a 1060 and a 1070; no issues so far.
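A quick way to confirm the bifurcation took and that every drive on a Hyper M.2 card enumerated (standard Linux commands; nvme-cli may need to be installed separately):
Code:
# each NVMe drive behind the bifurcated slot should appear as its own PCIe device
lspci -nn | grep -i "non-volatile"
# and as its own controller/namespace (from the nvme-cli package)
nvme list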

Using the latest nightly TrueNAS SCALE, virtualization works great. I set up two Windows 10 VMs, passing through the 1060 and 1070 plus a USB keyboard and mouse for each, without issue. One server, two gamers!

Idle power consumption seems to be significantly lower than my dual E5-2680 v2 setup. I'll know for sure when I remove it, but the EPYC idles around 67 watts with the 1060 and two SSDs attached.

Mounting ISOs through the IPMI was problematic; I just couldn't get it to work over SMB. I had to resort to the Java client, which was very slow for some reason, and I had to relax Java security settings to even get it to work.
HTML5 KVM works fine.

The IPMI was not factory reset, so I had to reset it with IPMICFG.
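For reference, this is roughly the procedure (assuming the Linux build of Supermicro's IPMICFG; the binary name and flags can vary by version, so check the tool's own help output first):
Code:
# run as root on the host; binary name depends on the IPMICFG build you download
./IPMICFG-Linux.x86_64 -help   # confirm available options on your version
./IPMICFG-Linux.x86_64 -fd     # reset the BMC configuration to factory defaults
./IPMICFG-Linux.x86_64 -r      # cold-reset the BMC so the defaults take effect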

When booting UEFI with CSM disabled, the graphics card priority doesn't seem to work properly. Re-enabling legacy support solves this.

Fan control doesn't seem to push the fans up to 100% when set to Optimal, so the CPU can throttle when benchmarking with Cinebench. Switching the fan mode to Full Speed helps, but I prefer keeping it on Optimal.
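If anyone wants to switch the fan mode from the OS instead of the web UI, these are the raw IPMI commands commonly reported for Supermicro BMCs; they aren't officially documented, so verify against your own board before relying on them:
Code:
# read the current fan mode (commonly 0=Standard, 1=Full, 2=Optimal, 4=Heavy IO)
ipmitool raw 0x30 0x45 0x00
# force full speed
ipmitool raw 0x30 0x45 0x01 0x01
# back to optimal
ipmitool raw 0x30 0x45 0x01 0x02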

I'm quite happy so far. I'll leave it on the bench for a few more days to mess with. If anyone has questions or something they'd like me to try on it, feel free to let me know.
 

Sacrilego

Retro Gamer
Jun 23, 2016
135
159
43
48
I'll be using it in a Supermicro SC 846BA.
I'm still waiting for my VMUG licenses to become available to start upgrading to 8.
 

nk215

Active Member
Oct 6, 2015
412
143
43
49
Idle power consumption seems to be significantly lower than my dual E5-2680 v2 setup. I'll know for sure when I remove it, but the EPYC idles around 67 watts with the 1060 and two SSDs attached.
Please report back the power consumption when you have it in a case. In my experience, server chassis fans are power-hungry. The 1060 idles at around 15 watts (I get that from nvidia-smi). Modern SSDs use almost nothing.
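For comparison, the query I use is just the standard nvidia-smi fields, so it should work on any recent driver:
Code:
# one-shot report of the current board power draw and limit
nvidia-smi --query-gpu=name,power.draw,power.limit --format=csv
# or poll once per second while the system idles
nvidia-smi --query-gpu=power.draw --format=csv,noheader --loop=1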
 

Sacrilego

Retro Gamer
Jun 23, 2016
135
159
43
48
Please report back the power consumption when you have it in a case. In my experience, server chassis fans are power-hungry. The 1060 idles at around 15 watts (I get that from nvidia-smi). Modern SSDs use almost nothing.
I ran some tests against a dual E5-2680 v2 setup.

The systems were tested outside their cases to eliminate fans and other peripherals from interfering with power usage.
Power was measured at the wall using a kill-a-watt.

Please don't take these results too seriously. Testing would require a lot more time to get more accurate results, but I believe these results are still interesting.
It was obvious that there would be improvements, but I was curious about how much of an improvement it would be.

Testing consisted of booting the TrueNAS Bluefin nightly build from 11/03/2022 with the onboard video as primary and the GTX 1060 passed through to a Windows 10 VM.
Letting the system idle for 5 minutes after the IP addresses are shown on screen, then recording the idle power usage.
Starting the Windows VM first in a 6-core/12-thread vCPU configuration, then in an all-cores-and-threads vCPU configuration, running Cinebench R23, and recording power usage and Cinebench scores.
The final test is done with all cores and threads running Cinebench in a loop alongside Unigine Heaven to fully load the system, recording power usage and the final Heaven benchmark score.

Equipment used:
Power supply is a Delta 1000W 80 Plus Gold
GPU used is an EVGA GTX 1060 6GB
2 SSDs, one for the OS, the other for VM storage
The dual E5 system consists of a Gigabyte GA-7PESH1 with (2) Xeon E5-2680 V2 and 64GB of DDR3 RDIMMs in a quad-channel configuration on both CPUs.
The EPYC system consists of a Supermicro H11SSL-i with an EPYC 7302P and 192GB of DDR4 RDIMMs in a 6-channel configuration.

Power usage and scores:
Dual E5-2680 V2
Powered off: 17W
Reached 200W while booting
Idle: 103W
216W running Cinebench with 6 cores / 12 threads, score 6434
310W running Cinebench with 20 cores / 40 threads, score 12583
414W fully loaded with Cinebench and the Heaven benchmark. FPS 84.4, min 8.7, max 154.2, score 2127

EPYC 7302P
Powered off: 5W
Reached 115W while booting
Idle: 63W
136W running Cinebench with 6 cores / 12 threads, score 11046
175W running Cinebench with 16 cores / 32 threads, score 18964
286W fully loaded with Cinebench and the Heaven benchmark. FPS 113.1, min 9.1, max 235, score 2850
 

kpfleming

Active Member
Dec 28, 2021
383
205
43
Pelham NY USA
My CPU/board combo will arrive in a couple of days, and I've purchased a Dynatron A26 plus a Noctua NF-A6x25 fan to use on it. Should I trust the thermal paste supplied on the A26 or use something else?
 

ano

Well-Known Member
Nov 7, 2022
632
259
63
Ordered 4 sets of H12 with 7402P; they are upgrading me to 7402 instead, which is OK.

Ordered with 8x32 and 8x16 to test out with ZFS.

Guessing ZFS is going to be the limitation as usual.

Only have to decide on 9300, 9400 or 9500 HBAs... (all-flash CSE-216-BE1C)
 

PlasticZeus

New Member
Mar 25, 2019
14
5
3
Ordered 4 sets of H12 with 7402P; they are upgrading me to 7402 instead, which is OK.

Ordered with 8x32 and 8x16 to test out with ZFS.

Guessing ZFS is going to be the limitation as usual.

Only have to decide on 9300, 9400 or 9500 HBAs... (all-flash CSE-216-BE1C)
If you are planning on adding NVMe JBODs, you have to go with the 9500; the 9400 only does NVMe internally. If it's just SAS3 JBODs, save money and go with the 9305w-16e.
 