AMD EPYC 7302p+ Supermicro H11SSL-i version 2

inquam

New Member
Jan 29, 2017
8
1
3
44
Like many, I built a dual Xeon E5-2670 system in an S2600CP a few years back.

I have 128 GB RAM (16x 8GB)
3 LSI SAS2308-8i SAS controllers connected to a backplane that hosts 24 3.5" HDDs
an Intel P3605 SSD
an Nvidia Quadro P2000
and an Intel X710 network card.

I run about 30-40 Docker containers on an Ubuntu system. They include Plex, Radarr, Sonarr, Readarr, Prowlarr, Lidarr, SABnzbd, qBittorrent, TeamCity, servers for SVN and Git, and some others that don't do much unless interacted with.

On top of that I also run 3 VMs using KVM: a Windows Server and two Windows 10 machines used by TeamCity as build agents. These all sit idle 99% of the time.

The current setup regularly hovers around 300-400W "idling" with these containers running.

Since it has started to show its age and energy costs have gone up, my first thought was to reuse my 3950X machine to replace the server once I get a 7950X to replace that machine with. But a consumer-grade platform has such limited connectivity that it would be hard to plug everything into the motherboard. I almost started contemplating using an M.2-to-PCIe adapter to connect one of the LSI cards that way, and then mayyyybbbeeee it would work. But it felt like a hack and not a real solution to upgrading my server.

Then I stumbled across the myriad of 2nd-gen EPYCs available on eBay and started feeling that this is the way to go. I would retain an abundance of PCIe connectivity and could easily transplant all of my peripherals over.

But the question is: what would the difference in performance be, and would I save any energy if I switched to something like an AMD EPYC 7542 and an H12SSL-i? My plan is to go with 8x 32GB sticks of memory.

I'm not sure how well they conserve power under low load, or whether another model like the 7402 or 7502 would be better suited.
I would like to have the cores and power to last some years into the future, but also to end up with a system that is more efficient than what I have today.
 

lopgok

Active Member
Aug 14, 2017
223
156
43
I suspect you will reduce power consumption by a factor of two.
Remember the EPYC TDP assumes you will be using AVX instructions. Without them, your actual draw will be at least 20% below TDP. I have a 7302 (16-core) and my machine idles around 100 W, and I can't get it above 200 W no matter what I do.

I didn't notice the 24 hard drives at first. As pointed out, they will use a fair amount of power. I upgraded to 10 TB helium drives, which reduced my power consumption a bit.
 
Last edited:

kpfleming

Active Member
Dec 28, 2021
416
214
43
Pelham NY USA
The TDP of your Xeons is 115 W each, so even if they were running near maximum load while the system is 'idle' (which of course is not the case), the pair would only account for about 230 W of your measured 300-400 W. If the system is relatively idle, the CPUs are probably consuming less than 120 W total for the two of them, which means the remaining 180-280 W is RAM, HDDs, GPU, NIC, PSU losses, etc.

24x 3.5" SAS HDDs is probably 120 W of idle power alone.
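The budget above is quick to check. A minimal sketch of the arithmetic, using the thread's reported figures (the 120 W near-idle CPU estimate is an assumption, not a measurement):

```python
# Back-of-the-envelope power budget for the dual E5-2670 box.
# Figures come from this thread; cpu_idle_est is a rough assumption.

TDP_PER_CPU = 115           # W, Intel's spec for the E5-2670
NUM_CPUS = 2
MEASURED_IDLE = (300, 400)  # W, reported "idle" range at the wall

cpu_max = TDP_PER_CPU * NUM_CPUS   # absolute worst case for both CPUs
cpu_idle_est = 120                 # W, assumed total for both CPUs near idle

# Whatever the CPUs aren't drawing must be RAM, HDDs, GPU, NIC, PSU losses.
rest = tuple(m - cpu_idle_est for m in MEASURED_IDLE)

print(f"CPUs flat out: at most {cpu_max} W")
print(f"Non-CPU draw at idle: roughly {rest[0]}-{rest[1]} W")
```

Even the worst-case 230 W can't explain a 400 W idle reading, which is why the drives and other peripherals matter so much here.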
 

inquam

New Member
Jan 29, 2017
8
1
3
44
The TDP of a CPU seldom equates to power usage though. I saw a test where a single E5-2670 was doing something like 160 W under load.
They also seem to lack some modern power-saving states, if my interpretation of powertop was correct. This is where I'm thinking a more modern CPU could make up some savings. If I got down to a 200 W continuous draw with the small load it has today, instead of the 400 W peaks I see now, I would be happy.

Doing work, the HGST drives would use something like 7-8 W per drive if I remember correctly, and idle it was around 3-4 W. They seldom do much work, and very seldom together (only when scrubbing and syncing... I run SnapRAID). A very quick test putting all the SAS drives to sleep left the server at about 270 W.
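The spindle math above works out roughly like this. A small sketch, using the per-drive figures as recalled in the thread (these are recollections, not datasheet values):

```python
# Rough fleet power for the 24 HGST 3.5" drives, using the per-drive
# wattages recalled above (assumptions, not datasheet numbers).

NUM_DRIVES = 24
IDLE_W = (3, 4)     # W per drive, spinning but idle
ACTIVE_W = (7, 8)   # W per drive, doing work

idle_fleet = tuple(w * NUM_DRIVES for w in IDLE_W)
active_fleet = tuple(w * NUM_DRIVES for w in ACTIVE_W)

print(f"all drives idle:   {idle_fleet[0]}-{idle_fleet[1]} W")
print(f"all drives active: {active_fleet[0]}-{active_fleet[1]} W")
```

A 72-96 W idle-spin estimate is consistent with the wall reading dropping to ~270 W when the drives were put to sleep.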

When I say "idle" I mean I'm not manually doing stuff. But with everything it runs, there will be media library scans etc. going on at regular intervals, and Windows VMs sitting idle will do what Windows systems do... hehe. So for instance, a snapshot right now shows:

[attached screenshot: current load snapshot]
 
Last edited:

ano

Well-Known Member
Nov 7, 2022
694
295
63
The EPYC is so much faster than your current CPUs, it's not even funny.
 

ano

Well-Known Member
Nov 7, 2022
694
295
63
Yeah, his current CPU cannot Ryzen to the same EPYC performance.
Ohh, the dad humor :p

Our work lab has misc. Intel 3rd-gen plus AMD 2nd- and 3rd-gen EPYC (soon Genoa). We didn't really start testing the AMD stuff at a larger scale until 2022, and... wowie, for pretty much all our workloads.

We only had a few systems on AMD the last few years, but in 2022 we deployed as much as 25% AMD, and for 2023 we will probably see 50-75% depending on how the 4th-gen Intel stuff turns out :eek:

On topic: a 36-disk system with an H12 + 7443P + 8 RAM sticks, 2 SATA SM883a OS drives, and 2 RAID controllers averages 400 W with mild load, and 450 W under heavier IO that's maxing all the spindles (writes).

I'm getting 20 more AMD servers next week, should be a fun month. If only we could get some switches to go with them.

We swapped some older DL380s with dual E5-2690 v3 for new single-socket DL325 Gen10 Plus v2 with a lower-end SKU (7313, due to an SPLA licensing issue), and that specific workload is up around 15x: what several of those dual DL380s did, a single 7313 will do faster.
 

inquam

New Member
Jan 29, 2017
8
1
3
44
Yea, part of me is thinking that the added performance will get things done so much faster that the machine will have more time to idle, and thus save energy in the long run.
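That "race to idle" intuition can be made concrete with a toy energy comparison over a fixed day. All of the wattages and durations below are hypothetical, purely to illustrate the trade-off:

```python
# Toy "race to idle" comparison: energy = power x time.
# Every number here is hypothetical, chosen only to show the shape of it.

def job_energy_wh(active_w, idle_w, job_hours, window_hours):
    """Energy over a fixed window: active during the job, idle the rest."""
    return active_w * job_hours + idle_w * (window_hours - job_hours)

# Old box: slower job, higher idle floor.
old = job_energy_wh(active_w=400, idle_w=300, job_hours=4, window_hours=24)
# New box: same job done 4x faster, and a much lower idle floor.
new = job_energy_wh(active_w=250, idle_w=120, job_hours=1, window_hours=24)

print(f"old: {old} Wh/day, new: {new} Wh/day")
```

With numbers like these the faster machine wins on both fronts, though for a box that idles 99% of the time the idle floor dominates the total far more than the job speed does.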
 

Cruzader

Well-Known Member
Jan 1, 2021
656
657
93
if only we could get some switches to go with them.
The switching part is what has been holding me back from already having replaced my standard 2U boxes with EPYC.

2x 25GbE ConnectX-4 cards are already available down in the $50-60 range if you buy a few of them.
(That's already what I buy to use as 2x 10GbE, since it's like a $5 difference and gets me PCIe Gen3 instead of Gen2.)
But the pricing on 25GbE switching is not so fun.

The 2x 40GbE mLOMs I got in the Cisco boxes, run as 1x 40 or 4x 10, are still ESXi 8 supported.
For the EPYC it's either bite the bullet on €2000 in switching or drop back to full 10G.
 

ano

Well-Known Member
Nov 7, 2022
694
295
63
The switching part is what has been holding me back from already having replaced my standard 2U boxes with EPYC.

2x 25GbE ConnectX-4 cards are already available down in the $50-60 range if you buy a few of them.
(That's already what I buy to use as 2x 10GbE, since it's like a $5 difference and gets me PCIe Gen3 instead of Gen2.)
But the pricing on 25GbE switching is not so fun.

The 2x 40GbE mLOMs I got in the Cisco boxes, run as 1x 40 or 4x 10, are still ESXi 8 supported.
For the EPYC it's either bite the bullet on €2000 in switching or drop back to full 10G.
I bought ConnectX-6 for everything, got a box of 25-ish on my desk here ;) but DX010 switches are gone, and everything else with an OS is gone (100GbE) :|

And yes, SFP28 switches cost more than QSFP+ most of the time :|
 

jei

Active Member
Aug 8, 2021
191
113
43
Finland
The package contained a "Commercial Invoice" declaring a $48 value. This was the "low tax" option. Finnish customs don't really care about this paper; it's up to the receiver's conscience to declare the correct amount. I'm pretty sure that if I declared a FedEx International Priority package weighing nearly 2 kg as "USD $48 (including shipping)", they would automatically flag the package for further inspection.
 

Cruzader

Well-Known Member
Jan 1, 2021
656
657
93
I'm pretty sure that if I declared a FedEx International Priority package weighing nearly 2 kg as "USD $48 (including shipping)", they would automatically flag the package for further inspection.
Hypothetically speaking, even if you sent a 300-500 kg pallet through marked that low, it would raise no flags at all.
FedEx etc. simply do not care, and for priority shipments they generally self-declare and deliver directly, outside the regular customs system.

It's not even unusual to see shipments marked as customs cleared with contents verified before they've been picked up from the sender at all.
 

jei

Active Member
Aug 8, 2021
191
113
43
Finland
FedEx etc. simply do not care, and for priority shipments they generally self-declare and deliver directly, outside the regular customs system.
FedEx does not care, but FedEx has no role here; it's 100% the Finnish customs agency. They will notify FedEx to release the package when everything is in order. It would be better if the sender prepaid the taxes; in that case it would not route through local customs at all.
 

Cruzader

Well-Known Member
Jan 1, 2021
656
657
93
FedEx does not care, but FedEx has no role here
If the sender slaps an IOSS number on it, FedEx can declare on their own that everything is in order and deliver it directly.
And they don't care whose number or what value it is.
 

Sean Ho

seanho.com
Nov 19, 2019
814
383
63
Vancouver, BC
seanho.com
Like many, I built a dual Xeon E5-2670 system in an S2600CP a few years back.

I have 128 GB RAM (16x 8GB)
3 LSI SAS2308-8i SAS controllers connected to a backplane that hosts 24 3.5" HDDs
an Intel P3605 SSD
an Nvidia Quadro P2000
and an Intel X710 network card.

I run about 30-40 Docker containers on an Ubuntu system. They include Plex, Radarr, Sonarr, Readarr, Prowlarr, Lidarr, SABnzbd, qBittorrent, TeamCity, servers for SVN and Git, and some others that don't do much unless interacted with.

On top of that I also run 3 VMs using KVM: a Windows Server and two Windows 10 machines used by TeamCity as build agents. These all sit idle 99% of the time.
As an alternative suggestion that might be cheaper, you might consider getting rid of some of the PCIe cards by consolidating the HBAs into a single controller plus a cheap HP SAS2 expander (your 24 spinners are only very rarely going to saturate 8 lanes of SAS2; even better if you upgrade to denser drives). You could also drop the P2000 by offloading Plex to a separate, cheap 7th-gen uSFF/TMM with QSV, with significantly reduced power draw.

That would reduce your PCIe needs to the range of, say, an X470D4U or X11SSM-F. With an X11SCH-F you could even keep Plex on the same machine, using a chip with an iGPU. Or a -TF board would let you drop the X710. Lots of possibilities.