Build Advice on Xeon SP or EPYC? ASUS or Supermicro Board?

Epyc or Xeon?

  • Epyc

    Votes: 7 46.7%
  • Xeon with Supermicro

    Votes: 8 53.3%
  • Xeon with Asus

    Votes: 0 0.0%

  • Total voters: 15

cuco

Member
Feb 13, 2018
Hey,
I am new here. I read the blog very frequently and really enjoy it. Thanks for the great job you all do!
I currently run an E3-1240 v3 on a Supermicro board with only two expansion slots. The server runs Windows Server 2016 Essentials with a pooling software called StableBit DrivePool.
I am quite happy with that. The most demanding workload is my Plex server, now that 4K files need to be converted. I use Direct Play at home, but on the road, and for my friends, everything needs to transcode, and no new movie works. So a new server has to come.
I am also thinking about adding a GPU for hardware transcoding; in my current setup this is not possible.

I already have a Chenbro SR107 chassis with backplanes, so I just need a motherboard, CPU, and RAM.
The system holds twelve 2.5" SAS/SATA drives and sixteen 3.5" drives, and I have three external SAS enclosures, so I route some SAS ports to PCIe brackets.

I thought about EPYC, but it is basically not available. A 7401P would be fine, but there is no chance of getting one here in Germany. Going with the 7351P could be an option, but I am a bit concerned about power consumption.
The tests showed that idle power is equal to Xeon SP, while load power is much higher, but so is the performance.

On the other hand, there are some sweet Xeon SP parts like the Silver 4114 or 4116. I also have a great deal on a Gold 6130, but I could not find any numbers on the power consumption of the 125 W TDP parts.
Perhaps someone from the STH labs can help here?

If I go with Xeon, I have to decide between:
  • Supermicro X11SPH-nCTPF: SAS and SFP+ onboard (perhaps the 10Gbase-T version is also an option)
  • ASUS Z11PA-U12/10G-2S: SFP+ onboard
Personally I would prefer the ASUS board, but the Supermicro has an additional PCIe x8 link between the CPU and the PCH; see the attached pictures.
The ASUS board doesn't have that, and I am concerned there could be a bottleneck.
I would use the 8 SATA ports, the two 10 Gbit ports, and the M.2 x2 slot, all over DMI 3.0?
So perhaps the Supermicro is the better choice? Or shouldn't I worry about that?
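To put rough numbers on that worry, here is a minimal back-of-envelope sketch (the per-device figures are theoretical peaks I am assuming, not measurements):

# Back-of-envelope: peak demand behind the PCH vs. DMI 3.0 bandwidth.
# All per-device figures are rough theoretical maxima (assumptions).
DMI3_GBPS = 3.94                 # ~PCIe 3.0 x4 of usable bandwidth, GB/s

devices = {
    "8x SATA III": 8 * 0.55,     # ~550 MB/s per port, and only for SSDs
    "2x 10GbE":    2 * 1.25,     # 10 Gbit/s = 1.25 GB/s per port
    "M.2 PCIe x2": 1.97,         # PCIe 3.0 x2
}

total = sum(devices.values())
print(f"Aggregate peak: {total:.2f} GB/s vs. DMI 3.0: {DMI3_GBPS:.2f} GB/s")
# A shortfall only matters if everything bursts at once; spinning disks
# and typical Plex traffic come nowhere near these peaks.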

In any case, I will add:
  • a Broadcom/LSI SAS HBA 9300-24i
  • a SAS expander or an LSI 9300-4i4e
  • perhaps a GPU for hardware transcoding

Thank you for your help and advice!
And sorry for my bad English.

cuco
 

Attachments

MiniKnight

Well-Known Member
Mar 30, 2012
I prefer Supermicro IPMI. ASUS isn't as good at updating its firmware for Java fixes.
 

mstone

Active Member
Mar 11, 2015
Ok. And what do you think about the bottleneck?
There is no bottleneck. The two systems are just configured differently:
  • ASUS gives you more slots directly connected; SM hangs a slot off the PCH and needs a lane for the BMC.
  • ASUS has an x2 M.2 slot; SM has an x4 M.2 slot.
  • ASUS has x8 OCuLink; SM has x4 OCuLink.
  • SM has one x16 slot; ASUS has two x16 slots.
Unless you have unusually specific requirements, they're not much different.
 

Patrick

Administrator
Staff member
Dec 21, 2010
EPYC will use a lot more power than a Xeon Silver, but it is also faster, even the 7351P.

Another part of EPYC is that you end up using 8 DIMMs per socket versus 6 for Xeon Silver to get full memory bandwidth. That means two extra DIMMs on EPYC. Better for RAM capacity if you need it, but not as good for power consumption and initial purchase price. Minimally you can get by with 4 DIMMs (one per NUMA node) on EPYC and 1-2 on Skylake.
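For a rough sense of the bandwidth side, a minimal sketch assuming DDR4-2666 in every channel (theoretical peaks, not measured numbers):

# Peak memory bandwidth = channels x transfer rate x 8 bytes per transfer.
MTS = 2666 * 10**6               # DDR4-2666: transfers per second

for name, channels in [("EPYC (8 channels)", 8), ("Xeon Scalable (6 channels)", 6)]:
    gbps = channels * MTS * 8 / 1e9
    print(f"{name}: {gbps:.0f} GB/s theoretical peak")
# EPYC (8 channels): 171 GB/s theoretical peak
# Xeon Scalable (6 channels): 128 GB/s theoretical peak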

I have not used the ASUS board, but I use a version of the Supermicro X11SPH and it has been solid for many months.
 

cuco

Member
Feb 13, 2018
And would you recommend the 10Gbase-T version or the SFP+ one?
I don't have any 10G hardware for now; it is just for the future. We will move to a house next year, and then I want to have Ethernet in all rooms.
SFP+ switches are cheaper, and there are passive ones, right?
 

Patrick

Administrator
Staff member
Dec 21, 2010
@cuco that is going to depend on which networking you want to use. SFP+ is generally the better of the two, but cabling is more expensive and there is a lower installed base in existing walls.
 

cuco

Member
Feb 13, 2018
The walls will get normal RJ45; only the server would have SFP+.
@Patrick: Do you have any information about the Xeon Gold's idle consumption? Is it equal to Silver, with just a higher maximum under load?
 

Patrick

Administrator
Staff member
Dec 21, 2010
The walls will get normal RJ45; only the server would have SFP+.
@Patrick: Do you have any information about the Xeon Gold's idle consumption? Is it equal to Silver, with just a higher maximum under load?
Usually higher, but by single-digit watts.
 

K D

Well-Known Member
Dec 24, 2016
CPU transcoding of 4K videos is going to lag even with the Silver. My 1275 v6 gives better performance in Plex than my Silver 4114. You may want to add a graphics card to your existing setup and check out the performance of GPU transcoding before deciding to rebuild the entire system.

I use a 3-year-old i5 desktop with a 780 Ti for 4K Plex transcoding.
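If you want to try that before buying anything else, here is a minimal sketch of a transcode benchmark, assuming an ffmpeg build with NVENC support and a test file named sample_4k.mkv (both of those are my assumptions, not your setup):

# Rough NVENC transcode benchmark: encode on the GPU, discard the output.
import subprocess, time

start = time.time()
subprocess.run([
    "ffmpeg", "-y",
    "-i", "sample_4k.mkv",       # placeholder name for a 4K test file
    "-c:v", "h264_nvenc",        # GPU H.264 encoder (needs an NVENC build)
    "-an",                       # skip audio, we only care about video speed
    "-f", "null", "-",           # throw the result away, just measure speed
], check=True)
print(f"Transcode took {time.time() - start:.1f} s")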
 

mstone

Active Member
Mar 11, 2015
CPU transcoding of 4K videos is going to lag even with the Silver. My 1275 v6 gives better performance in Plex than my Silver 4114. You may want to add a graphics card to your existing setup and check out the performance of GPU transcoding before deciding to rebuild the entire system.

I use a 3-year-old i5 desktop with a 780 Ti for 4K Plex transcoding.
Yes, Intel seems to have decided to segment the market by restricting clock speeds, so you have to buy into the Gold line and pay for a bunch of features you don't want in order to get raw single-thread performance. That's a big part of why I've been so underwhelmed by Intel's current product lineup.
 

cuco

Member
Feb 13, 2018
I had already considered adding a GPU, but my current motherboard lacks the slots. I also intended to upgrade anyway, so right now is the time.

Does anyone know what the ASUS IPMI web GUI looks like?
 

BobbyB

Member
Dec 26, 2016
As an alternative to Skylake-SP and EPYC, consider the upcoming Xeon E-2176G: 6 cores, graphics onboard, DDR4-2666, and much lower power. Less memory bandwidth, unbuffered ECC, and far fewer PCIe lanes are the drawbacks, but I doubt those are a bottleneck for you.
It leaked a few days back; god knows if and when details come out, and whether it will be compatible with existing X11 socket 1151 motherboards (I wish). It should be out early Q2 on paper.
 

cuco

Member
Feb 13, 2018
Yes, I saw that. Basically, I have been waiting for these parts for the last few years, but now I have decided to go bigger :cool:

Last question:
Xeon Gold 6130 with the Supermicro X11SPH-nCTPF
or
EPYC 7401P with the Gigabyte MZ31-AR0?

Are they equal in performance? In PassMark, the 7401P is only a very little faster than the EPYC 7351P, and the Gold 6130 seems to be slightly ahead. But in SPEC CPU2006, the 7401P seems to be about 50% quicker.
 

Attachments

cuco

Member
Feb 13, 2018
The Plex transcoder is multi-threaded, I think, so the total score should be what matters. Only the VC-1 codec should be single-thread relevant, shouldn't it?
 

cuco

Member
Feb 13, 2018
Thanks for the help here. I have decided to go with an EPYC system.

So now I have to decide which RAM to take.
I thought of using the:
Kingston KSM26RS4/16HAI, 16 GB DDR4-2666, 1Rx4
but the Samsung M393A4K40BB2-CTD, 32 GB DDR4-2666, 2Rx4,
is much cheaper.
Do four 2Rx4 DIMMs use four memory channels? Or do I also need eight of them to use the quad-channel interface?


 

alex_stief

Well-Known Member
May 31, 2016
EPYC CPUs have 8 memory channels, not quad-channel.
You need to populate each channel with one DIMM for optimal performance, regardless of the number of ranks per DIMM -> 8 DIMMs per CPU.
The bare minimum with EPYC, for workloads that do not require a lot of memory bandwidth, is one DIMM per die/NUMA node, so 4 DIMMs per CPU. This way, each core has direct access to part of the memory without communication overhead over Infinity Fabric.
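To make the population rule concrete, here is a toy sketch assuming one socket with 4 dies and 2 channels per die (the channel-to-die mapping below is illustrative, not from a board manual):

# Toy model: EPYC socket with 4 dies, 2 memory channels per die.
# Flags any die that has no local DIMM and would only see remote memory.
DIES, CHANNELS_PER_DIE = 4, 2

def check_population(populated):
    """populated: set of channel indices 0..7 that hold a DIMM."""
    for die in range(DIES):
        local = {die * CHANNELS_PER_DIE + c for c in range(CHANNELS_PER_DIE)}
        status = "OK" if local & populated else "no local DIMM, remote only"
        print(f"Die {die}: {status}")

check_population({0, 2, 4, 6})   # 4 DIMMs, one per die: the sane minimum
check_population(set(range(8)))  # 8 DIMMs: full bandwidth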