PMS 4.0...PMS 5.0...PMS 6.0...No PMS 7.0! Plex/Storage server upgrade [PICS]


stresslvl0

New Member
Jan 7, 2019
7
1
3
I've been following your build as I'm building something similar. I think I'm going to stick with the Xeon Silver line and add something like a P400 to pass-thru for transcoding (with patched drivers to go past the 2 stream Nvidia limit), with my fingers crossed for Linux decode support coming soon enough. I started looking at the E-2176G but that seems too limiting in terms of motherboard/RAM (I want room for 8 DIMMs so I can use 8GB DIMMs and add more later - it looks like the only SuperMicro ATX board for that CPU is the X11SCA-F with only 4 DIMMs).

Definitely can't wait to see how this goes, and I'm looking forward to some photos when it's finished! I'll probably be ordering mine soon.
 

jingram

Member
Oct 21, 2018
57
8
8
I got most of my build done last night. Waiting on 2 more sticks of ram to arrive tomorrow.

I paid around $180 per 16GB DIMM direct from SM's eStore. Prices seem to fluctuate; the first batch I got was in the $160 range.

Will post pics of the completed build.

One thing that did throw me for a minor loop was getting an OS installed and bootable on the NVMe drives. The other very stupid realization on my part was that the BMC controller drives the VGA port alone; it isn't shared with the iGPU. Didn't even think about it. So while I have this slick new setup and Emby hardware transcoding is working great on the iGPU, monitor output on the ASPEED BMC stinks; desktop output quality is atrocious. It will be fine once I'm back in a good state and running pretty much headless, but it was a stupid oversight on my part given the board I specced out.
 

Ixian

Member
Oct 26, 2018
88
16
8
My last two SM boards I don't think I've ever even hooked the VGA up to a monitor, I just use iKVM right out of the box for the initial setup. Headless start to finish.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
My last two SM boards I don't think I've ever even hooked the VGA up to a monitor, I just use iKVM right out of the box for the initial setup. Headless start to finish.
Same here, I've never hooked video up to any of my SuperMicro boards. Only iKVM.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
DDR4 is very expensive compared to what you can buy used DDR3 for.
I'm not even comparing it to DDR3. I'm comparing it to my 32GB RDIMMs, which are about $250, whereas it's about $380 for 32GB (2x 16GB) of UDIMMs. It's actually making me reconsider sticking with the Xeon Scalable platform I was originally going with, since I already have the board for that in hand. It would just mean making a serious bet that Plex and Linux will start supporting the Quadro P2000 for hardware transcoding (both encode and decode) in the near future.
 

Ixian

Member
Oct 26, 2018
88
16
8
Run a Windows VM in Unraid and pass the GPU to it. Then run Plex on it.

That may not have sounded appealing to you at first, but all things considered it might be the best option now. It's not difficult to manage, and the GPU will work just fine for encoding and decoding. Unraid is unlikely to support Nvidia drivers any time soon, which rules out Docker, and support in a Linux VM depends heavily on ffmpeg and/or Plex and Emby adding it. Even assuming they eventually do, I'd put serious money on it not being stable for a while at first, whereas on Windows it more or less is today.
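If you do go the passthrough route, the first sanity check is whether the card lands in its own IOMMU group on the host. Here's a rough sketch of the kind of check I mean, using nothing but the standard Linux sysfs layout (run it on the Unraid host; nothing in it is specific to any particular board):

```python
#!/usr/bin/env python3
"""List each IOMMU group and the PCI devices in it, so you can confirm a
GPU (e.g. a P2000) is isolated before handing it to a VM."""
import os

IOMMU_ROOT = "/sys/kernel/iommu_groups"  # standard sysfs location

def main() -> None:
    if not os.path.isdir(IOMMU_ROOT):
        raise SystemExit("No IOMMU groups found - is VT-d enabled in the BIOS and kernel?")
    for group in sorted(os.listdir(IOMMU_ROOT), key=int):
        devices = sorted(os.listdir(os.path.join(IOMMU_ROOT, group, "devices")))
        print(f"IOMMU group {group}: {', '.join(devices)}")

if __name__ == "__main__":
    main()
```

If the GPU and its audio function are the only things in their group, the passthrough side is usually painless.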
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Run a Windows VM in Unraid and pass the GPU to it. Then run Plex on it.

That may not have sounded appealing to you at first, but all things considered it might be the best option now. It's not difficult to manage, and the GPU will work just fine for encoding and decoding. Unraid is unlikely to support Nvidia drivers any time soon, which rules out Docker, and support in a Linux VM depends heavily on ffmpeg and/or Plex and Emby adding it. Even assuming they eventually do, I'd put serious money on it not being stable for a while at first, whereas on Windows it more or less is today.
I've considered it, and still am. But being a Windows systems/network admin by trade, I'm really dreading having to maintain a Windows server for Plex. Having to patch MS systems every month with those monstrous cumulative updates is the last thing in the world I want to be doing on my home server.

On top of maintaining a whole Windows OS, RAM management there is a lot less seamless than just letting my Plex Docker container dynamically use whatever RAM it needs, including for my /transcode directory.
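For anyone wondering what I mean by that: the /transcode directory is just a tmpfs mount on the container. Something roughly like this if you were driving it from the Docker SDK for Python (the image tag, the 8g cap, and the host paths are placeholders, not my exact config):

```python
"""Run Plex with /transcode backed by tmpfs (RAM) instead of disk.
Sketch only - the image, the size cap, and the host paths are placeholders."""
import docker

client = docker.from_env()
client.containers.run(
    "plexinc/pms-docker",                     # Plex's official image
    name="plex",
    detach=True,
    network_mode="host",
    tmpfs={"/transcode": "size=8g"},          # transcodes land in RAM
    volumes={
        "/mnt/user/appdata/plex": {"bind": "/config", "mode": "rw"},
        "/mnt/user/media": {"bind": "/data", "mode": "ro"},
    },
    environment={"TZ": "America/New_York"},
)
```

Point Plex's transcoder temporary directory at /transcode in its settings and the transcodes never touch the array.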
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
So in comparing the two builds below, I'm leaning more towards the Xeon Scalable build now that I've laid out the price comparison. With the Xeon Scalable build I don't need to buy a 10Gb NIC (it's onboard) or an HBA (the board comes with 12 SATA ports, plus enough spare PCIe slots for dual Optanes). On top of that, it gives me a very expandable system if I ever decide I need more cores/RAM. I'll obviously miss out on iGPU transcoding, but I'll always have the option of a P2000 in a Windows VM if I really feel it's needed.

Xeon E-2100 build - $1,900+
  • Intel Xeon E-2176G - $380
  • SuperMicro X11SCZ-F - $310 (in hand)
  • 64GB DDR4 RAM - $800
  • Intel X520-DA - $100
  • 12Gbps HBA - $300+
Xeon Scalable build - $2,100
  • Intel Xeon Gold 5115 - $1,200
  • SuperMicro X11SPM-TPF - $425 (in hand)
  • 64GB DDR4 RAM - $500 (in hand)
 

Ixian

Member
Oct 26, 2018
88
16
8
I've considered it, and still am. But being a Windows systems/network admin by trade, I'm really dreading having to maintain a Windows server for Plex. Having to patch MS systems every month with those monstrous cumulative updates is the last thing in the world I want to be doing on my home server.

On top of maintaining a whole Windows OS, RAM management there is a lot less seamless than just letting my Plex Docker container dynamically use whatever RAM it needs, including for my /transcode directory.
Heck, I wouldn't bother with Windows Server (if that's what you meant); Windows 10 Home is fine for a Plex server and requires very little maintenance. I don't have to do jack to manage mine.

So in comparing the two builds below, I'm leaning more towards the Xeon Scalable build now that I've laid out the price comparison. With the Xeon Scalable build I don't need to buy a 10Gb NIC (it's onboard) or an HBA (the board comes with 12 SATA ports, plus enough spare PCIe slots for dual Optanes). On top of that, it gives me a very expandable system if I ever decide I need more cores/RAM. I'll obviously miss out on iGPU transcoding, but I'll always have the option of a P2000 in a Windows VM if I really feel it's needed.

Xeon E-2100 build - $1,900+
  • Intel Xeon E-2176G - $380
  • SuperMicro X11SCZ-F - $310 (in hand)
  • 64GB DDR4 RAM - $800
  • Intel X520-DA - $100
  • 12Gbps HBA - $300+
Xeon Scalable build - $2,100
  • Intel Xeon Gold 5115 - $1,200
  • SuperMicro X11SPM-TPF - $425 (in hand)
  • 64GB DDR4 RAM - $500 (in hand)
I know you've already bought some of this, but I'm still curious why you went the Gold route considering your needs; you could get something like a used 2680v3 and a board with 10G/HBA support, and possibly the memory too, for the cost of that CPU. Other than about a 35W power difference - which, depending on what you pay for power, will cost you maybe single-digit dollars per year extra - it will work just as well, if not better. You're not using anything new that the Scalable series brings to the table.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Heck, I wouldn't bother with Windows Server (if that's what you meant); Windows 10 Home is fine for a Plex server and requires very little maintenance. I don't have to do jack to manage mine.

Windows is Windows; it still needs monthly cumulative updates.


I know you've already bought some of this, but I'm still curious why you went the Gold route considering your needs; you could get something like a used 2680v3 and a board with 10G/HBA support, and possibly the memory too, for the cost of that CPU. Other than about a 35W power difference - which, depending on what you pay for power, will cost you maybe single-digit dollars per year extra - it will work just as well, if not better. You're not using anything new that the Scalable series brings to the table.

Few reasons.

1. When I do upgrades I don't like going with previous-generation hardware, even though I know it's often a better value. When I upgrade I like to go with recent if not cutting-edge hardware, so that I don't feel the need to upgrade again in a year.

2. TDP. I don't want to go above an 85W TDP CPU because this server sits in a closet. The 2680v3 is 35W higher.

3. I don't see any MicroATX E5v3 boards with dual SFP+ and onboard HBA that supports 10+ drives.


P.S. I haven't actually bought the Xeon Gold 5115 yet. I'm still considering the Xeon Silver 4114 for $670.
 
Last edited:

Ixian

Member
Oct 26, 2018
88
16
8
Few reasons.

1. When I do upgrades I don't like going with previous-generation hardware, even though I know it's often a better value. When I upgrade I like to go with recent if not cutting-edge hardware, so that I don't feel the need to upgrade again in a year.

2. TDP. I don't want to go above an 85W TDP CPU because this server sits in a closet. The 2680v3 is 35W higher.

3. I don't see any MicroATX E5v3 boards with dual SFP+ and onboard HBA that supports 10+ drives.


P.S. I haven't actually bought the Xeon Gold 5115 yet. I'm still considering the Xeon Silver 4114 for $670.
This is my actual setup for my second (ESXi/FreeNAS) server:
[build photo attached]

  • Supermicro X10SRM-F - $280 (new)
  • Xeon E5-2680 v3 - $290 (used)
  • 64GB DDR4 ECC RDIMMs (4x 16GB) - $115 each, $460 total (new)
  • Intel X540 10GBase-T NIC - $140 (used; the SFP+ variants are usually cheaper)
$1,170 total. Only the CPU and NIC are used, and I'm not worried about Intel failing on me.

Plus a DC3700 I use for SLOG/L2ARC/general-purpose fast cache (that was $300 new, and for Unraid it's probably unnecessary).

The board has 10x SATA/SAS ports, 4 via a standard 8087 cable and the other 6 through regular SAS/SATA.

It has 8x WD Red 10TB HDDs, 2x Samsung Evo 860 SSDs, and an Evo 970 NVMe drive I use for boot and virtual image hosting. It's not obvious from the picture, but there's actually room for 2 additional 3.5in/2.5in drives on the bottom there, should I ever need them. It's a Node 804 case.

It also sits in a closet. My master bedroom closet, on top of my gun safe, in fact. No problems with heat or noise.

Not trying to tell you you're wrong - do what feels right to you - just pointing out there are other ways to skin this particular cat. You'll notice I still have room for an x16 PCIe GPU if I want to add one. Dual-slot even, if I use a short riser with a right-angle connector (Thermaltake makes one), though I doubt I'd ever do that, and the P2000 I would use is single-slot in any case.
 
Last edited:

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
This is my actual setup for my second (ESXi/FreeNAS) server:
[build photo attached]

  • Supermicro X10SRM-F - $280 (new)
  • Xeon E5-2680 v3 - $290 (used)
  • 64GB DDR4 ECC RDIMMs (4x 16GB) - $115 each, $460 total (new)
  • Intel X540 NIC - $140 (used)
$1,170 total. Only the CPU and NIC are used, and I'm not worried about Intel failing on me.

Plus a DC3700 I use for SLOG/L2ARC/general-purpose fast cache (that was $300 new, and for Unraid it's probably unnecessary).

The board has 10x SATA/SAS ports, 4 via a standard 8087 cable and the other 6 through regular SAS/SATA.

It has 8x WD Red 10TB HDDs, 2x Samsung Evo 860 SSDs, and an Evo 970 NVMe drive I use for boot and virtual image hosting. It's not obvious from the picture, but there's actually room for 2 additional 3.5in/2.5in drives on the bottom there, should I ever need them. It's a Node 804 case.

It also sits in a closet. My master bedroom closet, on top of my gun safe, in fact. No problems with heat or noise.

Not trying to tell you you're wrong - do what feels right to you - just pointing out there are other ways to skin this particular cat. You'll notice I still have room for a full-size x16 PCIe GPU if I want to add one.

You make quite a compelling point, especially since I'm using the same Node 804 case with Noctuas as well. Curious where you have your 3 SSDs mounted? Also, which CPU cooler are you using to get near-silent, effective cooling on that 2680? I'm very worried about heat/noise because I spend a lot of time near this server and I'm very sensitive to noise. I don't hear my current server at all. Going from a 45W CPU to a 120W CPU is a little unsettling.

Your recommendation would rule out using dual Optanes, and without those I'd need at least 4 SSDs in my Unraid cache pool to be able to write to it at 1GB/s from my PC (I do lots of very large transfers, hundreds of GBs at a time, to my server weekly).
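The 4-SSD figure is just back-of-envelope math on my part: roughly 500 MB/s sustained writes per SATA SSD, and a mirrored (btrfs RAID1) cache pool where every block is written twice. Both numbers are assumptions, so season to taste:

```python
"""Back-of-envelope check on how many SATA SSDs it takes to sustain
~1 GB/s writes into a mirrored cache pool. The per-SSD speed and the
target rate are assumptions, not benchmarks."""

PER_SSD_WRITE_MBPS = 500   # assumed sustained sequential write per SATA SSD
TARGET_MBPS = 1000         # roughly what a 10GbE transfer can push
MIRROR_FACTOR = 2          # RAID1: every write hits two devices

for n in (2, 4, 6):
    effective = n * PER_SSD_WRITE_MBPS / MIRROR_FACTOR
    verdict = "enough" if effective >= TARGET_MBPS else "not enough"
    print(f"{n} SSDs -> ~{effective:.0f} MB/s usable write, {verdict}")
```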
 
Last edited:

Ixian

Member
Oct 26, 2018
88
16
8
I only have 2 SSDs, both mounted behind the front plate (if you already have the 804 it's easy to spot where they go, and yes, they stay cool there). In the pic, look at the bottom in front of the motherboard; you can fit two additional SSDs or even 3.5in HDDs there - I have two 804 cases and actually do this with the other one. 3.5in drives are a bit of a tight fit there but work and cool fine; 2.5in SSDs would be cake to install. There's your 4 right there.

However, the X10SRM-F only has 10 SATA3 ports onboard, so you'd be short 2 for the SSDs. The PCIe slots on the X10SRM-F can be bifurcated, though, so instead of where I have the DC3700 you could install a cheap PCIe adapter for 2 additional NVMe drives, and there are your dual Optanes right there.

The CPU cooler is the Noctua i4. Since the X10SRM-F uses a narrow ILM, it's one of the few options you'll have that doesn't sound like a vacuum cleaner. I use a script in FreeNAS to control the fan speed through the BMC; at idle the server is virtually silent, and I have the fans set to max out at 80% RPM, so even under load it isn't noisy (and at that speed temps are still well within range - my drives, for example, never go above 40C).

With Unraid you have it even easier because you can just use the IPMI plugin, which does the same thing but has a nice GUI to configure everything. It works great with Supermicro boards.
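For reference, the guts of that fan control approach are just a couple of raw IPMI commands to the BMC. A stripped-down sketch (not my actual script; the 0x30 raw codes below are the ones commonly documented for Supermicro X9/X10/X11 boards, so verify them against your own board before using):

```python
"""Set a Supermicro BMC to 'Full' fan mode, then pin a fan zone to a fixed
duty cycle via ipmitool. Raw command codes are the commonly documented
X9/X10/X11 ones - double-check for your board."""
import subprocess

def ipmi_raw(*args: str) -> None:
    subprocess.run(["ipmitool", "raw", *args], check=True)

def set_full_mode() -> None:
    # Fan mode: 0x00 standard, 0x01 full (manual duty cycles only stick in full mode)
    ipmi_raw("0x30", "0x45", "0x01", "0x01")

def set_zone_duty(zone: int, duty_pct: int) -> None:
    # zone 0 = CPU/system fans, zone 1 = peripheral fans; duty 0-100%
    ipmi_raw("0x30", "0x70", "0x66", "0x01", f"{zone:#04x}", f"{duty_pct:#04x}")

if __name__ == "__main__":
    set_full_mode()
    set_zone_duty(0, 40)   # e.g. 40% on the CPU zone at idle
    set_zone_duty(1, 40)
```

A real setup would read CPU and drive temps in a loop and ramp the duty cycle accordingly; this just shows the "set" side of it.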
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
I only have 2 SSDs, both mounted behind the front plate (if you already have the 804 it's easy to spot where they go, and yes, they stay cool there). In the pic, look at the bottom in front of the motherboard; you can fit two additional SSDs or even 3.5in HDDs there - I have two 804 cases and actually do this with the other one. 3.5in drives are a bit of a tight fit there but work and cool fine; 2.5in SSDs would be cake to install. There's your 4 right there.

However, the X10SRM-F only has 10 SATA3 ports onboard, so you'd be short 2 for the SSDs. The PCIe slots on the X10SRM-F can be bifurcated, though, so instead of where I have the DC3700 you could install a cheap PCIe adapter for 2 additional NVMe drives, and there are your dual Optanes right there.

The CPU cooler is the Noctua i4. Since the X10SRM-F uses a narrow ILM, it's one of the few options you'll have that doesn't sound like a vacuum cleaner. I use a script in FreeNAS to control the fan speed through the BMC; at idle the server is virtually silent, and I have the fans set to max out at 80% RPM, so even under load it isn't noisy (and at that speed temps are still well within range - my drives, for example, never go above 40C).

With Unraid you have it even easier because you can just use the IPMI plugin, which does the same thing but has a nice GUI to configure everything. It works great with Supermicro boards.

Your argument here is quite persuasive, sir. I'm giving this serious thought now. Any idea what your idle power usage is, since my setup would be quite similar to yours in terms of components?
 

Ixian

Member
Oct 26, 2018
88
16
8
Your argument here is quite persuasive, sir. I'm giving this serious thought now. Any idea what your idle power usage is, since my setup would be quite similar to yours in terms of components?
My UPS is showing 110W with it idle right now. My drives are spun down (I imagine you'll do the same with Unraid, since it's easier to keep drives spun down with a NAS). Figure 170-190W under load, if I remember right. That's going off my UPS, though - the server is the only thing hooked up to it at the moment, but I don't know what overhead it adds or how accurate it is.

I do know my power bill hasn't shot up outrageously and my closet isn't a steam bath, though :)

My other server is a D-1541, with a similar setup and number of drives; when it was running Unraid it idled at around 95W and peaked at about 130-140W, for comparison.

Edit: Note that I'm using the 10TB Reds, which are known for lower power consumption (around 7W spun up). That's well over a third lower than the 8TB version, and of course faster drives (7200 RPM Golds, etc.) will consume even more. With 8 of them, that has a big impact on the totals.
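If you want to put numbers on it, here's the quick math (the "hungrier drive" wattage and the electricity rate are just illustrative, and it assumes the worst case of all drives spun up around the clock):

```python
"""Rough math on how much per-drive draw matters with 8 drives.
Wattages and the $/kWh rate are illustrative placeholders, not specs,
and drives are assumed always spun up (worst case)."""

N_DRIVES = 8
WATTS_10TB = 7.0      # ~7 W spun up, per the post above
WATTS_ALT = 11.0      # assumed draw of a hungrier drive, for comparison
RATE_PER_KWH = 0.13   # assumed electricity rate, USD

def annual_cost(watts_per_drive: float) -> float:
    kwh_per_year = watts_per_drive * N_DRIVES * 24 * 365 / 1000
    return kwh_per_year * RATE_PER_KWH

delta = annual_cost(WATTS_ALT) - annual_cost(WATTS_10TB)
print(f"8 drives at {WATTS_10TB} W: ${annual_cost(WATTS_10TB):.0f}/yr")
print(f"8 drives at {WATTS_ALT} W: ${annual_cost(WATTS_ALT):.0f}/yr (+${delta:.0f})")
```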
 
Last edited:

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
My UPS is showing 110W with it idle right now. My drives are spun down (I imagine you'll do the same with Unraid, since it's easier to keep drives spun down with a NAS). Figure 170-190W under load, if I remember right. That's going off my UPS, though - the server is the only thing hooked up to it at the moment, but I don't know what overhead it adds or how accurate it is.

I do know my power bill hasn't shot up outrageously and my closet isn't a steam bath, though :)

My other server is a D-1541, with a similar setup and number of drives; when it was running Unraid it idled at around 95W and peaked at about 130-140W, for comparison.

What are you using the Xeon D-1541 for, just out of curiosity? That would be pretty beefy for just a backup server.
 

Ixian

Member
Oct 26, 2018
88
16
8
It was my Unraid media server, running Emby and all my media-related dockers. I switched all that to my ESXi setup because I wanted to consolidate and I'm more used to DIY - I run my own Debian-based Docker host, for example, with Portsnap. It's not as easy for the layman as Unraid's Docker management, but I can do a lot more with it.

I made the 1541 a backup FreeNAS server because it idles pretty low (and in sleep mode it consumes almost no power), but it's still powerful enough to take my basic media server duties back over whenever I do something dumb with my main server (my recent adventures with Ceph, for one) and I don't want to upset everyone in my house :) I run a separate pfSense firewall with HAProxy, which makes it very easy, for example, to switch Emby servers between the two, since I have that set up with a dedicated domain name and Let's Encrypt cert.
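The switch itself boils down to a couple of HAProxy runtime API commands over the stats socket. A rough sketch of the idea (the socket path and the backend/server names are placeholders, not my actual config, and the stats socket has to be enabled at admin level first):

```python
"""Flip which Emby backend HAProxy sends traffic to, via the HAProxy
runtime API on its admin socket. Socket path and backend/server names
are assumptions - adjust to match your own HAProxy config."""
import socket

HAPROXY_SOCK = "/var/run/haproxy.socket"   # assumed admin socket path

def haproxy_cmd(cmd: str) -> str:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(HAPROXY_SOCK)
        s.sendall((cmd + "\n").encode())
        return s.recv(4096).decode()

def switch_to(primary: str, standby: str, backend: str = "emby_backend") -> None:
    # Drain the standby and enable the primary, so the shared hostname
    # keeps working and just points at the other box.
    haproxy_cmd(f"disable server {backend}/{standby}")
    haproxy_cmd(f"enable server {backend}/{primary}")

if __name__ == "__main__":
    switch_to(primary="emby_main", standby="emby_backup")
```

Since both boxes sit behind the same frontend and hostname, the cert side never has to change when you flip servers.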