Server Upgrade Advice - Need Sanity Check


werkkrew

New Member
Sep 3, 2024
Philadelphia
Hello -

New to the forums here.

My current homelab consists of various old computers but the main one I am concerned with today is my primary Unraid server.

Budget is around $1700-2000 all in.

HP Z820 Workstation running Unraid
  • Dual-socket Intel Xeon E5-2697 v2 (12c/24t each) - ~14k PassMark each
  • 128GB DDR3 ECC
  • 2x 1TB Intel NVMe SSDs (Intel 665p)
  • 8x internal 3.5" drive slots
  • ASUS Hyper M.2 x16 PCIe 3.0 x4 Expansion Card (only 1 slot used due to lack of bifurcation support)
  • LSI SAS9201-16e 16-port HBA (for external chassis)
  • 16 HDDs - 8 internal to the tower, 8 external in a custom 16-bay chassis
  • Dual-port Intel 1GbE adapter (connected using LACP)
  • System idles around 250W
  • DIY 16-bay external disk chassis - 24 drive slots total
Server is used for:
  • Essentially the entire homelab software stack
  • General-purpose NAS storage
  • Plex + supporting apps (low user count, max 5-6 streams peak)
    • Relatively large library (~4000 movies, tons of shows, 60TB or so)
  • Immich (including the ML stuff)
  • Eventually Frigate (including the AI stuff)
  • Total of approx. 40 containers
  • 1-2 VMs (1 Windows, 1 Linux)
  • No continuously running intensive workloads, but I do some things that can be pretty bursty, like Friday-night Plex loads or big batch image/video processing jobs
The primary goal of the upgrade is performance; the current system is starting to feel a bit slow, especially with my Plex library. Reducing idle power consumption a bit would also be nice - the current system (Z820 + shelf) idles at nearly 300W.

I would also like to go rack-mount since I am buying a rack for some network gear, so a consumer tower build in a Meshify 2 XL or whatever isn't really what I'm after.

Looking for something with a lot of headroom to grow - something I won't want or need to upgrade for years to come. I self-host as much as is humanly possible.

After some discussion with some folks, here is what I have come up with:
  • Motherboard: Supermicro H12SSL-i (eBay bundle with CPU and memory, ~$1000)
  • CPU: AMD EPYC 7402P
  • CPU fan: Supermicro 4U EPYC cooler (~$40)
  • Memory: 8x 16GB DDR4-2133
  • HBA: LSI 9400-16i (~$80)
  • NIC: Supermicro AOC-STGN-I2S (Intel-based 10GbE) (~$30)
  • GPU: Intel Arc A380 (~$100)
  • SSD: In addition to my 2x Intel NVMe drives, which will be used for the Unraid "cache": a Supermicro AOC-SLG3-2M2 + two 1TB of the fastest SSDs I can afford for Docker/VMs
  • Chassis: Supermicro CSE-847BE1C-R1K28LPB (4U, 36-bay) - I won't ever use all 36 bays, so a 24-bay would be fine, but they seem to be in the same price range (~$400)
Total without SSDs: ~$1510

My questions/concerns:

My main concern is the memory: the board/CPU supports DDR4-3200, but as far as I can tell the only DDR4-3200 ECC modules available are 64GB. As I understand it, EPYC performs a lot better with all 8 memory channels populated. So 8 slower modules would be preferable to 2 faster ones?
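Napkin math on why I'm leaning toward the 8 slower modules (assuming the 7402P's 8 memory channels and 8 bytes per transfer; these are theoretical peaks, real numbers will be lower):

```python
# Rough theoretical DDR4 bandwidth: channels populated x MT/s x 8 bytes/transfer.
# Assumes the EPYC 7402P's 8 memory channels; ignores real-world efficiency.

def peak_bw_gbs(channels: int, mts: int) -> float:
    """Peak bandwidth in GB/s: each channel moves 8 bytes per transfer."""
    return channels * mts * 8 / 1000

eight_slow = peak_bw_gbs(8, 2133)  # 8x DDR4-2133, all channels populated
two_fast = peak_bw_gbs(2, 3200)    # 2x DDR4-3200, only 2 channels populated

print(f"8x DDR4-2133: {eight_slow:.1f} GB/s theoretical")
print(f"2x DDR4-3200: {two_fast:.1f} GB/s theoretical")
```

If that's right, the 8 slower DIMMs have well over twice the theoretical bandwidth of 2 faster ones, which is why channel count seems to matter more than DIMM speed here.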

Is Zen 3 or Intel worth considering? Zen 2 is from 2019, which is already a bit old, but it does seem like a great value.

Is the HBA correct? Based on photos of the chassis, the expanders seem to terminate into 4 of those dual-ported SAS connectors, so it seems like I need a 16-port HBA. The 9400 is pretty new, though - I could probably get away with a 9200- or 9300-series card for less money.

Any other thoughts, comments, concerns?

Thanks so much!
 

jode

Member
Jul 27, 2021
Is the HBA correct? Based on photos of the chassis, the expanders seem to terminate into 4 of those dual-ported SAS connectors, so it seems like I need a 16-port HBA. The 9400 is pretty new, though - I could probably get away with a 9200- or 9300-series card for less money.
Supermicro supports a bunch of backplanes for this model. The important thing to look for: does the backplane contain an expander or not - the exact model number will tell you. There is also an expander model that supports SAS redundancy, and that requires more ports. Supermicro has manuals for the chassis and backplanes online.

I assume the backplane supports 12G SAS. A single cable (4 lanes) can support up to roughly 48Gb/s of bandwidth. You could get away with a single cable per backplane if you only run HDDs and save a few bucks on the HBA/cables. OTOH, scoring a 9400-16i for ~$80 is sweet, and anything else is probably not worth the savings.
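Spelling out that math (the per-lane speed comes from SAS3; the per-drive throughput is an assumption, plug in your own):

```python
# One SFF-8643/8644 cable carries 4 SAS3 lanes at 12 Gb/s each.
lanes = 4
lane_gbps = 12
cable_gbps = lanes * lane_gbps            # Gb/s per cable
cable_mbs = cable_gbps * 1000 / 8         # MB/s, ignoring encoding overhead

hdd_mbs = 250                             # assumed sequential speed of one HDD
drives_to_saturate = cable_mbs / hdd_mbs  # drives all reading flat out

print(f"Per cable: {cable_gbps} Gb/s ~= {cable_mbs:.0f} MB/s")
print(f"HDDs needed to saturate one cable: ~{drives_to_saturate:.0f}")
```

So for spinning disks, one cable per backplane is plenty - you'd need roughly two dozen HDDs all streaming sequentially at once before the link becomes the bottleneck.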
 

werkkrew

New Member
Supermicro supports a bunch of backplanes for this model. The important thing to look for: does the backplane contain expander or not - the exact model number will tell you. There is also an expander model that supports SAS redundancy and that requires more ports. Supermicro has manuals for chassis and backplanes online

I assume the backplane supports 12G SAS. A single cable (4 ports) can support up to (roughly) 48gb bandwidth. You could get away with a single cable per backplane if you only run HDDs and save a few bucks on HBA/cables. OTOH, scoring a 9400-16i for ~$80 is sweet and anything else is probably not worth the savings.
Thanks for the response. The specific listing I am looking at is (ebay link).

The listing states that the rear backplane (12 drives) is a BPN-SAS3-826EL1 and the front one (24 drives) is a BPN-SAS3-846EL1.

I apologize for my ignorance - I am new to the Supermicro world. I have spent most of my life in and around datacenters, servers, and enterprise storage, but I have never pieced a server together myself in this manner. In my professional roles it's always been from vendors like Dell, HPE, etc.
 

nexox

Well-Known Member
May 3, 2023
In no particular order:

That listing says "Supermicro BPN-SAS3-846EL1 (front) SAS Backplane with single expander and BPN-SAS3-826EL1 (rear) Backplane." - so two SAS3 backplanes, each with a single expander. If you only need the front one, probably unplug the rear one from power, because the expander chip will burn a few watts even with no disks connected. An 8i card would be just fine.

I wouldn't bother with that old PCIe 2.0 NIC. Unless you really need Intel for driver reasons, you're much better off with a Mellanox ConnectX-4 Lx, which are often available for $20-25 if they have a Lenovo or HP sticker.

I don't know exactly how memory configuration affects those CPUs, but you can get DDR4-2400 for just a little more than 2133; might be worth it.

I also wouldn't trust those Intel 665p SSDs. They're probably pretty good for consumer QLC drives, but they're still consumer QLC, with only 300TBW endurance (under the best possible workload) - that sounds particularly unsuited for any kind of caching.
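For context on that 300TBW figure, a quick sketch (the daily write rate is a made-up assumption; a busy download cache can churn far more):

```python
# How long a 300 TBW endurance rating lasts at a given average write rate.
tbw = 300              # rated endurance of the 1TB Intel 665p, in TB written
daily_writes_gb = 200  # assumed: download/cache churn per day (a guess)

days = tbw * 1000 / daily_writes_gb
years = days / 365

print(f"~{years:.1f} years at {daily_writes_gb} GB/day of writes")
```

Double the write rate and the drive's rated life halves, which is why a heavy torrent/Usenet cache workload eats consumer QLC quickly.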

Finally, those PSUs are probably too big for reasonable efficiency. They're 80+ Platinum, but that doesn't mean anything if you're running them under 20% load, and efficiency will really tank when you get below 10% or so. You can probably get lower-power Platinum-rated PSUs for a decent price, and if you don't absolutely need redundancy you can run just one PSU at a time and manually swap in the rare case of a PSU failure.
 

werkkrew

New Member
In no particular order:

That listing says "Supermicro BPN-SAS3-846EL1 (front) SAS Backplane with single expander and BPN-SAS3-826EL1 (rear) Backplane." - so two SAS3 backplanes, each with a single expander. If you only need the front one, probably unplug the rear one from power, because the expander chip will burn a few watts even with no disks connected. An 8i card would be just fine.
I appreciate the reply; this is exactly the sort of information I am looking for. The price difference between the 9400-16i's and 9400-8i's I see out there is only about $30-40, so I would probably just get the 16i anyway.

I wouldn't bother with that old PCIe 2.0 NIC. Unless you really need Intel for driver reasons, you're much better off with a Mellanox ConnectX-4 Lx, which are often available for $20-25 if they have a Lenovo or HP sticker.
Perfect, I will add this to my list.


I don't know exactly how memory configuration affects those CPUs, but you can get DDR4-2400 for just a little more than 2133; might be worth it.
I don't know all of the details, but based on my limited research these CPUs have 8 memory channels and perform best with 8 DIMMs.

I also wouldn't trust those Intel 665p SSDs. They're probably pretty good for consumer QLC drives, but they're still consumer QLC, with only 300TBW endurance (under the best possible workload) - that sounds particularly unsuited for any kind of caching.
I am only using the Intel SSDs because I already have them, so I suppose I will use them for my Unraid "cache" until they wear out. In Unraid, the "cache" is really only used as a faster temporary download location for my Usenet and torrent stuff, since I found that writing directly to the array couldn't saturate my 1Gbps internet link. When the Intel SSDs wear out, I will replace them.

Do you have any recommendations on SSDs for my containers/VMs? I notice Plex can be quite slow (probably due to SQLite), and I get a lot of messages about slow database response times, so I am hoping to solve that problem by throwing hardware at it.

On my radar right now is a pair of 1TB Crucial T705s or Sabrent Rocket 5s, but I am curious whether I can get some used U.2 enterprise drives for a bit less? Some of the listings for this chassis include a BPN-SAS3-826EL1-N4 backplane for the rear, which would give me 4 NVMe slots.

Finally, those PSUs are probably too big for reasonable efficiency. They're 80+ Platinum, but that doesn't mean anything if you're running them under 20% load, and efficiency will really tank when you get below 10% or so. You can probably get lower-power Platinum-rated PSUs for a decent price, and if you don't absolutely need redundancy you can run just one PSU at a time and manually swap in the rare case of a PSU failure.
I would definitely not run both PSUs at once. I'm not in need of that sort of uptime or redundancy, so the 2nd PSU would effectively just be a spare. I will look into other PSUs, but I don't really know what specs I would need there; if they would pay for themselves in power savings, that would be great.
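The payback math is simple enough to sketch with made-up numbers (the watts saved, electricity price, and PSU cost are all assumptions; swap in your own):

```python
# Payback period for a more efficient / right-sized PSU at idle.
watts_saved = 30      # assumed idle savings from a better-loaded PSU (a guess)
price_per_kwh = 0.15  # assumed electricity price, USD/kWh
psu_cost = 60         # assumed used-market price for the replacement PSU

kwh_per_year = watts_saved * 24 * 365 / 1000
dollars_per_year = kwh_per_year * price_per_kwh
payback_years = psu_cost / dollars_per_year

print(f"~{kwh_per_year:.0f} kWh/yr saved, ~${dollars_per_year:.0f}/yr")
print(f"Payback in about {payback_years:.1f} years")
```

Even a modest idle saving pays for a cheap used PSU within a couple of years if the box runs 24/7, so it seems worth chasing.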
 

nexox

Well-Known Member
In Unraid, the "cache" is really only used as a faster temporary download location for my Usenet and torrent stuff, since I found that writing directly to the array couldn't saturate my 1Gbps internet link.
Your download clients may have configuration options to use more memory as cache; the defaults are often quite low.

Do you have any recommendations on SSDs for my containers/VMs?
M.2 makes this difficult; servers moved past it for everything but boot drives a long time ago. There are still some options, though - perhaps the Samsung PM983 or Micron 7450 Pro. You want something with power-loss protection and ideally MLC NAND. If you have a spare PCIe slot, you can get much better performance and value with a couple of NVMe drives in PCIe add-in-card form factor.
 

Tech Junky

Active Member
Oct 26, 2023
Worth the money. I switched from spinners and RAID to a single U.3 drive and retained the capacity of the array in a single disk that's over 10x faster. Kioxia is what I went with after issues with two Micron drives; it runs cooler and has been flawless since I installed it about a year ago. I have it hooked up using a cheap M.2 adapter and an OCuLink cable.
 

werkkrew

New Member
M.2 makes this difficult; servers moved past it for everything but boot drives a long time ago. There are still some options, though - perhaps the Samsung PM983 or Micron 7450 Pro. You want something with power-loss protection and ideally MLC NAND. If you have a spare PCIe slot, you can get much better performance and value with a couple of NVMe drives in PCIe add-in-card form factor.
I'm fine with U.2 or whatever form factor. In fact, some of the chassis I am looking at have the BPN-SAS3-826EL1-N4 backplane, which would give me room for 4x NVMe drives on the drive sleds. I just don't know much about what to look for in this space, especially whether there are concerns about NVMe performance when going through an expander/HBA.
 

nexox

Well-Known Member
I'm fine with U.2 or whatever form factor. In fact, some of the chassis I am looking at have the BPN-SAS3-826EL1-N4 backplane, which would give me room for 4x NVMe drives on the drive sleds. I just don't know much about what to look for in this space, especially whether there are concerns about NVMe performance when going through an expander/HBA.
With Supermicro you'll wire each NVMe port from the backplane directly to a specific adapter like the Supermicro AOC-SLG3-4E4R; the HBA connects to different ports for the SAS drives. As far as drives to choose: some Micron U.2 drives require strong airflow through the drive body via little openings in the front, which is only feasible with full-noise server fans, so perhaps avoid those. Other than that, just get something that uses MLC - they all have PLP, and even the slowest, oldest model you can find is going to be pretty good.
 

jode

Member
I just don't know much about what to look for in this space, especially whether there are concerns about NVMe performance when going through an expander/HBA.
This backplane only supports NVMe Gen3, but you'll get full bandwidth.
An oddity with it is that you need to populate the 4 slots in the correct order (1..4) for the drives to be recognized as NVMe; the manual explains which port is which.
Depending on your use case, this could be an excellent spot for a few Intel Optane 905Ps.
 

Navvie

Member
Nov 21, 2020
This looks like a very similar setup to mine:
Supermicro 847 with SAS3 backplanes (not sure if my 12-drive backplane supports NVMe)
H12SSL-i with an EPYC 7282
2U Supermicro cooler
256GB of DDR4-2400

One note: the 847 36-bay chassis uses 2U for the rear 12 drives, leaving 2U for the motherboard, so all coolers and PCIe cards need to fit in 2U.