New Network and Server Stack

DeSusBassist

New Member
Oct 7, 2019
2
0
1
Hey folks,

I’ve been noticing that my UnRaid server is getting a tad long in the tooth: Docker containers and VMs are both noticeably sluggish. Granted, the server is running on an old FX-8320 and 16GB of DDR3 RAM.

So I think it’s time to upgrade, but I don’t know which path to take. Currently I’ve got an Amazon order ready with the following parts list: https://pcpartpicker.com/list/xcnPVc
(Note that all Purchased Items will be put into the new server from the old one.)

On the other hand, I also thought that a used Dell R710 like this one (Click Here) would be a viable option. It would be specced with 64 GB of RAM and the 6 drive caddies as blanks.

The UnRaid Server runs the following:

  • 2x Ubuntu 18.04 + BIND 9 DNS Server
  • 2x Ubuntu 18.04 + Pi-hole
  • 1x Ubuntu 18.04 + Poste.io Mail Server
  • 1x Ubuntu 16.04 + UniFi Controller + Let's Encrypt

Docker containers:
  • Crashplan Backup Pro
  • Plex Media Server
  • NGinx Reverse Proxy
  • Syncthing
  • Bitwarden
  • MariaDB
  • WordPress Website
  • Nextcloud
  • DuckDNS
One more note: I've been wanting to tinker with pfSense and HAProxy. I currently have a full UniFi network setup (USG 3P -> USW-16-POE -> AP-AC-Lite, AP-AC-Pro, US-8-60W), plus a couple of cameras and a Cloud Key Gen 2+. I was looking at this Dell R210 II to run pfSense, as I believe the chip supports AES-NI, and I could also move the NIC from the UnRaid build into the R210 II for more physical ports.
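For reference, here's the quick check I'd run on any Linux box to confirm whether the CPU actually advertises AES-NI before committing (just a sketch; it reads the CPU flags from /proc/cpuinfo):

```shell
#!/bin/sh
# Check whether the CPU advertises the AES-NI instruction set.
# On Linux, AES-NI shows up as the "aes" flag in /proc/cpuinfo.
if grep -qw aes /proc/cpuinfo; then
    aesni=yes
else
    aesni=no
fi
echo "AES-NI supported: $aesni"
```

If it prints "yes", pfSense can use hardware-accelerated AES for VPN duty on that chip.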

Let me know your opinions and whether I'm overbuilding these servers. I'm not trying to break the bank or spend more than about $1000-$1200 altogether.
 

Mithril

Active Member
Sep 13, 2019
172
43
28
So, *personally* I wouldn't spec out a NAS of any type without ECC RAM.

Ryzen CPUs can use ECC RAM; I'm not sure whether that motherboard can.
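If you do go ECC, it's worth verifying after install that the board is actually running the memory in ECC mode, not just accepting the sticks. A rough Linux-side check (sketch only; assumes dmidecode is installed and you're running as root):

```shell
#!/bin/sh
# Query the DMI/SMBIOS tables for the memory error-correction type.
# "Multi-bit ECC" or "Single-bit ECC" means ECC is active; "None" means
# the sticks are running unbuffered non-ECC style.
if command -v dmidecode >/dev/null 2>&1; then
    ecc_line=$(dmidecode -t memory 2>/dev/null | grep -i 'error correction' | head -n 1)
    ecc_line=${ecc_line:-"no DMI data (dmidecode usually needs root)"}
else
    ecc_line="dmidecode not installed (e.g. apt install dmidecode)"
fi
echo "$ecc_line"
```

You can also watch the kernel's EDAC counters (`/sys/devices/system/edac/`) for corrected errors once it's running, if the platform exposes them.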

I also *personally* wouldn't go with a Zen 1 CPU in a new build at this point. I would also *personally* pair a Zen 2 or Zen 3 (when they come out) with an X570 motherboard, simply because it has double the bandwidth to the southbridge, meaning more bandwidth for the non-CPU PCIe slots. This is especially true if you are using a GPU that will occupy 16 or 8 lanes, leaving either 4 lanes (typically routed to an M.2 slot) or 8 + 4 lanes direct to the CPU.

The IPC uptick of Zen 2 also means a 3600 is often faster and rarely slower than a 2700, despite having 2 fewer cores. We see the same uplift with a 5600 being generally as fast as or faster than a 3700X/3800X, again despite fewer cores.

If your goal is 64GB, that's quite workable with consumer boards and DDR4; 4 RAM slots means 16GB sticks, which will be cheaper/more compatible as well. 32GB DDR4 DIMMs exist, but YMMV.

R710s are fairly old-school. Yes, lots of RAM and lots of PCIe, but they're decently loud, heavy, and will suck down a fair bit of power. I *personally* wouldn't want one outside of a garage or other semi noise-isolated room. And while yes, cores are *generally* king for running lots of VMs/Docker, at some point IPC, cache size, and yes, even power use make enough of a difference, both in general and for a homelab.

For future-proofing, at least go with an X570 board, as that should support 2000- through 5000-series CPUs, allowing for drop-in upgrades later. Get one that can support ECC memory even if you don't use it now. Get one with x8/x8 bifurcation support (or better yet x8/x4/x4, for a passive dual-NVMe adapter later) since you want to use a GPU, but there's no reason to tie up all 16 lanes.
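Once everything is installed, you can confirm what width each slot actually negotiated; the "LnkSta" line in verbose lspci output reports the live link width (x1/x4/x8/x16). A quick sketch, assuming a Linux host with pciutils (run as root for full capability output):

```shell
#!/bin/sh
# Count and show negotiated PCIe link widths.
# "LnkSta" lines report the width each device is actually running at,
# which may be narrower than the physical slot.
if command -v lspci >/dev/null 2>&1; then
    links=$(lspci -vv 2>/dev/null | grep -c 'LnkSta:')
    echo "devices reporting a link status: $links"
    lspci -vv 2>/dev/null | grep 'LnkSta:' | head -n 5
else
    links=0
    echo "lspci not found (install pciutils)"
fi
```

Handy for catching a GPU silently dropping to x8 after you populate the second slot.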

For running at stock, save money and use the boxed cooler. You can always upgrade it later, but it will work perfectly and the money *right now* is best spent on a better motherboard and/or CPU.