High capacity, energy efficient, performant NAS build options?


Tuckie

New Member
May 9, 2021
I'm looking to build a new energy-efficient rackmount homelab server to handle a glut of HD video and high-resolution image editing over the network. I've been limping along on a Synology RS2414+ for way too long. I've run out of room on it. It's slow. It's started hard-locking on me.

Rather than another overpriced Synology that underperforms, I've decided on a(n overpriced) server build that will also let me run a variety of VMs and Docker containers.

The general rule is: performance, low cost, and low energy usage... pick two. For some stupid reason I'm caught up on the idea of high performance, low energy usage :p

Requirements:

• Running 8+ drives in a ZFS RAID-Z2
• 10GbE SFP+
• At least 16 hot-swap bays for future expandability (and maybe moving the old disks out of my Synology)
• Non-proprietary where possible; I want this to be able to grow with me.

I've already purchased eight 16TB Exos drives, as I found a good deal.
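
Back-of-the-envelope math on what that works out to, if it helps anyone sanity-check the plan (ideal numbers only; real ZFS usable space will be a bit lower after padding, metadata, and the slop reserve):

Code:
# Rough RAID-Z2 capacity estimate for 8x 16TB drives (ideal case, ignoring ZFS overhead)
drives = 8
drive_tb = 16                                # marketed (decimal) terabytes
parity = 2                                   # RAID-Z2 reserves two drives' worth of parity

raw_tb = drives * drive_tb                   # 128 TB raw
usable_tb = (drives - parity) * drive_tb     # 96 TB of data capacity before overhead
usable_tib = usable_tb * 1e12 / 2**40        # ~87 TiB, roughly what the OS will report

print(f"raw: {raw_tb} TB, usable (ideal): {usable_tb} TB = ~{usable_tib:.0f} TiB")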

The case I'm looking at:
SC846BA-R920B | 4U | Chassis | Products | Super Micro Computer, Inc.
I like that it has a SAS3 backplane for a simple hookup, and so I don't have to worry about the bandwidth limitations of SAS2 if I start throwing in SSDs.
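
For anyone wondering how much headroom that actually is, here's the rough per-link math I'm going by (assuming a 4-lane MiniSAS HD link and the usual 8b/10b encoding overhead; actual numbers depend on the backplane and HBA):

Code:
# Approximate usable throughput of a 4-lane (wide port) SAS link
lanes = 4

def wide_port_mb_s(gbit_per_lane):
    # 8b/10b encoding means ~80% of the line rate carries data; /8 converts bits to bytes
    return gbit_per_lane * 1000 * 0.8 / 8 * lanes

print(f"SAS2 (6 Gb/s per lane):  ~{wide_port_mb_s(6):.0f} MB/s")   # ~2400 MB/s
print(f"SAS3 (12 Gb/s per lane): ~{wide_port_mb_s(12):.0f} MB/s")  # ~4800 MB/s

Eight spinners won't saturate either, but a handful of SSDs behind one SAS2 link could.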

The Memory:
With ZFS, the common recommendation seems to be 1GB of RAM per TB of disk, so I was thinking 128GB of ECC would meet my needs. I don't have a particular model in mind; I was just thinking of an eBay deal.
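
Just to show the arithmetic behind the 128GB figure (it's only the rule of thumb applied to the raw pool size, nothing more scientific):

Code:
# The "1GB of RAM per TB of disk" ZFS rule of thumb applied to this pool
drives, drive_tb = 8, 16
gb_per_tb = 1                       # rule-of-thumb ratio; mostly matters if dedupe is on

raw_tb = drives * drive_tb          # 128 TB raw
suggested_ram_gb = raw_tb * gb_per_tb
print(f"{raw_tb} TB raw -> ~{suggested_ram_gb} GB of RAM by the rule of thumb")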

The boot/cache drive:
Samsung 980 PRO, 2TB

Build option 1:
The motherboard:
X11SPH-nCTPF | Motherboards | Products | Super Micro Computer, Inc.
I like the built in SAS controller, but I'm a bit nervous about having to flash it to HBA/IT Mode. The extra PCIe slots for nvme or GPUs down the road is nice too.

The CPU: I'm thinking that the Xeon Gold 5215 strikes a good balance given its TDP. Does anyone know if the Xeon Gold 5215L would work in this board as well (even if I don't max out the memory)? I found a good deal on one, but I'm not sure whether a board needs to specifically support it.

Build option 2:
I'm very tempted by this from a power-usage standpoint, but it seems a bit cramped for my needs, plus I think it would still need an HBA (...does it? The documentation on the onboard MiniSAS HD ports is a bit limited): A2SDi-H-TP4F | Motherboards | Products | Super Micro Computer, Inc.

Build option 3:
Is there a route I'm forgetting here? Short of going full SSD, if you were doing a high-throughput, low-energy NAS build, what parts would you use?
 

uldise

Active Member
Jul 2, 2020
Or would you consider the AMD route? If you need additional PCIe lanes/slots, that's what I would suggest. I built on an H12SSL-C with an EPYC 7282 recently and it works very well. BUT, no SFP+ built in; you can add an add-in card for it (I did that on my system). BUT you get 128 PCIe Gen 4 lanes with 7 PCIe slots, 5 of them full x16.
 

Tuckie

New Member
May 9, 2021
That's what I get for early-morning copy/paste; that's exactly the board I was looking at. :p

What type of SAS3 connector is on that board? The case has a MiniSAS HD (SFF-8643) connector on the backplane.
 

kapone

Well-Known Member
May 23, 2015
Reducing CPU energy usage is quite overrated. It's not the CPU/memory/motherboard that kills you on power usage, it's everything else. For example:

- My "core" server at home runs a Supermicro X10DRU motherboard with two E5-2680v4 CPUs and 256GB of RAM (8x 32GB sticks). Power consumption at idle? ~75W. That's for 28 cores/56 threads, 256GB of RAM, and nine PCIe slots.

Once I add a few HBAs, network cards, etc., power consumption at idle = 120W.

The add-ons almost double the power consumption, and I haven't even started counting HDDs yet. At ~8W per HDD, even with just eight of them, that's another 64W.

I stopped considering CPU/Platform power consumption a while ago.
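
Putting my numbers above into one rough sketch (all ballpark idle figures, nothing precise):

Code:
# Rough idle power budget using the figures above
board_cpu_ram_idle_w = 75          # X10DRU + 2x E5-2680v4 + 256GB, no add-in cards
addon_cards_w = 120 - 75           # HBAs, NICs, etc. take idle from ~75W to ~120W
hdd_count, hdd_idle_w = 8, 8       # ~8W per spinning drive

total_idle_w = board_cpu_ram_idle_w + addon_cards_w + hdd_count * hdd_idle_w
print(f"Estimated idle draw with 8 HDDs: ~{total_idle_w} W")   # ~184W, mostly not the CPU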
 

Tuckie

New Member
May 9, 2021
Reducing CPU energy usage is quite overrated. It's not the CPU/memory/motherboard that kills you on power usage, it's everything else. For example:

- My "core" server at home runs a Supermicro X10DRU motherboard with two E5-2680v4 CPUs and 256GB of RAM (8x 32GB sticks). Power consumption at idle? ~75W. That's for 28 cores/56 threads, 256GB of RAM, and nine PCIe slots.

Once I add a few HBAs, network cards, etc., power consumption at idle = 120W.

The add-ons almost double the power consumption, and I haven't even started counting HDDs yet. At ~8W per HDD, even with just eight of them, that's another 64W.

I stopped considering CPU/Platform power consumption a while ago.
Yeah, I realize on a rational level that the CPU itself isn't a big part of the power draw. I almost went the X10DRU + hot-swap case route, but I didn't like the idea of being locked into a proprietary motherboard. I figure this will let me expand as I see fit down the road.
 

i386

Well-Known Member
Mar 18, 2016
The 846BA has a passive backplane; you will need two additional 8-port HBAs to use all the drive bays.
 

Tuckie

New Member
May 9, 2021
The 846BA has a passive backplane; you will need two additional 8-port HBAs to use all the drive bays.
Thank you for catching that; I've actually gone ahead and ordered the SC846BE1C-R1K23B | 4U | Chassis | Products | Super Micro Computer, Inc.

EDIT:

As far as everything else, I'm currently leaning towards:


Since I really wanted the SFP+, I figured I'd go with your board @uldise, as then I can just throw one of these in: Mellanox MCX311A-XCAT CX311A ConnectX-3 EN Network Card 10GbE SinglePort SFP+ | eBay

Memory:
2x Supermicro (Hynix) 64GB 288-Pin DDR4-3200 (PC4-25600) Server Memory (MEM-DR464MC-ER32). I figured it isn't too much of a price premium to go with something approved, unless anyone knows of a better deal from a reputable source.

CPU:
I settled on Zen 2 for now. I probably would have gone with something higher-end, but those are just hard to find at a reasonable price.

Other stuff I shouldn't forget:

Edit: it looks like the motherboard box includes a Supermicro internal MiniSAS HD PCIe NVMe 12Gb/s 60cm cable (CBL-SAST-0658).

Missing pieces:
Heatsink: how do I determine which Supermicro heatsink is compatible with the air duct in the Supermicro case?
 

uldise

Active Member
Jul 2, 2020
The 846BA has a passive backplane
Thanks, I missed that. But for me it depends. For my use case I don't want all the ports together; I prefer building several smaller disk arrays, each with its own controller passed through to a different VM. With the H12SSL-C you should be able to pass through even the built-in SATA controller (I've never done that, but will very soon) and connect it to the backplane ports with reverse breakout cables - just use an M.2 for the boot drive.

That's fine - then you can access all the drives from the built-in SAS controller.

Memory:
2x Supermicro (Hynix) 64GB 288-Pin DDR4-3200 (PC4-25600) Server Memory (MEM-DR464MC-ER32). I figured it isn't too much of a price premium to go with something approved, unless anyone knows of a better deal from a reputable source.
According to the H12SSL manual, 4 memory sticks are recommended for CPUs with fewer than 32 cores, so you could consider smaller sticks. My server is running with two sticks now, but I'm in the process of upgrading to 4 in total.
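
The reason the stick count matters is memory channels; rough peak-bandwidth numbers for DDR4-3200, assuming one DIMM per channel (Rome boards like the H12SSL have 8 channels per socket):

Code:
# Theoretical DDR4-3200 bandwidth vs. populated channels (one DIMM per channel)
mt_per_s = 3200                     # DDR4-3200 = 3200 MT/s
bytes_per_transfer = 8              # 64-bit channel

per_channel_gb_s = mt_per_s * bytes_per_transfer / 1000    # ~25.6 GB/s per channel
for dimms in (2, 4, 8):
    print(f"{dimms} DIMMs -> ~{dimms * per_channel_gb_s:.1f} GB/s peak")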

Heatsink: how do I determine which Supermicro heatsink is compatible with the air duct in the Supermicro case?
I have this one - Supermicro 4U Active CPU Heat Sink, Socket OLGA4094 (SNK-P0064AP4) - and it works just fine. I stress-tested with my 7282 CPU at full speed, and CPU temps never went above 60C with an ambient temp of 25C.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Reducing CPU energy usage is quite overrated. It's not the CPU/memory/motherboard that kills you on power usage, it's everything else. For example:

- My "core" server at home runs a Supermicro X10DRU motherboard with two E5-2680v4 CPUs and 256GB of RAM (8x 32GB sticks). Power consumption at idle? ~75W. That's for 28 cores/56 threads, 256GB of RAM, and nine PCIe slots.

Once I add a few HBAs, network cards, etc., power consumption at idle = 120W.

The add-ons almost double the power consumption, and I haven't even started counting HDDs yet. At ~8W per HDD, even with just eight of them, that's another 64W.

I stopped considering CPU/Platform power consumption a while ago.
While I agree, unless you need that much CPU you could cut that power in half with something lower-power that still accepts lots of RAM. Saving 30-45W on one system isn't much, but for those on solar, generators, or expensive power, or anyone running say 10+ systems, the power saving is worth it IMO in all those use cases. At home we're also sometimes pushing what our rooms, closets, etc. can handle, so an extra 30W here or 40W there can overheat gear, and staying as low-power as possible is still very valid.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Why are you considering the 7252? That's not the chip I'd go for in a high-performance NAS, the cost-to-performance isn't great, and the single-core performance on those is bad compared to other choices from Intel or AMD.

For less money and WAY more performance, why not a 5600X + ECC RAM, of course.

For better single-core performance, the E5-2667 v3 is MUCH cheaper than any of the above options, RAM is cheap, etc...
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
What bottlenecks are causing your existing Synology to be slow...? Is it choking on IO or CPU, or has it just become unreliable? Does the nature of your work put a high random load over whatever protocol you're using, or is it mostly sequential?

I believe the "1GB of RAM for every 1TB of space" was more a rule of thumb for ZFS than anything else; mostly to help with using dedupe (which needs scads of RAM in order to remain speedy); if this is primarily for video work there'd be little sense in having dedupe turned on I suspect. Not that ZFS won't use all the RAM it can, it will, but for serving out video you'll be well in to diminishing returns; I suspect as a single user even 32GB might be overkill.

For less money and WAY more performance, why not a 5600X + ECC RAM, of course.
Just to add to T_Minus' post here: it's not for everyone, but I'm very happy with my AMD Zen build, which replaced my Haswell E3 Xeon. It's using the last-gen X470D4U with the 8-core 3700X processor, but that still gives it exceptionally good single-core performance (typically very important for high performance over CIFS, as well as a bunch of other tools), plus enough cores in a low enough power envelope to use it for video work as well (most of which gets done on my workstation), and it was very cheap compared to the Xeons of the time.

The 5000 series gets you even better single- and multi-threaded perf within the same power envelope; if I were building the system today I'd use the 5600X. ECC UDIMM availability is much better now than it was when I built my system - Kingston 32GB 3200MHz modules (KSM32ED8/32ME) are now relatively commonplace and I think still the best game in town for EUDIMMs. You would of course be limited to a maximum of 128GB of RAM for a Zen build (4x 32GB DIMMs).

Power usage of the platform remains very low although as others have mentioned, it's usually the "everything else" that contributes the lion's share of the power usage; an HBA or two and a dozen HDDs will double or triple the idle load of just the board and CPU. If low power is a serious requirement (and it is for me) you need to think about getting the consumption of the whole system down, not just the board and CPU. FWIW I do fully intend to replace my spinners with big fat SSDs... if I win the lottery!
 

kapone

Well-Known Member
May 23, 2015
While I agree, unless you need that much CPU you could cut that power in half with something lower-power that still accepts lots of RAM. Saving 30-45W on one system isn't much, but for those on solar, generators, or expensive power, or anyone running say 10+ systems, the power saving is worth it IMO in all those use cases. At home we're also sometimes pushing what our rooms, closets, etc. can handle, so an extra 30W here or 40W there can overheat gear, and staying as low-power as possible is still very valid.
Not disagreeing. I was just citing an example (my own experience) of how the platform power consumption is overshadowed by the add-on components.
 