Build Feedback: 96TB NAS w/ Room For Expansion


scolby33

New Member
Aug 26, 2020
Hello all,

I'm looking to replace my current 16TB NAS (A4-5300, Debian, mdadm, mixture of 4 and 2TB drives) with something with more storage and room for the future. I expect to use it for media storage, Jellyfin, backups of other computers, miscellaneous cron jobs (including uploading incremental backups to a cloud service), and light serving duties. My reasons for the upgrade are running out of space in the face of a growing media collection, making space to back up family members' computers instead of just my own, and just boredom/upgrade lust--the existing NAS has been serving me since sometime around 2013.

I have never built on a server platform before and so went for a more consumer-style build because of familiarity. I'd be willing to experiment with server components, but I don't know where to start, which is part of the reason why I'm posting here.

Here's the initial parts list; it's slightly modified from the linked PCPartPicker list because some of the parts are missing from PCPartPicker's database.

"Silver" (the old NAS is "Copper," so I'm going one better)
PCPartPicker Part List

CPU: AMD Ryzen 5 2600 3.4 GHz 6-Core Processor
Motherboard: Asus ROG STRIX B450-F GAMING ATX AM4 Motherboard
Memory: 4x Kingston 32 GB (1 x 32 GB) DDR4-2933 CL21 Memory (KSM29ED8/32ME)
Boot Device: 2x Western Digital Blue SN550 500 GB M.2-2280 NVME Solid State Drive
Bulk Storage: 8x Seagate Exos X14 12 TB 3.5" 7200RPM Internal Hard Drive
Case: Fractal Design Define 7 XL ATX Full Tower Case
Power Supply: Corsair RMx (2018) 850 W 80+ Gold Certified Fully Modular ATX Power Supply
HBA: LSI LOGIC SAS 9207-8i Storage Controller LSI00301

Possible Future Additions
More storage:
HBA: LSI LOGIC SAS 9207-8i Storage Controller LSI00301
Bulk Storage: 8x whatever drive has good value at the time

ZIL/SLOG
ZIL/SLOG Drive: Intel Optane 900P 280 GB PCI-E NVME Solid State Drive

L2ARC
L2ARC Drive: Intel DC S3610 800 GB 2.5" Solid State Drive (will look for this or similar on eBay)

What are your thoughts on this build? I have a few specific questions and thoughts:
  • Is it worth it to get the 800GB SSD for L2ARC and partition it down to 400GB for wear leveling, or would a 400GB-ish drive of similar caliber be sufficient? (Rough sizing math in the sketch after this list.)
  • I'm still undecided on which OS to use: Debian for my familiarity, plain FreeBSD, or TrueNAS Core. Whichever I choose, I'll be going ZFS for this build.
  • Is setting this up with consumer parts foolish? Should I switch to server parts for CPU/motherboard/enclosure? (Note that I don't have a rack, all I have is a nice spot of floor.)
  • I know I'm missing a GPU for the Ryzen processor, but I have an extra one lying around for setup, and I believe (and will confirm) that this motherboard will boot headless with no GPU.
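On the first question, here's the back-of-the-envelope math I've been doing. It's only a sketch with assumed numbers: a single 8-wide RAIDZ2 vdev (just one plausible layout for 8 drives), roughly 70 bytes of ARC header per cached L2ARC record, and a 128 KiB average record size. The real figures depend on recordsize, pool layout, and ZFS version.

```python
# Rough capacity + L2ARC header math. Assumptions (not ZFS-exact):
# a single 8-wide RAIDZ2 vdev, ~70 bytes of ARC header per L2ARC record,
# and a 128 KiB average record size.

TB = 1e12
GiB = 2**30
KiB = 2**10

drives, drive_size = 8, 12 * TB

raw = drives * drive_size                  # 96 TB raw -> the thread title
usable = (drives - 2) * drive_size         # two drives' worth of parity
print(f"raw: {raw / TB:.0f} TB, RAIDZ2 usable (approx): {usable / TB:.0f} TB")

# RAM cost of tracking a 400 GB vs 800 GB L2ARC in ARC headers.
header_bytes = 70                          # assumed per-record header cost
record_size = 128 * KiB                    # assumed average record size
for l2arc in (400e9, 800e9):
    ram = (l2arc / record_size) * header_bytes
    print(f"L2ARC {l2arc / 1e9:.0f} GB -> ~{ram / GiB:.2f} GiB of ARC headers")
```

With 128 GB of RAM planned, the header overhead looks negligible either way, so partitioning down to 400GB seems more about endurance than RAM; the bigger question is probably whether my working set spills out of ARC at all.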
Thanks!
 

zack$

Well-Known Member
Aug 16, 2018
You should consider getting a server MB. The Supermicro brand is well respected amongst the FreeNAS community.

I would also suggest that IPMI is a must for headless management (almost all server-grade MBs have it, with various licensing options). If you're using the GPU just to hook up a monitor for the NAS, that's probably gonna be a huge waste.

You didn't mention anything about the network/nics you will be using. This is gonna be a significant part of your build/infrastructure that will determine whether a slog/l2arc is gonna be worth it. An optane drive for slog on a 1gig network could be an enormous waste.
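To put rough numbers on it (just a sketch - the ~5 second transaction group interval and the "keep about 3 txgs of sync writes on the SLOG" rule of thumb are assumptions you'd tune for your own setup):

```python
# Back-of-the-envelope SLOG usage at different link speeds. Assumes sync
# writes can't arrive faster than the wire, a ~5 s transaction group
# interval, and keeping ~3 txgs worth of sync writes on the SLOG.

GIGABIT = 1e9 / 8        # bytes/s on a saturated 1 Gbps link
txg_interval_s = 5
txgs_to_cover = 3

for name, bps in (("1 GbE", GIGABIT), ("10 GbE", 10 * GIGABIT)):
    used = bps * txg_interval_s * txgs_to_cover
    print(f"{name}: worst case ~{used / 1e9:.1f} GB of SLOG actually in use")
```

Even at 10GbE that's a small slice of a 280GB Optane, and async writes (most media copies) never touch the SLOG at all. The Optane's value for sync writes is its latency, not its capacity.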
 

scolby33

New Member
Aug 26, 2020
You didn't mention anything about the network/nics you will be using. This is gonna be a significant part of your build/infrastructure that will determine whether a slog/l2arc is gonna be worth it. An optane drive for slog on a 1gig network could be an enormous waste.
That's a very good point. I was planning to use the onboard NIC with the mobo I specified, which I believe is 1x 1gbps. My network is currently 1gbps throughout and there aren't plans at the moment to upgrade it. I guess it would make more sense to spend on upgrading the network before going all in on the SLOG/L2ARC upgrades to this system.

You should consider getting a server MB. The Supermicro brand is well respected amongst the FreeNAS community.

I would also suggest that IPMI is a must for headless management (almost all server-grade MBs have it, with various licensing options). If you're using the GPU just to hook up a monitor for the NAS, that's probably gonna be a huge waste.
IPMI is attractive, although I've never used it, so I don't really feel the need for it--probably I just don't know what I'm missing. Looking at server boards with this feature, it looks like these are my options:
  • ASRock Rack X470D4U with the same CPU: IPMI, still 1gbps networking, and exactly enough expansion for two PCI-E x8 HBAs plus either the Optane or an x4 10gbps network card, if such a thing exists. Overall a little limiting on expandability.
  • ASRock Rack ROMED8-2T + Epyc 7232P (or maybe something higher, but probably not): Big increase in price, but gets all the features, including IPMI, plenty of expandability, onboard 10gbps ethernet, way more max RAM capacity, and an upgrade to PCI-E Gen 4.
  • Supermicro H12SSL-NT (or maybe the H12SSL-CT if the price difference is less than a separate HBA) + Epyc 7232P: unknown but probably similar increase in price, basically all the same features as the ROMED8-2T, but not released yet. (It seems like they were announced around April and are still listed as "coming soon" on Supermicro's site.)
The ASRock Rack website was not fun to navigate and its search functionality seemed buggy, which doesn't engender trust in their products for me. How is their reputation in the community?

Is there any information out there on the release timeline for the H12SSL boards?

Is adding IPMI worth it for such an increase in price? (+~$140 for the X470D4U, +~$900 for the ROMED8-2T and the 7232P) I know I'm getting a lot more features built in, but I could replicate all of them except the IPMI by going for a B550 gaming board (for PCI-E Gen 4) and adding a 10 gigabit ethernet adapter.

I'm not going to lie, the server parts are attractive. But going from a ~$4000 system to an ~$5000 system is a big jump just to save me walking down the stairs to hit the reset button if something goes wrong.

Edit: And I guess I need to factor in $50 for a new bottom-of-the-barrel GPU, or the opportunity cost of not selling my old GTX 1060, which probably won't be worth a ton after September 1 when the 3000 series is announced.
 

scolby33

New Member
Aug 26, 2020
The system is built! I ended up going with the server motherboard with an Epyc CPU, and I have to say that IPMI is the best feature I never knew I wanted. Installing an OS from my desk chair instead of crouched over a bad keyboard and monitor is an amazing upgrade.
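If anyone is curious what remote management looks like in practice, here's a minimal sketch of how I can poke at the BMC from another machine with ipmitool (wrapped in Python just for readability; the host address and credentials are placeholders, and the remote console is how I actually did the OS install):

```python
# Minimal out-of-band checks against the board's BMC via ipmitool.
# The BMC address and credentials below are placeholders.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10", "-U", "admin", "-P", "changeme"]

def ipmi(*args: str) -> str:
    """Run one ipmitool subcommand against the BMC and return its output."""
    return subprocess.run(BMC + list(args), capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
print(ipmi("sdr", "list"))                  # temperatures, fan speeds, voltages
# ipmi("chassis", "power", "cycle")         # remote reset, no stairs required
```

That last line is the "walk downstairs and hit the reset button" replacement I joked about earlier.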

CPU: AMD Epyc 7232P (Newegg, $519.44)
CPU Cooler: Noctua NH-D9 DX-3647 4U (Newegg, $105.13) + NM-AFB7a Rev 1.1 (Noctua, free)
Motherboard: ASRock Rack ROMED8-2T (Newegg, $655.00)
Memory: Kingston KSM29RD8/16MEI x4 (Newegg, $410.04)
Boot Devices: Western Digital WD Blue SN550 500GB x2 (Newegg, $119.98)
Case: Fractal Design Define 7 XL (Newegg, $179.99)
Power Supply: Corsair RM850x (Corsair, $144.99)

Here are some miscellaneous notes from my build:
  • I originally wanted to use the Noctua NH-U14S TR4-SP3, but that would have resulted in a cooler oriented up-and-down in my standard ATX case instead of front-to-back, due to the 90-degree-rotated socket layout on the ROMED8-2T compared to more common ATX layouts. This probably would have been okay, but I decided to go with the NH-D9 DX-3647 for the front-to-back airflow. It's probably a six-of-one-half-dozen-of-the-other trade-off, since the NH-D9 is smaller than the NH-U14S. Due to the eventual location for the system, I didn't want to use the vented top panel for the case. Noctua makes an adapter for the NH-D9 DX-3647 for TR4/SP3 sockets, which they provided for free with proof of purchase of the cooler.
  • The Epyc CPU didn't come with the torque tool that I've seen Threadripper CPUs come with, so I had to guess at an appropriate torque by hand. Everything seems to be working properly so far, so fingers crossed.
  • The front of the Define 7 XL has room for one more 140mm fan than it ships with. I was building this computer with a friend who had a spare 120mm fan from her Fractal Meshify case, which she kindly let me use in my build.
Build in Progress.jpg
Build in progress—the future gaming computer looks much cooler than the future server.

Front Side.jpg
Completed build from the front—it sorta looks too small for the case!

Back Side.jpg
From the back—reasonably well cable-managed, I think. Just waiting for the hard drives to arrive to fill it out!
 

abregnsbo

New Member
Apr 16, 2023
Are you running Windows 10 on your ASRock Rack ROMED8-2T computer?

I am asking because I have also bought ROMED8-2T, but there seem to only be motherboard drivers for Windows Server 2019, not Windows 10 64-bit.
 

oldpenguin

Member
Apr 27, 2023
You're coming from a 2012 dual-core (no-HT) desktop CPU with a 16TB storage requirement and a 1Gbps internal network.
I'll second the move towards a server platform - if your pocket can take the hit, definitely go for the H12 + Epyc, but bear in mind you'll be using a different type of memory, different type of drives, different type of, well, pretty much everything.

Do a bit of reading around here - if you have room for a rack and a bit of extra noise ain't gonna have your beloved other making you sleep on the doormat, aim for a server unit. I'm tempted to say, for the sake of screaming wallets, that you'll surely do exceptionally more than fine with anything 2016-2017 era (Xeon v3/v4 CPUs, SM X10 / Dell R*20 / HP Gen9 / similar others) or newer. Get a dual-CPU board, always equipped with both processors. If you think you'll only need 64GB RAM, get 128. Get a chassis that can accommodate LFF (3.5") drives. If you think you'll only need 4 drives, get the 12x LFF version at least. These usually come equipped with at least 2x 1GbE network ports; 2x 10GbE copper is the least you should get, and 4x 10GbE copper or 2x 10GbE copper + 2x SFP+ would be even better. Look at available PCIe slots - you should have at least one x16 and one x8 available (or three x8s worst case). The storage controller should be able to switch between RAID and IT mode with a simple firmware flash.

Why? The answers aren't even hidden around here - all you need to do is look carefully. Don't rush into a purchase, check for possible moronic vendor locks, carefully build your requirements, and don't hesitate to ask around. When you're confident, just do it and be happy - the plan will already be drafted and spec'ed out.