Advice/recommendations for new 100TB+ 10Gig rackmount NAS build for Plex, Rancher, Proxmox, and more


aceawd

New Member
Feb 15, 2022
Build’s Name: Friday (Proxmox cluster is named Jarvis so thinking of something related)
Operating System/ Storage Platform: TrueNAS (core or scale, not sure which is best here)
CPU: unknown
Motherboard: unknown
Chassis: Supermicro 847 24 or 36-bay
Drives: 8-12x 16TB WD Gold HDDs + SSD pool + NVMe pool
RAM: unknown
Add-in Cards: 10G NICs, HBA cards
Power Supply: Dual redundant
Other Bits: See below

Usage Profile: See below

Other information…

Hello everyone,

I wanted to ask your opinions on a new NAS that I am designing for myself and my business, to be built within the next 12 months. I very much appreciate your input and experience.

I am currently running Unraid on my 12-year-old gaming PC (an i7-960 with 13 drives hooked up to it) and it has been great as a learning tool. I am moving into more complicated systems, and my current hardware is not at the level it needs to be for the next phase of my learning and environment. Plus, I think enterprise hardware is cool, I'm impressed by how much you can do with it, and I want to learn more in my homelab to help my career. Hence the need for this new NAS.

Considerations:
  • This NAS will only be handling network storage and is the main shared storage for my entire home environment.
  • All applications will run on external compute hosts that access this NAS over the network via iSCSI or NFS/SMB shares.
  • The build design order of priority is 1) performance 2) fault tolerance 3) uptime.
  • I’m ok spending the money on quality hardware (within reason) and prefer to pay a little bit more for a better option that will last longer if presented.
  • Virtually everything on this NAS will be backed up in the cloud so I do have another copy of the data should something happen.
  • This would be mounted in my 24U APC NetShelter rack connected to a CyberPower UPS (battery backup + power conditioner). This is in a temperature-controlled environment.

Workload accessing this NAS:
  • 4-5 physical node Proxmox cluster running Ubuntu VMs running Rancher running Docker containers of:
    • Plex media server transcoding multiple 4K movies
    • Sonarr
    • Radarr
    • SABnzbd
    • Handbrake
    • Rust server
    • Factorio Server
    • iPerf3
    • PiHole
    • Ansible
    • MetalLB
    • And others
  • Shared storage location of Proxmox VMs for HA failover
  • Twitch stream recordings (sent directly from fiancée's gaming desktop)
  • iSCSI target for playing Steam and other games off of
  • File storage for family pics/movies.
  • A handful of testing Windows/Linux VMs
  • Premiere Pro editing off of it (less important, could transfer to local storage if too costly)
  • File storage for my and my fiancée's small businesses (no databases), but that load should be minimal.

As I understand it, the I/O needs would be:
Sequential Read
  • Plex streaming

Sequential Write
  • SABnzbd

Sequential Read/Write
  • Handbrake
  • File transfers

Random Read/Write
  • Rust/Factorio/other game servers
  • Games
  • Sonarr/Radarr

Performance & Cost Targets:
  • 100TB+ of usable storage with 2 drive failure redundancy
  • ~$3000-4000 USD (open to other numbers, prioritize performance over dollar amount)
  • 10G file transfer from 2-4 computers/hosts simultaneously without saturation
  • Redundancy: 2 parity drives or some mirror layout

The proposed build below is my current guess on how to accomplish the above workload and goals. Assume that the connections from the NAS to my computer, my fiancée's computer, and the Proxmox nodes are via 10G NICs through a Ubiquiti/Mikrotik 10G SFP+ network switch. These computers would have drives fast enough to handle that transfer speed.

Proposed build:
  • 8-12x 16TB WD Gold HDDs for main storage pool
    • (Could use some advice on vdev layout; a rough capacity sketch follows this list)
  • SATA SSD pool (for games, video editing, or other faster storage)
  • NVMe SSDs for write cache / SLOG / special vdev
  • TrueNAS (core or scale, not sure which would be better here) or other OS.
  • 2x 2-port 10G SFP+ NICs
  • No idea how much RAM; I hear ZFS is RAM-hungry
  • No idea what CPU would fit best here
  • Supermicro 24 to 36-bay 4U chassis with redundant PSUs & hot-swap backplane
  • HBA cards to support all available drive bays
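
To sanity-check the 100TB target, here is a rough back-of-envelope I put together in Python. The ~10% overhead and 20% free-space figures are my own assumptions rather than real ZFS accounting, so treat it as a sketch:

```python
# Back-of-envelope usable capacity for a few candidate ZFS layouts.
# Assumptions (mine): ~10% lost to TB/TiB conversion plus metadata and
# padding, and 20% of the pool kept free for performance.

def usable_tb(drives, size_tb, parity_per_vdev, vdevs, overhead=0.10, free=0.20):
    data_drives = drives - parity_per_vdev * vdevs
    raw = data_drives * size_tb
    return raw * (1 - overhead) * (1 - free)

layouts = {
    "8x 16TB, 1x RAIDZ2":         usable_tb(8, 16, 2, 1),
    "12x 16TB, 1x RAIDZ2":        usable_tb(12, 16, 2, 1),
    "12x 16TB, 2x 6-wide RAIDZ2": usable_tb(12, 16, 2, 2),
    "20x 10TB, 2x 10-wide RAIDZ2": usable_tb(20, 10, 2, 2),
}

for name, tb in layouts.items():
    print(f"{name}: ~{tb:.0f} TB usable")
```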

Please let me know what I’ve missed or if you need any more information. Feel free to link other articles/threads that may have answered this before as I’m just not sure where to look. Thanks so much for your help!
 
  • Like
Reactions: itronin

kpfleming

Active Member
Dec 28, 2021
Pelham NY USA
That price point sounds like a challenge to meet. Even with the recent sale, the HDDs would cost you $2.4K-$3.6K ($300 each).

Even buying a bunch of gently-used hardware (like the chassis), I'd expect a build like that to be in the $10K range, and even that will depend on how large the "SSD" and "NVMe" pools are going to be, since you didn't propose a size for them.

(revised estimate after seeing that the 847 chassis alone sells for more than $2K)
 
  • Like
Reactions: BoredSysadmin

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
Hmm, you say performance is a higher priority than availability, but I'm not sure what part of your workload is really limited by the speed of the storage system.

Plex is limited by the bitrate of the media multiplied by the number of concurrent users (usually much less than what a single spinner can do). Media acquisition (Sonarr et al.) is generally bottlenecked by the upstream network (unpacking is storage-limited, but infrequent and not time-sensitive). Game servers usually just need single-core compute and a bit of RAM. Video editing certainly benefits, but I agree it'd be simpler to do that with NVMe local to the editing workstation (or via proxy media on local NVMe). Backing storage for the iSCSI volumes, VMs/containers, etc. shouldn't take up too much room on flash.
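
For scale, a quick back-of-envelope; the bitrate and per-disk throughput figures below are assumed ballpark numbers, not measurements:

```python
# Rough comparison of Plex streaming demand vs. one HDD's sequential throughput.

UHD_REMUX_MBIT = 80            # ~80 Mbit/s for a heavy 4K remux (assumption)
CONCURRENT_STREAMS = 5
SINGLE_HDD_MBIT = 200 * 8      # ~200 MB/s sequential ≈ 1600 Mbit/s (assumption)

demand = UHD_REMUX_MBIT * CONCURRENT_STREAMS
print(f"5 concurrent 4K remux streams: ~{demand} Mbit/s")
print(f"One 7200rpm spinner, sequential: ~{SINGLE_HDD_MBIT} Mbit/s")
print(f"Headroom factor: ~{SINGLE_HDD_MBIT / demand:.1f}x")
```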

My point is that if the HDD pool doesn't need to be blazing fast, you can use fewer, larger z2 vdevs (or even stick with Unraid dual parity) for increased efficiency (lower cost) compared to, say, mirror vdevs ("raid10").

If you're willing to occupy more bays (perhaps adding a DAS in the future), stepping down to 20x 10TB SAS drives at $100 each would save you some money on that front. An 846/847 with an X9DR-generation board is maybe $700-800? That should leave you barely enough for flash storage (depending on how much you need), CPUs, RAM, etc.
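
A rough $/usable-TB comparison of those two drive options, assuming the quoted prices and a 2-vdev RAIDZ2 layout for each (just to illustrate the efficiency point, ignoring ZFS overhead):

```python
# Rough $/usable-TB for the two drive options discussed above.
# Prices and layouts are assumptions for illustration.

def pool(n, size_tb, price, parity, vdevs):
    usable = (n - parity * vdevs) * size_tb
    cost = n * price
    return usable, cost

options = {
    "12x 16TB @ $300, 2x 6-wide RAIDZ2":  pool(12, 16, 300, 2, 2),
    "20x 10TB @ $100, 2x 10-wide RAIDZ2": pool(20, 10, 100, 2, 2),
}

for name, (usable, cost) in options.items():
    print(f"{name}: {usable} TB raw-usable for ${cost} (${cost/usable:.0f}/TB)")
```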
 

Rttg

Member
May 21, 2020
stepping down to 20x 10TB SAS drives at $100 each would save you some money on that front.
^^this

Beyond the better price of 10TB drives vs 16TB, consider a diversified storage layout. IMHO, raidz doesn't make sense over mergerfs+snapraid/unraid for mostly cold storage. Like Sean Ho said, it doesn't look like most of the use cases (besides video editing) are IOPS-sensitive.

You pay dearly for IOPS whether it's spinning rust or flash.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
- E5 v3 (or dual, depending on $ savings on the motherboard)
- 4x 16GB or more.. DDR4 RDIMM is reasonably priced, and the more the better :D
- SM 846 or 847 are up in price vs. before, but have my vote too.
- SM NVMe PCIe adapter
- P4500 2TB for your fast storage; get 2 and mirror them if you need that, if not save the $$
- 13x 10TB (check "Great Deals") for a 13-wide RAIDZ3
- 10Gig NIC

The HDDs will run you $1300; CPU + motherboard + chassis will probably put you in the $800-1200 range depending on deals and which ones you end up going with. RAM $200+, NVMe + NVMe PCIe adapter $250-350, NIC $60, HBA $60-150 (depending on market).
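
Tallying the midpoints of those ranges (same rough numbers as above, nothing more precise than that):

```python
# Quick tally of the parts estimate above, using midpoints of the quoted ranges.

parts = {
    "13x 10TB HDDs": 1300,
    "CPU + motherboard + chassis": (800 + 1200) / 2,
    "RAM": 200,
    "NVMe + PCIe adapter": (250 + 350) / 2,
    "10G NIC": 60,
    "HBA": (60 + 150) / 2,
}

for name, cost in parts.items():
    print(f"{name}: ${cost:.0f}")
print(f"Estimated total: ~${sum(parts.values()):.0f}")
```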
 
  • Like
Reactions: TLN

TLN

Active Member
Feb 26, 2016
I think going from 8 drives to 10+ drives will push the cost higher. I'd stick with either 8x 16TB drives (plus 1-2x NVMe) or go big with 20x 10TB drives in a server chassis.
I agree on the E5 v3/v4 platform; for example, I got an ASRock motherboard with an integrated SAS controller and 10G NIC, which leaves you 3x PCIe x8 slots for NVMe/video card/etc. if you decide to go that way. $1000 should be enough for the compute part, which leaves you $2-3K for storage. Not so sure about dual PSUs; I wouldn't chase them for a smaller storage build, but if I went for a server chassis I'd want one with 2x PSUs for sure.
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
If rack space is available, and you find the 846 too expensive, the $200 RSV-L4500U has 15x LFF bays, fits SSI-EEB, and takes a normal (non-redundant) ATX PSU. No hotswap, unfortunately. When you outgrow 15 drives, just add a KTN-STL3, MD1200, DS4243, etc.
 

unwind-protect

Active Member
Mar 7, 2016
Boston
A resilver after a drive failure with 16TB drives in RAIDZ2 or RAIDZ3 will take incredibly long. That might be another reason to go with more 10TB drives instead.
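
A rough estimate, assuming a sustained rebuild rate of 100-150 MB/s; real RAIDZ resilvers vary a lot with pool fullness, fragmentation, and concurrent load:

```python
# Very rough resilver-time estimate: drive capacity / sustained rebuild rate.

def resilver_hours(drive_tb, rate_mb_s):
    return drive_tb * 1e12 / (rate_mb_s * 1e6) / 3600

for size in (10, 16):
    for rate in (100, 150):        # assumed MB/s sustained rebuild rate
        print(f"{size}TB drive @ {rate} MB/s: ~{resilver_hours(size, rate):.0f} h")
```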
 
  • Like
Reactions: Sean Ho

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
The 847 will be LOUDer than an 846 - how loud? IMO not same-room-friendly loud. Also watch your power budget with the 920SQ PSUs and all your spinners + NVMe and memory, especially with dual procs. If you use the 1200SQ you *will* pick up some high-frequency whine that will penetrate walls. The 847 will also limit you to half-height cards - not sure that matters.

ZFS is not necessarily RAM-hungry - it will take what is available and try to guess (ARC) what you may want. Concurrent working-set size determines more of how much RAM the ARC needs to be effective for reads, and TrueNAS provides reports (hit rate) to see that. You mention a SLOG. Sync=always for your Proxmox storage? Are you using SMB for your clients? If so, that's typically async and a SLOG won't help you there.
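
If you want to watch the hit rate yourself outside the TrueNAS reports, here's a minimal sketch for a Linux host (e.g. SCALE). The kstat path and field names assume Linux OpenZFS; on CORE/FreeBSD the same counters come from sysctl instead:

```python
# Minimal sketch: compute the ARC hit rate from OpenZFS kstats on Linux.

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

stats = {}
with open(ARCSTATS) as f:
    for line in f.readlines()[2:]:          # skip the two kstat header lines
        name, _type, value = line.split()
        stats[name] = int(value)

hits, misses = stats["hits"], stats["misses"]
print(f"ARC hit rate: {100 * hits / (hits + misses):.1f}% "
      f"({hits} hits / {misses} misses)")
```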

Figure 3.5 to 5.5Gb/s (with 2+ vdevs and >12 total drives) for your spinning-rust transfer rate without a lot of tuning effort, depending on workload. If you are going to try to saturate 10GbE on your SSD pool (outside of sequential writes), you are going to want to look at high-IOPS enterprise-grade SSDs (really we're talking about SAS). So skip the SATA SSD pool and look at SAS SSDs or NVMe (if you have the lanes) for performance. I personally like the used HGST HUSMM16162xx drives for capacity, endurance, and performance. Please don't buy something like 2 or 4TB MX500s and expect them to perform well.

@aceawd you also had one of the better and more thorough/thought-through use-case write-ups that have been posted in a while.

The only thing I missed is how much storage you are using today, and whether you have calculated your growth for 6, 12, and 24 months based on your current pattern. I ask since you have at least a couple of 'RRs in your list. I get that you are trying not to fill all your bays at the outset, and there's some growth headroom in that model, but rather than look at bays I'd look at aggregate storage and growth over 24 months and work backwards to see whether you need to buy all the storage up front *or* can grow incrementally to meet your needs (at least in the capacity tier).

If you are already at 50% of 100TB you may be surprised how fast that fills up, and having designed a growth plan in from the get-go will let you get quite a bit more mileage out of the design. Friendly reminder: to maintain read & write performance on your spinning pool you need to keep at least 20% of it unused.
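
A simple growth-projection sketch to plug your own numbers into; the starting usage and monthly growth below are placeholders, not your actual figures:

```python
# Growth projection: start from current usage, apply a monthly growth
# estimate, and see when the pool crosses the 80%-full performance mark.

current_tb = 50.0            # what you're using today (placeholder)
growth_tb_per_month = 2.0    # average monthly growth (placeholder)
usable_tb = 100.0
perf_ceiling = 0.80          # keep >= 20% free for performance

for month in range(0, 25):
    used = current_tb + growth_tb_per_month * month
    if used >= usable_tb * perf_ceiling:
        print(f"Hits {perf_ceiling:.0%} of {usable_tb:.0f} TB at month {month} "
              f"({used:.0f} TB used)")
        break
else:
    print("Stays under the 80% mark through 24 months")
```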
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
- E5 v3 (or dual, depending on $ savings on the motherboard)
The OP said he wants to run Plex with multiple 4K transcodes. The best way to get that working is to use an 8th-gen or newer chip with Intel graphics onboard (Quick Sync).
https://www.reddit.com/r/PleX/comments/hrpuhf E5 v3 doesn't fit that criterion.
I'd suggest something from a Coffee Lake-era Xeon E-22xxG.
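
If you do end up with an iGPU-capable chip, a quick sanity check from the Docker host that a render node is actually exposed for Plex hardware transcoding; this assumes a Linux host, and you'd still need to pass /dev/dri into the container and enable hardware transcoding in Plex:

```python
# Check for an Intel iGPU render node that Plex could use for Quick Sync.
import glob
import os

nodes = sorted(glob.glob("/dev/dri/renderD*"))
if not nodes:
    print("No /dev/dri render nodes found - no GPU exposed for HW transcode")
else:
    for node in nodes:
        print(f"Found render node: {node} "
              f"(readable: {os.access(node, os.R_OK)})")
```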

Just a humble suggestion, but you could avoid the headache and complexity and go with a pre-built NAS. The QNAP TVS-h1288X, for example, checks all of your asks. Yes, it's expensive diskless, but you get things like well-designed thermal and enclosure management.
 
  • Haha
Reactions: T_Minus