Overkill - Home Server - ZFS, help needed


LordGardenGnome

New Member
May 5, 2022
US, Connecticut
I started down this path over a year ago and have been playing with some gear at work (a bunch of really old hardware just lying around). This is the design for my new home server, replacing the X299 HEDT machine that currently just hosts Plex. That box has plenty of capability, but it was my old gaming rig that I later repurposed as a NAS, and I didn't hear about ZFS until afterwards. This whole process has turned into a money pit (it is), but I'm learning a lot along the way, and since I'm an IT (Information Systems Technician) in the Navy, retiring in a few years, the experience and knowledge could be useful.

TL;DR – I need help and clarification on my RAIDZ design/setup – there's a lot of data and an explanation below.

This is a log/story of my learning experience and poor planning. I've watched plenty of YouTube videos on ZFS from Lawrence Systems, Level1Techs, ServeTheHome, 45Drives, Techno Tim, Raid Owl, and Craft Computing, and I've scoured the net for information about what to do with my build, which is now almost complete. I felt the need to rush the build a bit because I'm in the US Navy and moving overseas in a few months, and I'll be deploying the server at a family member's home while I'm away. I plan to use a VPN to access/manage the machine, and since the family member also works in IT, I feel safe having it there. My current server has 42TB used with about 1TB free, so for now I'm shuffling files between external drives to get by until the new server is built.

Here's a list of the current parts and the parts still in shipping; then I'll talk about the plan and how the server is set up.



Part | Name | Model # | Quantity
Chassis | 45 Drives Q30 | Q30 | 1
CPU cooler | Noctua NH-U9 TR4-SP3 | – | 1
Chassis fan | Noctua NF-F12 iPPC 3000 PWM | NF-F12 iPPC 3000 | 5
CPU | AMD EPYC 7302P | 100-100000049WOF | 1
Motherboard | Supermicro H11SSL-NC | H11SSL-NC | 1
RAM | Supermicro 64GB ECC RDIMM DDR4-3200 | M393A8G40AB2-CWE | 2 (+2)
PSU | Zippy 1200W dual redundant PSU | – | 1
GPU | NVIDIA Quadro RTX 4000 | – | 1
HBA | LSI 9305-24 (SAS HBA) | SAS 9305-24 | 1
HBA #2 | LSI 9305-16 (not purchased) | SAS 9305-16 | 1
NIC | 10Gtek dual SFP+ | Intel X710-BM2 | 1
NVMe carrier | Supermicro PCIe carrier for 4x NVMe | AOC-SHG3-4M2P | 1
NVMe drive | Kingston DC1000B 240GB M.2 | SEDC1000BM8/240G | 2
NVMe drive | Micron 7400 PRO M.2 22110 1.92TB | MTFDKBG1T9TDZ-1AZ1ZABYY | 2
HDD | Refurb Seagate Exos X14 14TB SAS3 | ST14000NM0288 | 15 (+6)
OS SSD | Samsung PM893 2.5" 480GB SATA3 | MZ-7L348000 | 2 (mirror)


As you can see, there's a lot of capability beyond my current needs, but most of the hardware is already purchased and installed. On the +2 for the RAM: I'll probably pick up two more sticks shortly since it's much cheaper now, and go that route instead of an L2ARC for my purposes. The Seagate Exos drives have shipped, so the build will be moving forward. I purchased 20 drives (the reseller threw in an extra, woo, 21 drives), but I only plan to use 15 in the pool.

The server currently runs Proxmox VE 7. The plan is to spin up a few VMs: TrueNAS SCALE (ZFS), Ubuntu (Plex), Houston (server management), and whatever else I install early on (Sonarr, Radarr, other download management). I was thinking about using the Kingston NVMe drives (mirrored) as a ZIL/SLOG and the Micron NVMe drives (mirrored) either for hosting VMs or as a landing spot for incoming downloads? (Not sure.)
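For the NVMe part, this is roughly what I have in mind at the command line (just a sketch; "tank" and "fastpool" are placeholder pool names, the by-id paths stand in for the real device IDs, and TrueNAS SCALE would do the same thing through its UI):

# Add the Kingston DC1000B pair as a mirrored SLOG (log vdev) to the main data pool
zpool add tank log mirror /dev/disk/by-id/nvme-KINGSTON_1 /dev/disk/by-id/nvme-KINGSTON_2

# Keep the Micron 7400 PROs as their own mirrored pool for VMs / incoming downloads
zpool create -o ashift=12 fastpool mirror /dev/disk/by-id/nvme-MICRON_1 /dev/disk/by-id/nvme-MICRON_2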

The ZFS layout I had planned is a pool of 15 drives: two RAIDZ2 vdevs of 6 disks each plus 3 hot spares (see the example command after the layout table below). The 45 Drives Q30 bay layout is shown below; the backplane is SAS.

Group 1 – 15 drive bays | Group 2 – 15 drive bays
RAIDZ2 – VDEV1 | Future addition
RAIDZ2 – VDEV1 | Future addition
RAIDZ2 – VDEV1 | Future addition
RAIDZ2 – VDEV1 | Future addition
RAIDZ2 – VDEV1 | Future addition
RAIDZ2 – VDEV1 | Future addition
RAIDZ2 – VDEV2 | Future addition
RAIDZ2 – VDEV2 | Future addition
RAIDZ2 – VDEV2 | Future addition
RAIDZ2 – VDEV2 | Future addition
RAIDZ2 – VDEV2 | Future addition
RAIDZ2 – VDEV2 | Future addition
Hot spare | Future addition
Hot spare | Future addition
Hot spare | Future addition
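
For reference, the pool above would look something like this at the command line (a sketch only; the sdX names are placeholders and I'd use /dev/disk/by-id paths in practice, and TrueNAS SCALE builds the same layout from its UI):

# Two 6-disk RAIDZ2 vdevs plus three pool-level hot spares
zpool create -o ashift=12 tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl \
  spare sdm sdn sdo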


I don't need personal opinions on how I could have spent way less to get the same result (that's apparent), but I would like to know whether what I'm planning is correct and not completely dumb. Below is what I could work out with the RAIDZ calculator at ZFS Capacity Calculator - WintelGuy.com.

[Attachment: RAIDZ2 Layout.PNG – WintelGuy RAIDZ2 capacity calculation]
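As a rough sanity check on the calculator output: two 6-wide RAIDZ2 vdevs leave 2 x (6 - 2) = 8 data disks, so 8 x 14 TB = 112 TB raw, or roughly 102 TiB, before ZFS metadata/padding overhead and the usual free-space headroom.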

I do have a QNAP external unit with 56TB of space for backups, and I will most likely upgrade its capacity in the future since I'm only using 6 of its 12 bays (it has a dual 10Gb SFP+ NIC).

Thanks for all input.

-LGG
 



barichardson

New Member
Mar 31, 2022
I recently built a similar system. I considered mirrors but in the end went with 3x 6-disk Z2 vdevs like you are planning. For my use it was the best balance of performance (6 disks in Z2 is an optimal config) and an acceptable level of redundancy.

I didn't bother with a ZIL/SLOG since it only helps with specific (sync-write-heavy) workloads, and put that money towards more RAM for the ARC to use.
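If you go the extra-RAM route on Proxmox, keep in mind that ZFS on Linux normally caps the ARC at about half of RAM unless you set zfs_arc_max yourself. A minimal sketch of bumping it on the host (the 96 GiB figure is just an example; leave room for your VMs):

# Check current ARC size and cap
awk '$1 == "size" || $1 == "c_max"' /proc/spl/kstat/zfs/arcstats

# Apply immediately (96 GiB = 96 * 1024^3 bytes)
echo 103079215104 > /sys/module/zfs/parameters/zfs_arc_max

# Make it persistent across reboots
echo "options zfs zfs_arc_max=103079215104" > /etc/modprobe.d/zfs.conf
update-initramfs -u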

Those 9305-24i cards run hot and will need additional cooling. I modeled and 3D printed a cooling shroud that will probably also work with the 16i version. I ended up swapping the NF-A4x10 you see in the pics for an NF-A4x20 for the added static pressure. It sticks out an additional 10mm, but if you have room for the bigger fan I would recommend going with it.

There is also a SAS branch of hddfancontrol that dynamically controls fan speed according to hard drive temperature. This will keep those iPPC 3000 fans quiet while making sure your drives don't get too hot. I haven't tested it on Proxmox, but I bet it would work since it's just a Python script that I have set up to run as a systemd service.
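As a rough sketch of the systemd side (the ExecStart options are placeholders only; check the branch's --help for the actual drive/PWM flags and point them at your own devices and fan hub PWM channel):

cat > /etc/systemd/system/hddfancontrol.service <<'EOF'
[Unit]
Description=Drive-temperature-based fan control (hddfancontrol)
After=multi-user.target

[Service]
# Placeholder arguments; adjust drives, PWM path, temps, and interval for your build
ExecStart=/usr/local/bin/hddfancontrol -d /dev/sda /dev/sdb -p /sys/class/hwmon/hwmon2/pwm1 --min-temp 30 --max-temp 45 -i 30s
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now hddfancontrol.service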

I run all of the fans through one of these PWM fan hubs to make speed control easier to manage; it also powers the fans from a Molex or SATA connector, so you don't risk overloading a motherboard header, since those fans can pull a lot of current.
 

pd4ever

New Member
Jan 1, 2023
barichardson said:
Those 9305-24i cards run hot and will need additional cooling. I modeled and 3D printed a cooling shroud that will probably also work with the 16i version. I ended up swapping the NF-A4x10 you see in the pics for an NF-A4x20 for the added static pressure. It sticks out an additional 10mm, but if you have room for the bigger fan I would recommend going with it.
Thanks for creating that fan shroud! I had it printed in the middle of 2022, but I'm just now getting around to building out my system.

What kind of temperatures are you seeing with the 20mm Noctua? I just have the 10mm. With the fan at its lowest setting (~1500 rpm) and the card at idle, the ROC temp was 79C. With the RPM at max (~5100 rpm) the temp dropped to 65C. I still need to replace some noisy fans on my hot-swap bays, so the system is overall pretty loud, but I did not notice any change in volume when bumping the 10mm Noctua to full speed. I'm wondering if it's worth trying the 20mm fan to see how much further it can drop temps.

Command I'm using to grab the RAID-on-Chip (ROC) temperature:
storcli /c0 show all | grep -i "ROC temperature"
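If you want to watch it while you play with fan speeds, something like this works (assuming storcli is on your PATH and the controller really is /c0 as above):

watch -n 10 'storcli /c0 show all | grep -i "ROC temperature"'

# Or log it over time while the array is under load
while true; do date; storcli /c0 show all | grep -i "ROC temperature"; sleep 60; done >> roc-temp.log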
 

barichardson

New Member
Mar 31, 2022
I'm glad someone was able to make use of that fan shroud. It took a lot of iterations to get the fit just right.

My zpool is under a decent load 24/7, so I should be able to give you a pretty good idea of what the temps look like under load. My hot-swap bay fans vary their speed to keep the hottest detected disk at ~40C, checked every 30 seconds.

This is my current reading from my HBA:
ROC temperature(Degree Celsius) = 64

I was judging the airflow simply by the fact that I could barely feel any air coming out of the exhaust vent with the 10mm compared to the 20mm Noctua.

I also run the 20mm fan at full speed since it doesn't make any detectable noise difference once the chassis is closed up. One thing I did notice when comparing them side by side is that the 10mm fan makes a higher-pitched sound that is slightly more noticeable and annoying. That alone was worth the change for me.
 

pd4ever

New Member
Jan 1, 2023
Yeah, I can imagine it took quite a bit of effort to get it to friction-mount so perfectly. At first it seems like it won't fit, but then it lines up perfectly and doesn't move at all. Great work!

It doesn't look like I'll be able to fit the 20mm fan; it would interfere with the card below, so I'll have to stick with the 10mm. Thanks for checking your temps. I got another 9305-24i HBA in today that I was able to pick up cheap. This one is reporting 58 degrees with your shroud and the 10mm Noctua at full speed, so I may just need to repaste the first card to get its temps down a little further.