Advice on ZFS server build


Ronan

New Member
May 14, 2016
Hopefully I have posted this in the correct forum. Looking for some advice on this build. Have most of the bits listed below but am still deciding on the best disk configuration. Any advice is appreciated.

Build’s Name: HomeLab NAS Build
Operating System/ Storage Platform: ZFS - Still researching the best option, e.g. FreeNAS, Napp-IT, etc.
CPU: 2 x six-core Xeon E5-2620 2.0GHz processors
Motherboard: Supermicro X9DRi-LN4F+
Chassis: SuperChassis CSE-846 24x 3.5" with BPN-SAS2-846EL1 backplane
Drives: Researching best drive options (possibly 4TB or 6TB WD Reds)
RAM: 24GB
Add-in Cards: Intel X520DA2 + (2X Intel Optane 900p 280GB - Free Intel samples from Work which I would like to make good use of. I assume using 1 for SLOG is a no-brainer.)
Power Supply: Dual PWS-920P-SQ
Other Bits: Rack rails
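For what it's worth, attaching an Optane as SLOG later is a one-liner; the pool and device names below are hypothetical, and note a SLOG only accelerates synchronous writes (NFS, VM storage):

```shell
# Hypothetical pool/device names; a SLOG only helps synchronous writes.
# Mirroring the two Optanes as a log device is the cautious option:
zpool add tank log mirror nvme0n1 nvme1n1

# Or use one as SLOG and the other as L2ARC read cache:
zpool add tank log nvme0n1
zpool add tank cache nvme1n1
```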

Usage Profile: This will be used to consolidate storage for file shares, media, NVR storage, backups & VM storage for approx 10 VMs. Currently using a 5-bay and an 8-bay Synology for storage needs, with a total capacity of about 15TB. I currently run a 2-node ESXi cluster and a 2-node Hyper-V cluster for test and dev on an Intel 4-node (H2312JFF).

Other information…
Still trying to decide on the best drive configuration for this machine before purchasing HDDs and the HBA. I have been reading up on ZFS and am trying to get my head around vdevs (RAIDZ1, RAIDZ2). I have worked in the IT sector for many years, but ZFS is new to me. Very familiar with standard RAID and SAN, but I am still getting to grips with ZFS terminology.


Thanks.
 

Joel

Active Member
Jan 30, 2015
As long as your memory is ECC you're usually good (I assume it is based on Xeon + SM, but never know!).

ZFS uses the same concepts as standard RAID, just with different names: RAIDZ1 = RAID5, RAIDZ2 = RAID6, etc. RAID1 you'd just create as a mirror vdev.

Big thing to know is that a vdev is a virtual device: it could be a single drive or partition, a mirror, or a RAIDZx array. ZFS will happily allow some really stupid configs on the command line too, as many folks find out the hard way when trying to add to a pool. I also find it best to think of the pool as a striped array laid over the top of the vdevs.
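As a sketch of that mental model (pool striped over vdevs), with hypothetical pool and disk names:

```shell
# Hypothetical pool and disk names. A pool built from two RAIDZ2 vdevs;
# ZFS automatically stripes writes across both vdevs:
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl

zpool status tank   # shows the pool as a list of vdevs, each with its member disks
```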

WD REDs make good ZFS drives, also anything geared towards NAS use. Definitely avoid anything with SMR tendencies like Seagate's "Archive" line.
 

Ronan

New Member
May 14, 2016
Thanks for the info. That makes sense now. Just looking at HBAs now. Is there any advantage to going with a SAS3 HBA, considering the server has a SAS2 backplane? Looking at the IBM ServeRAID M1215 vs M1115. I will be flashing it with IT-mode firmware.
 

Joel

Active Member
Jan 30, 2015
Futureproofing would be the only advantage, in case you upgrade or move the HBA to a different chassis. SAS2 is just fine for spinning rust anyway.
 

Joel

Active Member
Jan 30, 2015
As far as the OS, FreeNAS is probably the most beginner friendly. You can do just about everything from the GUI, including scheduling scrubs and SMART tests (the best early indicator of a failing drive), plus straightforward sharing & permissions. As the name implies, it's a NAS first and foremost.

I've graduated to Proxmox recently, as it still has a robust ZFS on Linux implementation (no training wheels though) and is built for virtualization.
 

fsck

Member
Oct 10, 2013
Why so little memory? Because of the SLOG?
Why 2620s instead of, say, 2670s? I have basically the same basic config as you, except with 128GB of memory and 2670s.
Zero point in using SAS3 with a SAS2 backplane, since the lowest common denominator will be your limit. There isn't even a point in using PCIe 3.0 controllers with Sandy Bridge, since those CPUs only support PCIe 2.0 anyway.
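Rough numbers behind that point (approximate, ignoring protocol overhead): SAS2 runs 6 Gb/s per lane, roughly 600 MB/s after 8b/10b encoding, and PCIe 2.0 carries about 500 MB/s per lane, so either link already outruns a shelf of spinners:

```shell
# Back-of-the-envelope throughput ceilings in MB/s (approximate):
echo "SAS2 4-lane link:        $(( 600 * 4 )) MB/s"
echo "PCIe 2.0 x8 slot:        $(( 500 * 8 )) MB/s"
echo "24 spinners @ ~150 MB/s: $(( 24 * 150 )) MB/s"
```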

If you're already familiar with file servers, FreeNAS should be a trivial exercise for you. It was my first experience with servers years ago and it went smoothly.
 

Ronan

New Member
May 14, 2016
Joel said:
As far as the OS, FreeNAS is probably the most beginner friendly. You can do just about everything from the GUI, including scheduling scrubs and SMART tests (the best early indicator of a failing drive), plus straightforward sharing & permissions. As the name implies, it's a NAS first and foremost.

I've graduated to Proxmox recently, as it still has a robust ZFS on Linux implementation (no training wheels though) and is built for virtualization.
Thanks for the info Joel. I have been looking at playing with Proxmox but that's a project for the future at the moment.

fsck said:
Why so little memory? Because of the SLOG?
Why 2620s instead of, say, 2670s? I have basically the same basic config as you, except with 128GB of memory and 2670s.
Zero point in using SAS3 with a SAS2 backplane, since the lowest common denominator will be your limit. There isn't even a point in using PCIe 3.0 controllers with Sandy Bridge, since those CPUs only support PCIe 2.0 anyway.

If you're already familiar with file servers, FreeNAS should be a trivial exercise for you. It was my first experience with servers years ago and it went smoothly.
This chassis was re-purposed and came with the CPUs and RAM. I will definitely look at increasing the RAM, as I know ZFS likes to use as much RAM as possible. Would I see any benefit from faster CPUs? My VM compute runs on separate servers, so this will probably only be used for NAS duties. Yes, I thought that would be the case regarding a SAS3 HBA on a SAS2 backplane, but was just checking to see if there was any point in future-proofing. I will check and see what the pricing looks like for the cards on eBay.
 

Evan

Well-Known Member
Jan 6, 2016
I would say for storage only, the CPU and 32GB is fine; sure, you can add some memory when you see it cheap, but for a light workload I doubt you will see any real difference. Diminishing returns, if you know what I mean.

I am of course assuming you have most components; if buying new, then maybe an E5-2670 is cheap, and then maybe a single CPU?

I know almost nothing about heavily loaded ZFS, so maybe somebody has a better idea, but for storage only, dual 6- or 8-core CPUs seem way overkill.
(I have only some larger-volume ~200TB but lower-usage ZFS, and CPU use is pretty minimal.)
 

fsck

Member
Oct 10, 2013
If you already have the CPUs, there's no point upgrading; I assume you aren't going for bleeding-edge storage, since you've said you're sticking with spinners. If all you're going to do is FreeNAS, a single CPU is all you'd need. You don't seem to have a large number of clients, so it'd be fine. With spinners, they'll be your I/O bottleneck, versus SSDs, where the CPU (clock speed) will be.
I also have no idea how much of a concern power consumption is. If it is, I'd pull one of the processors, but you'll need to pay attention to which PCIe slots are connected to which CPU.

My box isn't solely a FreeNAS box, thus the 16 cores and blob of memory.

Also, the best drive options will likely be whatever is cheapest, though remember to factor the cost of the slot into the equation; of course, you could also price in a JBOD expansion chassis as well. I personally really favor HGST Deskstar NAS drives: they're loud suckers, but I find the access-time benefit of 7200rpm useful for my purposes.
I would personally recommend 3 vdevs of 8 drives in RAIDZ2, or 2 vdevs of 12 in RAIDZ2, depending on your needs. It's up to you whether you want 1 or multiple zpools. I'm a student, short on funds, so I have 1 vdev per zpool and can't fully populate a chassis in one shot; I therefore have no experience with huge zpools.
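Rough parity math for those two layouts, assuming 6TB drives (the drive size is my assumption, not settled in the thread):

```shell
# Usable capacity before ZFS overhead: (drives - 2 parity) per RAIDZ2 vdev
echo "3 x 8-wide RAIDZ2:  $(( 3 * (8 - 2) * 6 )) TB"
echo "2 x 12-wide RAIDZ2: $(( 2 * (12 - 2) * 6 )) TB"
# The 12-wide layout yields more space; the 8-wide layout gives one more
# vdev, hence more IOPS and faster resilvers per vdev.
```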
I assume that when you scrub a single huge zpool, everyone suffers at the same time. I like having multiple zpools of 1 vdev each because it's basically the equivalent of storage multithreading: I can stream data to all of my zpools at very high rates even if I end up slamming one zpool with I/O. Of course, if you happen to need a large amount of storage in one location, you have no choice but to have a single large zpool.

I'm going to point out in advance that I feel hosting VMs on RAIDZ spinners is going to be a horrible affair. I don't have a real comparison with networked SSDs (networked spinner array vs a single local SSD), but the SSD is just more pleasant to deal with due to access time. I'm a VM noob, so take it as a loose comment.

A 9211-8i-type HBA will probably be the ideal choice, as it's both PCIe 2.0 and SAS2. With the number of slots in the machine, you can easily expand in the future with an expander or an HBA with external connectors. Of course, if you can find any of the other variants cheaper, or for only a slight increase, you should go for it. Personally, I'd be considering a potential upgrade of the rig to Ivy Bridge-E in the future, so PCIe 3.0 might be possible down the line; E5-2620s are rather low end, after all. You may also not want to dedicate such a power-hungry box purely to FreeNAS.

My (first) pure FreeNAS box was an i3-2120 on an X9SCL in a Norco 24-bay 4U chassis. My Supermicro 846 was intended to be a second FreeNAS box, but I decided to dip my hand into Proxmox on this rig. (Sandy Bridge and Ivy Bridge i3s do not support VT-d, so virtualization is bleh on those boxes.)
 

Joel

Active Member
Jan 30, 2015
Great comment, just one thing to add: there's nothing to fear in adding vdevs to a pool, you just have to do it a whole vdev at a time. Where beginners often go wrong is adding a single-disk vdev to a RAIDZ2 pool, destroying the benefits of parity. Caution is warranted when executing "zpool add", of course.
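A sketch of the safe form versus the classic mistake, with hypothetical pool and disk names:

```shell
# Hypothetical pool/disk names. Growing a RAIDZ2 pool a whole vdev at a time:
zpool add tank raidz2 sdm sdn sdo sdp sdq sdr

# The beginner mistake: adding a bare disk to a RAIDZ2 pool. ZFS refuses
# with a "mismatched replication level" error unless you force it:
# zpool add tank sdm       # refused
# zpool add -f tank sdm    # forced: that disk has zero redundancy, and
#                          # losing it can take out the whole pool
```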

I'm liking my Proxmox config so far: installed on mirrored SSDs (where the VMs will also live), with the HDD array as a separate pool for bulk storage.

Next step is to install Windows VM and setup GPU/USB passthrough.
 