FreeNAS: Best RAID for my array?


Erlipton

Member
Jul 1, 2016
93
23
8
36
My first FreeNAS build, which will be purely dedicated to FreeNAS:

E5-2603v3
32GB DDR4-2133 (non ECC but plan to go ECC when I have an opportunity)
ASRock x99 i7 Pro Gaming (supports RDIMMs)

My array is 5x WD 8TB (mix of Reds and White labels shucked from easystores)

I'm torn between raidz2 and raid 10 with an extra disk for backups.

I wasn't planning on using a controller since I'm dedicating the whole machine to it; that said, I do have an LSI SAS 9211-8i flashed to IT mode.

Looking forward to hearing opinions. I understand this is more of a religious discussion, but I'd like to maximize my drives (even though I knowingly won't fill them).

Three primary concerns (in order):
- ease of rectifying a drive failure
- size of my array, should I avoid either one?
- performance
 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
I have one FreeNAS box that is dedicated purely to data backup from my workstation. I went with mirrors: started with 2x 6TB and then added another mirror of 2x 6TB. Mirrors are the cheapest to expand, as they only need 2 drives at a time.
 

darkconz

Member
Jun 6, 2013
193
15
18
One thing to consider: with RAIDZ2 you can lose any combination of two drives in the array, but in a RAID 10 scenario, if you lose the wrong combo (both drives of the same mirror), you could lose your entire array.

Also, in your situation, if you utilize all 5 drives, Z2 gives you more space too. You already have the hardware, why not use it :)

But in either case, you should also keep a backup of the dataset.
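The space difference is easy to sanity-check. A minimal sketch of the arithmetic, counting raw TB and ignoring ZFS metadata overhead and the usual "keep the pool below ~80% full" guideline:

```python
# Back-of-envelope usable space for the OP's 5x 8TB drives (raw TB,
# before ZFS overhead -- real usable space will land somewhat lower).

def raidz_usable(disks, size_tb, parity):
    """Usable capacity of a single RAID-Z vdev with `parity` parity disks."""
    return (disks - parity) * size_tb

def mirror_usable(pairs, size_tb):
    """Usable capacity of striped two-way mirrors (RAID 10)."""
    return pairs * size_tb

# 5x 8TB in RAID-Z2 vs. four of them as two mirrored pairs
# (the fifth kept aside as the extra backup disk the OP mentions):
print(raidz_usable(5, 8, parity=2))  # 24 TB
print(mirror_usable(2, 8))           # 16 TB
```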


 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
Hit send by mistake before completing. If you can get another drive, then go with a 6-drive RAIDZ2. More peace of mind.

The worry of a second disk failing in the same vdev while resilvering is probably a valid concern, but I've never encountered it across multiple ZFS resilvers and RAID rebuilds on my home systems.

My big NAS that backs up everything else is currently 4 vdevs of 7x 8TB RAIDZ3.

There is no one size fits all.
 

TangoWhiskey9

Active Member
Jun 28, 2013
402
59
28
For performance: RAID 10.

Z2 adds way more overhead.

@K D Z3 on that size array is too much! Maybe if you had all 28 drives in the same array; otherwise you're just wasting disks. 24 drives + 3 parity + 1 hot spare and you're set. That would let you rebuild immediately from the hot spare, with 4 drives for redundancy instead of 12.
 

Erlipton

Member
Jul 1, 2016
93
23
8
36
@K D @TangoWhiskey9 @darkconz any thoughts on whether I need the LSI controller? I initially got it to virtualize, and while I successfully got the drives served to the VM, I opted to build this separate machine because I'm not only new to FreeNAS, I'm also new to Hyper-V. Didn't want to battle two fronts at the same time, worst case scenario.

Do I need the controller for a native install?
 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
@K D Z3 on that size array is too much!
Not sure what you mean by too much. Whether it is a good thing or a bad thing :)

This is a pure backup server where about once a month I backup all data to it from other systems. It's dual e5-2620 v2 with 128gb ram in an 826 with an 846 JBOD.

I started with a 7-drive RAIDZ3 vdev and kept adding more vdevs as I filled it up. Currently using about 60TB of 112TB.
 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
Do I need the controller for a native install?
The board has 10 SATA ports. You can boot FreeNAS off a USB key (or use 2 USB keys to mirror the OS) and use all the SATA ports for data drives. You don't really need the HBA.
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
I'm torn between raidz2 and raid 10 with an extra disk for backups.

Three primary concerns (in order):
- ease of rectifying a drive failure
- size of my array, should I avoid either one?
- performance
What is the use case?
Movies, Videos, larger files (-> Streaming data) -> Z2 is fine
Smaller Files (Photo, Audio, documents; especially when searching a lot) -> Mirror
VMs -> Mirror
Mix of it -> pick your poison

- ease of rectifying a drive failure => identical. Higher theoretical chance of an issue with 2-way Mirror, offset by faster resync time
- size of my array, should I avoid either one? => Z2 will be larger
- performance => Mirror for most stuff (not necessarily streaming reads/writes)
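The "mirror for most stuff" point follows from the usual rule of thumb that a pool's random IOPS scale with the number of vdevs, not the number of disks. A rough sketch, where 75 IOPS/disk is an assumed figure for 7200rpm drives rather than a measured number:

```python
# Rule of thumb: a pool delivers about one disk's worth of random IOPS
# per vdev. PER_DISK_IOPS is a hypothetical figure for illustration.

PER_DISK_IOPS = 75  # assumed, not benchmarked

def pool_random_iops(vdevs, per_disk=PER_DISK_IOPS):
    return vdevs * per_disk

print(pool_random_iops(1))  # one 5-disk RAID-Z2 vdev -> ~75
print(pool_random_iops(2))  # two mirrored pairs      -> ~150
```

Sequential streaming reads/writes scale differently (roughly with data disks), which is why RAID-Z2 remains fine for movies and other large files.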
 

Joel

Active Member
Jan 30, 2015
850
191
43
42
The big thing that gives me chills with a mirrored config is that if a drive goes down and you're resilvering, the drive you're now hammering is the same one that has no redundancy.

@K D and @TangoWhiskey9

For backup I'd agree that a 7 drive Z3 is probably overkill (or maybe he just really values his data!), but I'd say the optimal would be 2x 13-disk Z3 vdevs; that would allow two hot spares and result in 160TB of storage with twice the write speed of a monolithic array. Of course that depends on his performance needs...
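The 160TB figure works out as follows, counting raw capacity before ZFS overhead (so actual usable space will be lower, as K D's 112TB on a similar layout suggests). With 28 bays, 2x 13-disk Z3 leaves exactly two bays for hot spares:

```python
# Raw usable TB for a pool of identical RAID-Z vdevs (before ZFS overhead).

def pool_usable(vdevs, disks_per_vdev, parity, size_tb):
    return vdevs * (disks_per_vdev - parity) * size_tb

print(pool_usable(4, 7, 3, 8))   # current layout: 4x 7-disk Z3 -> 128 TB raw
print(pool_usable(2, 13, 3, 8))  # proposed: 2x 13-disk Z3      -> 160 TB raw
```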

Bigger issue though would be how to migrate the data from current config.
 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
For backup I'd agree that a 7 drive Z3 is probably overkill (or maybe he just really values his data!)
Bigger issue though would be how to migrate the data from current config
You are right. It's probably overkill. It was just a matter of having started off with a 7-drive Z3 vdev and, when I ran out of space, being too lazy to reconfigure, so I added the same set again until I ended up with what I have.

It's just easier to maintain this than go through a whole migration to a new config. I could probably reclaim a drive's worth of space by purging some old data and snapshots and cleaning up duplicates, but I just don't have the time for it.

It just works. So I leave it alone.
 

msg7086

Active Member
May 2, 2017
423
148
43
36
I myself prefer more, smaller drives -- 8x 5TB sounds better than 5x 8TB for an array. With only 5x 8TB it's a harder call; maybe Z2 is the way to go, but again, I'd do Z2 on 8x 5TB instead of 5x 8TB.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
Larger disks have a higher density, so sequential performance of 5x 8TB vs 8x 5TB in a RAID-Z can be quite similar.
IOPS scale with the number of vdevs (since all heads must be positioned for every IO), so with a single RAID-Z vdev this is also quite similar, and roughly equal to one disk in both cases.

Mostly I would prefer fewer disks, due to lower power draw and a lower chance of disk failures.
If you use multiple mirrors (RAID 10) this is different, as in that case more disks (more vdevs) mean more IOPS.
 

msg7086

Active Member
May 2, 2017
423
148
43
36
I was primarily talking about resilvering performance & data safety, not raw performance.

I don't think we're looking for that much performance out of some WD Reds.
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
Without advanced features like Solaris sequential resilvering, resilvering is mainly limited by pool IOPS, so pool performance = resilvering performance. That is the same (1x a single disk) for 8x 5TB as for 5x 8TB, as is data security with a RAID-Z2 in both cases, with a slight advantage for fewer disks.
 

Erlipton

Member
Jul 1, 2016
93
23
8
36
Without advanced features like Solaris sequential resilvering, resilvering is mainly limited by pool IOPS, so pool performance = resilvering performance. That is the same (1x a single disk) for 8x 5TB as for 5x 8TB, as is data security with a RAID-Z2 in both cases, with a slight advantage for fewer disks.
Fantastic insight, thanks!
 

msg7086

Active Member
May 2, 2017
423
148
43
36
Without advanced features like Solaris sequential resilvering, resilvering is mainly limited by pool IOPS, so pool performance = resilvering performance. That is the same (1x a single disk) for 8x 5TB as for 5x 8TB, as is data security with a RAID-Z2 in both cases, with a slight advantage for fewer disks.
I thought if you lose one disk, you end up resilvering less data (a portion of 5TB instead of 8TB), don't you?
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
A data loss would only be the case in a classical JBOD or in pooling without redundancy.

A realtime RAID like RAID-Z2 allows any two disks to fail without data loss, as two disks are there purely for redundancy, to protect your data. A resilvering process is only needed when you replace or repair a disk, to regain full RAID-Z2 data security after a failure.

An Open-ZFS resilver must read all metadata of the whole pool to decide whether data must be repaired, and then read the affected data from redundancy to repair it. This is why it is extremely IOPS sensitive. (The exception is the Oracle Solaris way of doing resilvering in genuine ZFS; see "Sequential Resilvering" for how that works.)
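That metadata walk is the key point: every block pointer is visited whether or not it needs repair, which is what makes a traditional healing resilver IOPS-bound. A toy model of the idea, purely illustrative and not how ZFS is actually implemented:

```python
# Toy model of a traditional (healing) resilver: walk every block in
# the pool's metadata and rebuild only blocks that touch the replaced
# disk. Hypothetical block list for illustration; each "disks" set is
# the disks a block's copies/parity live on.

blocks = [
    {"id": "b1", "disks": {0, 1, 2}},
    {"id": "b2", "disks": {1, 3, 4}},
    {"id": "b3", "disks": {0, 2, 4}},
]

def resilver(failed_disk, blocks):
    visited, repaired = 0, []
    for blk in blocks:                  # metadata walk: one IO per block
        visited += 1                    # ...even for blocks needing no repair
        if failed_disk in blk["disks"]:
            repaired.append(blk["id"])  # rebuilt from remaining redundancy
    return visited, repaired

print(resilver(0, blocks))  # (3, ['b1', 'b3'])
```

In this sketch all 3 blocks are read even though only 2 need repair; on a fragmented pool that walk is millions of small random reads, hence pool performance = resilvering performance.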