FreeNAS: Best RAID for my array?

Discussion in 'FreeBSD and FreeNAS' started by Erlipton, Apr 21, 2018.

  1. Erlipton

    Erlipton New Member

    Joined:
    Jul 1, 2016
    Messages:
    26
    Likes Received:
    3
    My first FreeNAS build, which will be purely dedicated to FreeNAS:

    E5-2603v3
    32GB DDR4-2133 (non-ECC, but I plan to move to ECC when I have the opportunity)
    ASRock X99 i7 Pro Gaming (supports RDIMMs)

    My array is 5x WD 8TB drives (a mix of Reds and white-label drives shucked from Easystores).

    I'm torn between RAIDZ2 and RAID 10 with an extra disk for backups.

    I wasn't planning on using a controller since I'm dedicating the whole machine to it; that said, I do have an LSI SAS 9211-8i flashed to IT mode.

    Looking forward to hearing opinions. I understand this is more of a religious discussion, but I'd like to maximize my drives (even though I knowingly won't fill them).

    Three primary concerns (in order):
    - ease of rectifying a drive failure
    - size of my array, should I avoid either one?
    - performance
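
    For concreteness, here is roughly how the two candidate layouts would be created from the shell (the pool name "tank" and the da* device names are placeholders; the FreeNAS GUI does the same thing with a few clicks):

        # RAIDZ2 across all five 8TB drives (~24TB usable before overhead)
        zpool create tank raidz2 da0 da1 da2 da3 da4

        # Striped mirrors ("RAID 10") on four drives, keeping the fifth aside
        zpool create tank mirror da0 da1 mirror da2 da3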
     
    #1
    Last edited: Apr 21, 2018
  2. K D

    K D Well-Known Member

    Joined:
    Dec 24, 2016
    Messages:
    1,233
    Likes Received:
    253
    I have one FreeNAS box that is dedicated purely to data backup from my workstation. I went with mirrors. Started with 2x 6TB and then added another mirror of 2x 6TB. Mirrors are the cheapest to expand, as they only need 2 drives at a time.
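
    For reference, growing a pool of mirrors is a one-liner (hypothetical pool and device names):

        # Stripe a new 2-disk mirror vdev into the existing pool
        zpool add tank mirror da2 da3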
     
    #2
  3. darkconz

    darkconz Member

    Joined:
    Jun 6, 2013
    Messages:
    176
    Likes Received:
    13
    One thing to consider: with RAIDZ2 you can lose any combination of two drives in the array, but in a RAID 10 scenario, if you lose the wrong combo, you could lose your entire array.

    Also, in your situation, if you utilize all 5 drives, Z2 gives you more space too. You already have the hardware, why not use it :)

    But in either case, you should also keep a backup of the dataset.
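
    To put rough numbers on that (a back-of-the-envelope sketch, assuming a second failure is equally likely to hit any surviving disk):

        RAID 10 (2x 2-way mirrors): after one failure, 1 of the 3 survivors is the dead disk's mirror partner
            -> chance a second failure loses the pool ~ 1/3
        RAIDZ2 (5 disks): any two failures are survivable
            -> chance a second failure loses the pool = 0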


    Sent from my iPhone using Tapatalk
     
    #3
  4. K D

    K D Well-Known Member

    Joined:
    Dec 24, 2016
    Messages:
    1,233
    Likes Received:
    253
    Hit send by mistake before completing. If you can get another drive, then go with a 6-drive RAIDZ2. More peace of mind.

    The worry of a second disk failing in the same vdev while resilvering is probably valid, but I've never encountered it across multiple ZFS resilvers and RAID rebuilds in my home systems.

    My big NAS that backs up everything else is currently 4 vdevs of 7x 8TB in RAIDZ3.

    There is no one-size-fits-all.
     
    #4
  5. TangoWhiskey9

    TangoWhiskey9 Active Member

    Joined:
    Jun 28, 2013
    Messages:
    390
    Likes Received:
    59
    For performance: RAID 10.

    Z2 adds way more overhead.

    @K D Z3 on an array that size is too much! Maybe if you had all 28 drives in the same array; otherwise you're just wasting disks. 24 data drives + 3 parity + 1 hot spare and you're set. That would let you rebuild immediately from the hot spare, with 4 drives going to redundancy instead of 12.
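
    Rough math on that comparison, assuming 8TB drives and ignoring ZFS overhead:

        4x 7-disk RAIDZ3:             4 vdevs x (7 - 3) data disks x 8TB = 128TB usable, 12 parity disks
        27-disk RAIDZ3 + 1 hot spare: (27 - 3) data disks x 8TB = 192TB usable, 3 parity + 1 spare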
     
    #5
  6. Erlipton

    Erlipton New Member

    Joined:
    Jul 1, 2016
    Messages:
    26
    Likes Received:
    3
    @K D @TangoWhiskey9 @darkconz any thoughts on whether I need the LSI controller? I initially got it to virtualize, and while I successfully got the drives served to the VM, I opted to build this separate machine because I'm not only new to FreeNAS, I'm also new to Hyper-V. Didn't want to battle two fronts at the same time, worst case scenario.

    Do I need the controller for a native install?
     
    #6
  7. K D

    K D Well-Known Member

    Joined:
    Dec 24, 2016
    Messages:
    1,233
    Likes Received:
    253
    Not sure what you mean by too much -- whether it's a good thing or a bad thing :)

    This is a pure backup server; about once a month I back up all data to it from other systems. It's a dual E5-2620 v2 with 128GB RAM in an 826 chassis with an 846 JBOD.

    I started with a 7-drive RAIDZ3 vdev and kept adding more vdevs as I filled it up. Currently using about 60TB of 112TB.
     
    #7
  8. K D

    K D Well-Known Member

    Joined:
    Dec 24, 2016
    Messages:
    1,233
    Likes Received:
    253
    The board has 10 SATA ports. You can boot FreeNAS off a USB key (or use 2 USB keys to mirror the OS) and use all the SATA ports for data drives. You don't really need the HBA.
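
    If you go that route, you can sanity-check that all the drives on the onboard SATA ports show up from the FreeNAS shell (standard FreeBSD commands):

        camcontrol devlist    # list all attached ATA/SCSI devices
        geom disk list        # show disk sizes and serial numbers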
     
    #8
  9. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    2,229
    Likes Received:
    263
    What is the use case?
    Movies, videos, larger files (-> streaming data) -> Z2 is fine
    Smaller files (photos, audio, documents; especially when searching a lot) -> mirrors
    VMs -> mirrors
    Mix of them -> pick your poison

    - ease of rectifying a drive failure => identical; a higher theoretical chance of an issue with a 2-way mirror, offset by its faster resilver time
    - size of my array, should I avoid either one? => Z2 will be larger
    - performance => mirrors for most workloads (not necessarily streaming reads/writes)
     
    #9
  10. Joel

    Joel Active Member

    Joined:
    Jan 30, 2015
    Messages:
    680
    Likes Received:
    130
    The big thing that gives me chills with a mirrored config is that if a drive goes down and you're resilvering, the drive you're now hammering is the same one that has no redundancy.

    @K D and @TangoWhiskey9

    For backup I'd agree that a 7-drive Z3 is probably overkill (or maybe he just really values his data!), but I'd say the optimum would be 2x 13-disk Z3 vdevs; that would allow two hot spares and result in 160TB of storage with twice the write speed of a monolithic array. Of course, that depends on his performance needs...
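
    (The arithmetic behind that 160TB figure, assuming 8TB drives and ignoring filesystem overhead: 2 vdevs x (13 - 3) data disks x 8TB = 160TB, using 26 of 28 bays and leaving 2 for hot spares.)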

    The bigger issue, though, would be how to migrate the data from the current config.
     
    #10
    Last edited: Apr 24, 2018
  11. K D

    K D Well-Known Member

    Joined:
    Dec 24, 2016
    Messages:
    1,233
    Likes Received:
    253
    You are right. It's probably overkill. It was just a matter of having started off with a 7-drive Z3 vdev and, when I ran out of space, being too lazy to reconfigure, so I kept adding the same set again until I ended up with what I have.

    It's just easier to maintain this than to go through a whole migration to a new config. I could probably reclaim a drive's worth of space by purging some old data and snapshots and cleaning up duplicates, but I just don't have the time for it.

    It just works. So I leave it alone.
     
    #11
  12. msg7086

    msg7086 Member

    Joined:
    May 2, 2017
    Messages:
    110
    Likes Received:
    11
    I myself prefer more, smaller drives -- 8x 5TB sounds better than 5x 8TB for an array. Five drives is an awkward number to lay out; Z2 is probably the way to go there, but again, I'd rather do Z2 on 8x 5TB than on 5x 8TB.
     
    #12
  13. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,761
    Likes Received:
    595
    Larger disks have a higher density, so the sequential performance of 5x8 vs 8x5 in a RAID-Z can be quite similar.
    IOPS scale with the number of vdevs (as all heads must be positioned for every IO), so with a single RAID-Z vdev
    the two are quite similar, and roughly equal to one disk in both cases.

    Mostly I would prefer fewer disks, due to lower power draw and a lower chance of disk failures.
    If you use multiple RAID-10 mirrors this is different, as in that case more disks (more vdevs) mean more IOPS.
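
    As a rough illustration (assuming ~100 random IOPS per 7200rpm disk, a common ballpark, and write IOPS scaling with vdev count):

        5x 8TB RAID-Z2 (1 vdev):              ~100 IOPS
        8x 5TB RAID-Z2 (1 vdev):              ~100 IOPS
        8x 5TB as 4x 2-way mirrors (4 vdevs): ~4 x 100 = ~400 IOPS (reads can do better still, since either side of a mirror can serve them)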
     
    #13
  14. msg7086

    msg7086 Member

    Joined:
    May 2, 2017
    Messages:
    110
    Likes Received:
    11
    I was primarily talking about resilvering performance and data safety, not throughput.

    I don't think we're expecting that much performance from some WD Reds.
     
    #14
  15. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,761
    Likes Received:
    595
    Without advanced features like Solaris sequential resilvering, resilvering is mainly limited by pool IOPS, so pool performance = resilvering performance. This is the same (1x a single disk) for 8x5 as for 5x8, as is data security with a RAID-Z2 in both cases, with a slight advantage for fewer disks.
     
    #15
  16. Erlipton

    Erlipton New Member

    Joined:
    Jul 1, 2016
    Messages:
    26
    Likes Received:
    3
    Fantastic insight, thanks!
     
    #16
  17. msg7086

    msg7086 Member

    Joined:
    May 2, 2017
    Messages:
    110
    Likes Received:
    11
    I thought that if you lose 1 disk, you end up resilvering less data (a portion of 5TB instead of 8TB), no?
     
    #17
  18. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,761
    Likes Received:
    595
    Data loss would only be the case in a classical JBOD or pooling without redundancy.

    A realtime RAID like RAID-Z2 allows any two disks to fail without data loss, as two disks are there purely for redundancy, to protect your data. A resilvering process is needed when you replace or repair a disk, to regain full RAID-Z2 data security after a failure.

    An Open-ZFS resilver must read all the metadata of the whole pool to decide whether data must be repaired, and then read the affected data from redundancy to repair it. This is why it is extremely IOPS sensitive (aside from the Oracle Solaris way of doing resilvering in genuine ZFS; see 'Sequential Resilvering' for how that works).
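
    For completeness, the replace-and-resilver sequence on the FreeNAS/FreeBSD side looks roughly like this (hypothetical pool and device names; the GUI wraps the same steps):

        zpool offline tank da2        # take the failing disk out of service
        zpool replace tank da2 da5    # resilver onto the replacement disk
        zpool status tank             # watch resilver progress and ETA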
     
    #18