ZFS-server for veeam-backups - 48x10TB - which layout

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by Stril, Jun 26, 2019.

  1. Stril

    Stril Member

    Joined:
    Sep 26, 2017
    Messages:
    178
    Likes Received:
    9
    Hi!

    I need to set up a server as Veeam backup storage. It will have 48 10TB drives, but I am not sure about the layout, as I have never used a ZFS system with anything other than striped mirrors.

    What would you recommend?

    It would be good to have a sequential read performance of 4-5 Gbit/s or more.

    3 vdevs, each a 16-disk RAID-Z2
    6 vdevs, each an 8-disk RAID-Z2
    8 vdevs, each a 6-disk RAID-Z1
    12 vdevs, each a 4-disk RAID-Z1

    What do you think?

    The system will run with sync=disabled, 96 GB of memory and an Intel Xeon Scalable 5222, without L2ARC or SLOG.
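
    For illustration, here is a minimal sketch of how one of these candidates (6 vdevs, each an 8-disk RAID-Z2) could be created. The pool name "backup" and the c0tXd0 device names are placeholders, not the actual devices of this box:

    Code:
    # create a pool of 6 x 8-disk RAIDZ2 vdevs (device names are placeholders)
    zpool create backup \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
        raidz2 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 c0t15d0 \
        raidz2 c0t16d0 c0t17d0 c0t18d0 c0t19d0 c0t20d0 c0t21d0 c0t22d0 c0t23d0 \
        raidz2 c0t24d0 c0t25d0 c0t26d0 c0t27d0 c0t28d0 c0t29d0 c0t30d0 c0t31d0 \
        raidz2 c0t32d0 c0t33d0 c0t34d0 c0t35d0 c0t36d0 c0t37d0 c0t38d0 c0t39d0 \
        raidz2 c0t40d0 c0t41d0 c0t42d0 c0t43d0 c0t44d0 c0t45d0 c0t46d0 c0t47d0
    # disable sync writes on the pool, as planned above
    zfs set sync=disabled backup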

    Thank you for your thoughts
    Stril
     
    #1
  2. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,269
    Likes Received:
    751
    I would rule out any Z1 layout.
    The rest is a trade-off between capacity and random (iops) performance, which mainly matters during a resilver or under concurrent access.

    As this is a backup server, I would probably prefer capacity over performance with a 4 x 12-disk Z2. Sequential performance scales with the number of data disks. 5 Gbit/s is around 600 MB/s, which is not a problem: three or four striped disks can achieve that, and you have 40 data disks. You should enable LZ4 compression.
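
    Just as a sketch of the property side, assuming the pool is named "backup" (a placeholder):

    Code:
    # enable LZ4 compression for everything written to the pool
    zfs set compression=lz4 backup
    # check the resulting vdev layout and capacity per vdev
    zpool list -v backup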
     
    #2
  3. Stril

    Stril Member

    Joined:
    Sep 26, 2017
    Messages:
    178
    Likes Received:
    9
    Hi!

    Thank you for your answer!
    Do you think Z1 is just too unreliable?

    I thought sequential performance only scales with the number of vdevs, not with the number of disks...
     
    #3
  4. tjk

    tjk Active Member

    Joined:
    Mar 3, 2013
    Messages:
    236
    Likes Received:
    32
    Windows and ReFS if you are using Veeam; google the benefits.
     
    #4
  5. j_h_o

    j_h_o Active Member

    Joined:
    Apr 21, 2015
    Messages:
    370
    Likes Received:
    74
    I'd be concerned about Storage Spaces performance if this is in a single node...

    What kind of performance do you want for sequential reads and writes? What network connectivity will this box have to the machines it's servicing?
     
    #5
  6. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,269
    Likes Received:
    751
    Yes, too unreliable at such a capacity.
    A resilver after a disk failure can last one or two days. A second disk failure in the same vdev during that window would mean the whole pool is lost, and any single unrecoverable read error then means a lost file.

    Sequential performance scales with the number of data disks. Random performance (iops) scales with the number of vdevs, because every io has to position all disks in a vdev. As a rule of thumb, a disk delivers around 100 physical iops, so a pool of 4 vdevs gives around 400 iops.
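
    If you want to see how the io is actually spread over the vdevs, something like this shows per-vdev iops and bandwidth (pool name "backup" is a placeholder):

    Code:
    # per-vdev read/write operations and bandwidth, refreshed every 5 seconds
    zpool iostat -v backup 5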
     
    #6
  7. BoredSysadmin

    BoredSysadmin Active Member

    Joined:
    Mar 2, 2019
    Messages:
    279
    Likes Received:
    62
    Z1 is very similar to RAID 5, and the Internet is full of opinions from both sides of the fence on this subject:
    Cafaro's Ramblings » Why RAID 5 is NOT dead yet
    However, keep in mind that in ZFS vdevs are STRIPED: lose one vdev and the whole pool is gone. With 4 vdevs in a striped setup, I'd definitely not recommend going with Z1.
     
    #7
  8. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,269
    Likes Received:
    751
    There is a huge difference between raid-5 and raid-Z1.

    While both allow a single disk to fail, a raid-5 is under the control of a raid adapter or a software driver; from the OS view it looks like a single disk. On a write, the data is divided into raid stripes that are written sequentially to the disks of the array. On a crash during a write there is no mechanism that can guarantee raid or file system consistency (some stripes written, others not), and without checksums there is no way even to detect the resulting problems. This is called the write hole problem, see "Write hole" phenomenon in RAID5, RAID6, RAID1, and other arrays.

    This is why you need BBU protection on a raid-5 to reduce the problem a little, and why a single crash can corrupt the whole raid-5 array.

    On ZFS, every raid stripe write to every disk is under the full control of the OS itself, so ZFS knows whether a write to a single disk is valid or not. Paired with Copy-on-Write, which means an atomic write is either completed or discarded, this gives a unique crash resistance. Most important: all data and metadata is checksum protected, so ZFS knows if something went wrong and can then repair the data from redundancy. This is why a read error on a Z1 does not mean the array is lost, as on raid-5, but only a single file.

    If you use ZFS on top of raid-5, ZFS can detect all errors, but raid-5 can neither detect these problems nor repair them.
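
    As a rough illustration of that detect-and-repair behaviour (pool name "backup" is again a placeholder):

    Code:
    # read all data and metadata, verify checksums, repair from redundancy where possible
    zpool scrub backup
    # list checksum errors and any files that could not be repaired
    zpool status -v backup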
     
    #8
    Last edited: Jun 26, 2019
    tjk likes this.
