10TB Drives in ZFS?

Discussion in 'Hard Drives and Solid State Drives' started by ZzBloopzZ, Aug 3, 2019.

  1. ZzBloopzZ

    ZzBloopzZ Member

    Joined:
    Jan 7, 2013
    Messages:
    91
    Likes Received:
    13
    Hello,

Are 10TB drives recommended for a ZFS setup with two drives' worth of redundancy? I ask because I have run into several comments on Slickdeals and Reddit saying that bigger drives like that are not as reliable, and that the rebuild time for just one drive would take forever. I was thinking of picking some up on the next sale, since with shucking they can be had for $160.

    Also, if they are OK to use, what would the best combination be, perhaps 6 total?
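    For a rough sense of the trade-off between layouts, here is a back-of-the-envelope capacity comparison in plain shell arithmetic (a sketch only; it ignores ZFS metadata overhead and TB/TiB rounding):

    ```shell
    # Back-of-the-envelope usable capacity for six 10TB drives,
    # ignoring ZFS metadata overhead and TB/TiB rounding.
    disks=6; size_tb=10

    raidz2=$(( (disks - 2) * size_tb ))    # two disks' worth of parity
    mirrors=$(( disks / 2 * size_tb ))     # three 2-way mirror vdevs

    echo "raidz2 : ${raidz2} TB usable, survives any 2 failures"
    echo "mirrors: ${mirrors} TB usable, survives 1 failure per vdev"
    ```

    So raidz2 gives more usable space, while mirrors trade capacity for faster resilvers and better iops.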

    Thanks!
     
    #1
  2. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,936
    Likes Received:
    857
    They will take longer to rebuild, because they're 10TB. They're also cheaper to run long term: each drive draws power, and using smaller drives means more drives and more power over time.

    I'd use them and get on with life. I might not do a 24 drive raid-z with them, but mirrored will be fine.
     
    #2
  3. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,213
    Likes Received:
    722
    Six disks per Z2 vdev is the old golden number for minimal space waste. This is still valid without compression, but since you will usually enable compression today, you can forget that rule. For a Z2, the only rule is to use no more than, say, 10 disks per vdev. This is mainly due to the quite long resilver times and poor iops of wider vdevs.

    A good idea would be an additional hot (or cold) spare disk.
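    Put together, a 6-disk raidz2 with a hot spare and compression might look like this on OpenZFS under Linux (a sketch only; `tank` and the `sdb`-`sdh` device names are hypothetical, and by-id paths are preferable in practice):

    ```shell
    # Hypothetical device names; prefer stable /dev/disk/by-id/ paths.
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg \
        spare sdh                   # hot spare, attaches on a failure
    zfs set compression=lz4 tank    # cheap CPU-wise, usually a net win
    zpool set autoreplace=on tank   # allow the spare to kick in automatically
    ```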
     
    #3
    Evan and T_Minus like this.
  4. unwind-protect

    Joined:
    Mar 7, 2016
    Messages:
    119
    Likes Received:
    9
    I found ZFS resilver times to be very unpredictable (compared to md raid).

    What you could do:
    - get 10 TB drives
    - build array, put junk data
    - pull a drive, write a lot more, then time the resilver when you put the drive back
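
    The steps above might look like this on OpenZFS (a sketch; `tank` and `sdc` are hypothetical names, and offline/online stands in for physically pulling the drive):

    ```shell
    # Simulate a pulled drive and time the resilver (hypothetical names).
    zpool offline tank sdc                # "pull" the drive
    dd if=/dev/urandom of=/tank/junk bs=1M count=100000  # write ~100GB
                                          # (assumes default /tank mountpoint)
    date; zpool online tank sdc           # reinsert; resilver starts
    zpool status tank                     # shows resilver progress and ETA
    ```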

    If the time is unacceptable, you have the option of partitioning the disks and making two ZFS pools: one pool on 3 TB partitions at the beginning of each disk, and a second pool on the remaining 7 TB partitions.

    You can then place more important data (or less backed up data) on the fast-resync zpool.
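
    A sketch of that partitioning idea, assuming GPT labels via parted and six disks (device names and the 3 TB / 7 TB split are hypothetical; the partition device names may differ by distro):

    ```shell
    # Split each disk into a 3TB and a 7TB partition (repeat per disk).
    parted -s /dev/sdb mklabel gpt \
        mkpart fast 1MiB 3TB \
        mkpart bulk 3TB 100%

    # One pool per partition set across all six disks.
    zpool create fastpool raidz2 sdb1 sdc1 sdd1 sde1 sdf1 sdg1
    zpool create bulkpool raidz2 sdb2 sdc2 sdd2 sde2 sdf2 sdg2
    ```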
     
    #4
  5. ttabbal

    ttabbal Active Member

    Joined:
    Mar 10, 2016
    Messages:
    719
    Likes Received:
    193
    I wouldn't hesitate to run a 6 drive raidz2 or mirrors. I would not run a raidz1 with drives that size.

    Do some tests before going live with it. A full badblocks and SMART scan is my default, followed by testing as part of a pool. I have found drives that started out fine but fell over after being filled and wiped a couple of times. I'd rather find that out before putting my data on them. This also gives you a chance to evaluate performance in raidz2 vs. mirrors, etc.
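
    That burn-in could be run along these lines (a sketch with a hypothetical device name; note that `badblocks -w` is destructive and wipes the drive):

    ```shell
    # WARNING: badblocks -w destroys all data on the drive.
    smartctl -t long /dev/sdb         # start an extended SMART self-test
    badblocks -wsv -b 4096 /dev/sdb   # destructive write/verify pattern scan
    smartctl -a /dev/sdb              # check reallocated/pending sector counts
    ```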

    I prefer to run more smaller drives. Better performance and faster rebuild times when needed. However, power is cheap here. That approach might not make sense in some areas.
     
    #5
  6. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,213
    Likes Received:
    722
    I cannot say that I would ever do this, as it goes against the first rule of IT: keep it simple. Besides that, when a disk fails you have two pools to resilver. More important to me is sufficient redundancy, e.g. Z2 or Z3 instead of Z1.

    Resilver time on ZFS depends on pool fill rate, pool iops, and the resilver technology, as the resilver process must read all metadata to find the data that must be copied to the new disk. This means a resilver on an empty pool lasts minutes. If the pool is quite full and heavily fragmented, with 10TB disks I would expect a day on mirrors and 2-3 days on a raid-z pool (Open-ZFS). Currently Solaris is much faster than Open-ZFS, but sequential resilvering is underway in Open-ZFS.

    Sequential Resilvering
     
    #6
  7. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,805
    Likes Received:
    407
    OK, so @ZzBloopzZ essentially mentioned WD Red (white label) drives, and that's a bit different than, say, He10s, but I still would not be concerned running Z2 with 6-10 disks.

    Certainly with He?? drives it has been good. I have a large number of systems with up to 12 disks as large as 14TB and no issues yet. (By a large number I mean enough to have an opinion, if you know what I mean; of course these are not shucked consumer disks.)
    I get the odd single failure, but nothing unexpected.
     
    #7