10TB Drives in ZFS?


Jan 7, 2013

Are 10TB drives recommended for a ZFS setup with two drives of redundancy? I ask because I have run into several comments on Slickdeals and reddit saying that bigger drives like that are not as reliable, and that the rebuild time for just one drive would take forever. I was thinking of picking up some on the next sale, since with shucking they can be had for $160.

Also, if they are OK to use, what would the best combination be, perhaps 6 total?



Well-Known Member
Mar 30, 2012
They will take longer to rebuild, because they're 10TB. They're also cheaper to run long term: every drive draws power, and reaching the same capacity with smaller drives means more drives and more power over time.

I'd use them and get on with life. I might not do a 24 drive raid-z with them, but mirrored will be fine.


Well-Known Member
Dec 31, 2010
6 disks per Z2 vdev is the old golden number for minimal space waste. This is still valid without compression, but as you mostly enable compression today, forget this rule. For a Z2 the only rule is to use no more than, say, 10 disks per vdev. This is mainly due to the quite long resilver time and poor iops of such pools.
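For concreteness, the space trade-off behind the sizing question can be sketched with quick shell arithmetic. The drive count and size match the OP's 6x 10TB scenario; ZFS metadata overhead, slop space, TB-vs-TiB rounding, and compression are all ignored here:

```shell
#!/bin/sh
# Rough usable capacity for 6x 10TB drives under the two common layouts.

N=6          # number of drives
SIZE_TB=10   # per-drive capacity

# raidz2 keeps N-2 disks' worth of data and survives any 2 failures
echo "raidz2:  $(( (N - 2) * SIZE_TB )) TB usable"

# three 2-way mirrors keep half the raw space, survive 1 failure per vdev
echo "mirrors: $(( N / 2 * SIZE_TB )) TB usable"
```

With these numbers that works out to 40 TB usable for a 6-disk raidz2 versus 30 TB for three mirror pairs, which is why the raidz2 layout is attractive at this drive count.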

A good idea would be an additional hot (or cold) spare disk.
Mar 7, 2016
I found ZFS resilver times to be very unpredictable (compared to md raid).

What you could do:
- get 10 TB drives
- build array, put junk data
- pull a drive, write a lot, then time the resilver when you put the drive back
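The steps above can be sketched as shell commands. The pool name `tank` and device `sdf` are hypothetical; this intentionally degrades the pool, so only run it against a scratch pool holding junk data:

```shell
# DESTRUCTIVE test drill for a scratch pool only -- substitute your own names.

zpool offline tank sdf                                # simulate pulling the drive
dd if=/dev/urandom of=/tank/churn bs=1M count=100000  # write ~100 GB while degraded
zpool online tank sdf                                 # re-add it; resilver begins
zpool status tank                                     # shows resilver progress and elapsed time
```

Note that offline/online only resilvers what changed while the disk was out; a `zpool replace` against a wiped disk is closer to a real failure and will take correspondingly longer.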

If the time is unacceptable, you have the option of partitioning the disks first and making 2 ZFS pools: one pool on 3 TB partitions at the beginning of each disk, a second pool on the remaining 7 TB partitions.

You can then place more important data (or less backed up data) on the fast-resync zpool.
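One way that split could look, sketched with hypothetical device names (`sda`..`sdf`) and untested partition boundaries; partitioning live disks is destructive:

```shell
# Split each 10 TB disk into a 3 TB and a 7 TB partition (GPT),
# then build one raidz2 pool per partition set.
for d in sda sdb sdc sdd sde sdf; do
    parted -s "/dev/$d" mklabel gpt \
        mkpart fast 1MiB 3TB \
        mkpart bulk 3TB 100%
done
zpool create fastpool raidz2 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
zpool create bulkpool raidz2 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2
```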


Active Member
Mar 10, 2016
I wouldn't hesitate to run a 6 drive raidz2 or mirrors. I would not run a raidz1 with drives that size.

Do some tests before going live with it. A full badblocks run and SMART scan is my default, then testing as part of a pool. I have found drives that started out fine but fell over after being filled and wiped a couple of times; I'd rather find that out before putting my data on them. This also gives you a chance to evaluate performance in raidz2 vs mirrors, etc.
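A minimal version of that burn-in, with a hypothetical device name `/dev/sdX`; the badblocks write test erases the entire drive:

```shell
smartctl -a /dev/sdX              # record baseline SMART attributes
badblocks -b 4096 -wsv /dev/sdX   # destructive write/verify pass, takes days on 10TB
                                  # (-b 4096 avoids badblocks' 32-bit block-count limit
                                  #  on large drives)
smartctl -t long /dev/sdX         # kick off an extended SMART self-test
smartctl -a /dev/sdX              # recheck: reallocated/pending sectors should still be 0
```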

I prefer to run more smaller drives. Better performance and faster rebuild times when needed. However, power is cheap here. That approach might not make sense in some areas.


Well-Known Member
Dec 31, 2010
I cannot say that I would ever do this, as it is against the first rule of IT: keep it simple. Besides that, when a disk fails you have two pools to resilver. More important for me is sufficient redundancy, e.g. Z2 or Z3 instead of Z1.

Resilver time on ZFS depends on pool fill rate, pool iops, and resilver technology, as the resilver process must read all metadata to find the data that must be copied to the new disk. This means a resilver on an empty pool lasts minutes. If the pool is quite full and heavily fragmented, with 10TB disks I would expect a day on mirrors and 2-3 days on a raid-z pool (Open-ZFS). Currently Solaris is much faster than Open-ZFS, but sequential resilvering is underway in Open-ZFS.
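A back-of-envelope version of that estimate; all numbers here are assumptions, and the effective rate on a full, fragmented pool can be far lower than raw disk speed:

```shell
#!/bin/sh
# Resilver estimate: time = data to rebuild / effective rebuild rate.

USED_GB=8000   # assumed data on the failed 10 TB disk
RATE_MBS=100   # assumed effective rebuild rate in MB/s

SECONDS_NEEDED=$(( USED_GB * 1000 / RATE_MBS ))
echo "roughly $(( SECONDS_NEEDED / 3600 )) hours"   # ~22 hours with these numbers
```

Halving the effective rate to 50 MB/s already puts the rebuild close to two days, which matches the day-to-multi-day range quoted above.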

Sequential Resilvering


Well-Known Member
Jan 6, 2016
OK, so @ZzBloopzZ mentioned essentially WD Red (white label) drives, and that's a bit different from, say, He10s, but I still would not be concerned running Z2 with 6-10 disks.

Certainly with the He?? drives it's been good. I have a large number of systems with up to 12 disks as large as 14TB and no issues yet. (By a large number I mean enough to have an opinion, if you know what I mean; of course these are not shucked consumer disks.)
We get the odd single failure but nothing unexpected.