ZFS array expansion


josh

Active Member
Oct 21, 2013
Hey guys,

Expecting the 14TB Elements to be sold at a great price over Black Friday, so I'm thinking of using them to expand my existing array, which is about 60% full right now.

I'm trying to expand a 6x14TB Z2 array to 8x14TB Z2 (or even 9x14TB Z3), and to do it with the fewest possible intermediary drives.

Is it a good idea to sacrifice one or two of the "parity" drives in the Z2 (allowing it to run in a degraded state) and use them to copy the data off the array, or is this extra risky?
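
For reference, I'm sizing the job with something like this (pool name "tank" is just a placeholder for my pool):

  zpool list tank              # raw size/alloc/free, parity included
  zfs list -o space -r tank    # usable space, plus what snapshots and children hold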

Thanks!
 

josh

Active Member
Oct 21, 2013
Move all the data to another place, delete the array, and create the new array with the desired number of HDDs.

(ZFS raidz expansion is not possible yet: [WIP] raidz expansion, alpha preview 1 by ahrens · Pull Request #8853 · openzfs/zfs)
I am aware that RAID Z expansion has been in the works for many years now. Is it worth waiting for considering my existing array is only 60% full?

Also, my plan was to migrate the data and create a new array, but I'll need a good number of intermediary drives that will be unused after the process. Can I take out the 2 "extra" drives from the Z2 and use them for this backup? I presume no data would be lost, as I won't be writing to the array in the degraded state, only reading from it.
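
The copy itself would be plain snapshot + send/receive, roughly like this (pool names "tank" and "backup" are placeholders):

  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs receive -F backup/tank
  # after the new 8- or 9-wide pool exists, the same send/receive goes back the other way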
 

gea

Well-Known Member
Dec 31, 2010
DE
I am aware that RAID Z expansion has been in the works for many years now. Is it worth waiting for considering my existing array is only 60% full?

Also, my plan was to migrate the data and create a new array, but I'll need a good number of intermediary drives that will be unused after the process. Can I take out the 2 "extra" drives from the Z2 and use them for this backup? I presume no data would be lost, as I won't be writing to the array in the degraded state, only reading from it.
You can just pull two of the disks and create a new backup pool from them as a RAID-0.
Problem: if any disk fails or a read error occurs, then you have a full pool loss or lost data.

If you remove only one disk, the risk may be acceptable.
Also, the intermediate backup pool should have redundancy (Z1) due to the same problem.
 

i386

Well-Known Member
Mar 18, 2016
Germany
Is it worth waiting for considering my existing array is only 60% full?
Nope, it was announced in 2017 and targeted for FreeBSD 12. It's still alpha, not a top priority for Delphix (the company that employs Matt Ahrens to work on ZFS), and Matt seems to be the only developer working on that feature...
 

Mithril

Active Member
Sep 13, 2019
It's all about how much risk you are willing to take. IIRC a failed ZFS pool (losing more drives than you have redundancy for) is the same as a failed RAID 0: the data is effectively gone.

If this is the ONLY place you have the data, I *personally* wouldn't degrade the volume to do a transfer. 60% full isn't anywhere near a critical problem, especially if you haven't done any housecleaning yet: you may have old (unneeded) snapshots taking up space, or duplicate files you can *manually* clean up (ZFS dedup is powerful, but it's easy to screw yourself if you go in blind). IF you haven't turned on compression, DO SO (IIRC you will need to rewrite existing files to compress them, but that could be done in batches).
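
A rough sketch of the housecleaning side (pool/dataset names are placeholders; lz4 is the usual safe choice):

  zfs get compression tank                    # see what is currently set
  zfs set compression=lz4 tank                # only newly written data gets compressed
  zfs list -t snapshot -o name,used -s used   # find snapshots that are holding space
  zfs destroy tank/some-dataset@old-snap      # drop the ones you no longer need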

If you DO have backups, or this is your backup for everything on the volume, my *personal* path of balanced risk would be:

1. One by one, pull AND VERIFY drives from each VDEV, leaving one disk of redundancy, IF you need those drives for step 2.

2 A. Assuming the volume is fairly under-filled: build a new VDEV of at least raidz1 with compression on, if possible with drives you won't be using in the final volume. Verify everything transferred, then create the new final volume (finally using the drives from the original volume if desired) and transfer again. If you are short on disks during the transfer (assuming raidz2 or raidz3), you can use a sparse file as a member of the VDEV and then *DELETE IT* before attempting to copy data (see the sketch after these steps). Once the transfer is complete, add any disks required to restore redundancy.

2 B. Figure out your final pool (how many drives per VDEV, how many VDEVs). Using spare drives of smaller sizes, manually partition them so that each one keeps a small bit of unused area (besides any cache; 1GB or less should be enough, and it could make life easier for you later). Use one or more of the existing pool's drives if needed, while maintaining at least one disk of redundancy per VDEV. If you are short on disks during the transfer (assuming raidz2 or raidz3), you can use a sparse file as a member of the VDEV and then *DELETE IT* before attempting to copy data. Once the transfer is complete, add any disks required: first priority is restoring full redundancy if needed, then upsizing the smaller disks with online replacements.
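
For the sparse-file trick in 2A/2B, a minimal sketch (pool, disk, and file names are placeholders; the file only needs to look at least as big as the real disks):

  truncate -s 14T /var/tmp/fake14t.img           # sparse file, uses no real space
  zpool create -f newpool raidz2 d1 d2 d3 d4 d5 /var/tmp/fake14t.img
  zpool offline newpool /var/tmp/fake14t.img     # degrade the vdev on purpose
  rm /var/tmp/fake14t.img                        # nothing must ever be written to it
  # ...copy the data while newpool runs one disk short...
  zpool replace newpool /var/tmp/fake14t.img d6  # a real disk later restores full raidz2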

You could mix 2A and 2B depending on the "drive math". I've actually done 2B in the past; in my case the "final design" included more large drives than I even had at the time of the swap, so I had to wait to upsize one of the VDEVs until later. Obviously, use drives you trust during the process.
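
And for the later upsizing, the usual pattern (names again placeholders) is along the lines of:

  zpool set autoexpand=on newpool              # vdev grows once every member is the larger size
  zpool replace newpool small_disk new_14tb_disk
  zpool status newpool                         # let each resilver finish before the next swap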