adding second vdev - Am I doing it right?


poofer

This is on a production NAS, so I want to make double sure I am doing it right.
14-bay JBOD.
Currently a single raidz1-0 vdev with 7 disks and 2 spares.
I want to take the 2 spares and 5 new disks and make a second vdev to max out the box's capacity.
So my questions are these...
  1. Should I stick with a second raidz1, or am I safe to do a raidz2? Will a raidz2 play nice with a raidz1 in the same pool?
  2. Do I want both vdevs to have the same number of disks, or can the vdevs differ? (a 6-disk vdev with an extra spare)
  3. What setup would you do if you were in my position?
My initial inclination is to add a second raidz2 with 6 disks and one spare, but I'm unsure of myself on that.
 

rubylaser

Any of these layouts are possible, and you can even have vdevs of varying RAID levels and disk counts (not ideal, though). Personally, I would set up a new pool with the seven disks and make your first vdev a raidz2. I would zfs send/recv from the old pool to the new pool. Once that is done, I would destroy the old pool and use those disks to create a second raidz2 vdev on the new pool. This is a very safe configuration and should still provide a decent amount of storage space. The only issue with this setup is that all of your data will end up on the first vdev, since ZFS won't redistribute existing data across the vdevs unless you completely copy it elsewhere, remove it, and then copy it back.
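For reference, a rough sketch of the new-pool and replication half of that plan, assuming the old pool is named storage and the new one storage1, with Solaris-style placeholder disk names (adjust pool names, disks, and flags for your system):

Code:
# create the new pool with a 7-disk raidz2 vdev (disk names are placeholders)
zpool create storage1 raidz2 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0
# snapshot everything on the old pool recursively and replicate it over
zfs snapshot -r storage@migrate
zfs send -R storage@migrate | zfs receive -Fdu storage1

The -d flag keeps the dataset hierarchy from the stream under the new pool name, and -u skips mounting the received filesystems until you are ready.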

Also, what sizes are the disks you are using? (all the same size across each vdev)

I'll be interested to hear what others would do.
 

gea

1.
You can mix any vdev types within a ZFS pool, but overall pool data security and performance are limited by the weakest vdev.

2.
If your vdevs differ in size, data cannot be spread equally over all disks, which partly limits performance to that of a single vdev. As the smaller vdev has a higher fill rate, you will also see extra degradation due to higher fragmentation.

3.
As this is a production machine, I would suggest NOT using Z1. If a disk fails, you may need up to a day for a resilver. A second disk failure during that window and the whole pool is lost. Even a simple sector read error and you lose a file.

What I would do (sketched below):
- create a new pool from a new raidz2 vdev with 6 disks
- zfs send your filesystems from the current pool to the new pool
- destroy the current pool and add 6 disks as a second vdev to the new pool
- export/import the pool to get the former pool name back
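A minimal sketch of the destroy/add/rename steps, assuming the new pool is storage1 and the old one storage (disk names are placeholders):

Code:
# once the data is safely on the new pool, reuse the old disks as a second raidz2 vdev
zpool destroy storage
zpool add storage1 raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# export, then re-import under the original pool name
zpool export storage1
zpool import storage1 storage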

As an option, you can use 7 disks per vdev. I would prefer 6 disks and use disk no. 13 as a hot spare for both vdevs.
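Adding the spare is a one-liner; in ZFS a hot spare belongs to the pool, so a single spare covers both vdevs (disk name again a placeholder):

Code:
zpool add storage spare c0t13d0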

Your pool will then be unbalanced (all data on the first vdev, the second one empty). You can manually rebalance with copy/zfs send actions, but only if there is enough space left on vdev1.

About use case
You end up with a raidz2 pool of 2 vdevs where IOPS is roughly that of two disks (around 300 IOPS). This is fine for a general-use filer or backup system, but quite low if this is storage for databases or VMs. In such a case you can either think of two pools, a new one from SSDs only and your current one for the rest, or you can use Raid-10. With 7 x mirrors you can go up to around 1000 IOPS (a single enterprise SSD can give 20k-80k IOPS).
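If you went the Raid-10 route, pool creation would look roughly like this (pool and disk names are hypothetical); each two-way mirror vdev adds about one disk's worth of write IOPS:

Code:
# 7 striped two-way mirrors ("Raid-10" style) across 14 bays
zpool create fastpool \
  mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0 \
  mirror c0t6d0 c0t7d0 mirror c0t8d0 c0t9d0 mirror c0t10d0 c0t11d0 \
  mirror c0t12d0 c0t13d0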
 

poofer

It seems that both of you are on the same page: create a second Z2, transfer everything over, then kill the old pool and create a Z2 from it as well. I like that idea and I'm going to go with it.

Also, what sizes are the disks you are using? (all the same size across each vdev)
I am using 3 TB SAS drives.

As an option, you can use 7 disks per vdev. I would prefer 6 disks and use disk no. 13 as a hot spare for both vdevs.
I like that idea as well. Having a hot spare will make it so I can sleep at night.

You can manually rebalance with copy/zfs send actions, but only if there is enough space left on vdev1.
I have 3.34 TB free out of 17 TB. Would that be enough space to do a manual rebalance?

The data it holds is largely AutoCAD files. I'm not too worried about performance on old files, but for anything from the last three months I would like the best performance I can get.
I also have two VMs running out of an NFS share. That performance needs to be as high as possible as well.

About use case
I already have the spinning disks on hand. In about a year and a half we will be doing a major upgrade on the system and can move to SSDs at that point. I like where you are going with that, though.

Thanks to both of you for your help on this!
 

gea

To rebalance a pool manually, you usually rename a filesystem, replicate it back to the original name, and then destroy the renamed copy. In your case vdev1 is initially about 80% full and the new vdev 0% full, so most of the replicated data will land on vdev2, leaving more free space on vdev1.

If you want your new data striped over both vdevs to a larger degree, you may need several copy/replication passes. Each run will improve the situation.

Over time modified data is striped automatically over all disks.
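A minimal sketch of one rebalance pass, assuming a filesystem named storage/data (all names are placeholders):

Code:
# rename the filesystem, replicate it back under the old name, then drop the copy
zfs rename storage/data storage/data.old
zfs snapshot -r storage/data.old@rebalance
zfs send -R storage/data.old@rebalance | zfs receive storage/data
zfs destroy -r storage/data.old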
 

poofer

OK. So if I don't do anything, performance will improve over time? I just don't want to monkey with the data more than I have to.
 

gea

ZFS is copy-on-write.
If you modify data, a whole data block (e.g. 128K) is written anew and is therefore distributed over all disks as evenly as possible.
 

poofer

OK. So now I have a second 6-disk raidz2 vdev (see attached screenshot).

I don't believe I have enough space to transfer from storage to storage1 because of the 7-disk raidz1 vs. 6-disk raidz2 difference. Although when I look at the files in a file explorer window, it says I am only using 9.66 TB (which is cutting it close). I assume the space difference is my snapshots, which go back about a year.
So do I...
  1. Try to remove as many snapshots and as much data as I can to free up space? (see the listing sketch below)
  2. Recreate the pool with 7 disks and forget about the spare?
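In case it's useful to others, checking which snapshots hold the most space would look roughly like this (the destroy line uses a made-up snapshot name):

Code:
# list snapshots sorted by the unique space they consume
zfs list -t snapshot -o name,used -s used
# reclaim space by destroying an old snapshot (name is an example)
zfs destroy storage/data@auto-2015-09-01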
Thanks again for your help on this!
 

Attachments: [screenshot of the new pool layout, not shown]

gea

Well-Known Member
Dec 31, 2010
3,141
1,184
113
DE
If you cannot move away or delete enough files, I would try to achieve a 2 x 7-disk Z2 setup, with a cold spare that may temporarily hold some files.

You can delete the 10% reservation on the new Z2 pool to allow the move.
You can also enable lz4 compression on the new pool.
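A sketch of those two tweaks, assuming the 10% reservation sits on the new pool's root filesystem (as napp-it sets it up; adjust if yours differs):

Code:
# drop the reservation so the full capacity is available for the move
zfs set reservation=none storage1
# enable lz4 compression for all data written from now on
zfs set compression=lz4 storage1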
 

poofer

I have 2 VMs running off of an NFS share... Do I shut those down before the transfer, or can I leave them running?
 

gea

If you want to move VMs, you must always shut them down first.
(The exception is an ESXi hot move/storage migration.)