zfs pool with spares in use as vdev members


New Member
Oct 14, 2021
I'm still quite new to ZFS and had to take over a config from a previous colleague.

I have the following layout:
zpool status
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 26h3m with 0 errors on Mon Oct 11 02:27:57 2021

        tank ONLINE 0 0 0
          raidz3-0 ONLINE 0 0 0
            wwn-0x5000cca098c6e83f ONLINE 0 0 0
            wwn-0x5000cca098c6ddb1 ONLINE 0 0 0
            wwn-0x5000cca098cadb96 ONLINE 0 0 0
            wwn-0x5000cca098c9bdeb ONLINE 0 0 0
            wwn-0x5000cca098ca921b ONLINE 0 0 0
            wwn-0x5000cca098cadb0f ONLINE 0 0 0
            wwn-0x5000cca098cab2aa ONLINE 0 0 0
            wwn-0x5000cca098ca7e77 ONLINE 0 0 0
            wwn-0x5000cca098f0d4a7 ONLINE 0 0 0
            wwn-0x5000cca098ca9116 ONLINE 0 0 0
            wwn-0x5000cca098cad20c ONLINE 0 0 0
            wwn-0x5000cca098eeba47 ONLINE 0 0 0
            wwn-0x5000cca098cab248 ONLINE 0 0 0
            wwn-0x5000cca098c2c3d9 ONLINE 0 0 0
          raidz3-1 ONLINE 0 0 0
            wwn-0x5000cca098cadbc0 ONLINE 0 0 0
            wwn-0x5000cca098c6ffdb ONLINE 0 0 0
            wwn-0x5000cca098eed5a0 ONLINE 0 0 0
            wwn-0x5000cca098ef37eb ONLINE 0 0 0
            wwn-0x5000cca098f0b6f7 ONLINE 0 0 0
            wwn-0x5000cca098eec00e ONLINE 0 0 0
            wwn-0x5000cca098ef4884 ONLINE 0 0 0
            wwn-0x5000cca098eeb007 ONLINE 0 0 0
            wwn-0x5000cca098eeba3b ONLINE 0 0 0
            wwn-0x5000cca098eec8fd ONLINE 0 0 0
            wwn-0x5000cca098eec265 ONLINE 0 0 0
            wwn-0x5000cca098eeecd4 ONLINE 0 0 0
            wwn-0x5000cca098eec371 ONLINE 0 0 0
            wwn-0x5000cca098eea614 ONLINE 0 0 0
          raidz3-2 ONLINE 0 0 0
            wwn-0x5000cca098c1f945 ONLINE 0 0 0
            wwn-0x5000cca098c20eb4 ONLINE 0 0 0
            wwn-0x5000cca098c20909 ONLINE 0 0 0
            wwn-0x5000cca0bde6b902 ONLINE 0 0 0
            wwn-0x5000cca098c20d3d ONLINE 0 0 0
            wwn-0x5000cca098c1f2ef ONLINE 0 0 0
            wwn-0x5000cca098c208f3 ONLINE 0 0 0
            wwn-0x5000cca098c23e41 ONLINE 0 0 0
            wwn-0x5000cca098c2063c ONLINE 0 0 0
            wwn-0x5000cca098c20784 ONLINE 0 0 0
            wwn-0x5000cca098c1f6b5 ONLINE 0 0 0
            wwn-0x5000cca098c20087 ONLINE 0 0 0
            wwn-0x5000cca098c20925 ONLINE 0 0 0
            wwn-0x5000cca098c257bd ONLINE 0 0 0
          sdaq ONLINE 0 0 0
        logs
          ata-INTEL_SSDSC2BA100G3_BTTV24540538100FGN-part1 ONLINE 0 0 0
        cache
          sdb2 ONLINE 0 0 0
        spares
          sdaj INUSE currently in use
          sdak INUSE currently in use
          wwn-0x5000cca098ca91b3 AVAIL
          wwn-0x5000cca098ca9275 AVAIL

I want to get the two spares (sdaj and sdak) out of the spares list, as they are already in use by the raidz3-2 vdev (as wwn-0x5000cca098cadbc0 and wwn-0x5000cca098c6ffdb), but I can't figure out how to do this.
Furthermore, there is a single-device vdev, sdaq, and I have no idea what it is doing there.
I've tried googling, but couldn't find anything matching.

Can anyone enlighten me?
Thanks in advance.


Well-Known Member
Dec 31, 2010
If a spare is AVAIL, you can remove it.
If a spare is INUSE, it is an active part of a vdev.
To remove it, you must first replace the faulted disk with a new one; the spare then goes back to AVAIL.
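That lifecycle can be sketched with the usual commands (the disk names below are placeholders, not taken from your pool):

```shell
# Replace the faulted disk with a new one; once the resilver
# finishes, the hot spare returns to AVAIL automatically:
zpool replace tank <faulted-disk> <new-disk>

# A spare that is AVAIL can then be dropped from the spares list:
zpool remove tank <spare-disk>
```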

An Intel 4510 is a bad choice for a Slog. For a filer or backup storage you do not need sync write; for databases or VM storage you want something better.

Use wwn names for all disks.

Avoid having that many hotspares. After a transient problem like a flaky power supply, all of them can end up in use at once, leaving a pool state that is hard for beginners to understand.

An L2ARC only makes sense in low-RAM situations. Is that the case here?

More, smaller Z2 vdevs would be faster regarding IOPS and resilver time.

Ultra critical:
sdaq is a basic (single-disk) vdev striped in RAID-0 with the Z3 vdevs. If it fails, the whole pool is lost!
Add at least a second disk to mirror it immediately, then back up and recreate the pool.

Please post the pool state in "code" tags to make it more readable.


New Member
Oct 14, 2021
I think I wasn't clear about what I want to achieve: the two spares sdaj and sdak were already automatically taken into the raidz3-2 vdev (listed there as wwn-0x5000cca098cadbc0 and wwn-0x5000cca098c6ffdb). I don't need to replace the disks, as they are fine and working, but they should have been removed from the spares list. I want to keep them in the raidz3-2 vdev and just remove the entries under spares.

Yes, sdaq is the thing I don't understand at all; it should not even be part of the pool. So the question is: how can I redistribute its data to the other vdevs and then remove that vdev altogether without losing data?


Well-Known Member
Dec 31, 2010
The idea behind ZFS hotspares:
A hotspare remains a hotspare even after it replaces a faulted disk, because the usual workflow is to replace the bad disk with a new one, after which the hotspare goes back to AVAIL.

What happens if you try to remove the hotspare entry with a command like
zpool remove tank sdaj
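For what it's worth, my understanding of the general procedure is the following (a sketch; I have not verified it on this exact pool layout, and the old disk's name below is a placeholder):

```shell
# While a spare is INUSE, the pool shows a spare-N group holding
# both the replaced disk and the spare. Detaching the replaced
# disk promotes the spare to a permanent member of the vdev:
zpool detach tank <old-disk-wwn>

# The disk is then a regular vdev member but still listed under
# spares; remove the stale spare entry:
zpool remove tank sdaj
```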

About sdaq:
There is currently no way to remove a basic vdev from an Open-ZFS pool that contains a raidz vdev. Only Oracle Solaris can do this with native ZFS. Open-ZFS can remove top-level vdevs only from pools without raidz (basic disks and mirrors).

So the usual way to fix it remains:
attach a mirror disk to sdaq to get redundancy,
then either back up and destroy/recreate the pool, or keep the mirror vdev.
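A sketch of that first step (the new disk's name is a placeholder):

```shell
# Attach a second disk to sdaq, turning the single-disk vdev into
# a mirror. This only adds redundancy; it does not and cannot
# remove the vdev from the pool:
zpool attach tank sdaq <new-disk>
```

Once the resilver completes, the pool survives the failure of either half of that mirror, which removes the immediate single-point-of-failure risk.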


New Member
Oct 14, 2021
OK, thanks for the clarification.

Trying to remove the spares (which are no longer really spares) from the list results in:
cannot remove sdaj: pool is busy
The same happens with the replace command.

So that won't work.
Could I instead detach one of the free spares from the spares list, use it as a replacement disk, and hope that the in-use spare then becomes a spare again?

That I can't release a single-drive vdev without a backup and restore is a pity. I'll have to come up with something there.