How to replace failed ZFS mirror rpool drive?


Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
I had a Proxmox host using 2x 2.5" 1TB HDDs I had lying around as boot drives. I selected a ZFS rpool mirror at install and it worked great for the past year.

Then /dev/sdb died. It doesn't even show up on the controller anymore.

I had remote hands move another 1TB drive from a random storage array into the slot. So I've now got a different /dev/sdb installed.

ZFS didn't automatically pick it up and add it to the zpool.

Because the drive was used previously, it does have partitions on it.

I've used zpool replace to swap drives in mirrors, but the new drive is sdb and the old one was also sdb, so I don't know how that'd work.

Looking for some hand holding from the zfs gods on here. The machine is fine but I'm scared of nuking the rpool mirror.

I also don't know if there's something special I need to do since it's a rpool boot drive not just a data drive.
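For the general shape of the non-boot part of this, a rough sketch (the pool name rpool comes from the post; the GUID and by-id placeholders below are made up, check your own zpool status output first):

```shell
# 1. See which vdev is FAULTED/UNAVAIL. Once the old device node is
#    gone, ZFS usually reports the dead disk by its numeric GUID.
zpool status rpool

# 2. Find the stable by-id name of the NEW /dev/sdb, so the pool no
#    longer depends on sdb/sdc enumeration order.
ls -l /dev/disk/by-id/ | grep sdb

# 3. Replace the dead vdev (referenced by the GUID shown in
#    zpool status) with the new disk. -f may be needed because the
#    replacement disk carries old partitions.
zpool replace -f rpool <old-vdev-guid> /dev/disk/by-id/<new-disk-id>

# 4. Watch the resilver complete before trusting the mirror again.
zpool status -v rpool
```

Because both the old and new disk are "sdb", referencing the old vdev by GUID and the new one by /dev/disk/by-id sidesteps the name collision. The boot-disk-specific steps (partition table, bootloader) are covered further down in the thread.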
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Think this will take care of ya.

Replacing a Device in a ZFS Storage Pool
After you have determined that a device can be replaced, use the zpool replace command to replace the device. If you are replacing the damaged device with a different device, use syntax similar to the following:


# zpool replace tank c1t1d0 c2t0d0

This command migrates data to the new device from the damaged device or from other devices in the pool if it is in a redundant configuration. When the command is finished, it detaches the damaged device from the configuration, at which point the device can be removed from the system. If you have already removed the device and replaced it with a new device in the same location, use the single device form of the command. For example:


# zpool replace tank c1t1d0

This command takes an unformatted disk, formats it appropriately, and then resilvers data from the rest of the configuration.

Shamelessly stolen from docs.oracle.com. Used this procedure before though from cli and been met w/ success. Does Proxmox not have a replace failed ZFS/zpool device GUI option?

Pssst, as @nitrobass24 said...back your shizzle up off that zpool just as a double CYA before performing the procedure.

EDIT: Covered fairly thoroughly here as well:

How to replace a disk under ZFS in Solaris
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Can I just make the new drive a spare, autoreplace=on then detach the old disk?

I don't think proxmox rpool is the only thing that needs to get updated though.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
@Jeggs101 do NOT do that! If you do, then the mirror will transition to a single disk.

Here is the guide:
ZFS: Tips and Tricks - Proxmox VE
YES, follow this guide for sure if that's how Proxmox advises dealing w/ their ZFS subsystem. Never seen that sgdisk procedure on a Unix-based ZFS distro for ZFS maint, so maybe it's specific to Linux. Offlining the device w/in the pool plus the grub work seems like very careful/fragile/eggshell Linux ZFS behavior :-D
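For reference, the sgdisk procedure from that Proxmox guide looks roughly like this (device names and the partition number are examples, assuming sda is the surviving mirror disk and sdb the replacement; adapt to your own layout):

```shell
# Copy the partition table from the healthy mirror disk (sda) onto
# the replacement (sdb), then randomize sdb's GUIDs so the two
# disks don't share partition identifiers:
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb

# Replace the dead vdev with the matching ZFS partition on the new
# disk (partition 2 here is an example; -f because the disk was
# previously in use):
zpool replace -f rpool sdb2

# Reinstall the bootloader on the new disk so the box can still
# boot if the other mirror member dies later:
grub-install /dev/sdb
```

The sgdisk steps are what make this different from a plain data-pool replace: the boot disk needs the BIOS boot and/or EFI partitions plus a bootloader, not just the ZFS partition.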

Good to know for sure! My ZoL boxes are mostly for testing/playing.

'All your base are belong to us'... eh hem, I mean 'All my zpools are belong to BSD or Illumos ZFS'

I 'may' just be making all this up though :-D
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
I know this is old, but has anyone managed to replace the rpool with another pool (with smaller disks)?

Guess it should somehow be possible with a live CD: create the new pool, copy the partitions/initrd to the new HDDs, zfs send/recv the contents, and then rename/export the new pool before booting Proxmox from it?
 

ttabbal

Active Member
Mar 10, 2016
747
207
43
47
I know this is old, but has anyone managed to replace the rpool with another pool (with smaller disks)?

Guess it should somehow be possible with a live CD: create the new pool, copy the partitions/initrd to the new HDDs, zfs send/recv the contents, and then rename/export the new pool before booting Proxmox from it?

It's more difficult with smaller drives as you can't just add them to the pool, wait, then remove the old ones and run grub on the new disks.

Your procedure sounds about right. I think I would probably create the mirror, snapshot + send/recv, run grub on them. Then export both, import with rename on the new pool, shut down, remove old drives and give it a go. You still have the old drives if something goes wrong, so there's not much to lose other than time.
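The steps above could be sketched like this from a live CD (pool names, snapshot name, and disk IDs are all assumptions for illustration; "newpool" is a temporary name):

```shell
# Build the new, smaller mirror under a temporary name:
zpool create -f newpool mirror \
    /dev/disk/by-id/<small-disk-1> /dev/disk/by-id/<small-disk-2>

# Snapshot the whole old pool recursively and replicate it,
# properties and all, onto the new pool:
zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate | zfs recv -F newpool

# Export both pools, then re-import the new one under the name
# "rpool" so the system finds it at boot:
zpool export rpool
zpool export newpool
zpool import newpool rpool

# Reinstall grub on both new disks before pulling the old ones:
grub-install /dev/disk/by-id/<small-disk-1>
grub-install /dev/disk/by-id/<small-disk-2>
```

As the post says, the old drives stay untouched until you've confirmed the new pool boots, so the worst case costs time rather than data.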
 

_alex

Active Member
Jan 28, 2016
866
97
28
Bavaria / Germany
Well, I'll give it a try. Maybe this gets a bit more interesting than just swapping the disks...
8 bays, currently 2x mirrors on 1TB disks + SLOG, and I want to go to 3x mirrors on 500GB disks (I need the other disks elsewhere) + SLOG.

So I'll either break the array, or put the 2 disks for the first mirror of the new pool into the host, copy the partitions/boot setup there, then build the pool on another host that holds the disks temporarily and do the send/recv before swapping.
Or do some exports via SRP from an ESOS box to have the disks of the new pool pseudo-local, but I'm not sure the sg commands work in that case.