SOLVED: duplicated zvol via send/recv, disk usage appears to be incorrect


ac4000

New Member
Oct 28, 2019
Solution in post two. TL;DR: it was the compression (which is, of course, almost perfect on a file filled with zeros).

I'm trying to find a way to create duplicate zvols (e.g., from a gold VM) without using cloning, which would create a dependency issue (I'd rather be able to delete the parent zvol at some point in the future; note that a clone/promote combination won't work either, because it just reverses the problem). What appears to work is this: zfs send piped into zfs recv doesn't seem to create a clone relationship, but the reported sizes can't be correct. (This is backported ZFS on Debian 10.) Here's the procedure to replicate:
Code:
~# zfs create -V 512M tank/test_parent
~# zfs list
NAME                 USED    AVAIL      REFER  MOUNTPOINT
tank                 6.03G   213G       96K    -
tank/test_parent     530M    213G       204K   -
~# mkfs.ext4 /dev/tank/test_parent
~# mount /dev/tank/test_parent /mnt
~# echo "Check file data." > /mnt/check
~# dd if=/dev/zero | pv -bearpIt | dd of=/mnt/filler
~# umount /mnt
~# zfs send tank/test_parent | zfs receive tank/test_duplicate
~# zfs list
NAME                 USED    AVAIL      REFER  MOUNTPOINT
tank                 6.53G   213G       96K    -
tank/test_duplicate  204K    212G       204K   -
tank/test_parent     530M    213G       204K   -
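One quick way to confirm that the received copy really isn't a clone is to check its origin property: a clone points back at the snapshot it was created from, while an independent dataset just shows a dash. A minimal check (expected output sketched from the property's documented behaviour, not copied from my run):
Code:
~# zfs get origin tank/test_duplicate
NAME                 PROPERTY  VALUE  SOURCE
tank/test_duplicate  origin    -      -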
Note that the duplicate appears to take very little space, which is what we'd expect from a clone/snapshot. To test further, first destroy the parent to make sure that doesn't take the duplicate with it:
Code:
~# zfs destroy tank/test_parent
~# zfs list
NAME                 USED    AVAIL      REFER  MOUNTPOINT
tank                 6.03G   213G       96K    -
tank/test_duplicate  204K    212G       204K   -
~# mount /dev/tank/test_duplicate /mnt
~# cat /mnt/check
Check file data.
~# df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/zd80  488M 478M 0     100% /mnt
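For contrast, here's roughly what the clone route I'm trying to avoid looks like: the origin snapshot can't be destroyed while the clone depends on it, and zfs promote only moves that dependency onto the other dataset. (A sketch with made-up names; the error text is approximate.)
Code:
~# zfs snapshot tank/gold_vm@base
~# zfs clone tank/gold_vm@base tank/vm_copy
~# zfs destroy tank/gold_vm@base
cannot destroy 'tank/gold_vm@base': snapshot has dependent clones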
Everything appears to be in order with the duplicate, save for the reported space. I could, of course, just create a new zvol and dd the old one to it, but this appears to be a faster/cleaner way to go--provided it actually works. What I'd like to know is: what's going on under the hood? Is this a bug? Is there danger lurking in the wings?
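For reference, the dd route mentioned above would look roughly like this; it works, but it copies every block of the zvol, free space included, which is why send/recv looks more attractive. (A sketch; test_dd_copy is a made-up name.)
Code:
~# zfs create -V 512M tank/test_dd_copy
~# dd if=/dev/tank/test_parent of=/dev/tank/test_dd_copy bs=1M status=progress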

Thanks for any help and insight.
 

ac4000

New Member
Oct 28, 2019
Solved/elementary mistake: it was the compression. Repeating the test with /dev/urandom instead of /dev/zero results in a correctly sized and correctly reported duplicate. This therefore appears to be a good method for duplicating zvols without the dependency issue caused by clones/snapshots. If I'm missing anything, please chime in so we can learn more.
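For anyone repeating the test, the effect is easy to see by checking the compression properties on the duplicate; a zero-filled volume compresses down to almost nothing, which is exactly the 204K USED shown above. (A sketch; output omitted.)
Code:
~# zfs get compression,compressratio,refcompressratio tank/test_duplicate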
 

ac4000

New Member
Oct 28, 2019
Repeated with pseudorandom data:
Code:
NAME                    USED  AVAIL  REFER  RATIO  REFRESERV  USEDREFRESERV  REFRATIO  WRITTEN  CLONES
xenvms/test_duplicate   482M   212G   482M  1.00x       none             0B     1.00x        0  -    
xenvms/test_parent      530M   212G   482M  1.00x       530M          47.7M     1.00x     482M  -
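One caveat visible in that output: the received duplicate has no refreservation (REFRESERV is none), while the freshly created parent keeps its 530M, so the copy effectively behaves like a sparse volume. If the copy should reserve its full size again, that can be set after the receive. (A sketch; refreservation=auto needs a reasonably recent OpenZFS, otherwise set it to the volsize explicitly.)
Code:
~# zfs set refreservation=auto xenvms/test_duplicate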