ZFS volumes used by iSCSI showing double space usage in zfs list?

TheBloke

Hi all

I just started testing iSCSI volumes from my Solaris 11.3 server to my Windows 10 workstation. It's working great, and was super easy to set up.

But I am seeing one very confusing thing: zfs list always reports far more data used in the ZFS volume than I have actually written to it.

I created a sparse 512GB ZFS volume (zfs create -sV, i.e. no refreservation), created a LUN and target, then mounted it in Windows and formatted it as NTFS.
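For reference, this is roughly the sequence I ran (a sketch from memory rather than a paste - the dataset name matches my layout, and the LU GUID is whatever stmfadm printed):
Code:
# -s makes the volume sparse (no refreservation); -V sets the volume size
zfs create -s -V 512G data/volumes/iscsi/test-512g
# Expose the zvol over iSCSI via COMSTAR
stmfadm create-lu /dev/zvol/rdsk/data/volumes/iscsi/test-512g
stmfadm add-view <GUID-printed-by-create-lu>
itadm create-target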

In Windows I then copied 322GB of data onto the new drive. That all works fine, but zfs list reports the volume's used space as more than double what has actually been written.

This is what I see with zfs list and zfs get:
Code:
root@magrathea:~# zfs list -rt all data/volumes
NAME                          USED  AVAIL  REFER  MOUNTPOINT
data/volumes                  736G  30.7T   347K  /data/volumes
data/volumes/iscsi            736G  30.7T   329K  /data/volumes/iscsi
data/volumes/iscsi/test-512g  736G  30.7T   736G  -
root@magrathea:~# zfs get volsize,used,compress,compressratio data/volumes/iscsi/test-512g
NAME                          PROPERTY       VALUE  SOURCE
data/volumes/iscsi/test-512g  volsize        512G   local
data/volumes/iscsi/test-512g  used           736G   -
data/volumes/iscsi/test-512g  compression    lz4    inherited from data
data/volumes/iscsi/test-512g  compressratio  1.08x  -
root@magrathea:~# zfs get -p used data/volumes/iscsi/test-512g
NAME                          PROPERTY  VALUE  SOURCE
data/volumes/iscsi/test-512g  used      789776179920  -
Everything is working fine, but I just don't understand the values the zfs command is showing. The used figure is 2.265 times the actual data usage reported by Windows - a very strange ratio that I can't begin to explain.
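In case it's relevant, these are the space-accounting properties I know to check (all standard zfs(1M) properties; I haven't pasted their output here):
Code:
# -p prints exact byte values instead of rounded ones
zfs get -p volblocksize,copies,usedbydataset,usedbychildren,usedbysnapshots,usedbyrefreservation data/volumes/iscsi/test-512g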

Here's another example, this time using a standard volume, and writing to it from Solaris:
Code:
root@magrathea:~# zfs create -V 128G data/volumes/iscsi/test-128g
root@magrathea:~# zfs set compress=off data/volumes/iscsi/test-128g
root@magrathea:~# zfs list data/volumes/iscsi/test-128g
NAME                          USED  AVAIL  REFER  MOUNTPOINT
data/volumes/iscsi/test-128g  132G  30.7T   165K  -

root@magrathea:~# stmfadm create-lu /dev/zvol/rdsk/data/volumes/iscsi/test-128g
Logical unit created: 600144F069834D00000058EADB1B0003
root@magrathea:~# stmfadm add-view 600144F069834D00000058EADB1B0003
root@magrathea:~# itadm create-target
Target iqn.1986-03.com.sun:02:5527fe9e-d137-40b8-9c65-f51550002283 successfully created

root@magrathea:~# iscsiadm add discovery-address 192.168.0.5
root@magrathea:~# iscsiadm modify discovery --sendtargets enable
root@magrathea:~# zpool create iscsi-test-128g c0t600144F069834D00000058EADB1B0003d0
root@magrathea:~# zpool list iscsi-test-128g
NAME             SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
iscsi-test-128g  127G  98.5K  127G   0%  1.00x  ONLINE  -

root@magrathea:~# gdd if=/dev/zero of=/iscsi-test-128g/file bs=128k count=80000
80000+0 records in
80000+0 records out
10485760000 bytes (10 GB) copied, 15.1727 s, 691 MB/s
root@magrathea:~# gdd if=/dev/zero of=/iscsi-test-128g/file2 bs=128k count=800000
800000+0 records in
800000+0 records out
104857600000 bytes (105 GB) copied, 233.563 s, 449 MB/s

root@magrathea:~# zfs list iscsi-test-128g data/volumes/iscsi/test-128g
NAME                          USED  AVAIL  REFER  MOUNTPOINT
data/volumes/iscsi/test-128g  248G  30.4T   248G  -
iscsi-test-128g               107G  17.6G   107G  /iscsi-test-128g

root@magrathea:~# zfs get volsize,refer,refreservation,usedbydataset,usedbyrefreservation,compressratio data/volumes/iscsi/test-128g
NAME                          PROPERTY              VALUE  SOURCE
data/volumes/iscsi/test-128g  volsize               128G   local
data/volumes/iscsi/test-128g  referenced            248G   -
data/volumes/iscsi/test-128g  refreservation        132G   default
data/volumes/iscsi/test-128g  usedbydataset         248G   -
data/volumes/iscsi/test-128g  usedbyrefreservation  0      -
data/volumes/iscsi/test-128g  compressratio         1.00x  -

In this example I created a 128G volume, turned compression off, created an iSCSI LUN and target, and then accessed that target from the same machine via the Solaris initiator. I created a pool on the iSCSI disk and loaded a total of 107GB onto it.
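(A loop like this - just standard zfs get flags, with the dataset name from my layout - is how I'd watch the zvol's allocation climb during the copy next time:)
Code:
# Print the zvol's exact USED bytes every 10 seconds while data is being written
while true; do
    zfs get -Hp -o value used data/volumes/iscsi/test-128g
    sleep 10
done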

But again, zfs shows crazy figures - including a USED value of 248GB for a 128GB volume! This time the ratio of USED to actual data is 2.303 - very similar to the previous 2.265, but not identical.
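(A quick round-number sanity check of that ratio, using the byte counts from the gdd runs above:)
Code:
# bytes written: 10485760000 + 104857600000 = 115343360000 (~107.4 GiB)
# USED as shown by zfs list: 248G (a rounded figure)
echo "scale=3; (248 * 1024^3) / 115343360000" | bc
# prints 2.308 with the rounded USED value - close to the 2.303 above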

What is going on here? Clearly there's some logic behind this figure, but I can't work it out. Is it related to iSCSI, or is it general ZFS volume behaviour? (One test I can think of, sketched below, would be to write to a zvol directly with no iSCSI in the loop.) I've not really used volumes before, and never looked closely at the ones I did create.
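Something like this (entirely hypothetical - the test-raw name is made up, and compression is disabled so the zeroes from /dev/zero aren't just compressed away):
Code:
# Write straight to a scratch zvol's raw device - COMSTAR never touches it
zfs create -V 8G data/volumes/iscsi/test-raw
zfs set compress=off data/volumes/iscsi/test-raw
gdd if=/dev/zero of=/dev/zvol/rdsk/data/volumes/iscsi/test-raw bs=128k count=10000
# Then compare exact bytes written (10000 x 128k = ~1.22 GiB) against USED
zfs get -p used,referenced data/volumes/iscsi/test-raw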

It's also a little annoying not to see accurate usage figures in the output of zfs - it will make it harder to work out how much space is actually in use.

Thanks in advance