Apologies if this question does not belong on this forum. I am running a small number of Windows 10 guests on libvirt 2.5 + QEMU 2.7 + Linux 4.8 with ZFS 0.6.5.8. The guests' disks are set up on ZFS zvols; for example, disk C: of guest "lublin" is mapped as shown in the first code block below.
The pool also has a dedicated 4GB SLOG device and a 250GB L2ARC, both on the same DC3700 NVMe drive, and the ARC is allowed 16GB of RAM (of 128GB total available); the modprobe settings are in the last code block below.
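For context, this is roughly how such log and cache vdevs get attached to an existing pool. This is a sketch only; the partition paths below are hypothetical, not the actual ones on this host:
Code:
# Hypothetical partition paths -- the real pool already has these vdevs.
zpool add zdata log /dev/disk/by-id/nvme-DC3700-part1     # 4GB SLOG
zpool add zdata cache /dev/disk/by-id/nvme-DC3700-part2   # 250GB L2ARC
zpool status zdata    # confirm the 'logs' and 'cache' sections appear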
Despite all this, disk performance from within the guests is often unsatisfactory: the IO speed reported by Task Manager rarely exceeds 3MB/s and sometimes dips below 1MB/s. This happens during IO-intensive work on C:, for example when updating software (e.g. when Adobe Creative Cloud upgrades Photoshop). During these periods some Windows processes become unresponsive for short stretches, while the host itself stays fine and responsive.
Any hints on how to improve this? Or at least a tried-and-tested configuration for using a zvol as a QEMU guest disk? Or is such a setup generally not recommended, and I should move to qcow2 pronto?
Code:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source dev='/dev/zvol/zdata/vdis/lublin'/>
  <target dev='sda' bus='scsi'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
. . .
<controller type='scsi' index='0' model='virtio-scsi'>
  <driver queues='4'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
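For comparison, a driver line often recommended for zvol-backed disks is cache='none' with native AIO, so that guest IO bypasses the host page cache and only the ARC caches it. This is a sketch, not something verified on this host:
Code:
<!-- Hypothetical variant: skip the host page cache (the ARC still caches),
     use native Linux AIO, and let the guest's TRIM reclaim zvol space. -->
<driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>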
Code:
root@gdansk /etc/modprobe.d # zfs get all zdata/vdis/lublin
NAME               PROPERTY              VALUE                  SOURCE
zdata/vdis/lublin  type                  volume                 -
zdata/vdis/lublin  creation              Thu Jul 30 20:35 2015  -
zdata/vdis/lublin  used                  1.14T                  -
zdata/vdis/lublin  available             2.28T                  -
zdata/vdis/lublin  referenced            136G                   -
zdata/vdis/lublin  compressratio         1.00x                  -
zdata/vdis/lublin  reservation           none                   default
zdata/vdis/lublin  volsize               160G                   local
zdata/vdis/lublin  volblocksize          8K                     -
zdata/vdis/lublin  checksum              on                     default
zdata/vdis/lublin  compression           off                    inherited from zdata/vdis
zdata/vdis/lublin  readonly              off                    default
zdata/vdis/lublin  copies                1                      default
zdata/vdis/lublin  refreservation        165G                   local
zdata/vdis/lublin  primarycache          all                    default
zdata/vdis/lublin  secondarycache        all                    default
zdata/vdis/lublin  usedbysnapshots       883G                   -
zdata/vdis/lublin  usedbydataset         136G                   -
zdata/vdis/lublin  usedbychildren        0                      -
zdata/vdis/lublin  usedbyrefreservation  150G                   -
zdata/vdis/lublin  logbias               latency                default
zdata/vdis/lublin  dedup                 off                    default
zdata/vdis/lublin  mlslabel              none                   default
zdata/vdis/lublin  sync                  standard               default
zdata/vdis/lublin  refcompressratio      1.00x                  -
zdata/vdis/lublin  written               14.7G                  -
zdata/vdis/lublin  logicalused           990G                   -
zdata/vdis/lublin  logicalreferenced     134G                   -
zdata/vdis/lublin  snapshot_limit        none                   default
zdata/vdis/lublin  snapshot_count        none                   default
zdata/vdis/lublin  snapdev               hidden                 default
zdata/vdis/lublin  context               none                   default
zdata/vdis/lublin  fscontext             none                   default
zdata/vdis/lublin  defcontext            none                   default
zdata/vdis/lublin  rootcontext           none                   default
zdata/vdis/lublin  redundant_metadata    all                    default
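Two things stand out in this output: volblocksize is the 8K default and compression is off. volblocksize is fixed at creation time, so experimenting with a larger block size means creating a new zvol and copying the data across. A hypothetical sketch (the -test name and the 64K/lz4 choices are illustrative, not a tested recommendation):
Code:
# Sparse (-s) test volume with a larger block size and lz4 compression:
zfs create -s -V 160G -o volblocksize=64K -o compression=lz4 zdata/vdis/lublin-test
# Copy the raw disk contents across (guest must be shut down first):
dd if=/dev/zvol/zdata/vdis/lublin of=/dev/zvol/zdata/vdis/lublin-test bs=1M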
Code:
root@gdansk /etc/modprobe.d # cat zfs.conf
# Enforce max ZFS ARC size to 16GB = 16*1024*1024*1024 = 17179869184
options zfs zfs_arc_max=17179869184
# Enforce synchronous scsi scan, to prevent zfs driver loading before disks are available
options scsi_mod scan=sync
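To confirm the cap actually took effect once the module loaded, the live values can be read back from the standard ZFS-on-Linux locations:
Code:
# Module parameter as loaded, in bytes:
cat /sys/module/zfs/parameters/zfs_arc_max
# Effective ARC ceiling (c_max) and current ARC size:
awk '/^(c_max|size) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats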