Hello STH forum, I have a question about running Windows on Proxmox, because it's been giving me a hard time so far. I'm asking here at STH first because there seems to be more technical knowledge here than on the Proxmox forums.
Some relevant specs of the box I'm running this on:
Intel server motherboard
Xeon E3-1230 v3
32 GB ECC UDIMM
2x 120GB Kingston SSD as a ZFS mirror for boot
2x 250GB Samsung 840 Pro as two separate ZFS zpools for VMs
2x 1TB spinning rust as a ZFS mirror for backups and general data
Right now I have one zpool with one dataset in it for VMs, mounted at /ssd1/vms.
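For reference, these are the commands I run on the host to check the pool and dataset (pool name ssd1 in my case, and the property list is just what seemed relevant here):

zpool status ssd1
zfs list -o name,used,avail,mountpoint ssd1/vms
zfs get recordsize,compression,sync,atime ssd1/vms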
This dataset is added to Proxmox via the GUI as directory storage rather than as ZFS storage. My reason for this is that I wanted to use a qcow2 disk.
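For what it's worth, the resulting entry in /etc/pve/storage.cfg looks roughly like the first block below; the second block is what I understand a native ZFS storage definition would look like instead (the storage names are just placeholders I made up):

dir: ssd1-vms
    path /ssd1/vms
    content images

zfspool: ssd1-zfs
    pool ssd1/vms
    content images
    sparse 1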
That brings me to the VM settings. I'm running Windows 10 Pro. The VM has one 75GB disk as qcow2 on the SSD dataset. The storage driver is VirtIO Block; I had to import the driver for it during the Windows install, using the most recent stable VirtIO ISO from the Fedora project site. The virtual disk is configured with writeback cache because I've followed some recommendations; I don't like using writeback, but I can accept the risks for this Windows VM.
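For completeness, I believe the disk line in the VM config (/etc/pve/qemu-server/<vmid>.conf) ends up looking roughly like this, with the VM ID and file name obviously just being examples:

virtio0: ssd1-vms:100/vm-100-disk-1.qcow2,cache=writeback,size=75G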
Now my issue is this: storage performance sucks. I've fiddled with the above parameters a bit and it's gotten better, but it's still unusable as far as I'm concerned. What I don't understand is that running CrystalDiskMark inside the VM gives very good-looking figures, yet updating Windows 10 to 1803 slows the whole thing down to a crawl, never showing write speeds above 20 MB/s in Task Manager. BTW, this box is running nothing else at this time, just this single Windows VM (configured with 8GB RAM and 2 cores).
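If anyone wants host-side numbers to compare against, I can run something like this fio job directly against the dataset (psync engine with O_SYNC to mimic a sync-heavy small-block write load; file name and sizes are just what I'd pick):

fio --name=synctest --filename=/ssd1/vms/fio-test.bin --size=1G --bs=4k --rw=randwrite --ioengine=psync --sync=1 --runtime=30 --time_based --numjobs=1 --group_reporting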
Does anybody know what I'm doing wrong? I can think of these different things to try:
- Use the latest VirtIO drivers instead of the most recent stable ones
- Have Proxmox consume the ZFS dataset "directly" instead of as a directory (see the command sketch after this list)
- Use a raw disk instead of a qcow2 one
- Skip ZFS for VMs altogether and just run LVM + ext4 or XFS on the VM SSDs
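For the second and third options, my understanding is that it would boil down to something like this on the node (storage name and VM ID are made up, and please correct me if the syntax is off), since disks on a zfspool storage end up as raw zvols anyway:

pvesm add zfspool ssd1-zfs --pool ssd1/vms --content images
qm move_disk 100 virtio0 ssd1-zfs --delete 1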
-edit- One thing I forgot to add: when viewing the node summary, I also noticed the I/O delay getting up to 30% during the Windows install.