Windows performance on Proxmox - sanity check


dkong

New Member
Apr 3, 2018
Hello STH forum, I have a question about running Windows on Proxmox because it has been giving me a hard time so far. I'm asking here at STH first because it seems to me there is more technical knowledge on STH than on the Proxmox forums.

Some relevant specs of the box I'm running this on:
Intel server motherboard
Xeon E3-1230 v3
32 GB ECC UDIMM
2x 120GB Kingston SSD as a ZFS mirror for boot
2x 250GB Samsung 840 Pro as two separate ZFS zpools for VMs
2x 1TB spinning rust as a ZFS mirror for backups and general data

Right now I have one zpool with one dataset in it for VMs, mounted like this: /ssd1/vms
This dataset is added to Proxmox via the GUI as a directory storage rather than as ZFS storage. My reason for this is that I wanted to use a qcow2 disk.
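For reference, here is roughly what the two flavours look like when added from the CLI instead of the GUI; the storage names below are just examples:

# storage names are examples; "dir" keeps qcow2 files on the dataset, "zfspool" uses raw zvols
pvesm add dir ssd1-vms --path /ssd1/vms --content images
pvesm add zfspool ssd1-zfs --pool ssd1 --content images --sparse 1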

That brings me to the VM settings. I'm running Windows 10 Pro. The VM has one 75GB disk as a qcow2 file on the SSD dataset. The storage bus is VirtIO Block; I had to load the driver for it during the Windows install, using the most recent stable virtio-win ISO from the Fedora project site. The virtual disk is configured with writeback cache, following some recommendations; even though I don't like using writeback, I can accept the risk for this Windows VM.
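For context, the relevant disk line in /etc/pve/qemu-server/<vmid>.conf ends up looking something like this; the VMID and volume name are placeholders, not my actual config:

# VMID and volume name are placeholders
virtio0: ssd1-vms:100/vm-100-disk-1.qcow2,cache=writeback,size=75G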

Now my issue is this: storage performance sucks. I've fiddled with the above parameters a bit and it's gotten better, but it's still unusable as far as I'm concerned. What I don't understand is that running CrystalDiskMark inside the VM gives very good-looking figures. Updating Windows 10 to 1803, though, slows the whole thing to a crawl, with disk write speeds (according to Task Manager) never exceeding 20 MB/s. BTW, this box is running nothing else at this time, just this single Windows VM (configured with 8GB RAM and 2 cores).

Does anybody know what I'm doing wrong? I can think of these different things to try:
  1. Use the latest VirtIO drivers instead of the most recent stable ones
  2. Have Proxmox consume the ZFS dataset "directly" (as ZFS storage) instead of as a directory
  3. Use a raw disk instead of a qcow2 one (see the command sketch below)
  4. Skip ZFS for VMs altogether and just run LVM + EXT4 or XFS on the VM SSDs
I can of course try every combination, but that would take me weeks and I'd like to get this box running. Ideally I'd get acceptable storage performance without needing any form of cache on the virtual disk.
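For options 2 and 3, moving the existing disk is fairly painless; a rough sketch, assuming VMID 100 and a zfspool storage named ssd1-zfs (both placeholders):

# VMID, storage name and paths are examples; a zfspool storage stores the disk as a raw zvol
qm move_disk 100 virtio0 ssd1-zfs --delete 1
# or convert the qcow2 to raw in place on the directory storage:
qemu-img convert -p -f qcow2 -O raw /ssd1/vms/images/100/vm-100-disk-1.qcow2 /ssd1/vms/images/100/vm-100-disk-1.raw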

-edit- One thing I forgot to add is that when viewing the node summary, I also noticed the I/O delay get up to 30% during the Windows install.
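For reference, the I/O delay can be watched from the host shell while the install runs; this assumes the pool is really called ssd1:

iostat -x 2             # per-device utilisation and await (needs the sysstat package)
zpool iostat -v ssd1 2  # per-vdev read/write activity on the pool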
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
ZFS on Linux is slower than on FreeBSD. The benchmarks I have done (Ubuntu 18.04) have shown that sync writes on ZFS can be 2-3 times faster on FreeBSD than on Linux. Maybe a SLOG could help you here. Windows does use sync writes a lot; especially at boot time there is constant activity in my SLOG.
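Adding a SLOG to test this is a single command; the pool name and device path below are just examples:

# pool name and device path are examples; ideally use an SSD with power-loss protection
zpool add ssd1 log /dev/disk/by-id/ata-SOME_SSD-part1
zpool status ssd1   # confirm the log vdev shows up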

There are workarounds. You could disable sync writes and see if that helps (command sketch below).
Or use another filesystem: EXT4 and XFS have excellent performance. There is also bcachefs, though it is still fairly experimental.
Or maybe run a FreeNAS system that hosts the iSCSI devices for your Proxmox machine.
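The sync-write test mentioned above is quick and reversible; the dataset name below is an example:

# dataset name is an example; this trades data safety on power loss for speed
zfs set sync=disabled ssd1/vms
# revert afterwards with:
zfs set sync=standard ssd1/vms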
 

dkong

New Member
Apr 3, 2018
Thanks for your reply BackupProphet. In the meantime I have become a little wiser regarding this topic. I have tried out the performance difference of most of the combinations and things to check in my list, and nothing improved performance notably. One thing that did help was removing the ZFS partitions, formatting the SSDs as EXT4 and running fstrim on them, then restoring the zpool. That helped a bit, but not enough.

For now I chose to stick with EXT4 for VM volumes. Performance immediately went back to normal (normal being what I'm used to from VMware with comparable hardware and VMs). The FreeNAS + iSCSI idea would be interesting to try. I actually have another FreeNAS box available, but I don't want the VMs on the Proxmox box to depend on the FreeNAS system being available.
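For anyone wanting to replicate this, the EXT4 setup is just a format, a mount and a directory storage entry; the device, mount point and storage name below are examples:

# device, mount point and storage name are examples
mkfs.ext4 /dev/sdc
mkdir -p /mnt/ssd1
mount /dev/sdc /mnt/ssd1   # plus an /etc/fstab entry to make it permanent
pvesm add dir ssd1-ext4 --path /mnt/ssd1 --content images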
 

dkong

New Member
Apr 3, 2018
Thanks for the interesting read. I've already looked at most of the material in the second link. The IO feature shows up in Proxmox as the "IO Thread" checkbox, which should increase performance, but only when multiple machines make use of the same disk. All of my tests are done on a clean machine with only one VM, so this should not have any performance impact. In Proxmox the IO Thread feature also breaks backups, so it's not really a practical measure to implement.
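For reference, enabling it just adds an iothread flag to the disk line in the VM config; the line below is an example, not my actual setup:

# example only -- volume name, cache mode and size are placeholders
virtio0: ssd1-vms:100/vm-100-disk-1.qcow2,cache=writeback,iothread=1,size=75G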

Reading through the rest of the info, I think my setup was already pretty good. I edited my earlier reply because maybe I didn't make it perfectly clear: moving to EXT4 solved all my performance problems with the VMs, so ZFS on Linux is the culprit here.
 

Sapphiron

New Member
Mar 2, 2018
I have older and almost identical hardware. At the end of the day, I got the best performance with LVM, using a logical volume for each VM.
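That setup is basically a volume group plus an LVM storage entry, after which Proxmox creates one logical volume per virtual disk; the device and names below are examples:

# device, VG name and storage name are examples
pvcreate /dev/sdd
vgcreate ssdvg /dev/sdd
pvesm add lvm ssd-lvm --vgname ssdvg --content images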

ZFS on Linux still has some way to go.

One idea I was playing with was running FreeNAS in a VM on the host itself and passing the physical disks through to it. I have done the same before to build a nested Proxmox cluster on one box, to test the ZFS replication/HA features. I used the following guide: Physical disk to kvm - Proxmox VE
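The guide boils down to attaching the whole physical disk to the FreeNAS VM by ID; the VMID and device path below are examples:

# VMID and device path are examples
qm set 100 -virtio1 /dev/disk/by-id/ata-ST1000DM003_XXXXXXXX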

ZFS performance on the disks that I had passed through was the same as ZFS performance directly on the host.

With FreeNAS, I would then export an iSCSI target to the Proxmox host for VM storage with qcow2 files.

It's a Frankenstein option and I have no idea how well it will work, but it only requires one physical box. The performance of the KVM network stack is what I am most uncertain about. It's 10GbE, but I don't know how well it will hold up under heavy IOPS.
 

Klee

Well-Known Member
Jun 2, 2016
Just for more info to compare: Proxmox ZFS RAID0 with 12 x 250 GB SATA hard drives.

root@pve:~# dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0 records in
1024+0 records out
402653184 bytes (403 MB, 384 MiB) copied, 0.295494 s, 1.4 GB/s
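Caveat: zeroes from /dev/zero compress to almost nothing if the pool has compression enabled, and dd is an async sequential write, so it doesn't reflect the sync-heavy pattern of a Windows guest discussed above. A sync random-write fio run is closer to that; the test file path below is an example:

# test file path is an example; sync 4k random writes approximate the Windows guest pattern
fio --name=synctest --filename=/tank/fio-test --size=1G --bs=4k --rw=randwrite --sync=1 --ioengine=psync --runtime=30 --time_based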