Windows performance on Proxmox - sanity check

Discussion in 'Linux Admins, Storage and Virtualization' started by dkong, Jul 3, 2018.

  1. dkong (New Member)

    Hello STH forum, I have a question about running Windows on Proxmox, because it has been giving me a hard time so far. I'm asking here at STH first because it seems to me there is more technical knowledge on STH than on the Proxmox forums.

    Some relevant specs of the box I'm running this on:
    Intel server motherboard
    Xeon E3-1230 v3
    32 GB ECC UDIMM
    2x 120GB Kingston SSD as a ZFS mirror for boot
    2x 250GB Samsung 840 Pro as two separate ZFS zpools for VMs
    2x 1TB spinning rust as a ZFS mirror for backups and general data

    Right now I have one zpool with one dataset in it for VMs, mounted at /ssd1/vms.
    This dataset is added to Proxmox via the GUI as a directory storage rather than as ZFS storage; my reason for this is that I wanted to use a qcow2 disk.
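
    The setup was done through the GUI, but it boils down to roughly this on the CLI (the storage ID ssd1-dir is just an example name):

    # create the dataset on the existing pool and mount it where I want it
    zfs create -o mountpoint=/ssd1/vms ssd1/vms

    # register it in Proxmox as a *directory* storage so qcow2 files can live on it
    pvesm add dir ssd1-dir --path /ssd1/vms --content images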

    That brings me to the VM settings. I'm running Windows 10 Pro. The VM has one 75GB disk as qcow2 on the SSD dataset. The storage driver is VirtIO Block; I had to import the driver for this into the Windows install, using the most recent stable ISO from the Fedora project site. The virtual disk is configured to use writeback cache because I followed some recommendations; even though I don't like using writeback, I can accept the risks for this Windows VM.
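
    The resulting disk entry in the VM config looks roughly like this (VMID 100 and the storage ID are examples, not the real values):

    # excerpt from /etc/pve/qemu-server/100.conf - VMID and storage ID are examples
    virtio0: ssd1-dir:100/vm-100-disk-1.qcow2,cache=writeback,size=75G
    # the same thing set from the CLI:
    # qm set 100 --virtio0 ssd1-dir:100/vm-100-disk-1.qcow2,cache=writeback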

    Now my issue is this: storage performance sucks. I've fiddled with the above parameters a bit and it has gotten better, but it's still unusable as far as I'm concerned. What I don't understand is that running CrystalDiskMark inside the VM gives very good looking figures, yet updating Windows 10 to 1803 slows the whole thing down to a crawl; according to Task Manager the disk never sees write speeds faster than 20 MB/s. BTW this box is running nothing else at this time, just this single Windows VM (configured with 8GB RAM and 2 cores).

    Does anybody know what I'm doing wrong? I can think of these different things to try:
    1. Use the latest VirtIO drivers instead of the most recent stable ones
    2. Have Proxmox consume the ZFS dataset "directly" as ZFS storage instead of as a directory (see the sketch below this list)
    3. Use a raw disk instead of a qcow2 one
    4. Skip ZFS for VMs altogether and just run LVM + EXT4 or XFS on the VM SSDs
    I can of course try each combination, but that will take me weeks and I'd like to get this box running. Ideally I can get acceptable storage performance without needing any form of cache on the virtual disk.
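
    For reference, options 2 and 3 would boil down to something like this (the storage ID ssd1-zfs and VMID 100 are again just examples):

    # add the pool as native ZFS storage; Proxmox then creates zvols (raw) instead of qcow2 files
    pvesm add zfspool ssd1-zfs --pool ssd1 --content images

    # move the existing disk onto it and drop the old qcow2 file
    qm move_disk 100 virtio0 ssd1-zfs --delete 1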

    -edit- One thing I forgot to add: when viewing the node summary, I also noticed the I/O delay getting up to 30% during the Windows install.
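
    If anyone wants to dig deeper, the host can be watched during the install with nothing more than stock tools (pool name as above; iostat comes from the sysstat package):

    # per-vdev latency and throughput on the pool while the guest is writing
    zpool iostat -v ssd1 2

    # host-wide device view; high utilisation at low MB/s points at sync-write latency
    iostat -x 2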
     
    #1
    Last edited: Jul 3, 2018
  2. BackupProphet (Active Member)

    ZFS on Linux is slower than on FreeBSD. The benchmarks I have done (Ubuntu 18.04) have shown that sync writes on ZFS can be 2-3 times faster on FreeBSD than on Linux. Maybe a SLOG could help you here. Windows does use sync writes a lot; especially at boot time there is constant activity in my SLOG.

    There are workarounds: you could disable sync writes and see if that helps.
    Or use another filesystem; EXT4 and XFS have excellent performance. There is also bcachefs, though it is still rather experimental.
    Or maybe run a FreeNAS system that hosts the iSCSI devices for your Proxmox machine.
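
    The two ZFS-side workarounds would look roughly like this on your pool (the device path is a placeholder, and disabling sync means acknowledged writes can be lost on a power failure):

    # add a fast SSD (partition) as a separate log device (SLOG) for sync writes
    zpool add ssd1 log /dev/disk/by-id/SOME-FAST-SSD-part1

    # or, per dataset, drop sync semantics entirely - fast but risky
    zfs set sync=disabled ssd1/vms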
     
    #2
  3. dkong (New Member)

    Thanks for your reply BackupProphet. In the meantime I have become a little wiser regarding this topic. I have tried out the performance difference of most of the combinations and things to check in my list, and nothing seems to improve performance notably. One thing I did find was that removing the ZFS partitions, mounting the SSDs as EXT4 and doing an fstrim, then restoring the zpool helped a bit, but not enough.

    For now I chose to stick with EXT4 for the VM volumes. Performance immediately went back to normal (normal being what I'm used to from VMware, with comparable hardware and VMs). The FreeNAS + iSCSI idea would be interesting to try; I actually have another FreeNAS box available, but I don't want the VMs on the Proxmox box to be dependent on the FreeNAS system being available.
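
    An EXT4 setup for VM storage can be as simple as the following (device path and storage ID are placeholders; whether to put a directory storage or LVM on top is a matter of taste):

    # format the VM SSD with EXT4 and hand the mount point to Proxmox as a directory storage
    mkfs.ext4 /dev/sdX1                 # /dev/sdX1 is a placeholder for the VM SSD partition
    mkdir -p /ssd1/vms
    mount /dev/sdX1 /ssd1/vms           # plus an fstab entry to make it permanent
    pvesm add dir ssd1-ext4 --path /ssd1/vms --content images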
     
    #3
    Last edited: Jul 11, 2018 at 5:50 AM
  4. BackupProphet (Active Member)

  5. dkong (New Member)

    Thanks for the interesting read. Most of the material in the second link I had already looked at. The IO feature is exposed in Proxmox as the "IO Thread" checkbox, which should increase performance, but only when multiple machines make use of the same disk. All of my tests are done on a clean machine with only one VM, so this should not have any performance impact. In Proxmox the IO Thread feature also breaks backups, so it's not a practical measure to implement.
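
    For anyone who wants to test it anyway, it is just a flag on the disk (VMID and storage ID are examples as before):

    # enable a dedicated I/O thread for the virtio disk
    qm set 100 --virtio0 ssd1-dir:100/vm-100-disk-1.qcow2,cache=writeback,iothread=1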

    Reading through the rest of the info, I think my setup was already pretty good. I edited my earlier reply because maybe I didn't make it perfectly clear: moving to EXT4 solved all my performance problems with the VMs, so ZFS on Linux is the culprit here.
     
    #5
  6. BackupProphet (Active Member)

    Yeah, that is my issue too. ZFS on Linux is not performant :(
     
    #6
  7. Sapphiron (New Member)

    I have older and almost identical hardware. At the end of the day, I got the best performance with LVM, using a logical volume for each VM.
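
    Roughly, that amounts to handing Proxmox a volume group and letting it create one LV per VM disk (the VG name and device are placeholders):

    # dedicate the VM SSD to an LVM volume group and add it as LVM storage
    pvcreate /dev/sdX                   # /dev/sdX is a placeholder for the VM SSD
    vgcreate ssd1-vg /dev/sdX
    pvesm add lvm ssd1-lvm --vgname ssd1-vg --content images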

    ZFS on Linux still has some way to go.

    One idea I have been playing with is running FreeNAS on the host itself and passing the physical disks through to it. I have done the same before to build a nested Proxmox cluster on one box, to test the ZFS replication/HA features. I used the following guide: Physical disk to kvm - Proxmox VE

    ZFS performance on the disks that I had passed through was the same as ZFS performance directly on the host.
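
    The guide boils down to attaching the raw block devices to the guest by ID, something like this (the VMID and disk IDs are placeholders):

    # pass the whole physical disks to the FreeNAS guest (VMID 200, disk serials are placeholders)
    qm set 200 -scsi1 /dev/disk/by-id/ata-DISK1_SERIAL
    qm set 200 -scsi2 /dev/disk/by-id/ata-DISK2_SERIAL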

    With FreeNAS, I would then export an iSCSI target to the Proxmox host for VM storage with qcow2 files.

    It's a Frankenstein option and I have no idea how well it will work, but it only requires one physical box. The performance of the KVM network stack is what I am most uncertain about. It's 10 gig, but I don't know how well it will do under heavy IOPS.
     
    #7
  8. Klee (Well-Known Member)

    Just for more info to compare: Proxmox ZFS RAID0 with 12 x 250 GB SATA hard drives.

    root@pve:~# dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
    1024+0 records in
    1024+0 records out
    402653184 bytes (403 MB, 384 MiB) copied, 0.295494 s, 1.4 GB/s
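
    Keep in mind dd from /dev/zero is an async, highly compressible test; a sync-write fio run along these lines would be closer to what the Windows VM generates (fio is in the standard repos, and the file path just mirrors the dd example):

    # 4k random writes with an fsync after every write
    fio --name=syncwrite --filename=/tmp/fio.test --size=1G --rw=randwrite \
        --bs=4k --ioengine=psync --fsync=1 --runtime=30 --time_based
    rm -f /tmp/fio.test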
     
    #8