Proxmox and ZFS on host node

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,343
1,797
113
CA
We do, and have 0 issues.

ZFS will use a lot of CPU for high-IO workloads. It doesn't matter if it's a Proxmox host, FreeNAS/TrueNAS, or napp-it: if you benchmark IO and/or run a lot of VMs with demanding IO, then CPU usage goes up a lot. In some benchmarks I've seen 80% CPU from dual E5 v3 in the past with pools of NVMe drives.
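If you want to quantify that CPU cost yourself, a common approach is to run fio against a scratch zvol while watching the host. A minimal sketch, assuming fio is installed and using a throwaway zvol `tank/bench` (the pool and dataset names are placeholders, not from this thread):

```shell
# Create a disposable 10G zvol for benchmarking (destroy it afterwards).
zfs create -V 10G tank/bench

# Random 4k writes at high queue depth -- the kind of IO that is
# CPU-hungry on ZFS (checksumming, compression, the write pipeline).
fio --name=zvol-bench --filename=/dev/zvol/tank/bench \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based \
    --group_reporting

# In a second shell, see what the host itself spends: z_wr_* kernel
# threads are the ZFS write-pipeline workers.
top -b -n 1 | grep -E 'z_wr|zvol'

# Clean up.
zfs destroy tank/bench
```

Running the same fio job inside a guest versus directly on the host is a quick way to separate QEMU overhead from ZFS overhead.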

I can't see your screenshot on the Proxmox forum. Can you link to something that isn't behind a password?
 
Last edited:

tjk

Active Member
Mar 3, 2013
406
140
43
www.servercentral.com
Yeah, that is basically the problem we are seeing: zvols plus high host-node CPU with "heavy"-IO VMs.

I was hoping it was something in my setup/config/etc., but it doesn't sound like it.
 

Attachments

Last edited:

gb00s

Well-Known Member
Jul 25, 2018
775
306
63
Poland
The first question would be "what's running inside the VMs?" before blaming Proxmox, zvols, or whatever else.

A screenshot alone won't say anything about the workload or how your CPU is being treated by the VMs.
 

tjk

Active Member
Mar 3, 2013
406
140
43
www.servercentral.com
Heavy write workloads. It doesn't really matter what the IO is; the point is that even moderate IO against zvols drives up the CPU on the host node(s).
 

gb00s

Well-Known Member
Jul 25, 2018
775
306
63
Poland
Without any config info, my guess is that with your "excessive" disk-IO workload you are already eating into the CPU resources reserved for the host, not just the VM. Then the VM (again, without knowing its full config) is also blocking the host from handling the IO, so the system gets bogged down and CPU usage goes crazy. You must have terrible IO delays; check what your IO delay is in the Proxmox dashboard. On top of that there could be config issues like ballooning or wrong CPU reservations... so many possible causes.

EDIT: what is set for IO threading, disk controllers, etc.?
EDIT: it would also be interesting to see what kind of "write jobs" you are running
EDIT: the CPU usage per zvol is also already suspicious
EDIT: and the write-IOPS figures for these "enterprise" SSDs are not that impressive for such a torture test either
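For the checks listed above, these are the usual commands. A sketch assuming the VM has ID 100 and its disk is the zvol `tank/vm-100-disk-0` (hypothetical names, not from the thread):

```shell
# The Proxmox dashboard's "IO delay" is derived from iowait; the kernel's
# pressure-stall info gives the same picture directly.
cat /proc/pressure/io

# Per-vdev latency breakdown for one 5-second sample.
zpool iostat -vly 5 1

# zvol properties that matter for VM workloads: a volblocksize mismatched
# to the guest filesystem, or sync=always, can multiply the CPU cost.
zfs get volblocksize,sync,compression,logbias tank/vm-100-disk-0

# Give the disk its own IO thread so guest IO doesn't serialize on the
# single QEMU main loop (VirtIO SCSI single controller + iothread=1).
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-0,iothread=1
```

These need to run on the Proxmox node itself; the `qm set` changes take effect after the guest is powered off and back on.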
 
Last edited:
  • Like
Reactions: tjk and T_Minus

oneplane

Active Member
Jul 23, 2021
301
162
43
The way I read this, the zvol overhead is about 10% of the heavy guest usage. That's not so strange (depending on the options you enable and the acceleration your CPU offers).
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
691
366
63
48
r00t.dk
I am using Proxmox with ZFS zvols and have zero issues, but I have "normal" VMs running.
I think it's expected that an AIO (all-in-one) build will not give you the same performance as disks on a SAN with compute running on something else; what you gain is simplicity. So perhaps you need to rethink and move your IO-heavy VMs to storage that lives off the Proxmox boxes, with their disks served via iSCSI or similar. That way the IO should be offloaded, at least in part, to the storage server.
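If you go that route, attaching an external LUN to a Proxmox node is a one-liner per target. A sketch with placeholder portal and IQN values (not from this thread):

```shell
# Register an iSCSI target as a storage backend on the Proxmox node
# (the portal IP and IQN below are made-up examples).
pvesm add iscsi san1 --portal 192.168.10.50 \
    --target iqn.2005-10.org.example:storage.lun1

# Verify the new storage shows up and is active.
pvesm status
```

From there you can put LVM on top of the LUN if you want multiple VMs sharing it, at the cost of losing ZFS features like snapshots on that storage.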

Of course it could also be that your ZFS settings are not optimal for what you intend to use it for, or you've simply provisioned too many IO-heavy VMs on the host.