Proxmox 6.4-13 + zfs + oom


elesjuan
New Member · Joined Sep 24, 2021
tl;dr - I'm running Proxmox 6.4-13 on a system with 2x E5 Xeon procs, 128GB ECC RAM, and a spinning-rust array of approximately 50TB. My issue: Proxmox continuously kills two of my containers for OOM conditions, and I can't for the life of me figure out how to solve it. Oddly, the containers almost seem to be left alone when they're running but not actively accessing the storage array, though that may just be conjecture. Hopefully I've listed everything relevant to my setup to give someone with more experience some clues about what I'm doing wrong.

Sep 19 21:19:31 pve2 kernel: [194511.934109] Memory cgroup out of memory: Killed process 27570 (Plex Media Serv) total-vm:3298868kB, anon-rss:1612464kB, file-rss:0kB, shmem-rss:4kB, UID:108 pgtables:3420kB oom_score_adj:0
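One detail worth noting in that log line: "Memory cgroup out of memory" means the kill was triggered by the *container's* own cgroup memory limit, not by the host running out of RAM. A quick sanity check is to pull the killed process's resident set size out of the line and compare it against the container's configured memory limit. A minimal sketch, parsing the exact line above:

```shell
# The kernel OOM line quoted above; "Memory cgroup out of memory"
# means the container's cgroup limit was hit, not host RAM.
LINE='Memory cgroup out of memory: Killed process 27570 (Plex Media Serv) total-vm:3298868kB, anon-rss:1612464kB, file-rss:0kB, shmem-rss:4kB, UID:108 pgtables:3420kB oom_score_adj:0'

# Pull the anonymous RSS (in kB) out of the line and show it in MiB,
# to compare against the CT's memory setting in the Proxmox GUI.
ANON_KB=$(echo "$LINE" | grep -oP 'anon-rss:\K[0-9]+')
echo "anon-rss: $((ANON_KB / 1024)) MiB"
```

Here Plex alone was holding ~1.5 GiB resident, so any CT limit near that value would explain the kills regardless of free host memory.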



I'm not really sure where to start this thesis on my vexing issues of the past six months, so I guess we'll start where my problems began. My homelab started as a desktop PC running Windows and a small array of spinning rust with DrivePool. After the host disk died, I learned about Proxmox 4.5, so I bought a chassis, parts, and drives and threw together a new machine. Unfortunately I wasn't aware of ZFS at that point, so I continued the madness with a Windows VM and DrivePool. After losing a disk, I learned about ZFS and decided I wanted to offload storage onto a dedicated system and run VMs/CTs on the original chassis.

After buying a new house where I'm responsible for utilities (my rental had utilities baked into the rent), I've become a lot more energy conscious than I previously was, so naturally a 1200-watt load running 24/7 isn't ideal at $0.1159/kWh. Given the historical loads of both systems, it should have been a no-brainer to move all of my VMs/CTs over to the storage system. This is where the problems started. The storage system originally had only 16GB of RAM, but the majority of my virtual environments collectively consume only about 10GB, so I didn't think this would be an issue. It quickly became clear that I was very wrong.

After the migration, everything seemed fine until I noticed my Home Assistant VM was the target of OOM termination. After a few frustrating days and a little more research, I decided to pull the trigger and pump the RAM on this system from 16GB up to 128GB. About a day after a slightly frustrating RAM upgrade, I observed memory usage of my fresh 128GB near 100% once again, and now my Plex server and a Python script scheduler container were the targets of OOM terminations. I did a bit of research on Google and found an article on tuning ZFS settings on Proxmox, so I decided to give that a shot with the following settings:

options zfs zfs_arc_min=8589934592
options zfs zfs_arc_max=53687091200
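For anyone following along: those byte values correspond to an 8 GiB ARC floor and a 50 GiB ceiling. A small sketch deriving them, so the magic numbers are auditable (on a real node the stanza goes into /etc/modprobe.d/zfs.conf, followed by `update-initramfs -u` and a reboot; for a live change the same values can be echoed into /sys/module/zfs/parameters/):

```shell
# Derive the ARC limits from GiB so the byte values are auditable:
# 8 GiB minimum, 50 GiB maximum, matching the options above.
ARC_MIN=$((8 * 1024 * 1024 * 1024))
ARC_MAX=$((50 * 1024 * 1024 * 1024))

# Emit the modprobe stanza; on a Proxmox node this would be written
# to /etc/modprobe.d/zfs.conf and activated with `update-initramfs -u`.
printf 'options zfs zfs_arc_min=%s\noptions zfs zfs_arc_max=%s\n' "$ARC_MIN" "$ARC_MAX"
```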

Unfortunately, the only thing this accomplished was bringing maximum RAM consumption down from 100% to about 55% of total, and the containers continue to be terminated for OOM. From here I'm at an absolute loss and would really love some pointers on how to resolve this, especially without spending a ton of money.


Here's the node setup:

Supermicro 24 bay chassis
2x e5 2620v2
128gb ecc ram
Root: 2x 2tb SAS 7200 disks, mirrored, zfs, lz4
Storage: 6 pairs of disks in mirror vdevs, lz4; 2x12tb SATA 5400, 3x(8tb SATA 5400), 2x(8tb SAS 7200)
Used default config for most everything, except adjusting the arc min/max for troubleshooting.

root@pve2:~# free -h    # the "buff/cache" grows
              total        used        free      shared  buff/cache   available
Mem:          125Gi        68Gi        53Gi       558Mi       3.6Gi        55Gi
Swap:            0B          0B          0B

root@pve2:~# zpool status
pool: rpool
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub repaired 0B in 13:17:39 with 0 errors on Sun Sep 12 13:41:42 2021
config:

NAME                              STATE     READ WRITE CKSUM
rpool                             ONLINE       0     0     0
  mirror-0                        ONLINE       0     0     0
    scsi-35000cca01b77db50-part3  ONLINE       0     0     0
    scsi-35000cca01cb78cb4-part3  ONLINE       0     0     0

errors: No known data errors

pool: storage
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub repaired 0B in 1 days 01:37:17 with 0 errors on Mon Sep 13 02:01:24 2021
config:

NAME                                   STATE     READ WRITE CKSUM
storage                                ONLINE       0     0     0
  mirror-0                             ONLINE       0     0     0
    ata-WDC_WD80EMAZ-00WJTA0_7HKRZVRJ  ONLINE       0     0     0
    ata-WDC_WD80EMAZ-00WJTA0_7HKS4R5J  ONLINE       0     0     0
  mirror-1                             ONLINE       0     0     0
    ata-WDC_WD80EMAZ-00WJTA0_1SJKAEMZ  ONLINE       0     0     0
    ata-WDC_WD80EMAZ-00WJTA0_1DGEZHWZ  ONLINE       0     0     0
  mirror-2                             ONLINE       0     0     0
    ata-WDC_WD80EMAZ-00WJTA0_1SJ7VDSZ  ONLINE       0     0     0
    ata-WDC_WD80EMAZ-00WJTA0_1SGZNRRZ  ONLINE       0     0     0
  mirror-3                             ONLINE       0     0     0
    ata-WDC_WD120EMFZ-11A6JA0_9JGP9JAT ONLINE       0     0     0
    ata-WDC_WD120EMFZ-11A6JA0_9JG498KT ONLINE       0     0     0
  mirror-4                             ONLINE       0     0     0
    scsi-35000cca2523fd390             ONLINE       0     0     0
    scsi-35000cca25221fea4             ONLINE       0     0     0
  mirror-5                             ONLINE       0     0     0
    scsi-35000cca252244af8             ONLINE       0     0     0
    scsi-35000cca25221f850             ONLINE       0     0     0

errors: No known data errors

root@pve2:~# zpool iostat
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool        589G  1.24T      3    170  48.3K  8.34M
storage     41.3T  5.89T      5     16   635K  1.75M
----------  -----  -----  -----  -----  -----  -----
 

RTM

Well-Known Member
Jan 26, 2014
956
359
63
As far as I can tell from the documentation, you can configure how much memory you want to make available to the container.
Perhaps you need to set that lower (if there is no limit) or higher (if it is set really low, like 512MB).

Also, in case there is a memory leak somewhere in the code, you may want to ensure that the software inside your containers has been updated. The same goes for Proxmox, where you may want to upgrade to version 7.
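For reference, a minimal sketch of what that looks like from the CLI, assuming a hypothetical CT ID of 101 (the `pct` commands only run on a live PVE node, so they are shown as comments here):

```shell
# On the PVE host (101 is a hypothetical CT ID -- substitute your own):
#   pct set 101 --memory 4096 --swap 512
#   pct config 101 | grep -E '^(memory|swap):'
# 'memory' is the container's hard cgroup limit in MiB; the kernel
# enforces memory * 1 MiB bytes, e.g. for 4096 MiB:
LIMIT_MIB=4096
LIMIT_BYTES=$((LIMIT_MIB * 1024 * 1024))
echo "$LIMIT_BYTES"   # 4294967296
```

Anything the container's processes allocate beyond that limit triggers the cgroup OOM killer, regardless of how much free RAM the host has.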
 

elesjuan
New Member · Joined Sep 24, 2021
Pardon my French, but you've got to be shitting me... I pumped the memory on the two affected containers up WAAAAY beyond what they should really need. We're currently 18 hours and some change in, and neither container has been OOM'd since.

Thinking back: before upgrading the system RAM, I went through and "audited" a bunch of RAM away from other containers to dedicate to the Home Assistant VM, which stopped it from getting OOM'd, so this actually makes perfect sense to me now.

Greatly appreciate your assistance identifying my oversight!
 
• Like · Reactions: RTM

Terry Wallace
PsyOps SysOp · Joined Aug 13, 2018 · Central Time Zone
FYI, containers get a lot more memory pressure from the OS, and when set low they tend to get OOM'd a lot more often. I have multiple boxes with 128GB; the difference is I run only VMs, not CTs, because of some of the CT issues I've run into. I can set a VM to balloon or static memory and have never had one OOM'd in the last 3 years.
 
• Like · Reactions: elesjuan