Poolsman - ZFS Web GUI for Linux based on Cockpit


mietzen

New Member
Dec 25, 2023
23
13
3
EAP gives you an opportunity to start using Poolsman before its general availability, which is expected later in 2025 with the 1.0 release. EAP grants access to all existing and new features for one year. With the final release (version 1.0) we may rethink our licensing model, but all EAP licenses will remain active until expiration. Additionally, we will provide special conditions for migrating to a new license for all early adopters who joined the EAP.
If you're rethinking the licensing model, have you thought about a perpetual fallback license? Also, what does this mean exactly: that license prices might double after the EAP phase?
 

MikeO3

New Member
Jun 23, 2024
1
0
1
I sent an email to support but haven't heard back, so I thought I would also post here to see if others have had a similar issue.
Details of the versions are in the screenshot.

Every time I refresh the Dashboard or Disks panels, the following error is displayed and cannot be dismissed. Going into various panels, I can use the functions and configure existing pools or change options on pools.
An unhandled error has occurred

Try to reload the page. If the error persists please contact developers.
I cannot create or manipulate any disk-related tasks.

Don't know if this is related to the Disks code, but there is a block device which is refusing to mount on my system. Perhaps Poolsman is getting hosed because the mount is not successful?
7:47 AM
mmcblk0p3: Process '/usr/bin/unshare -m /usr/bin/snap auto-import --mount=/dev/mmcblk0p3' failed with exit code 1.


The browser debug console shows hundreds of error entries like the following:
Unload event listeners are deprecated and will be removed. (cockpit.js:5)
Thanks and let me know what other information I can provide if anyone is interested.

Edit: Forgot to add... Tested on Vivaldi, Edge, and Android (Vivaldi & Chrome), in private mode on all. All produce the same error.
 



PoolsmanTeam

Member
Sep 12, 2022
47
39
18
poolsman.com
Hi @MikeO3,

we just responded to your email and found your post here after that. It seems that there's some issue with getting information about your disks. Please send us your logs for analysis:

1. Enable `Trace` Log Level on the `Settings` tab.
2. Open the `Console` tab in `Dev Tools` of your Edge Browser.
3. Go to the `Disks` page.
4. Save the content of the `Console` tab to a file using the `Save as...` action in the right-click menu.
5. Send it to us.
 

mietzen

New Member
Dec 25, 2023
23
13
3
Hello @scyto,
Unfortunately we don't support LXC containers. A quick Google search suggests it's not possible, but we are not experts in LXC. If you find a way to make ZFS and Cockpit work inside one, Poolsman should work fine too.
I got it working inside a Debian 12 LXC, here are the steps:
  1. Create a new privileged container
  2. Append this to your container config at /etc/pve/lxc/<YOUR-CONTAINER-ID>.conf:
    Code:
    lxc.cgroup2.devices.allow: c 10:249 rwm
    lxc.mount.entry: /dev/zfs dev/zfs none bind,optional,create=file
    lxc.mount.entry: /proc/spl proc/spl none bind,optional,create=dir
    lxc.mount.entry: /sys/module/zfs sys/module/zfs none bind,optional,create=dir
    lxc.apparmor.profile: unconfined
    lxc.mount.auto: proc:rw sys:rw cgroup:rw
    lxc.mount.entry: /sys/fs/cgroup sys/fs/cgroup none bind,optional,create=dir
  3. Run the following commands:
    Bash:
    apt-get update
    
    # Create a policy that prevents ZFS services from starting during installation; they would block the install
    cat > /usr/sbin/policy-rc.d << 'EOF'
    #!/bin/sh
    echo "Service start prevented by policy-rc.d"
    exit 101
    EOF
    chmod +x /usr/sbin/policy-rc.d
    
    # Now install from pve repo and pin - services won't start
    
    # Add Proxmox no-subscription repo
    echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
    wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
    chmod a+r /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
    
    # Create pinning file
    cat <<EOF > /etc/apt/preferences.d/zfsutils-from-pve
    Package: zfsutils-linux
    Pin: release o=Proxmox
    Pin-Priority: 1001
    
    Package: libnvpair3linux
    Pin: release o=Proxmox
    Pin-Priority: 1001
    
    Package: libuutil3linux
    Pin: release o=Proxmox
    Pin-Priority: 1001
    
    Package: libzfs4linux
    Pin: release o=Proxmox
    Pin-Priority: 1001
    
    Package: libzpool5linux
    Pin: release o=Proxmox
    Pin-Priority: 1001
    
    Package: *
    Pin: release o=Proxmox
    Pin-Priority: -1
    EOF
    
    # Update
    apt-get update
    
    # Install zfsutils-linux from Proxmox
    apt-get install -y zfsutils-linux
    
    # Remove the policy
    rm /usr/sbin/policy-rc.d
    
    # Mask all ZFS services since they won't work in the container
    systemctl mask zfs-import-cache.service
    systemctl mask zfs-import-scan.service
    systemctl mask zfs-load-module.service
    systemctl mask zfs-mount.service
    systemctl mask zfs-share.service
    systemctl mask zfs-volume-wait.service
    systemctl mask zfs.target
    
    # VERSION_CODENAME comes from /etc/os-release; Cockpit below is installed from backports
    . /etc/os-release
    echo "deb http://deb.debian.org/debian ${VERSION_CODENAME}-backports main" > /etc/apt/sources.list.d/backports.list
    apt-get update
    apt-get install -t ${VERSION_CODENAME}-backports cockpit-ws cockpit-bridge cockpit-system -y
    echo "deb [arch=all] https://download.poolsman.com/repository/debian/ stable main" > /etc/apt/sources.list.d/poolsman.list
    wget -qO- https://download.poolsman.com/keys/poolsman.gpg >/etc/apt/trusted.gpg.d/poolsman.gpg
    
    apt-get update
    apt-get -y install poolsman
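If the install goes through, a quick way to confirm that the bind mounts from step 2 actually landed is to check that the expected paths are visible inside the container (a minimal sketch; the paths follow the LXC config above):

```shell
check_zfs_paths() {
    # These paths should exist inside the container if the bind mounts
    # from the LXC config above worked and the host ZFS module is loaded.
    for p in /dev/zfs /proc/spl/kstat/zfs/arcstats /sys/module/zfs; do
        if [ -e "$p" ]; then
            echo "ok:      $p"
        else
            echo "missing: $p"
        fi
    done
}

check_zfs_paths
```

If any line reports `missing`, re-check the `lxc.mount.entry` lines and make sure the ZFS module is loaded on the Proxmox host.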

Edit: I updated the script; it's now working using the Proxmox version of zfsutils-linux :)

Everything is working except the ARC Summary, as far as I can see. @PoolsmanTeam, which tool is used to get this data?
arcstats is present and readable inside the container:


Bash:
root@NAS:~# cat /proc/spl/kstat/zfs/arcstats
10 1 0x01 147 39984 8127798513 271814845226465
name                            type data
hits                            4    7411068
iohits                          4    2434
misses                          4    13208
demand_data_hits                4    1358510
demand_data_iohits              4    70
demand_data_misses              4    7371
demand_metadata_hits            4    6039467
demand_metadata_iohits          4    188
demand_metadata_misses          4    2279
prefetch_data_hits              4    905
prefetch_data_iohits            4    0
prefetch_data_misses            4    2607
prefetch_metadata_hits          4    12186
prefetch_metadata_iohits        4    2176
prefetch_metadata_misses        4    951
mru_hits                        4    1239688
mru_ghost_hits                  4    0
mfu_hits                        4    6171380
mfu_ghost_hits                  4    0
uncached_hits                   4    0
deleted                         4    33
mutex_miss                      4    0
access_skip                     4    2
evict_skip                      4    1014
evict_not_enough                4    0
evict_l2_cached                 4    0
evict_l2_eligible               4    588800
evict_l2_eligible_mfu           4    0
evict_l2_eligible_mru           4    588800
evict_l2_ineligible             4    8192
evict_l2_skip                   4    0
hash_elements                   4    18366
hash_elements_max               4    18370
hash_collisions                 4    1416
hash_chains                     4    18
hash_chain_max                  4    2
meta                            4    1073741824
pd                              4    2147483648
pm                              4    2147483648
c                               4    2085887872
c_min                           4    2085887872
c_max                           4    3296722944
size                            4    771555736
compressed_size                 4    634851840
uncompressed_size               4    1202035712
overhead_size                   4    95218688
hdr_size                        4    4500992
data_size                       4    674459648
metadata_size                   4    55610880
dbuf_size                       4    6835680
dnode_size                      4    18929208
bonus_size                      4    4927360
anon_size                       4    201216
anon_data                       4    201216
anon_metadata                   4    0
anon_evictable_data             4    0
anon_evictable_metadata         4    0
mru_size                        4    366635520
mru_data                        4    324852736
mru_metadata                    4    41782784
mru_evictable_data              4    307963392
mru_evictable_metadata          4    4667904
mru_ghost_size                  4    0
mru_ghost_data                  4    0
mru_ghost_metadata              4    0
mru_ghost_evictable_data        4    0
mru_ghost_evictable_metadata    4    0
mfu_size                        4    363233792
mfu_data                        4    349405696
mfu_metadata                    4    13828096
mfu_evictable_data              4    272200192
mfu_evictable_metadata          4    2320384
mfu_ghost_size                  4    0
mfu_ghost_data                  4    0
mfu_ghost_metadata              4    0
mfu_ghost_evictable_data        4    0
mfu_ghost_evictable_metadata    4    0
uncached_size                   4    0
uncached_data                   4    0
uncached_metadata               4    0
uncached_evictable_data         4    0
uncached_evictable_metadata     4    0
l2_hits                         4    0
l2_misses                       4    0
l2_prefetch_asize               4    0
l2_mru_asize                    4    0
l2_mfu_asize                    4    0
l2_bufc_data_asize              4    0
l2_bufc_metadata_asize          4    0
l2_feeds                        4    0
l2_rw_clash                     4    0
l2_read_bytes                   4    0
l2_write_bytes                  4    0
l2_writes_sent                  4    0
l2_writes_done                  4    0
l2_writes_error                 4    0
l2_writes_lock_retry            4    0
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_evict_l1cached               4    0
l2_free_on_write                4    0
l2_abort_lowmem                 4    0
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    0
l2_asize                        4    0
l2_hdr_size                     4    0
l2_log_blk_writes               4    0
l2_log_blk_avg_asize            4    0
l2_log_blk_asize                4    0
l2_log_blk_count                4    0
l2_data_to_meta_ratio           4    0
l2_rebuild_success              4    0
l2_rebuild_unsupported          4    0
l2_rebuild_io_errors            4    0
l2_rebuild_dh_errors            4    0
l2_rebuild_cksum_lb_errors      4    0
l2_rebuild_lowmem               4    0
l2_rebuild_size                 4    0
l2_rebuild_asize                4    0
l2_rebuild_bufs                 4    0
l2_rebuild_bufs_precached       4    0
l2_rebuild_log_blks             4    0
memory_throttle_count           4    0
memory_direct_count             4    0
memory_indirect_count           4    0
memory_all_bytes                4    66748411904
memory_free_bytes               4    63899893760
memory_available_bytes          3    61566003328
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    90804120
arc_dnode_limit                 4    329672294
async_upgrade_sync              4    80
predictive_prefetch             4    18808
demand_hit_predictive_prefetch  4    4312
demand_iohit_predictive_prefetch 4    166
prescient_prefetch              4    17
demand_hit_prescient_prefetch   4    12
demand_iohit_prescient_prefetch 4    5
arc_need_free                   4    0
arc_sys_free                    4    2333890432
arc_raw_size                    4    0
cached_only_in_progress         4    0
abd_chunk_waste_size            4    6291968
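For what it's worth, the headline hit ratio that the ARC Summary reports can be derived directly from this file; a minimal awk sketch (fed a sample from the output above; in the container you would pipe in the live file instead, e.g. `arc_hit_ratio < /proc/spl/kstat/zfs/arcstats`):

```shell
arc_hit_ratio() {
    # Reads arcstats-formatted text on stdin and prints the ARC hit
    # ratio in percent: 100 * hits / (hits + misses).
    awk '$1 == "hits"   { h = $3 }
         $1 == "misses" { m = $3 }
         END { if (h + m > 0) printf "%.2f\n", 100 * h / (h + m) }'
}

arc_hit_ratio <<'EOF'
hits                            4    7411068
misses                          4    13208
EOF
```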

1752689719732.png

1752687588405.png

1752687649490.png
 

PoolsmanTeam

Member
Sep 12, 2022
47
39
18
poolsman.com
Everything is working except the ARC Summary, as far as I can see. @PoolsmanTeam, which tool is used to get this data?
It's the `arc_summary` command. Btw, if you enable `Trace` Log Level you'll be able to see all commands executed by Poolsman in your Browser's Debug Console (after enabling `Verbose` Log Level in Browser's Console).

Screenshot from 2025-07-23 03-55-02.png
 

mietzen

New Member
Dec 25, 2023
23
13
3
It's the `arc_summary` command. Btw, if you enable `Trace` Log Level you'll be able to see all commands executed by Poolsman in your Browser's Debug Console (after enabling `Verbose` Log Level in Browser's Console).

View attachment 44704

It's a bug in zfsutils-linux (2.1.11-1+deb12u1), probably in combination with the Proxmox kernel. The bug was also present in Ubuntu with the HWE kernel: Comment #12 : Bug #1980848 : Bugs : zfs-linux package : Ubuntu

Upgrading to the backports version fixes the issue, but it might be an even better idea to just add the PVE repo and install `zfsutils-linux` from there... I'll try it tomorrow.

EDIT: Works like a charm using the pve repo with pinning :cool:

1753297579808.png

I updated the script above.
 

Beardmann

New Member
Jan 24, 2023
10
0
1
Just a little issue with the GUI... I have a server running with 3 NetApp DS212C shelves attached: one to a specific SAS port, and the other two daisy-chained to another SAS port. I am not sure if this is the cause, but when I list the disks in the GUI, the system seems to think some of the disks are part of two pools.

Screenshot 2025-08-28 at 16.31.44.jpg

I have two pools (aggr0 and TAPE01). aggr0 consists of 24 SAS disks (from the first two shelves), while TAPE01 consists of 12 SAS disks from the last, daisy-chained shelf.
The aggr0 disks are 8 TB (7.3) and the TAPE01 disks are 6 TB (5.5). (Not all disks are shown in the list.)
If you are not familiar with the DS212C shelves, they are just 2U shelves with space for 12 x 3.5" disks, and each has two IOM12 link modules with SAS3 connectivity via MiniSAS HD cables. In this setup I only use one IOM12 module, so no multipathing...
Anyway, Poolsman is the latest version, 0.9.2-1.

Sorry for mixing things up... but since I now have to migrate some volumes from one pool to the other, a nice feature would be an option at the volume level to "migrate" or "mirror" to another pool, local or remote ;-) Maybe make it a bit advanced, so that it syncs up and resyncs every hour; then you could issue a "cut-over" which unmounts the source, does a last update, and remounts the destination under the name of the old source ;-) That would really help me in this case. Maybe I'm just used to NetApp's world, where you issue a "volume move..." and it more or less takes care of itself ;-)

Anyway, that's all from here... keep up the great work!

PS: I'm sorry if I was supposed to create a support case regarding the first issue?

/B
 

Beardmann

New Member
Jan 24, 2023
10
0
1
Hi... I think I found a bug in Poolsman version 0.9.2-1.
I created a new pool on Ubuntu 24.04 and wondered why I was unable to change the Record Size to a value larger than 1M... so I did it via the command line, which works OK... but when I then try to use "Edit" on the pool again, it fails with an "An unhandled error has occurred" red banner at the bottom...
Don't think it matters, but this was a draid3:16d:1s:24c pool...
My guess is that this could happen with other options expanded in newer ZFS releases.
The 1M limit came from the ZFS version Ubuntu 22.04 shipped (can't remember the exact version); it was later raised from 1M to 16M. 24.04 uses ZFS 2.2.2 (I think 22.04 uses 2.1.x).

(and yes we need the 16M as we mainly use large files, so it should be more efficient)
 

mietzen

New Member
Dec 25, 2023
23
13
3
Could you add Origin and Label to your apt repo? This would make it easier to use unattended-upgrades to upgrade poolsman.
 

PoolsmanTeam

Member
Sep 12, 2022
47
39
18
poolsman.com
Hi @Beardmann ,

sorry for the late response, we only just saw your message. And thanks for your great feedback!

I am not sure if this is the issue, but it seems that when I list the disks in the GUI, the system thinks some of the disks are a part of two pools?
Could you please send us the full trace logs from the Debug Console of your web browser? To do this, the `Trace` Log Level in Poolsman and the `Verbose` Log Level in your browser should be enabled (please see our message and screenshot above). If these logs contain private info, please send them using our `Contact Us` form. And if you have any difficulties getting the logs, please write to us and we will help.

Sorry for mixing things up... but since I now have to migrate some volumes from one pool to the other, a nice feature would be an option at the volume level to "migrate" or "mirror" to another pool, local or remote ;-) Maybe make it a bit advanced, so that it syncs up and resyncs every hour; then you could issue a "cut-over" which unmounts the source, does a last update, and remounts the destination under the name of the old source ;-) That would really help me in this case. Maybe I'm just used to NetApp's world, where you issue a "volume move..." and it more or less takes care of itself ;-)
Actually, our `Send Snapshot` feature should help you with this. You could create a snapshot on the source, then send it to the target. When you finally decide to migrate, you create a final snapshot on the source and send an incremental update to the target. As we understand it, you'd like to see these steps as a single action. We've added this feature request to the backlog, but at the moment we have some higher-priority tasks and can't say when we will be able to implement it.
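For reference, that flow can be sketched with plain `zfs` commands. The dataset names below are hypothetical (borrowed from the pools mentioned earlier in the thread), and the `run` helper only echoes each command, so this is a dry run:

```shell
# Dry-run sketch of a snapshot-based volume migration between pools.
# Dataset names are hypothetical examples; adjust before use.
SRC=aggr0/vol1
DST=TAPE01/vol1

run() { echo "+ $*"; }    # dry run; change the body to: eval "$@" to execute

# 1. Initial full copy while the source stays in use
run "zfs snapshot ${SRC}@migrate-1"
run "zfs send ${SRC}@migrate-1 | zfs recv ${DST}"

# 2. Cut-over: stop writers, take a final snapshot, send only the delta
run "zfs snapshot ${SRC}@migrate-2"
run "zfs send -i @migrate-1 ${SRC}@migrate-2 | zfs recv ${DST}"

# 3. Remount the target under the old path (example mountpoint)
run "zfs set mountpoint=none ${SRC}"
run "zfs set mountpoint=/mnt/vol1 ${DST}"
```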

I created a new pool on Ubuntu 24.04 and wondered why I was unable to change the Record Size to a value larger than 1M...
The 1M limit came from the ZFS version Ubuntu 22.04 shipped (can't remember the exact version); it was later raised from 1M to 16M. 24.04 uses ZFS 2.2.2 (I think 22.04 uses 2.1.x).
Thanks for noticing that. Yes, the Record Size limit was raised to 16M in a newer ZFS version. We will add support for this in the next minor patch (in approximately one week).
 

PoolsmanTeam

Member
Sep 12, 2022
47
39
18
poolsman.com
Hi @mietzen ,

Could you add Origin and Label to your apt repo? This would make it easier to use unattended-upgrades to upgrade poolsman.
Unfortunately these tags are not supported by our repository. However, according to the comment inside the `50unattended-upgrades` configuration file, you should be able to configure updates without these tags using the `site` keyword ("Unattended-Upgrade::Origins-Pattern for repository without Origin, label etc"). Could you please try this solution and tell us whether it helped?
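Something like the following in `/etc/apt/apt.conf.d/50unattended-upgrades` should do it (the hostname is taken from the repo line posted earlier in this thread; we haven't tested this ourselves):

```
// Match the Poolsman repository by hostname, since it publishes no Origin/Label
Unattended-Upgrade::Origins-Pattern {
        "site=download.poolsman.com";
};
```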
 

theansweris42

New Member
Oct 25, 2025
3
2
3
Hello Poolsman Team,

Thank you for the great work so far!
We happened upon your Cockpit development via GitHub while searching for an alternative to the Proxmox TrueNAS plugin and switching to Linux LIO.
First, we purchased an EAP key for a production ZFS pool on Ubuntu Server 24.04 LTS.
We're currently using a trial key on a Debian Trixie box, but I think the trial key will be replaced with a real key in the next few days.
For the TPM under Windows 11, we're currently forced to temporarily use an SMB share, which we also manage via your Samba plugin. This also works very well, so we would very much appreciate your continued interest in developing the Samba plugin for Cockpit.

Thanks again for your great work!
 

PoolsmanTeam

Member
Sep 12, 2022
47
39
18
poolsman.com
Does this support Proxmox 9 yet? or should I wait to upgrade
Yes, it supports Proxmox 9; you can safely upgrade. The only thing is that the Cockpit UI got a new design in the latest Debian/Proxmox. We are working on porting Poolsman to the new design now, and it should become available soon.
 

Steve T.

New Member
Jan 21, 2026
1
0
1
Has Poolsman taken their server down? It was up a few days ago; now I get a Cloudflare block due to an invalid SSL cert on it.
 

PoolsmanTeam

Member
Sep 12, 2022
47
39
18
poolsman.com
Thanks for letting us know about the SSL cert issue! The certificate is configured for auto-renewal every three months, but sometimes the renewal fails. We will check it.