> Is it good and safe to use with Proxmox?

Yes, we have been using Poolsman together with Proxmox in production for a long time, but there are two important recommendations. First, use the `backports` repository for `cockpit` (the web UI that Poolsman is built on), because the `cockpit` packages in the main Debian repo are quite old. Second, install only the minimal set of `cockpit` modules: the `cockpit-networkmanager` module pulls in `network-manager`, which can in theory break networking, and the `cockpit-packagekit` module pulls in the `appstream` package, which can potentially break `apt update`. All of this can be done with the following commands (more details in the `cockpit` docs: Running Cockpit):
. /etc/os-release
echo "deb http://deb.debian.org/debian ${VERSION_CODENAME}-backports main" > \
/etc/apt/sources.list.d/backports.list
apt update
apt install -t ${VERSION_CODENAME}-backports cockpit-ws cockpit-bridge cockpit-system
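To double-check that the backports package was actually selected (a quick sanity check; exact version strings differ per Debian release):

```shell
# The candidate version should carry a "~bpo" suffix, and the policy output
# should list the ${VERSION_CODENAME}-backports repository as its origin.
apt policy cockpit-ws
```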
> - Support for setup of snapshot schedules. Maybe utilizing one of the well-known tools.
> - Support for zfs send schedules, again there are a handful of different ways to do this already, but would be nice to have in a GUI.

Yes, we are going to start working on that right after the 1.0 release. We are thinking about using sanoid/syncoid. If you prefer some other tools, please let us know and we'll consider them.
> - Support for configuring ZFS options like notifications etc.

It's also planned for the 2nd version.
> - Better stats on the pools maybe with grafana/prometheus

Grafana and Prometheus are out of scope at the moment, because we are mostly implementing a Cockpit plugin right now. But you can configure Grafana/Prometheus yourself: AFAIK there's a good existing ZFS exporter for Prometheus and various Grafana dashboards for ZFS. However, we do plan to add data from `zpool iostat -pv`, just not through Grafana/Prometheus.
> - Optimize startup time when first opening the "application"... (I get the %-counter and it takes 10-30 secs. the first time I open it up)

Yeah, we understand that it's pretty annoying right now; we are going to fix it in the next release. As we mentioned before, it's related to an update of the underlying framework (Blazor).
> - Better support for enterprise drives, right now the "SMART" view is empty on my Exos SAS drives, not sure if this is because it's SAS? But there is information on even the SAS drives which might be interesting, like power-on hours, replaced blocks etc...

At the moment we are limited to the data provided by the `smartctl` tool, because that's what we use to get S.M.A.R.T. info. Could you run `smartctl --json=osv -i -H -A /dev/YOUR_DRIVE_PATH` for your drive, highlight the data that is missing from the UI, and send it to us? We will try to add it. But AFAIR `smartctl` doesn't provide very much data for such enterprise drives. If you know other commands that can provide it, please let us know and we will try to add them.
> - Under space information, it would be nice to see information about space on the special metadata device, and on the cache vdev...

Yes, we are going to add data from `zpool iostat -pv` for all vdevs (disk groups) on the Topology page. If you have time, please check whether it provides such info for you.
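If you want to check it yourself, the command in question (with `mypool` as a placeholder pool name) is:

```shell
# -v breaks statistics down per vdev (including special and cache vdevs),
# -p prints exact, machine-parsable byte values instead of rounded units.
zpool iostat -pv mypool
```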
> - We use the vdev_id.conf to give our disks a useful name based on the location in our shelfs (NetApp Shelfs) so our disks are named "hba0-23" for a disk attached via hba0 and is located in shelf location 23. Sadly you show sda-z for all the disks. Not sure where you get this information, because zpool status shows me the "vdev_id" disks...

At the moment Poolsman adds disks by id (which is more reliable), but in the UI it tries to display the underlying disk paths (e.g. `sda`-`sdz`, which are more readable). We understand that an option to display the aliases defined in `vdev_id.conf` would be very useful, but unfortunately it's not easy to do. It will probably only be added in version 2.0.
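For readers unfamiliar with it, `vdev_id.conf` maps persistent device paths to friendly aliases; an entry like the one described might look like this (the by-path value here is hypothetical):

```
# /etc/zfs/vdev_id.conf
# alias <friendly-name> <persistent-device-link>
alias hba0-23 /dev/disk/by-path/pci-0000:03:00.0-sas-phy23-lun-0
```

The alias should then appear as a symlink under `/dev/disk/by-vdev/`, which is why `zpool status` can show the friendly names.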
> - Maybe add support for creating CIFS/NFS shares/exports... we have the "Cockpit ZFS Manager" installed for this reason, because it works great for this specific thing but not much else!

It's planned for the 2nd version, but for now we can't tell which will come first (SMB/NFS or iSCSI support). Consider this one more vote from you for SMB/NFS.
{
"device": {
"info_name": "/dev/sdw",
"name": "/dev/sdw",
"protocol": "SCSI",
"type": "scsi"
},
"device_type": {
"name": "disk",
"scsi_value": 0
},
"form_factor": {
"name": "3.5 inches",
"scsi_value": 2
},
"json_format_version": [
1,
0
],
"local_time": {
"asctime": "Tue Mar 12 19:58:54 2024 CET",
"time_t": 1710269934
},
"logical_block_size": 512,
"model_name": "SEAGATE ST18000NM004J",
"physical_block_size": 4096,
"power_on_time": {
"hours": 12495,
"minutes": 10
},
"product": "ST18000NM004J",
"revision": "E004",
"rotation_rate": 7200,
"scsi_grown_defect_list": 0,
"scsi_version": "SPC-5",
"serial_number": "ZR5BNHPG0000C2462AVH",
"smart_status": {
"passed": true
},
"smartctl": {
"argv": [
"smartctl",
"--json=osv",
"-i",
"-H",
"-A",
"/dev/sdw"
],
"build_info": "(local build)",
"exit_status": 0,
"output": [
"smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-100-generic] (local build)",
"Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org",
"",
"=== START OF INFORMATION SECTION ===",
"Vendor: SEAGATE",
"Product: ST18000NM004J",
"Revision: E004",
"Compliance: SPC-5",
"User Capacity: 18,000,207,937,536 bytes [18.0 TB]",
"Logical block size: 512 bytes",
"Physical block size: 4096 bytes",
"LU is fully provisioned",
"Rotation Rate: 7200 rpm",
"Form Factor: 3.5 inches",
"Logical Unit id: 0x5000c500d9b547eb",
"Serial number: ZR5BNHPG0000C2462AVH",
"Device type: disk",
"Transport protocol: SAS (SPL-3)",
"Local Time is: Tue Mar 12 19:58:54 2024 CET",
"SMART support is: Available - device has SMART capability.",
"SMART support is: Enabled",
"Temperature Warning: Enabled",
"",
"=== START OF READ SMART DATA SECTION ===",
"SMART Health Status: OK",
"",
"Grown defects during certification <not available>",
"Total blocks reassigned during format <not available>",
"Total new blocks reassigned <not available>",
"Power on minutes since format <not available>",
"Current Drive Temperature: 31 C",
"Drive Trip Temperature: 60 C",
"",
"Accumulated power on time, hours:minutes 12495:10",
"Manufactured in week 30 of year 2022",
"Specified cycle count over device lifetime: 50000",
"Accumulated start-stop cycles: 14",
"Specified load-unload count over device lifetime: 600000",
"Accumulated load-unload cycles: 1116",
"Elements in grown defect list: 0",
"",
"Vendor (Seagate Cache) information",
" Blocks sent to initiator = 1163512376",
" Blocks received from initiator = 1670618216",
" Blocks read from cache and sent to initiator = 44540700",
" Number of read and write commands whose size <= segment size = 18073622",
" Number of read and write commands whose size > segment size = 318162",
"",
"Vendor (Seagate/Hitachi) factory information",
" number of hours powered up = 12495.17",
" number of minutes until next internal SMART test = 41",
""
],
"platform_info": "x86_64-linux-5.15.0-100-generic",
"svn_revision": "5155",
"version": [
7,
2
]
},
"temperature": {
"current": 31,
"drive_trip": 60
},
"user_capacity": {
"blocks": 35156656128,
"blocks_s": "35156656128",
"bytes": 18000207937536,
"bytes_s": "18000207937536"
},
"vendor": "SEAGATE"
}
> Hi.. here are some input from a drive as requested (the smartctl JSON above)... this is a SAS drive... let me know if you need any other details.

Hi @Beardmann , for now we have decided to add a full S.M.A.R.T. text report from the `smartctl` tool for such cases (it's already available in the latest release). We hope it helps. We will think about what else we can do here later. Regarding replication tools, we are considering sanoid/syncoid as the first supported option in the future.
> Regarding the snapshot schedule and send/recv I think sanoid is the best way to go. We also use this on several systems already.
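For reference, sanoid's snapshot scheduling is driven by a small config file; a minimal policy (adapted from the example in sanoid's documentation, with `mypool/dataset` as a placeholder) looks like:

```
# /etc/sanoid/sanoid.conf
[mypool/dataset]
        use_template = production
        recursive = yes

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

Replication to another host is then handled separately by syncoid, which wraps `zfs send`/`zfs receive`.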
> Just a quick bug report, I had set this up with a different UrlRoot set on cockpit and it seemed to break all of the URLs/Functionality for the plugin. Just wanted to pop it up if it hasn't been reported already.

Hi @cbattlegear , sorry for the late response. Unfortunately we couldn't reproduce this issue in our environment. Could you tell us what exactly you did to get into this state? You could also contact us directly about this issue using the contact form on our website, and we will try to solve it together with you.
zfs set quota=10G mypool/dataset
> Hi there. I went to upgrade to 0.7.1.0 and once again I'm having issues upgrading. The progress meter says 100% and "Oops" appears. Web console says the following:

Hi @ed8871 , sorry for that. Issues like this should go away after we pack Poolsman into OS packages. For now, please remove the previous version manually before updating, using this command:

sudo rm -rf /usr/share/cockpit/poolsman/

Also please remove any copies that you've created inside the /usr/share/cockpit/ directory (if you have multiple Poolsman copies inside this directory, it will produce a conflict for Cockpit).
If that doesn't help, please try opening Poolsman in your browser's Incognito mode. If it works in that mode, it means you should clear your browser's cache for your Cockpit URL. Poolsman should normally detect that a new version has been installed (using file checksums) and reload all required files, but it seems this didn't work as expected for you.
Please let us know if it helped or not.
> I was doing all of this in Safari because I don't use Chrome as my primary browser. I deleted the old directory, unzipped a fresh copy in /usr/share/cockpit, and no change. I tried via private mode in Safari and no change. I deleted cookies for the site, no change. It appears to be working in Chrome though.

Thanks, that's very helpful. There might be an issue with the Safari web browser. We will test this in Safari and get back to you.
> - topology layout is not intuitive to look at (see below)

Could you tell us what you find not intuitive in the current topology layout? By the way, we are going to rework some pages and improve the UX in the next releases, after we finish the main features. Regarding the topology page, the main thing we see right now is that it's not very comfortable to use with wide RAID-Z or dRAID vdevs. We are definitely going to rethink this page later. Any feedback is greatly appreciated.
> - topology layout isn't maximizing space on my 4k monitor (see below)

We've removed the width limit for this page; please install the new update. A little context: we initially added a width limit on some pages for a better experience on wide displays, but it seems the Topology page shouldn't have such a limit. Thanks for noticing!
> - seems to be an issue with the command adding the poolsman deb repo and the syntax of the source it creates (see below)

If you are talking about the Proxmox warning regarding the repo name, it's not an issue, just a quirk of the Proxmox UI. At the moment we use a common repo for all Debian-based distros and all of their versions; that's why you see the generic `stable` name instead of your Debian version. The same approach with a `stable` name is used by some other products, e.g. Visual Studio Code or Google Chrome.
> Instructions were missing that i might need to edit /etc/cockpit/disallowed-users

It seems this is not required for Proxmox, but maybe you know the conditions under which it might be required.
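For anyone who does hit this on a plain Debian install: Cockpit refuses web logins for every user listed in that file (by default just `root`), so removing that entry is the usual fix. A sketch, assuming the default file location:

```shell
# Cockpit denies logins for users listed in /etc/cockpit/disallowed-users.
# Many Proxmox setups log in as root, so remove the "root" entry:
sed -i '/^root$/d' /etc/cockpit/disallowed-users
```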
> - Plugin wouldn't work inside LXC (privileged or unprivileged), while full cockpit does work in LXC. I suspect this is an underlying ZFS issue rather than plugin issue as zfs commands in the LXC also didn't work - but for proxmox finding a way to get this working in an LXC would be great.

Unfortunately we don't support LXC containers. Quick googling suggests it's not possible, though we are not LXC experts. If you find a way to make ZFS and Cockpit work inside one, Poolsman should work fine too.