Poolsman - ZFS Web GUI for Linux based on Cockpit


PoolsmanTeam

Member
Sep 12, 2022
poolsman.com
Thank you @Beardmann for such great feedback! Regarding startup time optimization, we are aware of this issue and hope to resolve it soon. It is caused by the Blazor framework that Poolsman is built on. The issue was fixed in the latest release of that framework, and we are currently migrating to it. We will reply to your other questions within the next few days.

Is it good and safe to use with Proxmox?
Yes, we have been using Poolsman together with Proxmox in production for a long time, but there are two important recommendations. First, use the `backports` repository for `cockpit` (the web UI that Poolsman is built on), because the `cockpit` in the main Debian repo is quite old. Second, install only the minimal set of `cockpit` modules: the `cockpit-networkmanager` module installs `network-manager`, which can in theory break networking, and the `cockpit-packagekit` module installs the `appstream` package, which can potentially break `apt update`. All of this can be done with the following commands (more details in the `cockpit` docs: Running Cockpit):

Bash:
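# Read VERSION_CODENAME (e.g. "bookworm") from the OS release metadata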
. /etc/os-release
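# Add the matching Debian backports repository to APT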
echo "deb http://deb.debian.org/debian ${VERSION_CODENAME}-backports main" > \
    /etc/apt/sources.list.d/backports.list
apt update

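# Install only the minimal set of Cockpit packages, pulled from backports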
apt install -t ${VERSION_CODENAME}-backports cockpit-ws cockpit-bridge cockpit-system
 

PoolsmanTeam

Member
Sep 12, 2022
poolsman.com
Hi,

we are pleased to announce the Poolsman Preview 6 release, which adds more advanced pool disk management features, as well as some dataset properties that are very important for NFS and Samba shares. Now you are able to do the following (the equivalent raw ZFS commands are sketched after the list):

  1. Convert a Single Disk disk group (VDEV) to a Mirror.
  2. Remove a disk from a Mirror (converting it to a Single Disk VDEV).
  3. Convert a Mirror to an N-way Mirror and back.
  4. Add new disk groups to an existing pool. All disk group types are supported (Data, Spare, Log, Special, Cache, Dedup).
  5. Remove Spare, Cache, and Log disk groups from any pool.
  6. Remove Data, Special, and Dedup disk groups of Mirror or Single Disk configuration from pools that don't include RAID-Z disk groups (a ZFS limitation). This also means it's now possible to remove a mirrored Data disk group from a pool (all space allocated on the removed disk group is moved to other devices in the pool in the background).
  7. Configure the mount point, DNode Size, Extended attributes (XAttr) type, ATime, and RelATime properties of a file system dataset. These properties are very important when using NFS and Samba shares.
  8. Configure a snapshot limit for a dataset. Our general recommendation is to set the limit on the root dataset (this lets ZFS track the snapshot count for every dataset in the pool and display it in the Poolsman UI).
  9. Configure file system properties right during pool creation.
  10. Force pool creation when ZFS thinks that some devices are in use.
  11. Use Poolsman without the `smartctl` tool installed. Previously this caused an unhandled error; now Poolsman simply doesn't display S.M.A.R.T. data in that case.
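
For those who prefer the command line, most of these operations map onto standard OpenZFS commands. A rough sketch (the pool name `tank`, dataset names, and device names are placeholders):

Bash:
# 1-3: attach a second disk to turn a Single Disk (or Mirror) into an (N-way) Mirror,
# or detach one to go back
zpool attach tank sda sdb
zpool detach tank sdb
# 4: add a new disk group, e.g. a mirrored Special VDEV
zpool add tank special mirror sdc sdd
# 5-6: remove a disk group (Data/Special/Dedup removal requires a pool without RAID-Z)
zpool remove tank mirror-1
# 7: share-related dataset properties
zfs set xattr=sa tank/share
zfs set atime=off tank/share
zfs set dnodesize=auto tank/share
# 8: snapshot limit on the root dataset
zfs set snapshot_limit=1000 tank
# 10: force creation even if devices look busy
zpool create -f tank mirror sda sdb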

There are still a lot of things to do before the first final release. We are already working on encryption support, which is a very big feature, and plan to include it in the next preview release. We also want to let you know that we have hired two more developers to speed up the delivery of new features. Stay tuned!
 



thulle

Member
Apr 11, 2019
@PoolsmanTeam Feature suggestion: enable usage of LUKS as an alternative underlying encryption layer too, while still pulling SMART data from the underlying device. Maybe with an overridable check for two members of the pool ending up on the same underlying device.

Between the developer funding for ZFS encryption drying up (and the developer therefore gone), ZFS encryption bugs piling up, TrueNAS Core dropping FDE, TrueNAS SCALE having no plan to implement anything other than ZFS encryption, and no GUI alternative seemingly on the horizon, this might be a sought-after feature; at least while looking for a solution I found many others asking for the same. It would be a real value add, and I think it would generate some free publicity.

It also shouldn't be too hard to add: instead of piping the password to `zfs load-key`, it goes to `cryptsetup luksOpen` for each pool member, and ZFS then has to search for pool members in /dev/mapper/ instead, as sketched below. With LUKS you also get support for multiple keyslots to unlock the pool for free.
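
A minimal sketch of that flow, assuming a passphrase on stdin (device names, mapping names, and the pool name `tank` are placeholders):

Bash:
# read the passphrase once, then unlock each LUKS-formatted pool member
read -rs -p "Pool passphrase: " PASSPHRASE
for dev in sda sdb; do
    echo -n "$PASSPHRASE" | cryptsetup luksOpen --key-file=- "/dev/$dev" "luks-$dev"
done
# import the pool from the unlocked mappings instead of the raw disks
zpool import -d /dev/mapper tank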
 

PoolsmanTeam

Member
Sep 12, 2022
poolsman.com
Hi @Beardmann,

thanks again for using Poolsman and providing such detailed feedback. We have finally prepared answers to your questions. We also found your email where we partially discussed some of these things, so we have more context now.

- Support for setting up snapshot schedules, maybe utilizing one of the well-known tools.
- Support for zfs send schedules; again, there are a handful of ways to do this already, but it would be nice to have in a GUI.
Yes, we are going to start working on that right after the 1.0 release. We are thinking about using sanoid/syncoid. If you prefer other tools, please let us know and we will consider them.
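
For anyone who wants schedules in the meantime, a sanoid policy is just a small config file. A minimal sketch (the dataset name and retention numbers are placeholders; see the sanoid docs):

Code:
# /etc/sanoid/sanoid.conf
[tank/data]
use_template = production

[template_production]
hourly = 36
daily = 30
monthly = 3
autosnap = yes
autoprune = yes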

- Support for configuring ZFS options like notifications etc.
It's also planned for the 2nd version.

- Better stats on the pools, maybe with Grafana/Prometheus
Grafana and Prometheus are out of scope at the moment, because we are mostly implementing a Cockpit plugin right now, but you can configure Grafana/Prometheus yourself. AFAIK there's a good existing ZFS exporter for Prometheus and various Grafana dashboards for ZFS. We do, however, plan to add data from `zpool iostat -pv`, though not through Grafana/Prometheus.
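
For example, a minimal Prometheus scrape job, assuming a ZFS exporter already listening on localhost:9134 (the job name, address, and port are placeholders):

Code:
# prometheus.yml fragment
scrape_configs:
  - job_name: 'zfs'
    static_configs:
      - targets: ['localhost:9134']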

- Optimize startup time when first opening the "application"... (I get the %-counter and it takes 10-30 secs. the first time I open it up)
Yeah, we understand that it's pretty annoying right now; we are going to fix it in the next release. As we mentioned before, the fix depends on updating the underlying framework (Blazor).

- Better support for enterprise drives; right now the "SMART" view is empty on my Exos SAS drives, not sure if this is because they're SAS? But there is information even on the SAS drives which might be interesting, like power-on hours, replaced blocks, etc...
At this moment we are limited to the data provided by the `smartctl` tool, because we use it to get S.M.A.R.T. info. We'd like to ask you to run `smartctl --json=osv -i -H -A /dev/YOUR_DRIVE_PATH` for your drive, highlight the data that is missing from the UI, and send it to us; we will try to add it. But AFAIR it doesn't provide much data for your enterprise drives. If you know other commands that can do that, please let us know and we will try to add them.

- Under space information, it would be nice to see information about space on the special metadata device, and on the cache vdev...
Yes, we are going to add data from `zpool iostat -pv` to all VDEVs (disk groups) on the Topology page. If you have time, please check whether it provides such info for you (the exact command is below).
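
For reference, the command in question (`-p` prints exact parsable numbers, `-v` breaks the stats down per VDEV; the pool name `tank` is a placeholder):

Bash:
zpool iostat -pv tank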

- We use vdev_id.conf to give our disks a useful name based on their location in our shelves (NetApp shelves), so our disks are named e.g. "hba0-23" for a disk attached via hba0 and located in shelf slot 23. Sadly you show sda-z for all the disks. Not sure where you get this information, because zpool status shows me the "vdev_id" names...
At this moment Poolsman tries to add disks by id (which is more reliable), but in the UI it tries to display the underlying disk paths (e.g. `sda`-`sdz`, which are more readable). We understand that an option to display the aliases defined in `vdev_id.conf` could be very useful, but unfortunately it's not easy to do. It will probably only be added in version 2.0.
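
For other readers: the aliases in question come from entries like the following in `/etc/zfs/vdev_id.conf` (the alias name and device path are illustrative placeholders; see the vdev_id.conf man page):

Code:
# map a physical location to a friendly name
alias hba0-23  /dev/disk/by-path/pci-0000:03:00.0-sas-phy23-lun-0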

- Maybe add support for creating CIFS/NFS shares/exports... we have the "Cockpit ZFS Manager" installed for this reason, because it works great for this specific thing but not much else :)
It's planned for the 2nd version, but for now we can't tell which will come first (SMB/NFS or iSCSI support). But we've got one more vote for SMB/NFS from you :) We don't know whether you have seen the cockpit-file-sharing plugin from 45Drives, but maybe it can help you with SMB/NFS shares for the time being.
 

Beardmann

New Member
Jan 24, 2023
Hi.. here is some input from a drive, as requested... this is a SAS drive... let me know if you need any other details.
Regarding the snapshot schedule and send/recv, I think sanoid is the best way to go. We already use it on several systems.

Code:
{
  "device": {
    "info_name": "/dev/sdw",
    "name": "/dev/sdw",
    "protocol": "SCSI",
    "type": "scsi"
  },
  "device_type": {
    "name": "disk",
    "scsi_value": 0
  },
  "form_factor": {
    "name": "3.5 inches",
    "scsi_value": 2
  },
  "json_format_version": [
    1,
    0
  ],
  "local_time": {
    "asctime": "Tue Mar 12 19:58:54 2024 CET",
    "time_t": 1710269934
  },
  "logical_block_size": 512,
  "model_name": "SEAGATE ST18000NM004J",
  "physical_block_size": 4096,
  "power_on_time": {
    "hours": 12495,
    "minutes": 10
  },
  "product": "ST18000NM004J",
  "revision": "E004",
  "rotation_rate": 7200,
  "scsi_grown_defect_list": 0,
  "scsi_version": "SPC-5",
  "serial_number": "ZR5BNHPG0000C2462AVH",
  "smart_status": {
    "passed": true
  },
  "smartctl": {
    "argv": [
      "smartctl",
      "--json=osv",
      "-i",
      "-H",
      "-A",
      "/dev/sdw"
    ],
    "build_info": "(local build)",
    "exit_status": 0,
    "output": [
      "smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-100-generic] (local build)",
      "Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org",
      "",
      "=== START OF INFORMATION SECTION ===",
      "Vendor:               SEAGATE",
      "Product:              ST18000NM004J",
      "Revision:             E004",
      "Compliance:           SPC-5",
      "User Capacity:        18,000,207,937,536 bytes [18.0 TB]",
      "Logical block size:   512 bytes",
      "Physical block size:  4096 bytes",
      "LU is fully provisioned",
      "Rotation Rate:        7200 rpm",
      "Form Factor:          3.5 inches",
      "Logical Unit id:      0x5000c500d9b547eb",
      "Serial number:        ZR5BNHPG0000C2462AVH",
      "Device type:          disk",
      "Transport protocol:   SAS (SPL-3)",
      "Local Time is:        Tue Mar 12 19:58:54 2024 CET",
      "SMART support is:     Available - device has SMART capability.",
      "SMART support is:     Enabled",
      "Temperature Warning:  Enabled",
      "",
      "=== START OF READ SMART DATA SECTION ===",
      "SMART Health Status: OK",
      "",
      "Grown defects during certification <not available>",
      "Total blocks reassigned during format <not available>",
      "Total new blocks reassigned <not available>",
      "Power on minutes since format <not available>",
      "Current Drive Temperature:     31 C",
      "Drive Trip Temperature:        60 C",
      "",
      "Accumulated power on time, hours:minutes 12495:10",
      "Manufactured in week 30 of year 2022",
      "Specified cycle count over device lifetime:  50000",
      "Accumulated start-stop cycles:  14",
      "Specified load-unload count over device lifetime:  600000",
      "Accumulated load-unload cycles:  1116",
      "Elements in grown defect list: 0",
      "",
      "Vendor (Seagate Cache) information",
      "  Blocks sent to initiator = 1163512376",
      "  Blocks received from initiator = 1670618216",
      "  Blocks read from cache and sent to initiator = 44540700",
      "  Number of read and write commands whose size <= segment size = 18073622",
      "  Number of read and write commands whose size > segment size = 318162",
      "",
      "Vendor (Seagate/Hitachi) factory information",
      "  number of hours powered up = 12495.17",
      "  number of minutes until next internal SMART test = 41",
      ""
    ],
    "platform_info": "x86_64-linux-5.15.0-100-generic",
    "svn_revision": "5155",
    "version": [
      7,
      2
    ]
  },
  "temperature": {
    "current": 31,
    "drive_trip": 60
  },
  "user_capacity": {
    "blocks": 35156656128,
    "blocks_s": "35156656128",
    "bytes": 18000207937536,
    "bytes_s": "18000207937536"
  },
  "vendor": "SEAGATE"
}
 

cbattlegear

New Member
Mar 15, 2024
@PoolsmanTeam
Just a quick bug report: I had set this up with a different UrlRoot set in Cockpit, and it seemed to break all of the URLs/functionality for the plugin. Just wanted to bring it up if it hasn't been reported already (the setting I mean is shown below).
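
For context, UrlRoot is set in Cockpit's config file; a sketch of such a setup (the path value is a placeholder):

Code:
# /etc/cockpit/cockpit.conf
[WebService]
UrlRoot=/custom-root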