Cockpit ZFS Manager


optimans

Member
Feb 20, 2015
61
77
18
Thought I'd better give another quick update: I'm behind schedule (I think I'm starting to sound like a broken record :eek:). I've been sick the last couple of weeks and had drives fail due to firmware bugs (link), but with all that over, I'm now getting back on track!

Apologies for the delays, bear with me, it will be ready in November!

One question I do have: for those of you using ZFS as root, are you using it for / and /boot only? I have configuration options to hide or display the boot and root pools in the list (displayed by default); I'd like to know whether I need to include other system directories as part of the filter. Thanks.
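
For context, here is a rough way to spot which pools hold / and /boot from a shell - just my sketch, not necessarily how the manager does its filtering:

Code:
# Print the pool name for any dataset mounted at / or /boot
zfs list -H -o name,mountpoint | awk '$2 == "/" || $2 == "/boot" {split($1, a, "/"); print a[1]}'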

P.S.: A few more things added:
  • Added ability to configure pool features
  • Can choose whether ZFS manager manages Samba shares or not
  • Can unlock all file systems from the one place - handy if you use the same passphrase for more than one (see the sketch after this list)
  • Working well on Ubuntu 19.10 and CentOS 8 so far
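
For reference, a minimal sketch of what a bulk unlock amounts to with native ZFS encryption on the command line (the manager's own implementation may differ):

Code:
# Load keys for all encrypted datasets (prompts for each passphrase), then mount them
zfs load-key -a
zfs mount -a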
[screenshot]
 

Marshalleq

New Member
Nov 13, 2019
3
1
3
This is amazing work! Hopefully we find a way to get it onto unraid at some point. It will be fun to try to make that happen! Thanks.
 
  • Like
Reactions: optimans

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Personal opinion here: I think there will be those who put two 1TB SSDs in a system and ZFS mirror root + keep data on there. It may not be ideal, but it will happen.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Personal opinion here: I think there will be those who put two 1TB SSDs in a system and ZFS mirror root + keep data on there. It may not be ideal, but it will happen.
My thoughts too
 

RageBone

Active Member
Jul 11, 2017
617
159
43
I totally agree, mainly because I did that, and a friend currently does it on FreeNAS.
He doesn't want to waste any space on the 250GB drive he's using for it.
It was the "best" thing he had around. He would have used a 16GB Optane but his board didn't have drivers for it.

I stopped doing it because the 16GB Optane is a bit small for more than two VMs on top.
And jails and stuff can't be put onto it.
 
  • Like
Reactions: T_Minus

optimans

Member
Feb 20, 2015
61
77
18
Ready...Steady...Go!

Thanks for your patience! Info is in the first post. Install should be pretty easy; hopefully it works well on your system.
 
  • Like
Reactions: EluRex

EluRex

Active Member
Apr 28, 2015
218
78
28
Los Angeles, CA
I have installed Cockpit 202 (via buster-backports) and Cockpit ZFS Manager on Proxmox VE 6.0 (which is Debian Buster).

[screenshot]

However, I am getting the following error:

[screenshot of the error]

I have the zfs module installed and loaded into the kernel; please check:

Code:
root@pve-nextcloud:~# modinfo zfs
filename:       /lib/modules/5.0.21-5-pve/zfs/zfs.ko
version:        0.8.2-pve2
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     7974A38E326E18F22E88682
depends:        spl,znvpair,icp,zlua,zunicode,zcommon,zavl
retpoline:      Y
name:           zfs

root@pve-nextcloud:~# cat /sys/module/zfs/version
0.8.2-pve2
I guess it is because the PVE version adds -pve2 as a suffix.
 

optimans

Member
Feb 20, 2015
61
77
18
I have tried to use it but I can't see the zfs pool in my cockpit... why?

ahh!!! the Debian 10 repository only has cockpit 188-1 and not cockpit 201+
Hi EluRex,

Are you able to send me a screenshot? Also, what information do you have in the console log? Any errors? What browser are you using?

With Debian 10 you need to uninstall Cockpit and then install it from the testing or unstable repos to get above 188. Unstable has the latest, 208, which is what I was testing last night.
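
Roughly, that approach looks like the below on Debian 10 - a sketch only; pulling Cockpit from unstable can drag in newer dependencies, so adjust for your setup:

Code:
apt remove cockpit;
echo "deb http://deb.debian.org/debian unstable main" > /etc/apt/sources.list.d/unstable.list;
apt update;
apt -t unstable install cockpit;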
 

EluRex

Active Member
Apr 28, 2015
218
78
28
Los Angeles, CA
I have provided all the info in the post above, and here is a screenshot of the console:

[screenshot of the console]

And I found out that zfs.js calls
  • /usr/bin/cat
  • /usr/bin/grep
  • /usr/bin/echo
but on Proxmox they are at
  • /bin/cat
  • /bin/grep
  • /bin/echo
so I made ln -s symlinks to make them work.
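
For anyone following along, these are the symlinks in question (the same ones that appear in the full install listing further down):

Code:
ln -s /bin/cat /usr/bin/cat;
ln -s /bin/grep /usr/bin/grep;
ln -s /bin/echo /usr/bin/echo;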

I have not tested all the functionality like create, destroy or snapshot... but the Status of the Pool fails to display.
[screenshot]

I am assuming it should display something like the following (which is from the Proxmox web GUI):
[screenshots of the Proxmox pool status]
 

optimans

Member
Feb 20, 2015
61
77
18
I have installed Cockpit 202 (via buster-backports) and Cockpit ZFS Manager on Proxmox VE 6.0 (which is Debian Buster).

[screenshot]

However, I am getting the following error:

[screenshot of the error]

I have the zfs module installed and loaded into the kernel; please check:

Code:
root@pve-nextcloud:~# modinfo zfs
filename:       /lib/modules/5.0.21-5-pve/zfs/zfs.ko
version:        0.8.2-pve2
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
alias:          devname:zfs
alias:          char-major-10-249
srcversion:     7974A38E326E18F22E88682
depends:        spl,znvpair,icp,zlua,zunicode,zcommon,zavl
retpoline:      Y
name:           zfs

root@pve-nextcloud:~# cat /sys/module/zfs/version
0.8.2-pve2
I guess it is because the PVE version adds -pve2 as a suffix.
Thanks for this. I didn't know the version could have letters in it. I will need to add a regex to strip the letters before comparing version numbers.
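
For example, stripping the suffix is simple enough from a shell; the actual fix will live in zfs.js, so this is just to illustrate the idea:

Code:
# 0.8.2-pve2 -> 0.8.2
cat /sys/module/zfs/version | sed 's/-.*//'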
 

optimans

Member
Feb 20, 2015
61
77
18
I have provided all the info in the post above, and here is a screenshot of the console:

[screenshot of the console]

And I found out that zfs.js calls
  • /usr/bin/cat
  • /usr/bin/grep
  • /usr/bin/echo
but on Proxmox they are at
  • /bin/cat
  • /bin/grep
  • /bin/echo
so I made ln -s symlinks to make them work.

I have not tested all the functionality like create, destroy or snapshot... but the Status of the Pool fails to display.
[screenshot]

I am assuming it should display something like the following (which is from the Proxmox web GUI):
[screenshots of the Proxmox pool status]
I will need to set up a copy of Proxmox for myself and do some testing. The status function uses lsblk to get information about the disks; the layout should be very similar to the zpool status command. I wonder if the bin folder issue is at play here too. The console log should hopefully show where the error is.

I have a check for the operating system at script load so that it changes the Samba path depending on the OS. I can make adjustments to detect Proxmox and change the bin path for commands too.
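
As a sketch of the kind of check that could work: the PVE kernel release already carries a -pve suffix (see the modinfo output above), so something like the below would spot Proxmox from a shell, though the actual detection in zfs.js may end up using a different method:

Code:
# Detect a Proxmox VE kernel by its release suffix
if uname -r | grep -q pve; then
    echo "Proxmox detected";
fi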
 

optimans

Member
Feb 20, 2015
61
77
18
@EluRex I have created both issue tickets on GitHub. I'm currently away for the weekend, but will get onto this next week. If you find anything else, please raise an issue ticket. Thanks for your help!
 

EluRex

Active Member
Apr 28, 2015
218
78
28
Los Angeles, CA
@EluRex I have created both issue tickets on GitHub. I'm currently away for the weekend, but will get onto this next week. If you find anything else, please raise an issue ticket. Thanks for your help!
Optimans, I solved the status page issue by
  • ln -s /bin/lsblk /usr/bin/lsblk
Please note that my current Proxmox 6.0 (buster) was upgraded from Proxmox 5.x (stretch), hence those path issues.

The complete PVE 6.0 installation for Cockpit and the ZFS manager is as follows:
Code:
echo "deb http://deb.debian.org/debian buster-backports main" > /etc/apt/sources.list.d/buster-backport.list;
apt update;
apt-get -t buster-backports install cockpit;
git clone https://github.com/cockpit-project/cockpit;
cp -r cockpit/zfs /usr/share/cockpit;

systemctl enable cockpit.service;
systemctl start cockpit.service;

#for pve5to6
ln -s /bin/cat /usr/bin/cat;
ln -s /bin/grep /usr/bin/grep;
ln -s /bin/echo /usr/bin/echo;
ln -s /bin/lsblk /usr/bin/lsblk;
 

optimans

Member
Feb 20, 2015
61
77
18
Optimans, I solved the status page issue by
  • ln -s /bin/lsblk /usr/bin/lsblk
Please note that my current Proxmox 6.0 (buster) was upgraded from Proxmox 5.x (stretch), hence those path issues.

The complete PVE 6.0 installation for Cockpit and the ZFS manager is as follows:
Code:
echo "deb http://deb.debian.org/debian buster-backports main" > /etc/apt/sources.list.d/buster-backport.list;
apt update;
apt-get -t buster-backports install cockpit;
git clone https://github.com/cockpit-project/cockpit;
cp -r cockpit/zfs /usr/share/cockpit;

systemctl enable cockpit.service;
systemctl start cockpit.service;

#for pve5to6
ln -s /bin/cat /usr/bin/cat;
ln -s /bin/grep /usr/bin/grep;
ln -s /bin/echo /usr/bin/echo;
ln -s /bin/lsblk /usr/bin/lsblk;
Hi EluRex,

I have made changes to the absolute paths for shell commands, and symbolic links should no longer be required. I have tested on Ubuntu 18.04 LTS, which had the same issue as you did with the PVE 5 -> 6 upgrade.

I was very surprised at how easy it was to set up Cockpit on Proxmox 6.0 and get it up and running.

Thanks for your help.