Optimans, I solved the status page issue by
ln -s /bin/lsblk /usr/bin/lsblk
Please note that my current Proxmox 6.0 (buster) was upgraded from Proxmox 5.x (stretch), hence these path issues.
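For anyone hitting the same thing, here is a minimal sketch of the check-then-link step (the helper name is mine; the /bin vs /usr/bin locations are what I saw on my stretch-upgraded host and may differ on yours):

```shell
# link_if_missing SRC DST: create symlink DST -> SRC only when DST
# does not already exist, so an existing binary is never clobbered
link_if_missing() {
    if [ ! -e "$2" ]; then
        ln -s "$1" "$2"
    fi
}

# First confirm where lsblk actually lives on your system
command -v lsblk

# Then, as root on the Proxmox host:
# link_if_missing /bin/lsblk /usr/bin/lsblk
```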
The complete PVE 6.0 installation for Cockpit and the ZFS Manager is as follows:
I have provided all the info in the post above, and here is a screenshot of the console.
And I found out that zfs.js calls
but the Proxmox bin directory is at
so I created symlinks (ln -s) to make them work.
I have not tested all the...
I have installed Cockpit 202 (via buster-backports) and Cockpit ZFS Manager on Proxmox VE 6.0 (which is Debian Buster).
However, I am getting the following error.
I have the ZFS module installed and loaded into the kernel; please check:
root@pve-nextcloud:~# modinfo zfs
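Besides modinfo (which only proves the module is installed), a quick way to confirm it is actually loaded is to look at /proc/modules; a small sketch (the helper name is mine):

```shell
# module_loaded NAME: succeed only if kernel module NAME is currently loaded
module_loaded() {
    grep -q "^$1 " /proc/modules
}

# Example on the Proxmox host:
# module_loaded zfs && echo "zfs loaded" || echo "zfs NOT loaded"
```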
Check the error log for each OSD: /var/log/ceph/ceph-osd.[id].log
Typically the problem can be solved with:
ceph-disk activate /dev/sd[x] --reactivate
systemctl disable ceph-osd@[id].service; systemctl enable ceph-osd@[id].service
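The recovery steps above can be wrapped in a small helper; this is only a sketch (the OSD id and device below are placeholders), with a dry-run mode so the commands can be reviewed before anything is touched:

```shell
# reactivate_osd ID DEV: tail the OSD log, reactivate the disk, and
# re-register the systemd unit. Set DRY_RUN=1 to only print the commands.
reactivate_osd() {
    id=$1; dev=$2
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }
    run tail -n 50 "/var/log/ceph/ceph-osd.$id.log"
    run ceph-disk activate "$dev" --reactivate
    run systemctl disable "ceph-osd@$id.service"
    run systemctl enable "ceph-osd@$id.service"
}

# Review first: DRY_RUN=1 reactivate_osd 3 /dev/sdb
```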
I want to share the following testing with you:
4-node PVE cluster with 3 Ceph Bluestore nodes, 36 OSDs in total.
block.db & block.wal device: Samsung sm961 512GB
NIC: Mellanox Connectx3 VPI dual port 40 Gbps
Switch: Mellanox sx6036T
Network: IPoIB separated public network &...
This can also happen with offline migration of Windows 10. By default, the Windows 10 shutdown is a fake shutdown: it saves RAM to the HDD for a faster next boot. The best way to check is under Task Manager -> CPU -> Up time. You will see that although Windows has just booted up, the...
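For reference, a real shutdown can be forced from inside the guest before migrating. These are standard Windows commands (run in an elevated prompt inside the VM, not on the Proxmox host); disabling hibernation also disables Fast Startup, which is what the fake shutdown relies on:

```
:: Force a full shutdown instead of the hybrid (Fast Startup) one
shutdown /s /t 0

:: Or disable Fast Startup entirely by turning off hibernation
powercfg /h off
```

After either of these, Up time in Task Manager resets on the next boot, confirming a real shutdown happened.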
I am currently using SCST on Proxmox itself with IB; however, all settings are done via the scstadmin CLI, and with the SRP initiator only.
Having a web GUI would be great, but its absence won't stop me from using SCST, as it is far better and more stable than the other targets out there.
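For anyone curious what the scstadmin-managed setup looks like on disk, here is a minimal /etc/scst.conf sketch for an SRP target; the device name, zvol path, and target name are hypothetical and just illustrate the shape of the config:

```
# Hypothetical example: export one zvol over the ib_srpt target driver
HANDLER vdisk_blockio {
    DEVICE disk01 {
        filename /dev/zvol/rpool/data/vm-100-disk-0
    }
}

TARGET_DRIVER ib_srpt {
    TARGET ib_srpt_target0 {
        enabled 1
        LUN 0 disk01
    }
}
```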