Search results

  1. EluRex

    Cockpit ZFS Manager [0.3.2.348 Now Available]

    Optimans, I solved the status page issue with ln -s /bin/lsblk /usr/bin/lsblk. Please note my current Proxmox 6.0 (buster) was upgraded from Proxmox 5.x (stretch), which is why I have those path issues. The complete PVE 6.0 installation for Cockpit and ZFS Manager is as follows: echo "deb...
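    (The excerpt is truncated at the repository line. A minimal sketch of a typical buster-backports Cockpit install; the repository URL and exact steps here are assumptions, not necessarily the original post's lines:)

        # Assumption: the post enables the standard Debian buster-backports repo
        echo "deb http://deb.debian.org/debian buster-backports main" \
            > /etc/apt/sources.list.d/buster-backports.list
        apt update
        # Pull Cockpit from backports, which carries a newer version than buster itself
        apt install -t buster-backports cockpit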
  2. EluRex

    Cockpit ZFS Manager [0.3.2.348 Now Available]

    I have provided all the info in the post above, and here is a screenshot of the console. I found out that zfs.js calls /usr/bin/cat, /usr/bin/grep, and /usr/bin/echo, but Proxmox keeps those binaries at /bin/cat, /bin/grep, and /bin/echo, so I created ln -s symlinks to make them work. I have not tested all the...
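    (A minimal sketch of the symlinks the post describes, using the paths quoted above; run as root:)

        # zfs.js expects these binaries under /usr/bin, but a Proxmox 6.0
        # upgraded from 5.x still ships them under /bin
        ln -s /bin/cat /usr/bin/cat
        ln -s /bin/grep /usr/bin/grep
        ln -s /bin/echo /usr/bin/echo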
  3. EluRex

    Cockpit ZFS Manager [0.3.2.348 Now Available]

    I have installed Cockpit 202 (via buster-backports stable) and Cockpit ZFS Manager on Proxmox VE 6.0 (which is Debian Buster). However, I am getting the following error. I have the zfs module installed and loaded into the kernel, please check: root@pve-nextcloud:~# modinfo zfs filename...
  4. EluRex

    Zeus V2 : U-NAS NSC-810A | X10SL7-F | E3-1265 V3

    I am using USB disks on USB 2.0 ports... a lot of the time a USB disk dies not because of write wear but because the USB host voltage exceeds the disk's limit.
  5. EluRex

    Zeus V2 : U-NAS NSC-810A | X10SL7-F | E3-1265 V3

    The CS380B runs extremely hot because its backplane blocks all airflow.
  6. EluRex

    Zeus V2 : U-NAS NSC-810A | X10SL7-F | E3-1265 V3

    1. You can swap the direction of the 70mm fan. 2. No, it does not... all CPU/motherboard airflow goes out the side; the rear fan is for the HDDs.
  7. EluRex

    Ceph blustore over RDMA performance gain

    Hmmm, strange... because I am also running an MSX6036 IB switch, and what I use is IPoIB.
  8. EluRex

    Ceph blustore over RDMA performance gain

    It seems your RoCE link is not up.
  9. EluRex

    Ceph blustore over RDMA performance gain

    The lab environment has already moved on to testing other things... no netdata or anything else is available at this point.
  10. EluRex

    Ceph blustore over RDMA performance gain

    Check the error log for each OSD at /var/log/ceph/ceph-osd.[id].log. Typically the problem can be solved with ceph-disk activate /dev/sd[x] --reactivate, or with systemctl disable ceph-osd@[id].service; systemctl enable ceph-osd@[id].service.
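    (A sketch of the recovery sequence described above; OSD id 3 and /dev/sdc are hypothetical placeholders:)

        # Inspect the failing OSD's log first
        tail -n 100 /var/log/ceph/ceph-osd.3.log
        # Re-activate the OSD's disk ...
        ceph-disk activate /dev/sdc --reactivate
        # ... or re-register its systemd unit
        systemctl disable ceph-osd@3.service
        systemctl enable ceph-osd@3.service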
  11. EluRex

    Ceph blustore over RDMA performance gain

    I want to share the following testing with you: a 4-node PVE cluster with 3 Ceph Bluestore nodes, 36 OSDs total. OSD: ST6000NM0034; block.db & block.wal device: Samsung SM961 512GB; NIC: Mellanox ConnectX-3 VPI dual-port 40 Gbps; Switch: Mellanox SX6036T; Network: IPoIB, separated public network &...
  12. EluRex

    Proxmox VE 5.2 and AMD EPYC it Works Great

    This can also happen with offline migration of Windows 10. By default, a Windows 10 shutdown is a fake shutdown: it saves RAM to the HDD for a faster next boot. The best way to check is under Task Manager -> CPU -> Up time. You will see that although your Windows just booted up, the...
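    (A sketch of how to verify this from a command prompt inside the Windows guest:)

        REM Shows the real kernel boot time; with fast startup a "shutdown"
        REM does not reset it, so it can be days old right after booting
        systeminfo | find "System Boot Time"
        REM Disabling hibernation also disables fast startup (the fake shutdown)
        powercfg /h off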
  13. EluRex

    Proxmox VE 5.2 and AMD EPYC it Works Great

    When you live migrate between Xeon and EPYC... the VM won't run and just shows an internal error.
  14. EluRex

    Proxmox VE 5.2 and AMD EPYC it Works Great

    I can confirm live migration will not work, even when the CPU type is kvm.
  15. EluRex

    Custom storage plugins for Proxmox

    Dear Alex, I am currently using SCST on Proxmox itself with IB; however, all settings are done via the scstadmin CLI, and with an SRP initiator only. Having a web GUI would be great, but its absence won't stop me from using SCST, as it is far better and more stable than the other targets out there.
  16. EluRex

    PVE Cluster using UNAS 810A

    For pfSense, it's always best to pass through a dedicated NIC to it. In addition, it's best to separate your cluster + storage network from your client-facing network (use VLANs). That is what I do at home.
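    (A minimal sketch of that separation in a PVE node's /etc/network/interfaces, assuming a VLAN-aware bridge on eno1 and hypothetical VLAN IDs 10 for cluster + storage and 20 for clients:)

        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 10 20

        # Cluster + storage traffic stays on VLAN 10, off the client-facing VLAN 20
        auto vmbr0.10
        iface vmbr0.10 inet static
            address 10.10.10.2/24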
  17. EluRex

    PVE Cluster using UNAS 810A

    The 810A has one more 7cm fan on the side, which is a better design for airflow.
  18. EluRex

    FS: Dell/Samsung SM863a 1.92tb SSD

    Bump and price reduced