Search results

  1. EluRex

    Ceph blustore over RDMA performance gain

    The linked thread "Re: Ceph RDMA Update" on dev@ceph.io (lists.ceph.io, thread FYASBMA2NOAVXEWM3TVFGMGUJXGHD4MB) has had no update info since Nov 2019... Seastore uses SPDK; this will definitely boost NVMe SSD performance twofold or more. However, to lower latency you definitely need DPDK, and RDMA will...
  2. EluRex

    Ceph blustore over RDMA performance gain

    Ceph will not support RDMA in production yet, and development on it seems extremely slow; Mellanox's commitment to it has ceased, and it is now part of the async messenger. Development on supporting DPDK + SPDK is probably faster than waiting for RDMA.
  3. EluRex

    Updated: US: 40G/100G networking, SSDs, Intel v3/v4/scalables & AMD EPYC CPUs

    Dell servers will vendor-lock EPYC CPUs... https://www.servethehome.com/amd-psb-vendor-locks-epyc-cpus-for-enhanced-security-at-a-cost/
  4. EluRex

    PVE Cluster using UNAS 810A

    If your motherboard has onboard 10GBase-T like the X550/X552/X557, it runs extremely hot as well, and the MOSFETs are hot too.
  5. EluRex

    PVE Cluster using UNAS 810A

    This is all a SilverStone case issue; the backplane for the HDDs/SSDs blocks all airflow... I have to open and clean my CS280 on a regular basis (like every 3 months). It seems to me that the CS381 backplane has improved. This is the 810A SAS3 backplane.
  6. EluRex

    PVE Cluster using UNAS 810A

    The mobo is the Supermicro X11SSH-CTF with an onboard SAS3 controller; there is now a SAS3 variant model (SFF-8643) already. I have the SilverStone CS281 (2.5" version), whose cooling sucks... I don't recommend the SilverStone CS381.
  7. EluRex

    PVE Cluster using UNAS 810A

    @BigServerSmallStudio My new 2020 build uses a SAS3 mobo with a SAS3 backplane. The board has an LSI 3008 and an Intel X550 onboard; I must use an additional blower fan to cool it down.
  8. EluRex

    AMD Radeon RX 6900 XT 6800 XT and 6800 Launch

    If the 6900 XT supported SR-IOV it would be a blast.
  9. EluRex

    Cockpit ZFS Manager

    Optimans, I solved the status page issue with ln -s /bin/lsblk /usr/bin/lsblk. Please note my current Proxmox 6.0 (buster) was upgraded from Proxmox 5.x (stretch), thus I have those path issues. The complete PVE 6.0 installation for Cockpit and ZFS Manager, sketched below, is as follows: echo "deb...
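    A minimal sketch of that setup on an upgraded (non-usr-merged) system; the repository line in the post is truncated, so the standard Debian buster-backports line used here is an assumption:

      # Assumption: standard buster-backports repo line (the post's echo "deb..." is cut off).
      echo "deb http://deb.debian.org/debian buster-backports main contrib" > /etc/apt/sources.list.d/buster-backports.list
      apt update
      apt install -t buster-backports cockpit
      # Workaround from the post: expose lsblk where Cockpit ZFS Manager expects it.
      ln -s /bin/lsblk /usr/bin/lsblk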
  10. EluRex

    Cockpit ZFS Manager

    I have provided all the info in the above post, and here is a screenshot of the console. I found out that zfs.js calls /usr/bin/cat, /usr/bin/grep, and /usr/bin/echo, but Proxmox's bin directory is /bin (i.e. /bin/cat, /bin/grep, /bin/echo), so I made ln -s links to make them work, as sketched below. I have not tested all the...
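    A minimal sketch of that fix, assuming the tools really do live only under /bin on the upgraded system (skip on merged-/usr installs, where /usr/bin already has them):

      # Only create a link if the /usr/bin path is actually missing.
      for tool in cat grep echo; do
          [ -e /usr/bin/$tool ] || ln -s /bin/$tool /usr/bin/$tool
      done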
  11. EluRex

    Cockpit ZFS Manager

    I have installed Cockpit 202 (via buster-backports) and Cockpit ZFS Manager on Proxmox VE 6.0 (which is Debian Buster); however, I am getting the following error. I have the ZFS module installed and loaded into the kernel, please check: root@pve-nextcloud:~# modinfo zfs filename...
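    A quick way to confirm the module state, along the lines of the modinfo check in the post (the extra two commands are additions here, not from the post):

      modinfo zfs | head -n 3   # module metadata, as shown above
      lsmod | grep -w zfs       # confirms the module is actually loaded
      zfs version               # userland and kernel module versions (OpenZFS 0.8+)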
  12. EluRex

    Zeus V2 : U-NAS NSC-810A | X10SL7-F | E3-1265 V3

    I am using USB disks on USB 2.0 ports... a lot of the time a USB disk dies not because of write wear but because the USB host voltage exceeds the USB disk's limit.
  13. EluRex

    Zeus V2 : U-NAS NSC-810A | X10SL7-F | E3-1265 V3

    The CS380B runs extremely hot because its backplane blocks all airflow.
  14. EluRex

    Zeus V2 : U-NAS NSC-810A | X10SL7-F | E3-1265 V3

    1. You can swap the direction of the 70mm fan. 2. No, it does not... all CPU/mobo airflow goes out the side... the rear fan is for the HDDs.
  15. EluRex

    Ceph blustore over RDMA performance gain

    Hmmm, strange... because I am also running on an MSX6036 IB switch, and what I use is IPoIB.
  16. EluRex

    Ceph blustore over RDMA performance gain

    It seems your RoCE link is not up.
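    A hedged sketch of how one might check that, assuming the RDMA userspace tools (ibverbs-utils, infiniband-diags, iproute2) are installed:

      ibv_devinfo | grep -E "hca_id|state"   # port state should read PORT_ACTIVE
      ibstat                                 # for RoCE the link layer should show Ethernet
      rdma link show                         # iproute2 view of RDMA link state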
  17. EluRex

    Ceph blustore over RDMA performance gain

    The lab environment has already moved on to testing other things... no netdata or anything is available at this point.
  18. EluRex

    Ceph blustore over RDMA performance gain

    Check the error log for each OSD at /var/log/ceph/ceph-osd.[id].log. Typically the problem can be solved with ceph-disk activate /dev/sd[x] --reactivate or systemctl disable ceph-osd@[id].service; systemctl enable ceph-osd@[id].service
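    Put together as a sequence (the [id] and [x] placeholders from the post are left as-is; substitute your own OSD id and device):

      tail -n 50 /var/log/ceph/ceph-osd.[id].log    # look for the failure reason first
      ceph-disk activate /dev/sd[x] --reactivate    # re-activate the OSD's data partition
      systemctl disable ceph-osd@[id].service
      systemctl enable ceph-osd@[id].service        # re-create the service symlink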