Re: Ceph RDMA Update - Dev - lists.ceph.io (the dev@ceph.io thread FYASBMA2NOAVXEWM3TVFGMGUJXGHD4MB) has had no updates since Nov 2019...
SeaStore uses SPDK; this will definitely boost NVMe SSD performance two-fold or more.
However, to lower latency, you definitely need DPDK, and RDMA will...
Ceph does not support RDMA in production yet, and development on it seems extremely slow; Mellanox's commitment to it has ceased, and it is now part of the async messenger.
Development on supporting DPDK + SPDK is probably moving faster than waiting for RDMA.
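For context, the RDMA path sits behind the async messenger's transport selection. A minimal, hedged sketch of the ceph.conf knobs involved; the device name below is just a placeholder (check ibv_devices on your node), and per the above this is still not production-ready:
[global]
# experimental: select the RDMA transport of the async messenger
ms_type = async+rdma
# placeholder RDMA NIC name, not a recommendation
ms_async_rdma_device_name = mlx5_0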
This is all a SilverStone case issue; the backplane for the HDD/SSD bays blocks all airflow... I have to open and clean my CS280 on a regular basis (like every 3 months).
It seems to me that the CS381 backplane has improved;
it is the 810a SAS3 backplane.
The mobo is the Supermicro X11SSH-CTF, which has an onboard SAS3 controller,
and there is now a SAS3 variant model (SFF-8643) already.
I have the SilverStone CS281 (2.5" version), and its cooling sucks... I don't recommend the SilverStone CS381.
@BigServerSmallStudio my new 2020 build uses a SAS3 mobo with a SAS3 backplane;
the board has an LSI 3008 and also an Intel X550 onboard, and I must use an additional blower fan to cool it down.
Optimans, I solved the status page issue by running
ln -s /bin/lsblk /usr/bin/lsblk
Please note that my current Proxmox 6.0 (Buster) was upgraded from Proxmox 5.x (Stretch), hence those path issues.
The complete PVE 6.0 installation for Cockpit and ZFS Manager is as follows:
echo "deb...
I have provided all the info in the above post, and here is a screenshot of the console.
And I found out that zfs.js calls
/usr/bin/cat
/usr/bin/grep
/usr/bin/echo
but the Proxmox binaries are at
/bin/cat
/bin/grep
/bin/echo
so I made ln -s symlinks to make them work; see the sketch below.
I have not tested all the...
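For reference, a minimal sketch of the symlinks described above (same pattern as the lsblk fix earlier):
ln -s /bin/cat /usr/bin/cat
ln -s /bin/grep /usr/bin/grep
ln -s /bin/echo /usr/bin/echo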
I have installed Cockpit 202 (via buster-backports) and Cockpit ZFS Manager on Proxmox VE 6.0 (which is Debian Buster).
However, I am getting the following error.
I have the ZFS module installed and loaded into the kernel, please check:
root@pve-nextcloud:~# modinfo zfs
filename...
Check the error log for each OSD: /var/log/ceph/ceph-osd.[id].log
Typically the problem can be solved with
ceph-disk activate /dev/sd[x] --reactivate
or
systemctl disable ceph-osd@[id].service; systemctl enable ceph-osd@[id].service
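As a usage sketch for a single failed OSD (the id 3 below is a placeholder, and /dev/sd[x] stays whatever disk backs that OSD):
# look at the last errors for this OSD
tail -n 50 /var/log/ceph/ceph-osd.3.log
# then try one of the two fixes above, e.g.
ceph-disk activate /dev/sd[x] --reactivate
systemctl restart ceph-osd@3.service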