ceph

  1. V

    Starting small with Ceph storage

Hi, I am looking at building a small, low-latency Ceph storage setup that I can expand later, and I need to check whether what I am thinking of is a good idea or whether I need more hardware to start with. In my case I need fast storage with low latency, and I am looking at using RoCEv2 NFS between clients and storage. I was...
  2. D

    Hotswap issues

Hello, I have a 6027R-E1R12L server running Ubuntu 18.04.3. It currently has 4 HUH728080ALE600 drives being used as Ceph OSDs. I recently bought a couple more HUH721010ALE600 drives, which I plugged into the backplane in front. The new drives, however, aren't being recognized by the running...
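A common first step when hot-plugged SATA/SAS drives don't appear (a generic sketch, not advice verified against this thread; host numbers and device names vary per system) is to force a SCSI-host rescan and check what the kernel saw:

```shell
# Sketch: ask the kernel to rescan every SCSI host for newly
# hot-plugged drives (wildcards mean: any channel, target, LUN).
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

# Watch the kernel log for the newly detected drives
dmesg | tail -n 20

# Confirm the new disks show up as block devices
lsblk -o NAME,SIZE,MODEL
```

If the drives still don't appear, the backplane/HBA itself (rather than the OS) is the next thing to check.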
  3. I

    m.2 PCIe adapter in Supermicro Super X9DRD-CNT+

I am planning to get 5 servers (Supermicro | Products | SuperServers | 2U | 6027R-CDNRT+) to make a Ceph storage cluster, and I would like to know if I can connect an m.2 PCIe adapter (with 4 m.2 connectors) to a single PCIe slot. The motherboard supports 1x PCI-E 3.0 x16 (FHHL), 2x PCI-E 3.0 x8...
  4. M

    New ceph cluster -recommendations?

I'm about to build my first production Ceph cluster after goofing around in the lab for a while. I'd like some recommendations on hardware and setup before rushing to buy hardware and making wrong decisions. My goal is to start small with 3 nodes, using Ceph for daily tasks, and start...
  5. E

    Open Hardware for PCIE fabric

I have been running Mellanox QDR Infiniband, primarily due to the magnitude of difference in latency using RDMA and its use via SR-IOV in containers and VMs. Unfortunately, Infiniband in the industry is almost entirely proprietary, and PCIe fabrics have been a promising future for years now without...
  6. C

    CEPH: switching from HDD to SSD - HW recommendations

Hi guys, I would like to replace our current HP G6 (64G RAM, 2x L5640 CPU, TGE NIC, PCI NVMe) and HDD (1 TB WD Black) Ceph cluster with a used newer system with SSDs. We have 70 OSDs (10 per node), avg IOPS around 2k, peak ~5k; the cluster is used for KVM VMs. It is working very well, but we would...
  7. S

    Ceph low performance

Hello, we have a separate Ceph cluster and a separate Proxmox cluster (separate server nodes). I want to know whether the performance we get is normal or not; my thought was that performance could be much better with the hardware we are using. So is there any way we can improve with configuration changes...
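For questions like this, a baseline measured with Ceph's own `rados bench` tool helps separate raw cluster performance from Proxmox/VM overhead. A minimal sketch, assuming a throwaway pool named `bench` (a hypothetical name) may be created and deleted:

```shell
# Create a disposable benchmark pool (64 PGs is an assumption;
# size it to your cluster)
ceph osd pool create bench 64 64

# 60 s of 4 MiB sequential writes with 16 concurrent ops;
# keep the objects so the read tests below have data
rados bench -p bench 60 write -b 4194304 -t 16 --no-cleanup

# Sequential and random reads against the objects just written
rados bench -p bench 60 seq -t 16
rados bench -p bench 60 rand -t 16

# Clean up the benchmark objects and drop the pool
rados -p bench cleanup
ceph osd pool delete bench bench --yes-i-really-really-mean-it
```

Comparing these numbers against a `fio` run inside a VM then shows how much the virtualization layer costs.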
  8. EluRex

Ceph Bluestore over RDMA performance gain

I want to share the following testing with you: a 4-node PVE cluster with 3 Ceph Bluestore nodes, 36 OSDs total. OSD: ST6000NM0034; block.db & block.wal device: Samsung SM961 512GB; NIC: Mellanox ConnectX-3 VPI dual port 40 Gbps; Switch: Mellanox SX6036T; Network: IPoIB, separated public network &...
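For reference, switching Ceph's messenger to RDMA is done in `ceph.conf`. A minimal sketch, assuming a ConnectX-3 that shows up as `mlx4_0` (the device name is an assumption; check with `ibv_devices`):

```ini
# Hedged sketch: enable the async+rdma messenger cluster-wide.
# mlx4_0 is an assumed device name for a ConnectX-3.
[global]
ms_type = async+rdma
ms_async_rdma_device_name = mlx4_0
```

All daemons and clients must use the same messenger type, and the locked-memory limit (`ulimit -l`) typically needs to be raised for the registered memory RDMA requires.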
  9. WeekendWarrior

    Proxmox/Ceph Distributed System - Advice Sought

I am planning a Proxmox/Ceph installation and would appreciate some advice on the performance aspects of such a system. My question concerns Ceph, but it will be installed within the context of Proxmox usage. My goals are to have "many" servers (at least by SMB standards) running...
  10. P

    Reorganize our infrastructure for OpenShift/Kubernetes

Hello, as I don't have a lot of experience setting up OpenShift and Kubernetes, I'm asking for help here as a way to brainstorm and find creative ways to leverage our existing infrastructure. As a new initiative to embrace Docker, we have started dockerizing all our software and we are deploying them...
  11. vl1969

[CLOSED] Setup and use Ceph on a single-node Proxmox? A little crazy idea?

OK, before anyone starts the "this is crazy" rant, hear me out. I know this is not what it was designed to do, but I just want to get some feedback on the possibility. From everything I have read so far, it seems that it is theoretically possible to set up Ceph on a single node and still have the...
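For context, a single-node setup hinges on telling Ceph to place replicas across OSDs instead of across hosts. A minimal sketch of the relevant `ceph.conf` settings, assuming a fresh lab cluster where data loss is acceptable (these settings remove all redundancy guarantees):

```ini
# Hedged sketch for a single-node lab cluster - test data only.
[global]
osd_pool_default_size = 1        # a single replica per object
osd_pool_default_min_size = 1
osd_crush_chooseleaf_type = 0    # choose leaves at OSD level, not host level
```

With `chooseleaf_type = 0`, CRUSH is satisfied even though every OSD lives on the same host; the trade-off is that any single-disk failure loses data.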
  12. K

    Shared Multi-Host Storage for Docker and Data Volumes

Hi! I'm trying to come up with a design for an initially small-to-medium infrastructure that uses Docker and shared multi-host storage, but I'm not entirely sure which option would suit best or be the most feasible... I apologise if this is not the right forum for this thread, and if it should...
  13. H

    Cisco SG550XG-24F stacked => vLAG => packet drops on ceph cluster network

I bought 2 Cisco SG550XG-24F for our new Ceph cluster. The cluster had been set up in the lab with 2 of our old Blade G8124 24x10G switches and worked seamlessly with good performance. For the sake of simplicity, no VLAN config was used in the lab setup. Now we moved to the SG550XG (and...
  14. A

    Proxmox VE "noob" build Ceph question

    I'm looking to get my feet wet in the Proxmox world... Chassis: SuperMicro 2027TR-H72RF CPU: Xeon 2x E5-2620 per node RAM: 128GB per node SSD o/s: 2x SuperMicro SSD-DM064-PHI per node SSD Ceph: 6x Samsung SM863 or Intel S3710 per node Networking: 1x Mellanox ConnectX-3 dual port 56g + 2x 1gig...
  15. J

    Ceph on C6100

Hi, looking for a little advice from people who have used Ceph a little more than myself! I have just purchased the following equipment: 2x Dell C6100, 4 blades in each, consisting of 2x L5640, 96GB RAM, 2x 10Gb NIC mezzanine cards (waiting to be delivered). The model has 12x 3.5" drives. I am...
  16. H

Ceph pool never creating?

I've set up a small Ceph cluster: 3 nodes with 5 drives in each (spinning rust type). I've got all the OSDs up and in, but creating a pool never seems to complete. ceph -w gives me the following status. The interesting thing is that the used portion keeps increasing; it's now at 115 GB, but it's...
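When PGs stay stuck in a "creating" state, a few standard diagnostics narrow things down quickly. A hedged sketch (the pool name below is an example, not from the thread):

```shell
# First diagnostics when placement groups never finish creating:
ceph -s                          # overall health and PG state summary
ceph osd tree                    # are all 15 OSDs up/in, and under which hosts?
ceph pg dump_stuck inactive      # which PGs are stuck, and on which OSDs
ceph osd pool get rbd size       # replica count of the pool (example name)
```

A frequent cause is a replica count (or CRUSH rule) the cluster topology cannot satisfy, e.g. size=3 with failure domain "host" when fewer than 3 hosts are actually reachable.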
  17. RandyC

Dell C6100 used for OpenStack cluster

I have a C6100 that I want to set up with OpenStack. The only decision holding me back right now is the HDD/SSD choice. Should I get some inexpensive 2TB HGST drives that have been dumped on eBay recently? Or should I get 8 Intel DC S3500 drives? (Or should I get a mix of both?) Does anyone...
  18. kroem

Ceph right for my needs? (Keeping an in-sync backup on a remote location...) (pve-zsync vs Ceph?)

(This turned out to be a long thread start... sorry...) I'm redoing my servers at home and looking to maybe redo the storage too. Today I run ESXi on two hosts and run periodic backups to external storage, because I never really found a way to do it properly. (VM backups via Unitrends.) ...