Ceph

  1. N

    Minisforum MS-01 ProxmoxVE Clusters

    Starting this thread to talk about the popular MS-01 as a Proxmox Virtual Environment node, in a clustered environment. Some questions I have: What are the things to look out for? Should we install other software along with Proxmox on the nodes? Is your solution successful? What unique are...
  2. G

    Proxmox & CEPH

    I would like to start a short poll that will run for 90 days here. Any thoughts about Ceph deployment and/or other distributed HA storage solutions are very much appreciated. Thanks, and keep voting.
  3. G

    Proxmox VE Ceph Benchmark Summary - 12/23 Edition

    Source: Ceph Benchmark - Fast SSDs and network speeds in a Proxmox VE Ceph Reef cluster
  4. S

    Proxmox Setup in Dell R610

    Hi All, I have three Dell R610 servers with me and was thinking of setting up a home lab with Proxmox. These servers come with PERC 6i RAID controllers, and I understand that there is no way for me to configure the drives as JBOD for Ceph storage. I am looking at replacing the PERC 6i with SAS...
  5. VMman

    Help me select NVMe storage for my Ceph Build

    Thanks in advance to all the storage experts out there. I've been wanting to get a small Ceph cluster up for some time and finally have some budget to start spending :cool: I want to start with a 5 node setup and use the experience gained to assist me with a deployment for the office at a...
  6. W

    How important is QAT for distributed storage?

    Specifically, I'm looking at Ceph (via Rook) and TrueNAS Scale (so OpenZFS) between CPUs like Xeon D 1500/AMD Epyc 3000 vs Atom C3000/Xeon D 1700. I can't seem to find anything about using QAT with Ceph/ZFS besides the fact that it's available to use. There is an older thread here about QAT on...
  7. VMman

    Recommended 10Gb NIC for Proxmox + Ceph Lab

    Hi All, I'm currently in the planning phase of setting up a lab using 2 Proxmox hypervisors connected to 3 Ceph storage hosts. I wanted to get everyone's opinion on the best-value NIC (or otherwise) to connect these servers together using a pair of ICX 6650s that I have. I...
  8. L

    What CPU for LOW power ceph cluster

    Hey guys! I want to build a low-power Ceph cluster! Rejected: Xeon "X79" LGA2011 etc. I've considered getting some old Xeon LGA2011 (or similar) servers from AliExpress, which would make cost low, ECC RAM abundant, and I/O aplenty. But they'll probably idle at 60W or more each for just the...
  9. C

    Add iSCSI Gateways in CEPH

    Hi Guys. I have a question about Ceph working with iSCSI gateways. Today we have a cluster with 10 OSD nodes, 3 monitors, and 2 iSCSI gateways. We are planning to expand the gateways to 4 machines. We understand the process to do this, but we would like to know if it's necessary to adjust some...
  10. BackupProphet

    How do people here provision new Ceph nodes?

    So you get a rack of new servers that you want to quickly add to your existing cluster. What strategy do you use to provision them? Do you use something like Ubuntu MAAS? Something else?
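Replies to questions like this often converge on image-based provisioning (MAAS, PXE, or cloud images) that leaves each node ready for cephadm adoption. A minimal cloud-init sketch under that assumption (the package set and key are illustrative placeholders, not a definitive recipe):

```yaml
#cloud-config
# Hypothetical provisioning fragment: install cephadm's runtime
# prerequisites and authorize the cluster's orchestration SSH key
# so an existing admin node can adopt the host.
packages:
  - podman    # container runtime cephadm uses to run daemons
  - lvm2      # needed for OSD device preparation
  - chrony    # time sync, required by Ceph
users:
  - name: root
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... ceph-admin   # cluster's cephadm public key (placeholder)
```

From an existing admin node, each freshly imaged host would then be brought into the cluster with cephadm's orchestrator command, `ceph orch host add <hostname> <ip>`.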
  11. V

    Starting small with Ceph storage

    Hi, I am looking at a small, low-latency Ceph storage setup that I can expand later, and I need to check whether what I have in mind is a good idea or whether I need more hardware to start with. In my case I need fast storage with low latency, and I am looking at using RoCEv2 NFS between clients and storage. I was...
  12. D

    Hotswap issues

    Hello, I have a 6027R-E1R12L server running Ubuntu 18.04.3. It currently has 4 HUH728080ALE600 drives being used as Ceph OSDs. I recently bought a couple more HUH721010ALE600 drives, which I plugged into the backplane in front. The new drives, however, aren't being recognized by the running...
  13. I

    m.2 PCIe adapter in Supermicro Super X9DRD-CNT+

    I am planning to get 5 Supermicro 6027R-CDNRT+ 2U servers to build a Ceph storage cluster, and I would like to know if I can connect an M.2 PCIe adapter (with 4 M.2 connectors) to a single PCIe slot. The motherboard supports 1x PCI-E 3.0 x16 (FHHL), 2x PCI-E 3.0 x8...
  14. M

    New Ceph cluster - recommendations?

    I'm about to build my first production Ceph cluster after goofing around in the lab for a while. I'd like some recommendations on hardware and setup before rushing to buy hardware and making the wrong decisions. My goal is to start small with 3 nodes, begin using Ceph for daily tasks, and start...
  15. E

    Open Hardware for PCIE fabric

    I have been running Mellanox QDR InfiniBand, primarily due to the magnitude of the latency difference when using RDMA, and its use via SR-IOV in containers and VMs. Unfortunately, InfiniBand in the industry is almost entirely proprietary, and PCIe fabrics have been a promising future for years now without...
  16. C

    CEPH: switching from HDD to SSD - HW recommendations

    Hi guys, I would like to replace our current HP G6 (64GB RAM, 2x L5640 CPUs, 10GbE NIC, PCIe NVMe) and HDD (1 TB WD Black) Ceph cluster with a used, newer SSD-based system. We have 70 OSDs (10 per node), average IOPS around 2k, peak ~5k; the cluster is used for KVM VMs. It is working very well, but we would...
  17. S

    Ceph low performance

    Hello, we have separate Ceph and Proxmox clusters (on separate server nodes). I want to know whether the performance we are getting is normal; my thought was that performance could be much better with the hardware we are using. So is there any way we can improve it with configuration changes...
  18. EluRex

    Ceph BlueStore over RDMA performance gain

    I want to share the following testing with you: a 4-node PVE cluster with 3 Ceph BlueStore nodes, 36 OSDs in total. OSDs: ST6000NM0034; block.db & block.wal device: Samsung SM961 512GB; NIC: Mellanox ConnectX-3 VPI dual-port 40 Gbps; switch: Mellanox SX6036T; network: IPoIB, separated public network &...
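For readers trying to reproduce a setup like this: Ceph's async messenger can be switched from TCP to RDMA in ceph.conf. A hedged fragment (the device name `mlx4_0` is an assumption matching a ConnectX-3 card; verify yours with `ibv_devices`):

```ini
[global]
# Switch the async messenger from TCP to RDMA verbs.
ms_type = async+rdma
# RDMA-capable device to bind (ConnectX-3 typically enumerates as mlx4_0).
ms_async_rdma_device_name = mlx4_0
```

RDMA messaging has long been flagged as experimental in Ceph, so results like the ones shared here are worth validating on a test cluster before any production rollout.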
  19. WeekendWarrior

    Proxmox/Ceph Distributed System - Advice Sought

    I am planning a Proxmox/Ceph installation and would appreciate some advice on performance aspects of such a system. My question seems to concern Ceph but that will be installed within the context of Proxmox usage. My goals are to have "many" servers (at least by SMB standards) running...
  20. P

    Reorganize our infrastructure for OpenShift/Kubernetes

    Hello, as I don't have a lot of experience setting up OpenShift and Kubernetes, I'm asking for help here as a way to brainstorm and find creative ways to leverage our existing infrastructure. As a new initiative to embrace Docker, we have started dockerizing all our software and we are deploying them...