Specifically, I'm looking at Ceph (via Rook) and TrueNAS Scale (so OpenZFS) on CPUs like the Xeon D-1500/AMD EPYC 3000 vs. the Atom C3000/Xeon D-1700.
I can't seem to find anything about using QAT with Ceph/ZFS besides the fact that it's available to use. There is an older thread here about QAT on...
I'm currently in the planning phase of setting up a lab using 2 Proxmox hypervisors connected to 3 Ceph storage hosts.
I wanted to get everyone's opinion on the best-value NIC (or otherwise) to connect these servers together using a pair of ICX 6650s that I have.
I want to build a low-power Ceph cluster!
Rejected: Xeon "X79" LGA2011, etc.
I've considered getting some old Xeon LGA2011 (or similar) servers from AliExpress, which would keep cost low, with ECC RAM abundant and I/O aplenty. But they'll probably idle at at least 60 W each for just the...
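To put that idle draw in perspective, here is a quick cost sketch. The node count (three) and the electricity rate are assumed example values, not figures from the thread:

```python
# Rough yearly cost of three always-on nodes idling at 60 W each.
# Node count and the price per kWh (0.30) are assumed example values.
nodes = 3
idle_watts = 60
price_per_kwh = 0.30

kwh_per_year = nodes * idle_watts * 24 * 365 / 1000  # W -> kWh per year
cost_per_year = kwh_per_year * price_per_kwh

print(f"{kwh_per_year:.0f} kWh/year, about {cost_per_year:.0f} per year")
```

At these assumed rates, just the idle draw of three such nodes costs on the order of a few hundred per year, which is why low-idle-power platforms are attractive for an always-on lab cluster.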
I have a question about Ceph working with iSCSI gateways.
Today we have a cluster with 10 OSD nodes, 3 monitors, and 2 iSCSI gateways. We are planning to expand the gateways to 4 machines. We understand the process to do this, but we would like to know if it's necessary to adjust some...
I am looking at a small low-latency Ceph storage setup that I can expand later, and I need to check whether what I am thinking of is a good idea or whether I need more hardware to start with.
In my case I need fast storage with low latency, and I am looking at using RoCEv2 NFS between clients and storage.
I have a 6027R-E1R12L server running Ubuntu 18.04.3.
It currently has 4 HUH728080ALE600 drives being used as Ceph OSDs.
I recently bought a couple more HUH721010ALE600 drives, which I plugged into the backplane in front. The new drives, however, aren't being recognized by the running...
I am planning to get 5 Supermicro 2U SuperServer 6027R-CDNRT+ servers
to build a Ceph storage cluster,
and I would like to know if I can connect an M.2 PCIe adapter (with 4 M.2 connectors) to a single PCIe slot.
The motherboard supports:
1x PCI-E 3.0 x16 (FHHL),
2x PCI-E 3.0 x8...
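One thing to check first: quad-M.2 carrier cards generally need the slot to support PCIe bifurcation (e.g. x16 split as 4x4) unless the card has its own PCIe switch, and a single lane would starve four NVMe drives. A rough bandwidth sketch using standard PCIe 3.0 figures:

```python
# Usable PCIe 3.0 bandwidth per lane: 8 GT/s with 128b/130b encoding.
GT_PER_S = 8e9
lane_bytes_per_s = GT_PER_S * (128 / 130) / 8  # ~0.985 GB/s per lane

def slot_bandwidth_gbs(lanes):
    """Approximate usable one-direction bandwidth of a slot, in GB/s."""
    return lanes * lane_bytes_per_s / 1e9

# A quad-M.2 carrier in an x16 slot bifurcated as 4x4 gives each drive:
per_drive = slot_bandwidth_gbs(4)
print(f"x16 slot: {slot_bandwidth_gbs(16):.1f} GB/s total, "
      f"{per_drive:.2f} GB/s per M.2 drive (x4 each)")
```

So the x16 slot, bifurcated, gives each drive roughly 3.9 GB/s, while a single lane tops out under 1 GB/s shared across all four drives.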
I'm about to build my first production ceph cluster after goofing around in the lab for a while.
I'd like some recommendations on hardware and setup before rushing to buy hardware and making wrong decisions.
My goal is to begin small with 3 nodes, start using Ceph for daily tasks, and start...
I have been running Mellanox QDR InfiniBand, primarily due to the magnitude of difference in latency using RDMA and its use via SR-IOV in containers and VMs. Unfortunately, InfiniBand in the industry is almost entirely proprietary, and PCIe fabrics have been a promising future for years now without...
I would like to replace our current HP G6 (64 GB RAM, 2x L5640 CPUs, 10GbE NIC, PCIe NVMe) and HDD (1 TB WD Black) Ceph cluster with a used newer system with SSDs.
We have 70 OSDs (10 per node), avg IOPS around 2k, peak ~5k; the cluster is used for KVM VMs. It is working very well, but we would...
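As a rough sanity check on those numbers: replicated writes multiply the backend load across OSDs. The sketch below assumes 3x replication, which is an assumption rather than something stated above:

```python
# Back-of-envelope per-OSD load for a 70-OSD cluster.
# Replication factor 3 is an assumption; each client write then
# costs roughly 3 backend writes spread over the OSDs.
osds = 70
replication = 3

def backend_iops_per_osd(client_iops):
    """Average backend write IOPS landing on each OSD."""
    return client_iops * replication / osds

avg = backend_iops_per_osd(2000)   # ~86 IOPS per OSD on average
peak = backend_iops_per_osd(5000)  # ~214 IOPS per OSD at peak
print(f"avg {avg:.0f}, peak {peak:.0f} backend IOPS per OSD")
```

Even at peak, that is a very modest per-OSD load, which suggests the workload would fit comfortably on far fewer SSD-backed OSDs.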
We have a separate Ceph cluster and a separate Proxmox cluster (separate server nodes). I want to know whether the performance we are getting is normal or not; my thought was that it could be much better with the hardware we are using.
So is there any way we can improve it with configuration changes...
I want to share the following test results with you:
4-node PVE cluster with 3 Ceph BlueStore nodes, 36 OSDs in total.
block.db & block.wal device: Samsung SM961 512GB
NIC: Mellanox ConnectX-3 VPI dual port 40 Gbps
Switch: Mellanox SX6036T
Network: IPoIB separated public network &...
I am planning a Proxmox/Ceph installation and would appreciate some advice on performance aspects of such a system. My question seems to concern Ceph but that will be installed within the context of Proxmox usage.
My goals are to have "many" servers (at least by SMB standards) running...
As I don't have a lot of experience setting up OpenShift and Kubernetes, I'm asking for help here as a way to brainstorm and find creative ways to leverage our existing infrastructure.
As a new initiative to embrace Docker, we are starting to dockerize all our software and we are deploying them...
OK, before anyone starts a "This is crazy" rant, hear me out.
I know this is not what it was designed to do, but I just want to get some feedback on the possibility.
From everything I have read so far, it seems that it is theoretically possible to set up Ceph on a single node and still have the...
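For what it's worth, single-node Ceph setups are usually made workable by telling CRUSH to place replicas across OSDs rather than across hosts. A minimal ceph.conf sketch (the values are illustrative, not a recommendation):

```ini
# ceph.conf fragment for a single-node lab cluster (illustrative values).
[global]
# Replicate across OSDs instead of hosts (type 0 = osd in the
# default CRUSH hierarchy).
osd_crush_chooseleaf_type = 0
# On a tiny cluster, keep 2 copies and allow I/O with 1 during recovery.
osd_pool_default_size = 2
osd_pool_default_min_size = 1
```

With `osd_crush_chooseleaf_type = 0`, a single host with several OSDs can satisfy the replica placement rules, at the obvious cost of no protection against whole-host failure.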
I'm trying to come up with a design for an initially small-to-medium infrastructure that uses Docker and shared multi-host storage, but I'm not entirely sure which option would suit best or be the most feasible...
I apologise if this is not the right forum for this thread, and if it should...
I bought 2 Cisco SG550XG-24F switches for our new Ceph cluster.
The cluster has been setup in the lab with 2 of our old Blade G8124 24x10G Switches and worked seamlessly with good performance. For the sake of simplicity no VLAN config has been used in the lab setup.
Now we moved to the SG550XG (and...
I'm looking to get my feet wet in the Proxmox world...
Chassis: SuperMicro 2027TR-H72RF
CPU: Xeon 2x E5-2620 per node
RAM: 128GB per node
SSD o/s: 2x SuperMicro SSD-DM064-PHI per node
SSD Ceph: 6x Samsung SM863 or Intel S3710 per node
Networking: 1x Mellanox ConnectX-3 dual-port 56 Gb/s + 2x 1GbE...
Looking for a little advice from people who have used Ceph a little more than myself!
I have just purchased the following equipment
2 x Dell C6100
4 Blades in each consisting of
2 x L5640
2 x 10Gb NIC mezzanine cards (waiting to be delivered)
The model has 12 x 3.5" drives.