Proxmox and CEPH cluster issues


msvirtualguy

I just started dabbling in Proxmox and Ceph and have gone through the wiki and the guides here (thanks, Patrick, for the guide on OSDs with disks that already have partitions).

Anyway, I have the following:

Supermicro Fat Twin with 2 x 5620s and 48GB RAM; each node has 2 x 60GB SSDs for Proxmox on a ZFS mirror, a 200GB Intel S3700 for the Ceph journal, and 2 x 2TB Seagate Constellation ENT drives for the Ceph OSDs. Before I continue: yes, I know three servers would be optimal, but this is a lab.

Anyway, I create my new pool and try to create a new RBD, and this is where it goes south. I start getting connection errors when I look at the new RBD storage, and it shows no space. I made sure it's pointed to the new pool, etc.

I also went through and did the key copy, though I wasn't sure I still had to do this in version 4.1.

Anyone have any idea what I'm doing wrong here?

One more thing: when I create my pool, do I use the storage network or the mgmt network of the Proxmox hosts? I have the storage network on the 10Gb Intel adapters, directly connected via Twinax; the network has been defined on each host and is pingable.
 

Patrick

What command are you using for the key copy? If you are copy/pasting out of the wiki, you need to change one of the copy values.

Code:
# cd /etc/pve/priv/
# mkdir ceph
# cp /etc/ceph/ceph.client.admin.keyring ceph/my-ceph-storage.keyring
"my-ceph-storage.keyring" you would change to "rbd.keyring" for the default rbd pool as an example.

That trips a lot of people up.
 

msvirtualguy

My pool is called VM and I made sure that I changed it to VM.keyring.

Do I need to specify the storage network or the Proxmox mgmt network when I create the pool?
 

Patrick

My pool is called VM and I made sure that I changed it to VM.keyring.

Do I need to specify the storage network or the Proxmox mgmt network when I create the pool?
You should use the storage network.

For example, in one lab I use:
10.0.3.0/24 as the main Proxmox network
10.0.5.0/24 as the storage network for Proxmox traffic (different NICs)

The 10.0.5.0/24 network is what I use for the Ceph monitors and for client access to the pool.
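The resulting RBD entry in /etc/pve/storage.cfg then points at the storage-network addresses. As a rough sketch (the storage name and monitor IPs here are illustrative):

Code:
rbd: my-ceph-storage
        monhost 10.0.5.1 10.0.5.2 10.0.5.3
        pool rbd
        content images
        username admin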
 

msvirtualguy

Active Member
Jan 23, 2013
494
244
43
msvirtualguy.com
OK, I think I tried it using both networks, but I will double-check. I have a port channel set up for the Proxmox mgmt and VM traffic on 192.168.10.0/24, and my 10Gb network directly connected to each node on 10.10.10.0/30 for storage, single connection.
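For what it's worth, one quick way to sanity-check which network the monitors actually bound to (assuming the standard pveceph-generated config) is:

Code:
# grep -E 'public network|mon addr' /etc/ceph/ceph.conf
# ceph -s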
 

Markus

Did I miss something? How did you create the Ceph storage with just two nodes?
Can you briefly outline the steps?

Regards
Markus
 

msvirtualguy

CLI first

Code:
node1# pveceph install -version hammer
node2# pveceph install -version hammer
node1# pveceph init --network 10.10.10.0/30    <- my storage network
node2# pveceph init --network 10.10.10.0/30    <- my storage network
node1# pveceph createmon
node2# pveceph createmon
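With both monitors created, a quick status check should show them up (health will complain until the OSDs exist):

Code:
node1# ceph -s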

GUI next

See the ServeTheHome.com article for the creation of the OSDs and journals, and for the issues you may run into with disks that already have partitions:

Proxmox VE Cluster with Ceph - Re-purposing for Hyper-convergence

Then:

Create a new pool with a size/min_size of 2/1.

Create the RBD storage (in my case) and point it to the pool you created in the previous step, using the storage network, and make sure you choose both nodes (a CLI sketch of both steps follows below).
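If you'd rather do those two steps from the CLI, something like this should be equivalent; the option names are from memory, so double-check them against man pveceph and man pvesm, and the monitor IPs are just the two hosts on my /30:

Code:
node1# pveceph createpool VM -size 2 -min_size 1
node1# pvesm add rbd VMs -pool VM -monhost '10.10.10.1;10.10.10.2' -content images -nodes node1,node2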

Code:
# cd /etc/pve/priv/
# mkdir ceph
# cp /etc/ceph/ceph.client.admin.keyring ceph/my-ceph-storage.keyring

(Replace "my-ceph-storage" with whatever you named your RBD storage; in my case VMs, so VMs.keyring.)

FYI: I also updated Proxmox to the latest version before the initial install, and updated the Ceph packages to the latest version afterward.
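(The updates themselves were just the usual apt routine, assuming the stock Proxmox repositories:)

Code:
# apt-get update && apt-get dist-upgrade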

 