I just started dabbling in Proxmox and Ceph and have gone through the wiki and the guides here (thanks, Patrick, for the guide on creating OSDs on disks that already have partitions).
Anyway, I have the following:
Supermicro Fat Twin with 2 x 5620s and 48GB RAM; each node has 2 x 60GB SSDs for Proxmox on a ZFS mirror, a 200GB Intel S3700 for the Ceph journal, and 2 x 2TB Seagate Constellation ENT drives for the Ceph OSDs. Before I continue: yes, I know three servers would be optimal, but this is a lab.
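The OSDs went in with the journal on the S3700, roughly like this (just a sketch; the device names are placeholders for my drives):
Code:
# Rough sketch of my OSD creation; sdb stands in for the S3700 journal SSD,
# sdc/sdd for the 2TB Constellation drives (actual device names may differ)
pveceph createosd /dev/sdc -journal_dev /dev/sdb
pveceph createosd /dev/sdd -journal_dev /dev/sdb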
I create my new pool and try to create a new RBD, and this is where it goes south. I start getting connection errors when I look at the new RBD storage, and it shows no space. I make sure it's pointed at the new pool, etc.
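For context, this is roughly how I created the pool and the storage (a sketch; the pool name, storage ID, and monitor IPs are placeholders for my setup):
Code:
# Create the pool on one node ('rbdpool' is a placeholder name)
pveceph createpool rbdpool

# Register it as RBD storage in the cluster; the monhost IPs below stand in
# for my monitor addresses
pvesm add rbd ceph-rbd -pool rbdpool -monhost "10.10.10.1;10.10.10.2" -content images -username admin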
I went through and did the key copy as well; I wasn't sure whether I still have to do this in version 4.1.
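Specifically, I copied the admin keyring into /etc/pve/priv/ceph/, named after the storage ID ('ceph-rbd' here matches the placeholder storage name above):
Code:
# PVE looks for the keyring under the storage ID, so the filename has to
# match the storage definition ('ceph-rbd' in my case)
mkdir -p /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-rbd.keyring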
Anyone have any idea what I'm doing wrong here?
One more thing: when I create my pool, do I use the storage network or the management network of the Proxmox hosts? I have the storage network on the 10Gb Intel adapters, directly connected via twinax; the network has been defined on each host and is pingable.
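In case it matters, this is how I initialised Ceph (the subnet below is just a placeholder for my twinax link):
Code:
# Initialising Ceph against the storage network; this writes the public and
# cluster network settings into /etc/pve/ceph.conf
# (10.10.10.0/24 stands in for my directly connected 10Gb subnet)
pveceph init --network 10.10.10.0/24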