
Ceph pool never creating?

Discussion in 'Linux Admins, Storage and Virtualization' started by hinch, Feb 6, 2017.

  1. hinch

    hinch New Member

    Joined:
    Jul 22, 2016
    Messages:
    19
    Likes Received:
    1
    I've set up a small Ceph cluster: 3 nodes with 5 drives in each (spinning rust type).

    I've got all the OSDs up and in, but creating a pool never seems to complete; ceph -w gives me the following status. The interesting thing is that the used portion keeps increasing: it's now at 115 GB, but it's taken 2 weeks to get this far. I'm trying to set up Ceph as an RBD storage pool for VMs. I've read as many tutorials/how-tos as I can find, they're all similar in process, I've followed them, and the OSD tree looks fine compared to what others have posted in those tutorials.

    Can anyone suggest anything else I should be looking at to work out what the hell is going on? I've tried everything I can find, but I figure I must be missing something important somewhere.
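    (For reference, a few generic status commands that usually show where pool creation is stuck; the exact output obviously depends on the cluster.)

    Code:
    ceph health detail           # which PGs are stuck, and why
    ceph osd tree                # OSDs up/in and their CRUSH weights
    ceph pg dump_stuck inactive  # PGs that never became active
    ceph osd pool ls detail      # pg_num, size and crush rule per pool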
     
    #1
  2. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,369
    Likes Received:
    731
    I dunno what the hell is taking so long, but one of my first forays into CEPH used a 200GB HUSSL SLC SSD for the cache tier and a 450GB 10K Hitachi spinner for capacity: 3 OSD nodes, 2 disks in each. Pool generation/usage was nearly instant. With a pure AFA pool and a complete CEPH cluster rebuild it seemed just as instantaneous.
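    (For anyone curious how a cache tier like that is wired up, it boils down to roughly the following; the pool names here are made up, not the ones from that build.)

    Code:
    ceph osd tier add cold-hdd hot-ssd            # attach the SSD pool as a tier of the HDD pool
    ceph osd tier cache-mode hot-ssd writeback    # absorb writes in the SSD tier
    ceph osd tier set-overlay cold-hdd hot-ssd    # send client I/O to the cache tier first
    ceph osd pool set hot-ssd hit_set_type bloom  # needed so the tier can track object hits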

    Start here and read the whole series unless you are a CEPH aficionado (that's what I did, then branched out into about 100+ articles, heh):

    http://www.virtualtothecore.com/en/adventures-ceph-storage-part-1-introduction/

    Another good quick-hit all-in-one article; not as in-depth, but it has gold nuggets in it.
    http://www.virtualtothecore.com/en/quickly-build-a-new-ceph-cluster-with-ceph-deploy-on-centos-7/
     
    #2
  3. hinch

    hinch New Member

    Joined:
    Jul 22, 2016
    Messages:
    19
    Likes Received:
    1
    That's pretty much the set of guides I used, with a little sprinkling of the official docs too, but near enough the same process. I've tried deleting the pool and re-creating it several times now; it never seems to do anything, it just gets stuck creating the PGs. I even reduced the PG count from the recommended 1500 down to 700 just to see if it would be faster.
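    (The delete/re-create cycle above amounts to something like the following; "rbd" is just a placeholder pool name, and 700 is the reduced PG count mentioned above.)

    Code:
    ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
    ceph osd pool create rbd 700 700    # pg_num and pgp_num
    ceph -s                             # PGs should go creating -> active+clean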
     
    #3
  4. Marsh

    Marsh Moderator

    Joined:
    May 12, 2013
    Messages:
    1,331
    Likes Received:
    546
    @hinch

    Sorry, I don't have any idea about your problem.
    If you are starting over building out the Ceph cluster, you may want to consider trying Proxmox; it has built-in support for Ceph and works out of the box.
     
    #4
  5. hinch

    hinch New Member

    Joined:
    Jul 22, 2016
    Messages:
    19
    Likes Received:
    1
    I'd love to, but Proxmox refuses to install on any of my servers: it gets part way through the install and shits the bed. I've tried it several times over its various 4.x revisions and it just refuses to work, so I've given up on it.
     
    #5
  6. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,369
    Likes Received:
    731
    What distro are you using for your base? And what's the overall CEPH architecture for the admin/headend/mgmt node and monitor(s)?
     
    #6
  7. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,369
    Likes Received:
    731
    BTW, my math shows that for 15 OSDs (assuming 5 spinners per OSD host, right?) the optimum PG count 'should' be between 300-480, FWIW.

    Honestly, I don't know if that's your issue though; it feels like grasping. I've built out my CEPH cluster 3 times now (first iteration using VMs/vdisks; second, VMs/HBA with 1 SSD and 1 spinner; third, VMs/HBA AFA) and never experienced this. I have been bitched at for too few PGs and had to fix that, though.
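    (For reference, the usual rule of thumb behind numbers like these, assuming a replicated pool of size 3 and a target of roughly 100 PGs per OSD:)

    Code:
    # total_pgs ~= (num_osds * target_pgs_per_osd) / replica_count,
    # rounded up to the next power of two:
    #   (15 * 100) / 3 = 500  ->  512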
     
    #7
  8. hinch

    hinch New Member

    Joined:
    Jul 22, 2016
    Messages:
    19
    Likes Received:
    1
    Using Ubuntu as the base. I added the repos from ceph.com to pull the Kraken build, but it pulled Jewel; I can live with that.
    3 nodes, 5 drives per node for Ceph plus 1 OS drive, with the mon running on the OS drive and one OSD per drive, so yeah, 15 usable OSDs.
    I had to put the journals on the OSD drives too, since it didn't like having the journals on a separate drive partitioned into 5: it would never actually activate and bring the journal partitions online, despite saying it had configured them correctly. So I just put the OSD and journal on the same drive. Speed isn't really an issue for me; it's storage volume and redundancy I'm interested in.
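    (The colocated vs. separate journal difference above comes down to how the device is passed to ceph-deploy; the node and device names below are made up.)

    Code:
    ceph-deploy osd create node1:sdb             # journal carved out of sdb itself
    ceph-deploy osd create node1:sdb:/dev/sdg1   # journal on a partition of another drive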

    I used the ceph.com PG calc (PGCalc - Ceph), which gave me 2048 PGs for 15 drives in this config with this level of redundancy. However, I read various other articles, some stating 100 PGs per OSD and some 150 per OSD, so I went with 100 per OSD and 15 OSDs at first, i.e. 1500 PGs, and it just wasn't creating the pool at all. So, just to see, I reduced that number to 700 to check whether it made a difference; tbh it didn't :(

    I'm completely out of ideas and just can't seem to find anyone else having this issue :(
     
    #8
  9. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,369
    Likes Received:
    731
    Good deal, sounds about the same as my recent struggles; I tried to get Kraken but ended up with Jewel. Gonna take another run at this tonight. Maybe ping folks on the CEPH IRC for an upgrade path/procedure.
     
    #9
  10. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,369
    Likes Received:
    731
    w00t, rebuild success w/ kraken on CentOS7

    Code:
    [cephuser@ceph-kraken-admin ceph-deploy]$ ceph -v
    ceph version 11.2.0 (f223e27eeb35991352ebc1f67423d4ebc252adb7)
    
    Code:
    [cephuser@ceph-kraken-admin ceph-deploy]$ ceph health
    HEALTH_OK
    [cephuser@ceph-kraken-admin ceph-deploy]$ ceph osd tree
    ID WEIGHT  TYPE NAME     UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 0.13170 root default
    -2 0.04390     host osd1
     0 0.04390         osd.0      up  1.00000          1.00000
    -3 0.04390     host osd2
     1 0.04390         osd.1      up  1.00000          1.00000
    -4 0.04390     host osd3
     2 0.04390         osd.2      up  1.00000          1.00000
    [cephuser@ceph-kraken-admin ceph-deploy]$ ceph -s
        cluster ca18d013-5e67-4db5-b6e4-eaa4d9caf5dc
         health HEALTH_OK
         monmap e1: 1 mons at {mon1=192.168.2.171:6789/0}
                election epoch 3, quorum 0 mon1
         osdmap e15: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v31: 64 pgs, 1 pools, 0 bytes data, 0 objects
                100 MB used, 134 GB / 134 GB avail
                      64 active+clean
    [cephuser@ceph-kraken-admin ceph-deploy]$ ceph -w
        cluster ca18d013-5e67-4db5-b6e4-eaa4d9caf5dc
         health HEALTH_OK
         monmap e1: 1 mons at {mon1=192.168.2.171:6789/0}
                election epoch 3, quorum 0 mon1
         osdmap e15: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v31: 64 pgs, 1 pools, 0 bytes data, 0 objects
                100 MB used, 134 GB / 134 GB avail
                      64 active+clean
    
    2017-02-06 22:14:26.981190 mon.0 [INF] from='client.? 192.168.2.171:0/998924038' entity='mon.' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-rgw"}]: dispatch
    
     
    #10
  11. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,369
    Likes Received:
    731
    If it helps at all, here are my ceph-deploy cmds from history:

    Code:
    8  mkdir ceph-deploy
        9  cd ceph-deploy/
       10  ceph-deploy new mon1
       12  ceph-deploy install ceph-kraken-admin mon1 osd1 osd2 osd3
       22  ceph-deploy install ceph-kraken-admin mon1 osd1 osd2 osd3
       23  ceph-deploy mon create-initial
       24  ceph-deploy gatherkeys ceph-kraken-admin
       25  ceph-deploy gatherkeys mon1
       26  ceph-deploy gatherkeys osd1
       27  ceph-deploy gatherkeys osd2
       28  ceph-deploy gatherkeys osd3
       29  ceph-deploy disk list osd1
       30  ceph-deploy disk zap osd1:sdb
       31  ceph-deploy disk zap osd2:sdb
       32  ceph-deploy disk zap osd3:sdb
       33  ceph-deploy osd create osd1:sdb
       34  ceph-deploy osd create osd2:sdb
       35  ceph-deploy osd create osd3:sdb
       36  ceph-deploy admin ceph-kraken-admin mon1 osd1 osd2 osd3
       38  ceph-deploy gatherkeys mon1
       50  history | grep ceph-deploy
    
    Disregard my gatherkeys cmds; I think they're unnecessary, MAYBE needed on the monitor node only.
     
    #11
  12. hinch

    hinch New Member

    Joined:
    Jul 22, 2016
    Messages:
    19
    Likes Received:
    1
    my command chain below

     
    #12
  13. hinch

    hinch New Member

    Joined:
    Jul 22, 2016
    Messages:
    19
    Likes Received:
    1
    Interestingly:

    Code:
    root@cnc:~# ceph pg map 5.c9
    osdmap e411368 pg 5.c9 (5.c9) -> up [] acting []

    The PGs don't seem to be getting assigned to the OSDs, even though the OSDs are up and in, and I can't for the life of me work out why.
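    (An empty up/acting set like that usually means CRUSH isn't mapping the PG to any OSDs at all, e.g. zero weights or a rule it can't satisfy. A few generic things worth dumping to compare, nothing specific to this cluster:)

    Code:
    ceph osd df                              # per-OSD CRUSH weight, reweight and utilisation
    ceph osd crush rule dump                 # the rules CRUSH can choose from
    ceph osd crush dump                      # full CRUSH map: buckets, weights, types
    ceph osd pool get <pool> crush_ruleset   # which rule the pool actually uses (Jewel-era name)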
     
    #13
  14. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,369
    Likes Received:
    731
    What type of disks are you using, and are these phys hosts or VMs?
     
    #14
  15. hinch

    hinch New Member

    Joined:
    Jul 22, 2016
    Messages:
    19
    Likes Received:
    1
    All physical hosts, raw Ubuntu 16.04 installs, fully updated, no VMs; these are dedicated nodes.
     
    #15