I've set up a small Ceph cluster: 3 nodes with 5 drives in each (spinning rust type).
I've got all the OSDs up and in, but creating a pool never seems to complete. ceph -w gives me the following status.
The interesting thing is that the used portion keeps increasing; it's now at 115 GB, but it has taken 2 weeks to get this far. I'm trying to set up Ceph to act as an RBD storage pool for VMs. I've read as many tutorials/how-tos as I can find, they're all similar in process, and I've followed them; the OSD tree looks good compared to what others have posted in those tutorials.

ceph -w
cluster 7908651c-252e-4761-8a83-4b1cfcf90522
health HEALTH_ERR
700 pgs are stuck inactive for more than 300 seconds
700 pgs stuck inactive
monmap e1: 3 mons at {ceph1=10.0.80.10:6789/0,ceph2=10.0.80.11:6789/0,ceph3=10.0.80.12:6789/0}
election epoch 18, quorum 0,1,2 ceph1,ceph2,ceph3
osdmap e395386: 15 osds: 15 up, 15 in
flags sortbitwise,require_jewel_osds
pgmap v1456524: 700 pgs, 1 pools, 0 bytes data, 0 objects
115 GB used, 55672 GB / 55788 GB avail
700 creating
2017-02-06 16:28:50.659575 mon.0 [INF] pgmap v1456524: 700 pgs: 700 creating; 0 bytes data, 115 GB used, 55672 GB / 55788 GB avail
2017-02-06 16:28:51.259085 mon.0 [INF] mds.? 10.0.80.10:6801/2222 up:boot
2017-02-06 16:28:51.259236 mon.0 [INF] fsmap e395320:, 1 up:standby
2017-02-06 16:28:52.560383 mon.0 [INF] pgmap v1456525: 700 pgs: 700 creating; 0 bytes data, 115 GB used, 55672 GB / 55788 GB avail
2017-02-06 16:28:53.771687 mon.0 [INF] pgmap v1456526: 700 pgs: 700 creating; 0 bytes data, 115 GB used, 55672 GB / 55788 GB avail
2017-02-06 16:28:54.593187 mon.0 [INF] osdmap e395387: 15 osds: 15 up, 15 in
2017-02-06 16:28:54.661965 mon.0 [INF] pgmap v1456527: 700 pgs: 700 creating; 0 bytes data, 115 GB used, 55672 GB / 55788 GB avail
2017-02-06 16:28:54.873143 mon.0 [INF] mds.? 10.0.80.10:6800/1746 up:boot
2017-02-06 16:28:54.873331 mon.0 [INF] fsmap e395321:, 1 up:standby
2017-02-06 16:28:56.184869 mon.0 [INF] pgmap v1456528: 700 pgs: 700 creating; 0 bytes data, 115 GB used, 55672 GB / 55788 GB avail
2017-02-06 16:28:57.384587 mon.0 [INF] pgmap v1456529: 700 pgs: 700 creating; 0 bytes data, 115 GB used, 55672 GB / 55788 GB avail
2017-02-06 16:28:58.485161 mon.0 [INF] pgmap v1456530: 700 pgs: 700 creating; 0 bytes data, 115 GB used, 55672 GB / 55788 GB avail
2017-02-06 16:28:58.650389 mon.0 [INF] osdmap e395388: 15 osds: 15 up, 15 in
2017-02-06 16:28:58.650492 mon.0 [INF] mds.? 10.0.80.10:6801/2222 up:boot
2017-02-06 16:28:58.650643 mon.0 [INF] fsmap e395322:, 1 up:standby
2017-02-06 16:28:58.717787 mon.0 [INF] pgmap v1456531: 700 pgs: 700 creating; 0 bytes data, 115 GB used, 55672 GB / 55788 GB avail
2017-02-06 16:28:59.818401 mon.0 [INF] pgmap v1456532: 700 pgs: 700 creating; 0 bytes data, 115 GB used, 55672 GB / 55788 GB avail
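In case it matters, the pool was created more or less like this; the pool name below is just a placeholder and the exact numbers are from memory, though 700 does match the PG count in the pgmap above:

ceph osd pool create rbd-vms 700 700 replicated   # 700 pg_num / 700 pgp_num, default replicated rule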
Can anyone suggest anything else I should be looking at to work out what is going on? I've tried everything I can find, but I figure I must be missing something important somewhere (the checks I've already been running are listed after the tree below).

ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 54.47983 root default
-2 18.15994 host ceph1
0 3.63199 osd.0 up 1.00000 1.00000
1 3.63199 osd.1 up 1.00000 1.00000
2 3.63199 osd.2 up 1.00000 1.00000
3 3.63199 osd.3 up 1.00000 1.00000
4 3.63199 osd.4 up 1.00000 1.00000
-3 18.15994 host ceph2
5 3.63199 osd.5 up 1.00000 1.00000
6 3.63199 osd.6 up 1.00000 1.00000
7 3.63199 osd.7 up 1.00000 1.00000
8 3.63199 osd.8 up 1.00000 1.00000
9 3.63199 osd.9 up 1.00000 1.00000
-4 18.15994 host ceph3
10 3.63199 osd.10 up 1.00000 1.00000
11 3.63199 osd.11 up 1.00000 1.00000
12 3.63199 osd.12 up 1.00000 1.00000
13 3.63199 osd.13 up 1.00000 1.00000
14 3.63199 osd.14 up 1.00000 1.00000
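For completeness, these are roughly the checks I've already been running based on the tutorials (exact invocations may have differed slightly), none of which has pointed me at the cause:

ceph health detail                # more detail on the stuck/inactive PGs
ceph pg dump_stuck inactive       # which PGs are stuck and where they map
ceph osd crush rule dump          # inspect the replicated rule the pool uses
ceph osd crush show-tunables      # confirm CRUSH tunables are at defaults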