HATERS GONNA HATE HAHAH j/k bro, I knew that was coming... you know me and my JANKY gear :-D

Interesting. I think you need more drives.
Why is READ not penalized nearly as much as WRITE in Ceph? Maybe I'm missing a simple/fundamental core concept here: replicas, PGs, or the CRUSH chooseleaf type in ceph.conf? If this is "all she's got, Scotty" for my config and these are reasonable performance numbers, I can deal I guess; when I had 15 or so VMs on it, it seemed very snappy/responsive. Read is 250 MB/s, write is 35-40 MB/s.
Sounds about right.
I'm simplifying a bit, but:
Write is divided by 3 for the standard triple redundancy,
and then divided by 2 for the journaling (FileStore writes everything twice: once to the journal, once to the data partition).
So 250 MB/s / 3 ≈ 83 MB/s, / 2 ≈ 41 MB/s, which lines up with your observed 35-40 MB/s.
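The arithmetic above can be sketched as a quick shell calculation. Note the `replicas=3` figure is the simplification stated above; the ceph.conf later in this thread actually sets `osd_pool_default_size = 2`:

```shell
# Back-of-the-envelope write estimate: raw read bandwidth divided by the
# replication factor and the FileStore journal double-write.
read_bw=250   # measured read bandwidth, MB/s
replicas=3    # assumed triple replication (this cluster actually uses size = 2)
journal=2     # FileStore writes each object twice: journal + data
echo "estimated write: $(( read_bw / replicas / journal )) MB/s"
```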
[cephuser@ceph-admin ceph-deploy]$ cat ceph.conf
[global]
fsid = 31485460-ffba-4b78-b3f8-3c5e4bc686b1
mon_initial_members = osd01, osd02, osd03
mon_host = 192.168.2.176,192.168.2.177,192.168.2.178
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.2.0/24
cluster_network = 192.168.111.0/24
osd_pool_default_size = 2 # Write an object 2 times
osd_pool_default_min_size = 1 # Allow writing 1 copy in a degraded state
osd_pool_default_pg_num = 256
osd_pool_default_pgp_num = 256
osd_crush_chooseleaf_type = 1
[cephuser@ceph-admin ceph-deploy]$ ceph osd lspools
0 rbd,
Yeah, I was hoping to create/isolate a new pool on the new devices just to see if the devices had anything to do with it, but I suppose we have debunked that and said the hell with my theory there. So my previous cmds will work, rock on.

You don't need to create a new pool. Just add the OSDs and data will be redistributed to use them...
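For reference, adding OSDs with a Jewel-era ceph-deploy setup looks roughly like this. A sketch only: the device path `/dev/sdc` is a placeholder, not taken from this thread.

```shell
# Hypothetical example: add one new disk per host as an OSD.
# Substitute your actual device paths for /dev/sdc.
ceph-deploy disk zap osd01:/dev/sdc     # wipe the new disk
ceph-deploy osd create osd01:/dev/sdc   # prepare + activate it as an OSD
# Repeat for osd02 and osd03, then watch the cluster rebalance:
ceph -w
```

No pool changes are needed; CRUSH redistributes existing PGs onto the new OSDs automatically.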
[cephuser@ceph-admin ceph-deploy]$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 2.16747 root default
-2 0.72249 host osd01
0 0.72249 osd.0 up 1.00000 1.00000
-3 0.72249 host osd02
1 0.72249 osd.1 up 1.00000 1.00000
-4 0.72249 host osd03
2 0.72249 osd.2 up 1.00000 1.00000
[cephuser@ceph-admin ceph-deploy]$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 3.22939 root default
-2 1.07646 host osd01
0 0.72249 osd.0 up 1.00000 1.00000
3 0.17699 osd.3 up 1.00000 1.00000
4 0.17699 osd.4 up 1.00000 1.00000
-3 1.07646 host osd02
1 0.72249 osd.1 up 1.00000 1.00000
5 0.17699 osd.5 up 1.00000 1.00000
6 0.17699 osd.6 up 1.00000 1.00000
-4 1.07646 host osd03
2 0.72249 osd.2 up 1.00000 1.00000
7 0.17699 osd.7 up 1.00000 1.00000
8 0.17699 osd.8 up 1.00000 1.00000
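As an aside, CRUSH weight defaults to the device's capacity in TiB, so the weights in the tree above can be converted back to rough drive sizes (the exact drive models are not stated in the thread, so this is only an inference):

```shell
# Convert the CRUSH weights from the tree above (TiB) back to decimal GB.
awk 'BEGIN {
  tib = 1099511627776                            # bytes per TiB
  printf "0.72249 -> ~%.0f GB drive\n", 0.72249 * tib / 1e9
  printf "0.17699 -> ~%.0f GB drive\n", 0.17699 * tib / 1e9
}'
```

That works out to roughly 800 GB for the original OSDs and roughly 200 GB for each of the six new ones.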
[cephuser@ceph-admin ceph-deploy]$ ceph -w
cluster 31485460-ffba-4b78-b3f8-3c5e4bc686b1
health HEALTH_WARN
1 pgs backfill_wait
1 pgs backfilling
recovery 1243/51580 objects misplaced (2.410%)
too few PGs per OSD (14 < min 30)
monmap e1: 3 mons at {osd01=192.168.2.176:6789/0,osd02=192.168.2.177:6789/0,osd03=192.168.2.178:6789/0}
election epoch 6, quorum 0,1,2 osd01,osd02,osd03
osdmap e129: 9 osds: 9 up, 9 in; 1 remapped pgs
flags sortbitwise,require_jewel_osds
pgmap v42140: 64 pgs, 1 pools, 101382 MB data, 25390 objects
199 GB used, 3107 GB / 3306 GB avail
1243/51580 objects misplaced (2.410%)
62 active+clean
1 active+remapped+backfilling
1 active+remapped+wait_backfill
recovery io 636 MB/s, 159 objects/s
client io 999 B/s rd, 999 B/s wr, 0 op/s rd, 1 op/s wr
2017-01-25 19:03:40.896002 mon.0 [INF] pgmap v42139: 64 pgs: 1 active+remapped+wait_backfill, 1 active+remapped+backfilling, 62 active+clean; 101382 MB data, 199 GB used, 3107 GB / 3306 GB avail; 991 B/s rd, 991 B/s wr, 2 op/s; 1243/51580 objects misplaced (2.410%); 631 MB/s, 157 objects/s recovering
2017-01-25 19:03:41.917934 mon.0 [INF] osdmap e129: 9 osds: 9 up, 9 in
2017-01-25 19:03:41.921966 mon.0 [INF] pgmap v42140: 64 pgs: 1 active+remapped+wait_backfill, 1 active+remapped+backfilling, 62 active+clean; 101382 MB data, 199 GB used, 3107 GB / 3306 GB avail; 999 B/s rd, 999 B/s wr, 2 op/s; 1243/51580 objects misplaced (2.410%); 636 MB/s, 159 objects/s recovering
2017-01-25 19:03:42.935860 mon.0 [INF] osdmap e130: 9 osds: 9 up, 9 in
2017-01-25 19:03:42.936920 mon.0 [INF] pgmap v42141: 64 pgs: 1 peering, 1 active+remapped+backfilling, 62 active+clean; 101382 MB data, 199 GB used, 3106 GB / 3306 GB avail; 782/51187 objects misplaced (1.528%); 266 MB/s, 66 objects/s recovering
2017-01-25 19:03:42.944929 mon.0 [INF] pgmap v42142: 64 pgs: 1 peering, 1 active+remapped+backfilling, 62 active+clean; 101382 MB data, 199 GB used, 3106 GB / 3306 GB avail; 782/51187 objects misplaced (1.528%); 533 MB/s, 133 objects/s recovering
2017-01-25 19:03:43.949058 mon.0 [INF] pgmap v42143: 64 pgs: 1 peering, 1 active+remapped+backfilling, 62 active+clean; 101382 MB data, 199 GB used, 3106 GB / 3306 GB avail; 782/51187 objects misplaced (1.528%)
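The "too few PGs per OSD (14 &lt; min 30)" warning above follows from the math: 64 PGs x size 2 / 9 OSDs ≈ 14. A sketch of the usual fix, raising pg_num on the single rbd pool shown earlier in the thread:

```shell
# 256 PGs * size 2 / 9 OSDs ~= 56 PGs per OSD, comfortably above the min of 30.
# Note pg_num can only ever be increased, never decreased.
ceph osd pool set rbd pg_num 256
# Once the new PGs are created, bump pgp_num to match so data rebalances:
ceph osd pool set rbd pgp_num 256
```

Expect another round of backfill traffic while the split PGs settle.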
[root@cephgw ~]# vnstat -l -i ens160
Monitoring ens160... (press CTRL-C to stop)
rx: 311.18 Mbit/s 3164 p/s tx: 616.44 Mbit/s 7415 p/s
[root@cephgw ~]# vnstat -l -i ens160
Monitoring ens160... (press CTRL-C to stop)
rx: 2.27 Gbit/s 34827 p/s tx: 2.26 Gbit/s 21284 p/s
[root@cephgw ~]# vnstat -l -i ens160
Monitoring ens160... (press CTRL-C to stop)
rx: 932.57 Mbit/s 9508 p/s tx: 933.93 Mbit/s 11017 p/s