What in the heck is going on here? Here are the results of my 'insanity' check, taken right in the middle (steady state, rough average) of a 40GB VM sVMotion.
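For reference, the numbers below are zpool iostat output captured while the sVMotion was running; something along these lines (pool name and interval adjusted per test, so treat the exact invocation as a rough sketch) produces that output:
Code:
# Per-vdev write throughput, refreshed every second while the sVMotion runs;
# swap in the pool being tested (husmm1640, husmm1640-mirror, etc.)
zpool iostat -v husmm1640 1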
Single husmm
Code:
                                                capacity     operations    bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
husmm1640                                     9.17G   361G      0  3.31K      0   166M
  gptid/a9e37996-c17c-11e7-a22f-0050569a060b  9.17G   361G      0  3.31K      0   166M
--------------------------------------------  -----  -----  -----  -----  -----  -----
Mirror of two husmm's
Code:
                                                  capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
husmm1640-mirror                                4.01G   366G      0  3.38K      0   164M
  mirror                                        4.01G   366G      0  3.38K      0   164M
    gptid/b24fc567-c17f-11e7-a22f-0050569a060b      -      -      0  3.25K      0   164M
    gptid/b285c529-c17f-11e7-a22f-0050569a060b      -      -      0  3.25K      0   164M
----------------------------------------------  -----  -----  -----  -----  -----  -----
Striped mirror of 4 husmm's
Code:
                                                  capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
husmm1640-stripped-mirror                       21.8G   718G      0  3.69K      0   176M
  mirror                                        10.9G   359G      0  1.82K      0  89.3M
    gptid/769fb787-c1a8-11e7-a22f-0050569a060b      -      -      0  1.78K      0  89.4M
    gptid/76d8d32b-c1a8-11e7-a22f-0050569a060b      -      -      0  1.78K      0  89.4M
  mirror                                        10.9G   359G      0  1.87K      0  86.6M
    gptid/a21bb9ca-c1a8-11e7-a22f-0050569a060b      -      -      0  1.79K      0  86.7M
    gptid/a251ead4-c1a8-11e7-a22f-0050569a060b      -      -      0  1.79K      0  86.7M
----------------------------------------------  -----  -----  -----  -----  -----  -----
Raidz of 4 husmm's
Code:
                                                  capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
husmm1640-rz                                    5.62G  1.44T      0  3.37K      0   161M
  raidz1                                        5.62G  1.44T      0  3.37K      0   161M
    gptid/02c4749f-c1ac-11e7-a22f-0050569a060b      -      -      0  2.45K      0  57.0M
    gptid/02f5705d-c1ac-11e7-a22f-0050569a060b      -      -      0  2.45K      0  53.9M
    gptid/0326a0a8-c1ac-11e7-a22f-0050569a060b      -      -      0  2.45K      0  57.0M
    gptid/03586024-c1ac-11e7-a22f-0050569a060b      -      -      0  2.45K      0  53.9M
----------------------------------------------  -----  -----  -----  -----  -----  -----
It just seems that no matter what I do pool-config-wise (single dev, mirror, striped mirror, raidz) with these husmm's, which are a DAMN good drive (or so I thought), I only get roughly 150-175MB/s, and adding a SLOG only takes it up to 300-350MB/s (another 150-175MB/s... coincidence? I think not).
BOOO at a consistent 30-50MB/s into these devices in ANY ZFS pool config when an identical device used as a SLOG happily sucks in 175MB/s. Shouldn't they ALL be able to take 175MB/s consistently? In theory that would give me roughly 700MB/s of disk I/O throughput and 5+Gbps on the network (HELL, I'd take 500-600 at this point).
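For what it's worth, one way to sanity-check whether sync-write handling (which is all a SLOG helps with) is the limiter would be to compare the same sVMotion with sync temporarily disabled on the dataset. This is only a rough diagnostic sketch; the dataset name is a placeholder, and sync=disabled is unsafe for real data (acknowledged writes can be lost on power failure), so it should only be flipped briefly for testing:
Code:
# See how sync writes are currently handled (standard / always / disabled)
zfs get sync husmm1640/vmware        # dataset name is a placeholder

# Briefly disable sync writes for a test run ONLY, then re-run the sVMotion;
# if throughput jumps well past 175MB/s, sync-write handling is the bottleneck
zfs set sync=disabled husmm1640/vmware

# Put it back to the default when the test is done
zfs set sync=standard husmm1640/vmware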
Anyone who can shed light on this, I 'may' owe ya a kidney/liver/both :-D