capacity operations bandwidth
pool alloc free read write read write
-------------------------------------- ----- ----- ----- ----- ----- -----
Backup_Rz1_8TB 4.90T 16.8T 0 16.1K 0 609M
raidz1 4.90T 16.8T 0 12.2K 0 191M
gptid/e59bbf88-9a34-11e7-8ec0-005056b7ee49 - - 0 2.46K 0 97.9M
gptid/e65bc623-9a34-11e7-8ec0-005056b7ee49 - - 0 2.43K 0 98.0M
gptid/e70b53ca-9a34-11e7-8ec0-005056b7ee49 - - 0 2.49K 0 97.4M
logs - - - - - -
gptid/e77bc85c-9a34-11e7-8ec0-005056b7ee49 708M 92.3G 0 1.97K 0 209M
gptid/07a0a821-9a35-11e7-8ec0-005056b7ee49 708M 92.3G 0 1.97K 0 209M
capacity operations bandwidth
pool alloc free read write read write
-------------------------------------- ----- ----- ----- ----- ----- -----
Backup_Rz1_8TB 4.94T 16.8T 16.0K 0 220M 0
raidz1 4.94T 16.8T 16.0K 0 220M 0
gptid/e59bbf88-9a34-11e7-8ec0-005056b7ee49 - - 1.10K 0 75.4M 0
gptid/e65bc623-9a34-11e7-8ec0-005056b7ee49 - - 1.09K 0 82.5M 0
gptid/e70b53ca-9a34-11e7-8ec0-005056b7ee49 - - 1.34K 0 87.1M 0
logs - - - - - -
gptid/e77bc85c-9a34-11e7-8ec0-005056b7ee49 8K 93.0G 0 0 0 0
gptid/07a0a821-9a35-11e7-8ec0-005056b7ee49 140K 93.0G 0 0 0 0
-------------------------------------- ----- ----- ----- ----- ----- -----
Data_Rz1_8TB 4.83T 16.9T 0 18.3K 0 501M
raidz1 4.83T 16.9T 0 16.2K 0 251M
gptid/3d689dc7-995c-11e7-9b6e-005056b7ee49 - - 0 1.11K 0 128M
gptid/3e0c5992-995c-11e7-9b6e-005056b7ee49 - - 0 1.16K 0 127M
gptid/3ebadb19-995c-11e7-9b6e-005056b7ee49 - - 0 1.15K 0 127M
logs - - - - - -
gptid/3f1e782a-995c-11e7-9b6e-005056b7ee49 336M 92.7G 0 1.09K 0 125M
gptid/d65449c9-99c3-11e7-8ec0-005056b7ee49 336M 92.7G 0 1.09K 0 125M
-------------------------------------- ----- ----- ----- ----- ----- -----
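For anyone wanting to reproduce this kind of per-vdev view: output in this shape is what zpool iostat prints when run verbosely with a sampling interval. A minimal sketch, using the pool names from the listing above and an arbitrary 5-second interval:

# Per-vdev throughput, refreshed every 5 seconds (Ctrl-C to stop)
zpool iostat -v Backup_Rz1_8TB 5
zpool iostat -v Data_Rz1_8TB 5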
You should see a pool of SAS3 HUSMMs on a 12G HBA look lame until you throw a SLOG behind them; really kind of disappointing. No matter what config, even striped across 4 disks, they never really GO until they get a SLOG behind 'em, at least not in this use case. Reads are good, though, and REALLY go in both directions once you get a good log device picking up the slack.

@whitey, are you saying those HUSMM drives, configured in any fashion (stripe, striped mirrors, etc.), offer bad performance until you configure a SLOG of similar speed or faster?
Seems like when you put them in any ZFS config (striped, mirrored, striped mirrors, RAIDZ1/2) they push about GbE speeds for some unknown reason until you put a SLOG behind them. I should perform more thorough tests after I get my data re-situated and test a single device, I guess. Irritating, to say the least.
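Since ESXi over NFS or iSCSI issues mostly sync writes, one quick way to check whether sync handling (rather than the drives themselves) is what caps the pool at roughly GbE speeds is to compare against the async ceiling. A rough sketch, assuming a FreeNAS shell, fio installed, and a hypothetical dataset tank/vm standing in for the real one; disabling sync is for testing only, as it risks data loss on power failure:

# What is the current sync policy? (standard is the default)
zfs get sync tank/vm

# Baseline: forced sync writes through the ZIL/SLOG path
fio --name=sync-test --rw=write --bs=128k --size=4g --sync=1 --directory=/mnt/tank/vm

# Compare against the async ceiling (TESTING ONLY), then restore the default
zfs set sync=disabled tank/vm
fio --name=async-test --rw=write --bs=128k --size=4g --directory=/mnt/tank/vm
zfs set sync=standard tank/vm

If the async run is dramatically faster, the bottleneck is the sync/ZIL path rather than the HUSMMs themselves, which would line up with the pool only waking up once a SLOG is added.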
capacity operations bandwidth
pool alloc free read write read write
-------------------------------------- ----- ----- ----- ----- ----- -----
tank0 554G 10.3T 0 26.3K 0 1.22G
mirror 104G 1.71T 0 2.61K 0 42.1M
gptid/a8f38541-612f-11e6-830d-001b21a48d6c - - 0 788 0 41.8M
gptid/2e2fa90a-dd40-11e5-86a9-0cc47a79d5db - - 0 1.71K 0 42.9M
mirror 110G 1.71T 0 3.54K 0 57.1M
gptid/30d5c5ca-dd40-11e5-86a9-0cc47a79d5db - - 0 1.49K 0 57.7M
gptid/33b0c42a-dd40-11e5-86a9-0cc47a79d5db - - 0 970 0 57.1M
mirror 86.2G 1.73T 0 2.38K 0 38.4M
gptid/35184fad-dd40-11e5-86a9-0cc47a79d5db - - 0 711 0 38.9M
gptid/35da6a2e-dd40-11e5-86a9-0cc47a79d5db - - 0 702 0 38.4M
mirror 85.4G 1.73T 0 3.26K 0 52.8M
gptid/36a1dbd5-dd40-11e5-86a9-0cc47a79d5db - - 0 876 0 52.8M
gptid/377656cd-dd40-11e5-86a9-0cc47a79d5db - - 0 920 0 52.2M
mirror 85.4G 1.73T 0 2.88K 0 46.2M
gptid/50d5fb00-e2ab-11e5-a3ea-001b21a48d6c - - 0 696 0 46.2M
gptid/517b3d95-e2ab-11e5-a3ea-001b21a48d6c - - 0 1.03K 0 46.4M
mirror 83.2G 1.73T 0 2.88K 0 46.3M
gptid/6f68530d-e82f-11e5-83ff-001b21a48d6c - - 0 725 0 46.6M
gptid/707d68e8-e82f-11e5-83ff-001b21a48d6c - - 0 646 0 46.0M
logs - - - - - -
gptid/618bd10b-2d60-11e7-88d5-a0369f4a8a68 8.19G 364G 0 8.76K 0 965M
-------------------------------------- ----- ----- ----- ----- ----- -----
The Intel 750 is working very well for me: 12x2TB in mirror+stripe and a 400GB 750 as SLOG, with 2x10Gb SFP+ from the ESXi node in round-robin.

That's interesting! How long can the 750 maintain that write speed? It's usually poo-poo'd as a SLOG device around here.
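One way to answer the "how long" question is to soak the log device with sync writes for longer than any onboard cache could hide, while watching the log vdev from a second session. A sketch, assuming fio is available and /mnt/tank0 is a dataset on the pool shown above (the names are placeholders):

# Ten minutes of sustained sync writes
fio --name=slog-soak --rw=write --bs=128k --size=32g --runtime=600 --time_based --sync=1 --directory=/mnt/tank0

# In a second shell, watch whether the log vdev's write bandwidth holds up
zpool iostat -v tank0 1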
Remember, the 750 has 2GB of onboard DRAM.

Is that for all capacities or just the 400GB?
Now I still wonder: will the speed of the pool impact the max speed of the SLOG device?

Anyone have an idea on this?
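For what it's worth, the SLOG only holds sync writes long enough for them to be acknowledged; the same data is still flushed from RAM to the data vdevs when each transaction group commits, so over a sustained run the pool's own write throughput should set the ceiling no matter how fast the log device is. Watching both vdev types at a 1-second interval makes this visible; a sketch against the tank0 pool shown earlier:

# Log vdev takes the sync bursts; data vdevs show the txg flushes that pace sustained throughput
zpool iostat -v tank0 1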