whitey's FreeNAS ZFS ZIL testing


Rand__

Well-Known Member
Mar 6, 2014
6,634
1,767
113
iSCSI will not use the SLOG by default, since ESXi will not issue sync writes over it IIRC - you might want to set sync to 'always' (I believe that is the value) for testing

edit: or use NFS with the default sync behavior, like @whitey suggested
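
Something like this from the FreeNAS shell is all it takes - a minimal sketch, where the child dataset name is just a placeholder for whatever backs the datastore:

Code:
# Force every write on the dataset/zvol backing the datastore to be synchronous
zfs set sync=always Backup_Rz1_8TB/vmware    # '/vmware' is a placeholder name
# Verify the setting
zfs get sync Backup_Rz1_8TB/vmware
# Put it back afterwards so only client-requested sync writes hit the SLOG
zfs set sync=standard Backup_Rz1_8TB/vmware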
 

marcoi

Well-Known Member
Apr 6, 2013
1,533
289
83
Gotha Florida
Changed sync to always:

Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Backup_Rz1_8TB                          4.90T  16.8T      0  16.1K      0   609M
  raidz1                                4.90T  16.8T      0  12.2K      0   191M
    gptid/e59bbf88-9a34-11e7-8ec0-005056b7ee49      -      -      0  2.46K      0  97.9M
    gptid/e65bc623-9a34-11e7-8ec0-005056b7ee49      -      -      0  2.43K      0  98.0M
    gptid/e70b53ca-9a34-11e7-8ec0-005056b7ee49      -      -      0  2.49K      0  97.4M
logs                                        -      -      -      -      -      -
  gptid/e77bc85c-9a34-11e7-8ec0-005056b7ee49   708M  92.3G      0  1.97K      0   209M
  gptid/07a0a821-9a35-11e7-8ec0-005056b7ee49   708M  92.3G      0  1.97K      0   209M
 

marcoi

Well-Known Member
Apr 6, 2013
1,533
289
83
Gotha Florida
New test - both pools set to sync=always - vMotioned the data drive from the backup pool to the data pool.
I think the read speed of the source pool is the slow point here. I'll try the SSD test again.
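
For reference, per-vdev stats like the zpool iostat output further down come from a command along these lines (the 5-second interval is just an example):

Code:
# Per-vdev throughput for both pools, refreshing every 5 seconds
zpool iostat -v Backup_Rz1_8TB Data_Rz1_8TB 5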

[Screenshot: vMotion completion time]

Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Backup_Rz1_8TB                          4.94T  16.8T  16.0K      0   220M      0
  raidz1                                4.94T  16.8T  16.0K      0   220M      0
    gptid/e59bbf88-9a34-11e7-8ec0-005056b7ee49      -      -  1.10K      0  75.4M      0
    gptid/e65bc623-9a34-11e7-8ec0-005056b7ee49      -      -  1.09K      0  82.5M      0
    gptid/e70b53ca-9a34-11e7-8ec0-005056b7ee49      -      -  1.34K      0  87.1M      0
logs                                        -      -      -      -      -      -
  gptid/e77bc85c-9a34-11e7-8ec0-005056b7ee49     8K  93.0G      0      0      0      0
  gptid/07a0a821-9a35-11e7-8ec0-005056b7ee49   140K  93.0G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
Data_Rz1_8TB                            4.83T  16.9T      0  18.3K      0   501M
  raidz1                                4.83T  16.9T      0  16.2K      0   251M
    gptid/3d689dc7-995c-11e7-9b6e-005056b7ee49      -      -      0  1.11K      0   128M
    gptid/3e0c5992-995c-11e7-9b6e-005056b7ee49      -      -      0  1.16K      0   127M
    gptid/3ebadb19-995c-11e7-9b6e-005056b7ee49      -      -      0  1.15K      0   127M
logs                                        -      -      -      -      -      -
  gptid/3f1e782a-995c-11e7-9b6e-005056b7ee49   336M  92.7G      0  1.09K      0   125M
  gptid/d65449c9-99c3-11e7-8ec0-005056b7ee49   336M  92.7G      0  1.09K      0   125M
--------------------------------------  -----  -----  -----  -----  -----  -----
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
@marcoi THAT'S where I expect a HUSSL SLOG to be, at least for sVMotion operations; it matches my testing (although you are on iSCSI vs. my NFS, both are sync - forced on your end).

You should see how lame a pool of SAS3 HUSMMs on a 12G HBA looks until you throw a SLOG behind them - really kind of disappointing. No matter what config, even striped across 4 disks, they never really GO until they get a SLOG behind 'em, at least not in this use case. Reads are good, though, and REALLY go in both directions once you get a good log device picking up the slack for pool writes. Again, sVMotions are just about the ugliest/meanest real-world I/O torture test out there IMHO, other than pure I/O burn-in/torture tests.
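
If you want to try it yourself, bolting a mirrored SLOG onto an existing pool goes along these lines - a sketch, with the pool and device names as placeholders rather than my actual layout:

Code:
# Add a mirrored log vdev to an existing pool
zpool add tank log mirror da8 da9
# Confirm it shows up under 'logs'
zpool status tank
# It can be pulled back out if the experiment flops; use the log vdev
# name (e.g. mirror-2) that 'zpool status' reports
zpool remove tank mirror-2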
 

azev

Well-Known Member
Jan 18, 2013
769
251
63
You should see how lame a pool of SAS3 HUSMMs on a 12G HBA looks until you throw a SLOG behind them - really kind of disappointing. No matter what config, even striped across 4 disks, they never really GO until they get a SLOG behind 'em, at least not in this use case. Reads are good, though, and REALLY go in both directions once you get a good log device picking up the slack.
@whitey are you saying those HUSMM drives, configured in any fashion (stripe, striped mirrors, etc.), offer bad performance until you configure a SLOG of similar or faster speed?
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
@whitey are you saying those HUSMM drives, configured in any fashion (stripe, striped mirrors, etc.), offer bad performance until you configure a SLOG of similar or faster speed?
Seems like when you put them in any ZFS config - striped/mirror/striped-mirror/raidz/raidz2 - they push about GbE speeds for some unknown reason until you put a SLOG behind them. I should perform more thorough tests after I get my data re-situated and test a single device, I guess. Irritating, to say the least.

EDIT: Seriously going to test single device, mirror, striped-mirror, raidz, and raidz2 with 4 of my HUSMM devices minus SLOG to sanity-check myself.
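
Roughly the layouts I have in mind - a sketch, assuming the four HUSMMs land on da4-da7 (placeholder names):

Code:
# Single device
zpool create husmmtest da4
# 2-way mirror
zpool create husmmtest mirror da4 da5
# Striped mirrors (2x2)
zpool create husmmtest mirror da4 da5 mirror da6 da7
# raidz and raidz2 across all four
zpool create husmmtest raidz  da4 da5 da6 da7
zpool create husmmtest raidz2 da4 da5 da6 da7
# Destroy between runs before creating the next layout
zpool destroy husmmtest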
 

i386

Well-Known Member
Mar 18, 2016
4,245
1,546
113
34
Germany
@whitey & @azev
I have two of the 200GB HGST drives (HUSMM8020ASS200) in RAID 1 for maxCache, and they are slow until I hit them really hard (multiple sequential transfers between clients and the file server, or benchmarks with QD > 32)*.

*(RAID 6 with 10x 6TB HGST drives + 2x HUSMM8020ASS200 in RAID 1 for cache; reads & writes up to ~1,800 MB/s in perfmon.exe with file transfers and up to ~2,500 MB/s in CrystalDiskMark @ QD > 32)
 

azev

Well-Known Member
Jan 18, 2013
769
251
63
@i386 that is a very interesting data point, but I think it's a very different use case. I also have a maxCache setup with 2x 3700 as cache, and I feel the performance is only so-so for some reason :).

I am very interested in figuring out how to get the most juice out of a ZFS setup with sync writes turned on.
I have a 6x 800GB SanDisk Ascend setup and performance is amazing when sync is turned off, but once I turn sync writes on, everything slows to a crawl. I have tried many different SSDs for the ZIL (S3700, SanDisk Lightning II, HGST HUSSL); the most I can get is around 100MB/s-ish, which is ridiculous. I guess the next step is a DRAM-based SLOG like a ZeusRAM, or NVMe, but most reviews only show performance of around 200-300MB/s.
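
For what it's worth, that ~100MB/s kind of number can be approximated straight on the box with fio - a sketch, assuming fio is installed and with the pool, dataset, and sizes as placeholders:

Code:
# Dataset with sync forced on, so every write has to hit the ZIL/SLOG
zfs create -o sync=always tank/synctest
# Sequential 128k writes with an fsync after each one
fio --name=syncwrite --directory=/mnt/tank/synctest \
    --rw=write --bs=128k --size=4g --numjobs=1 \
    --ioengine=psync --fsync=1
# Re-run with sync=disabled on the same dataset to see the ceiling without the SLOG
zfs set sync=disabled tank/synctest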
 

eroji

Active Member
Dec 1, 2015
276
52
28
40
The Intel 750 is working very well for me: 12x 2TB in striped mirrors with a 400GB 750 as the SLOG, and 2x 10Gb SFP+ from the ESXi node in round-robin.

Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
tank0                                    554G  10.3T      0  26.3K      0  1.22G
  mirror                                 104G  1.71T      0  2.61K      0  42.1M
    gptid/a8f38541-612f-11e6-830d-001b21a48d6c      -      -      0    788      0  41.8M
    gptid/2e2fa90a-dd40-11e5-86a9-0cc47a79d5db      -      -      0  1.71K      0  42.9M
  mirror                                 110G  1.71T      0  3.54K      0  57.1M
    gptid/30d5c5ca-dd40-11e5-86a9-0cc47a79d5db      -      -      0  1.49K      0  57.7M
    gptid/33b0c42a-dd40-11e5-86a9-0cc47a79d5db      -      -      0    970      0  57.1M
  mirror                                86.2G  1.73T      0  2.38K      0  38.4M
    gptid/35184fad-dd40-11e5-86a9-0cc47a79d5db      -      -      0    711      0  38.9M
    gptid/35da6a2e-dd40-11e5-86a9-0cc47a79d5db      -      -      0    702      0  38.4M
  mirror                                85.4G  1.73T      0  3.26K      0  52.8M
    gptid/36a1dbd5-dd40-11e5-86a9-0cc47a79d5db      -      -      0    876      0  52.8M
    gptid/377656cd-dd40-11e5-86a9-0cc47a79d5db      -      -      0    920      0  52.2M
  mirror                                85.4G  1.73T      0  2.88K      0  46.2M
    gptid/50d5fb00-e2ab-11e5-a3ea-001b21a48d6c      -      -      0    696      0  46.2M
    gptid/517b3d95-e2ab-11e5-a3ea-001b21a48d6c      -      -      0  1.03K      0  46.4M
  mirror                                83.2G  1.73T      0  2.88K      0  46.3M
    gptid/6f68530d-e82f-11e5-83ff-001b21a48d6c      -      -      0    725      0  46.6M
    gptid/707d68e8-e82f-11e5-83ff-001b21a48d6c      -      -      0    646      0  46.0M
logs                                        -      -      -      -      -      -
  gptid/618bd10b-2d60-11e7-88d5-a0369f4a8a68  8.19G   364G      0  8.76K      0   965M
--------------------------------------  -----  -----  -----  -----  -----  -----
 

SlickNetAaron

Member
Apr 30, 2016
50
13
8
44
The Intel 750 is working very well for me: 12x 2TB in striped mirrors with a 400GB 750 as the SLOG, and 2x 10Gb SFP+ from the ESXi node in round-robin.
That's interesting! How long can the 750 maintain that write speed? It's usually poo-poo'd as a SLOG device around here.
 

eroji

Active Member
Dec 1, 2015
276
52
28
40
The DWPD is certainly not P3700-level, but it's plenty for my use case. Plus, they were ~$150; can't really argue with the value and performance. I'd need to do more testing on extended writes, but it never seems to have an issue hitting that number, especially for sequential workloads.

 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
@eroji is that an overprovisioned Intel 750? I wonder how much more performance could potentially be gained by overprovisioning my P3700.
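
Short of vendor tools, the approach I know of is partition-based: secure-erase the drive, then only allocate part of it and leave the rest untouched. A sketch on FreeBSD/FreeNAS, with the device name, size, and label as placeholders:

Code:
# Partition only a small slice of the NVMe device; the unallocated
# remainder acts as extra spare area for the controller
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -a 1m -s 16G -l slog0 nvd0
# Hand the labeled partition to the pool as the log device
zpool add tank log gpt/slog0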
 

eroji

Active Member
Dec 1, 2015
276
52
28
40
No, it was the full size of the usable space.
