whitey's FreeNAS ZFS ZIL testing


whitey

Moderator
Jun 30, 2014
OK, a lil preface here:

Lab spec is in my sig.

FreeNAS 9.10 AIO on each ESXi host, 2 vCPU, 12GB RAM, vmxnet3 10GbE virtual adapters, 3-node cluster

40GbE physical network - jumbo frames end-to-end

3 VMs, 1 on each ESXi host, totaling roughly 180GB, sVMotioned between ZIL-accelerated NFS storage

4 enterprise-class ZIL devices tested - P3700, HUSMM, ZeusRAM, HUSSL

esxi6a - SC216 chassis, LSI 6Gbps HBA, 3-disk RAIDZ HUSMM SSD pool, HUSMM ZIL
esxi6b - Norco 2212 chassis, LSI 6Gbps HBA, 6-disk RAIDZ2 magnetic pool, ZeusRAM ZIL
esxi6c - SC216 chassis, LSI 12Gbps HBA, 4-disk stripe HUSMM SSD pool, P3700 ZIL

I think the results landed right about where I expected: P3700 taking 1st prize, HUSMM 2nd, ZeusRAM 3rd, HUSSL 4th place. Nothing too shocking, just thought I would share.
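For anyone wanting to set up something similar, attaching a SLOG device to an existing pool is a one-liner from the shell (pool/device names below are just placeholders - FreeNAS normally does this through the volume manager GUI using gptid labels):

Code:
# add a single SLOG (log vdev) to an existing pool ("tank" and "da6" are example names)
zpool add tank log da6
# confirm it shows up under "logs"
zpool status tank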

P3700-ZIL.png HUSMM-ZIL.png ZEUSRAM-ZIL.png HUSSL-ZIL.png
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
SAS2 or SAS3 HGST drives?
Dual-ported ZeusRAM or only single?

Dedicated HBA for SLOG or shared with POOL?
 

whitey

Moderator
Jun 30, 2014
Come on T, you know the HGST SSD series/lines, heh.

HUSMM = SAS3
HUSSL = SAS2

Single-ported ZeusRAM, and a shared HBA for pool/SLOG in all cases other than the ZeusRAM/P3700 setups.
 

marcoi

Well-Known Member
Apr 6, 2013
@whitey what's the command that shows the report in your screenshots above?
When I do zpool iostat 1 I get the following output:
Code:
                capacity     operations    bandwidth
pool          alloc   free   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
Backup_Rz1_8TB  4.47T  17.3T     54     95   820K  1.63M
Data_Rz1_8TB  4.55T  17.2T    697    177  9.67M  2.64M
VM_Backup_1TB   128G   800G      0      0     12     33
freenas-boot  1.44G  58.1G      0      0  1.63K    155
I don't see the per-drive details.
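(Looks like the per-device breakdown probably just needs the verbose flag - something along these lines:)

Code:
# -v adds per-vdev/per-device rows under each pool; the trailing 1 refreshes every second
zpool iostat -v 1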
 

marcoi

Well-Known Member
Apr 6, 2013
Thanks for the cmd. Is there any way to get an average? Here's my quick testing - seems good?

Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Data_Rz1_8TB                            4.55T  17.2T      0  13.9K      0  1.09G
  raidz1                                4.55T  17.2T      0  5.83K      0  93.6M
    gptid/3d689dc7-995c-11e7-9b6e-005056b7ee49      -      -      0  1.88K      0  47.0M
    gptid/3e0c5992-995c-11e7-9b6e-005056b7ee49      -      -      0  1.98K      0  47.1M
    gptid/3ebadb19-995c-11e7-9b6e-005056b7ee49      -      -      0  1.87K      0  46.8M
logs                                        -      -      -      -      -      -
  gptid/3f1e782a-995c-11e7-9b6e-005056b7ee49   164K  93.0G      0  4.03K      0   516M
  gptid/d65449c9-99c3-11e7-8ec0-005056b7ee49   136K  93.0G      0  3.99K      0   510M
--------------------------------------  -----  -----  -----  -----  -----  -----
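(On the average question: as far as I understand zpool iostat, a run with no interval prints a single report averaged since the pool was imported, and with an interval every report after the first is an average over that window - so something like this:)

Code:
# one-shot report: per-device averages since the pool was imported
zpool iostat -v
# each report after the first is averaged over the previous 30 seconds
zpool iostat -v 30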
 

whitey

Moderator
Jun 30, 2014
DAMN, nice. What type of log devices, and what type of test are you using to generate that disk I/O? Looks like your log devices are doing all of the heavy lifting to deliver that type of performance :-D
 

marcoi

Well-Known Member
Apr 6, 2013
I have FreeNAS running as a VM with 64GB RAM.
I have two LSI 9300 HBAs (1 internal and 1 external) passed through to the VM.
The internal 9300 connects to a SAS3 backplane with 8 x HGST 100GB 2.5" SAS SSD HUSSL4010BSS600 drives, 4 drives per port.
The external one connects to a SM chassis with a 24-bay SAS3 backplane holding my 6 x 8TB Red drives.

I have two pools, each consisting of 3 x 8TB drives in RAIDZ1 with two of the HGST 100GB SAS SSDs as a striped log.
One pool is for data, the other is for backups.
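(For context, a pool laid out like that would be created roughly like this - device names are made up here, FreeNAS actually builds it from the GUI with gptid labels:)

Code:
# 3 x 8TB in RAIDZ1 plus two log devices; no "mirror" keyword, so the SLOGs are independent/striped
zpool create Data_Rz1_8TB raidz1 da0 da1 da2 log da6 da7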

The data pool has a 7TB zvol on it and it's shared via iSCSI to the host server. I then pass the iSCSI target to my Windows Server 2012 R2 server using RDM (the only way I got around bad Windows performance with iSCSI). I do the same for the backup pool.

The test was CrystalDiskMark (CDM) in Windows with the 2GB setting.
upload_2017-9-26_12-8-39.png
 

azev

Well-Known Member
Jan 18, 2013
WOW, for a spinner pool that ZIL is screaming fast.
Assuming it's a similar drive to what whitey used (HUSSL) in his test, I wonder what tweaks you had to do to get almost 3x the performance, other than striping the ZIL.
 

sth

Active Member
Oct 29, 2015
I don't see how 3 x 8TB drives in RAIDZ1 can deliver 1000MB/s+; with only two data spindles' worth of sequential throughput you'd expect a few hundred MB/s at best. Looks like you're testing your SSDs and RAM rather than the actual array.
 

whitey

Moderator
Jun 30, 2014
WOW, for a spinner pool that ZIL is screaming fast.
Assuming it's a similar drive to what whitey used (HUSSL) in his test, I wonder what tweaks you had to do to get almost 3x the performance, other than striping the ZIL.

Mine is the same HUSSL drive, the BSS version even, not the ASS. He's striping SLOG, but I bet iSCSI is also doing him a favor as well. I will retest on an iSCSI zvol; it always seems to perform a bit better than NFS at the cost of ease of mgmt/granularity.
 

whitey

Moderator
Jun 30, 2014
@marcoi, bump the test up to a dataset larger than 2GB - you said you have 64GB of memory in the storage system, so go bigger than that. My dataset is 180GB, sVMotion over NFS. Not quite apples to apples, but good initial results nonetheless. I always aim for 'real-world' rather than synthetic benchmarks, so my typical test is this use case, or fio runs, or ZFS send/recvs, as those most closely match my use case/lab.
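If you'd rather hammer it with fio than copy VMs around, a sync-write run bigger than RAM along these lines should make the SLOG do the work (the path and size below are just examples):

Code:
# 128k sequential writes, dataset bigger than the 64GB of RAM, --sync=1 (O_SYNC) so every write hits the ZIL/SLOG
fio --name=synctest --directory=/mnt/Data_Rz1_8TB/fiotest \
    --rw=write --bs=128k --size=100G --ioengine=posixaio --iodepth=8 --sync=1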
 

marcoi

Well-Known Member
Apr 6, 2013
@whitey so I'm copying 70GB of music and 80GB of ISO images over to a 200GB drive now via a Windows 10 VM that lives on the local host.
Once that is done, I'll vMotion it between the two pools and see what I get. Right now I'm running two copies at the same time and Windows is reporting 60-70MB/s for each.

Going from data pool to backup pool.

Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Backup_Rz1_8TB                          4.55T  17.2T      0  9.25K      0   145M
  raidz1                                4.55T  17.2T      0  9.25K      0   145M
    gptid/e59bbf88-9a34-11e7-8ec0-005056b7ee49      -      -      0    693      0  72.9M
    gptid/e65bc623-9a34-11e7-8ec0-005056b7ee49      -      -      0    722      0  72.9M
    gptid/e70b53ca-9a34-11e7-8ec0-005056b7ee49      -      -      0    709      0  73.0M
logs                                        -      -      -      -      -      -
  gptid/e77bc85c-9a34-11e7-8ec0-005056b7ee49      0    93G      0      0      0      0
  gptid/07a0a821-9a35-11e7-8ec0-005056b7ee49   128K  93.0G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Data_Rz1_8TB                            4.57T  17.2T  4.68K  1.03K  74.3M  6.99M
  raidz1                                4.57T  17.2T  4.68K  1.03K  74.4M  6.99M
    gptid/3d689dc7-995c-11e7-9b6e-005056b7ee49      -      -  1.62K    151  20.9M  4.25M
    gptid/3e0c5992-995c-11e7-9b6e-005056b7ee49      -      -  1.84K    136  22.7M  4.31M
    gptid/3ebadb19-995c-11e7-9b6e-005056b7ee49      -      -  2.50K    126  31.7M  4.27M
logs                                        -      -      -      -      -      -
  gptid/3f1e782a-995c-11e7-9b6e-005056b7ee49   292K  93.0G      0      0      0      0
  gptid/d65449c9-99c3-11e7-8ec0-005056b7ee49     8K  93.0G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
 

marcoi

Well-Known Member
Apr 6, 2013
Test so far - doesn't seem to be hitting the SLOG at all?

Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Backup_Rz1_8TB                          4.71T  17.0T  10.5K     85   168M   409K
  raidz1                                4.71T  17.0T  10.5K     85   168M   409K
    gptid/e59bbf88-9a34-11e7-8ec0-005056b7ee49      -      -    638     23  51.7M   271K
    gptid/e65bc623-9a34-11e7-8ec0-005056b7ee49      -      -    742     21  61.2M   269K
    gptid/e70b53ca-9a34-11e7-8ec0-005056b7ee49      -      -    630     23  58.5M   269K
logs                                        -      -      -      -      -      -
  gptid/e77bc85c-9a34-11e7-8ec0-005056b7ee49      0    93G      0      0      0      0
  gptid/07a0a821-9a35-11e7-8ec0-005056b7ee49   128K  93.0G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
Data_Rz1_8TB                            4.64T  17.1T      2  11.5K  16.0K   162M
  raidz1                                4.64T  17.1T      2  11.5K  16.0K   162M
    gptid/3d689dc7-995c-11e7-9b6e-005056b7ee49      -      -      0  2.54K  1.99K  84.7M
    gptid/3e0c5992-995c-11e7-9b6e-005056b7ee49      -      -      1  2.43K  7.98K  85.0M
    gptid/3ebadb19-995c-11e7-9b6e-005056b7ee49      -      -      1  2.32K  5.98K  85.2M
logs                                        -      -      -      -      -      -
  gptid/3f1e782a-995c-11e7-9b6e-005056b7ee49   164K  93.0G      0      0      0      0
  gptid/d65449c9-99c3-11e7-8ec0-005056b7ee49   136K  93.0G      0      0      0      0
Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Backup_Rz1_8TB                          4.71T  17.0T  18.1K      0   289M      0
  raidz1                                4.71T  17.0T  18.1K      0   289M      0
    gptid/e59bbf88-9a34-11e7-8ec0-005056b7ee49      -      -    914      0  88.5M      0
    gptid/e65bc623-9a34-11e7-8ec0-005056b7ee49      -      -   1014      0   112M      0
    gptid/e70b53ca-9a34-11e7-8ec0-005056b7ee49      -      -    888      0  89.7M      0
logs                                        -      -      -      -      -      -
  gptid/e77bc85c-9a34-11e7-8ec0-005056b7ee49      0    93G      0      0      0      0
  gptid/07a0a821-9a35-11e7-8ec0-005056b7ee49   128K  93.0G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
Data_Rz1_8TB                            4.72T  17.0T      0  15.0K      0   236M
  raidz1                                4.72T  17.0T      0  15.0K      0   236M
    gptid/3d689dc7-995c-11e7-9b6e-005056b7ee49      -      -      0  1.07K      0   119M
    gptid/3e0c5992-995c-11e7-9b6e-005056b7ee49      -      -      0  1.05K      0   119M
    gptid/3ebadb19-995c-11e7-9b6e-005056b7ee49      -      -      0  1.06K      0   119M
logs                                        -      -      -      -      -      -
  gptid/3f1e782a-995c-11e7-9b6e-005056b7ee49   164K  93.0G      0      0      0      0
  gptid/d65449c9-99c3-11e7-8ec0-005056b7ee49   136K  93.0G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
Time to complete vMotion - with the VM off:
upload_2017-9-26_16-47-8.png
 

Rand__

Well-Known Member
Mar 6, 2014
Now I still wonder - will the speed of the pool impact the max speed of the SLOG device?

Just asking because my tests from February showed higher P3700 values, but of course I was using sysbench back then, which might not be comparable.
 

marcoi

Well-Known Member
Apr 6, 2013
Test 2 - move the data drive to local host SSD.
Completion time:
upload_2017-9-26_17-0-40.png

Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Data_Rz1_8TB                            4.81T  16.9T  20.0K     59   312M   364K
  raidz1                                4.81T  16.9T  20.0K     59   312M   364K
    gptid/3d689dc7-995c-11e7-9b6e-005056b7ee49      -      -  1.16K      9   146M   264K
    gptid/3e0c5992-995c-11e7-9b6e-005056b7ee49      -      -    769      5  86.1M   208K
    gptid/3ebadb19-995c-11e7-9b6e-005056b7ee49      -      -    786      8  86.1M   264K
logs                                        -      -      -      -      -      -
  gptid/3f1e782a-995c-11e7-9b6e-005056b7ee49   164K  93.0G      0      0      0      0
  gptid/d65449c9-99c3-11e7-8ec0-005056b7ee49   136K  93.0G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
 

marcoi

Well-Known Member
Apr 6, 2013
Test 3 - move the data drive from SSD back to the Backup pool.
Again, it seems like the SLOG isn't doing anything?


Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Backup_Rz1_8TB                          4.73T  17.0T      0  11.7K      0   188M
  raidz1                                4.73T  17.0T      0  11.7K      0   188M
    gptid/e59bbf88-9a34-11e7-8ec0-005056b7ee49      -      -      0  2.29K      0  94.1M
    gptid/e65bc623-9a34-11e7-8ec0-005056b7ee49      -      -      0  2.39K      0  94.9M
    gptid/e70b53ca-9a34-11e7-8ec0-005056b7ee49      -      -      0  2.50K      0  93.4M
logs                                        -      -      -      -      -      -
  gptid/e77bc85c-9a34-11e7-8ec0-005056b7ee49      0    93G      0      0      0      0
  gptid/07a0a821-9a35-11e7-8ec0-005056b7ee49   128K  93.0G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
 

whitey

Moderator
Jun 30, 2014
Ohh, iSCSI - yeah, w/ iSCSI you will see low SLOG usage, at least on sVMotions I believe; it's the async vs. sync nature of iSCSI vs. NFS behavior, if memory serves me correctly.

EDIT: Bet if ya set up NFS exports, mount them to vSphere, and perform the same src/dest sVMotions, it'll hit.
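Or, if you want to force the issue on the existing zvol instead of standing up NFS (the zvol name here is just a placeholder):

Code:
# force every write to the zvol through the ZIL/SLOG ("vm_zvol" is an example name)
zfs set sync=always Data_Rz1_8TB/vm_zvol
# put it back to inherited/default behavior afterwards
zfs inherit sync Data_Rz1_8TB/vm_zvol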
 

Rand__

Well-Known Member
Mar 6, 2014
Is your pool attached to ESXi not via NFS? Or is pool sync set to disabled?

zfs get sync pool/dataset
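Output looks something like this (names below are just examples) - 'standard' means sync writes go through the ZIL/SLOG, 'disabled' means they are acknowledged straight from RAM so the SLOG sits idle, and 'always' pushes every write through it:

Code:
NAME          PROPERTY  VALUE     SOURCE
Data_Rz1_8TB  sync      standard  default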