Home Setup - Design changes


marcoi

FreeNAS testing 2
iSCSI zvol exported to the ESXi host.
100 GB thin-provisioned drive passed to a Win7 VM.
FreeNAS sync for the pool set to disabled.



Q32, 1 thread: [screenshot: upload_2019-1-6_11-10-27.png]

Q32, 2 threads: [screenshot: upload_2019-1-6_11-11-55.png]

Q32, 16 threads: [screenshot: upload_2019-1-6_11-13-22.png]
 

marcoi

FreeNAS testing 3
iSCSI zvol exported to the ESXi host.
100 GB thin-provisioned drive passed to a Win7 VM.
FreeNAS sync for the pool set to always.
Added a 30 GB Optane-backed log (SLOG) device.
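
For anyone reproducing this, a minimal sketch of attaching a log vdev from the shell; the pool name tank and device name da5 are hypothetical placeholders:
Code:
# attach a dedicated log (SLOG) vdev to an existing pool
# "tank" and "da5" stand in for the real pool and device names
zpool add tank log da5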


[screenshot: upload_2019-1-6_11-24-28.png]

Q32, 1 thread: [screenshot: upload_2019-1-6_11-29-45.png]

Q32, 2 threads: [screenshot: upload_2019-1-6_11-31-15.png]

Q32, 16 threads: [screenshot: upload_2019-1-6_11-32-53.png]
 

Rand__

Well, it's expected that sync=standard and sync=disabled come out close over iSCSI, since standard effectively behaves as off for iSCSI writes.
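
For reference, a minimal sketch of checking and changing the sync property on the zvol backing the iSCSI extent; the tank/iscsi-zvol name is hypothetical:
Code:
# show the current sync setting
zfs get sync tank/iscsi-zvol

# valid values are standard, always, and disabled
zfs set sync=always tank/iscsi-zvol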

Did you run tests without the Optane SLOG? And which Optane is it?

Thanks for the tests, very enlightening :)
 

marcoi

The first two tests are without a SLOG device. The last test has the Optane. The Optane is an Intel 280 GB 900p AIC, but I have it shared out with other pools. It's set up as a datastore on the ESXi host, with 4 x 30 GB disks from it added to FreeNAS, one for each pool.

So far in my testing the SLOG hasn't made a difference.

I need some better fio commands for testing out the pools.
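
One possible starting point: a minimal sketch of a sync-heavy random-write run that should actually exercise the SLOG. The /mnt/tank/fio-test directory is hypothetical; --fsync=1 forces a flush after every write so the log device gets hit:
Code:
fio --name=slog-test --directory=/mnt/tank/fio-test --rw=randwrite --bs=4k \
    --ioengine=posixaio --iodepth=16 --numjobs=4 --size=2G \
    --runtime=60 --time_based --fsync=1 --group_reporting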
 

marcoi
FreeNAS testing 4 - 2 x 8 TB Red drives in RAID 0. No SLOG.
iSCSI zvol exported to the ESXi host.
100 GB thin-provisioned drive passed to a Win7 VM.
FreeNAS sync for the pool set to always.
[screenshots: upload_2019-1-7_10-34-10.png, upload_2019-1-7_10-34-33.png, upload_2019-1-7_11-39-22.png, upload_2019-1-7_12-9-3.png]



FreeNAS testing 5 - 2 x 8 TB Red drives in RAID 0. With SLOG.
iSCSI zvol exported to the ESXi host.
100 GB thin-provisioned drive passed to a Win7 VM.
FreeNAS sync for the pool set to always.
Added an 800 GB SAS SSD as the log device.


[screenshots: upload_2019-1-7_11-19-56.png, upload_2019-1-7_11-23-44.png]
 

marcoi

FreeNAS testing 6 - 2 x 8 TB Red drives in RAID 0. With Optane SLOG.
iSCSI zvol exported to the ESXi host.
100 GB thin-provisioned drive passed to a Win7 VM.
FreeNAS sync for the pool set to always.
Added a 30 GB Optane slice as the log device.

[screenshots: upload_2019-1-7_12-27-14.png, upload_2019-1-7_12-30-36.png]
 

Rand__

Astonishingly good values for both the SAS SSD and the Optane-based log. But this might be related to CDM 6 (I'm used to 5, which has different default tests).

The Optane-based test is faster with the 2x Red in RAID 0 than with your regular pool ;) Probably due to fill rate, I guess, but it's interesting that this even affects SLOG-based writes.
Also, it might be that some of those writes are buffered in memory, which could explain the 1 GiB vs 4 GiB difference...
 

marcoi

I'm trying to figure out how to get higher network bandwidth. Right now the VMs are on the same host, using an internal switch with vmxnet3 adapters and jumbo frames. That switch has a 10 Gb link to the physical switch, which then connects to the second server via a 10 Gb physical connection.

VM-to-VM traffic on the same host should allow more than 10 Gb, from my understanding.

Any thoughts?
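
One thing that might be worth verifying first is that jumbo frames are actually working end to end on the physical path; a quick sketch from the ESXi shell (the target IP is hypothetical):
Code:
# 8972 bytes = 9000-byte MTU minus IP/ICMP headers; -d sets don't-fragment
vmkping -d -s 8972 10.0.0.10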

Next I'm going to test an 8-disk RAID 0 setup (not that I plan on keeping that for my VMs). I think RAID 10 (striped mirror pairs) would be a better option for VM IO.
 

marcoi

"From what I can see the UTM still has a 50 IP limit. Am I looking at the wrong thing?"
Yes for Sophos UTM 9.x, no for the newer Sophos XG. I plan on moving over to XG, but I need to figure out the hardware first, then work on redesigning my network, etc.
 

marcoi

FreeNAS testing 01.09.19 - 8 x 800 GB SAS SSD drives in RAID 0. No SLOG.
fio testing on a local mount.
[screenshot: upload_2019-1-9_16-8-54.png]

Write Test:
Code:
fio --output=128K_Seq_Write.txt --name=seqwrite --write_bw_log=128K_Seq_Write_sec_by_sec.csv --filename=nvme0n1p1 --rw=write --direct=1 --blocksize=128k --norandommap --numjobs=8 --randrepeat=0 --size=4G --runtime=600 --group_reporting --iodepth=128
Code:
seqwrite: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=128
...
fio-3.5
Starting 8 processes

seqwrite: (groupid=0, jobs=8): err= 0: pid=6118: Wed Jan  9 16:05:51 2019
  write: IOPS=24.0k, BW=3004MiB/s (3150MB/s)(32.0GiB/10907msec)
    clat (usec): min=21, max=313563, avg=303.38, stdev=3083.11
     lat (usec): min=22, max=313565, avg=308.13, stdev=3090.24
    clat percentiles (usec):
     |  1.00th=[    30],  5.00th=[    42], 10.00th=[    45], 20.00th=[    54],
     | 30.00th=[    71], 40.00th=[    84], 50.00th=[    94], 60.00th=[   114],
     | 70.00th=[   155], 80.00th=[   310], 90.00th=[   474], 95.00th=[   586],
     | 99.00th=[  2057], 99.50th=[  4752], 99.90th=[ 23462], 99.95th=[ 50070],
     | 99.99th=[162530]
   bw (  MiB/s): min=    0, max= 5907, per=48.35%, avg=1452.53, stdev=969.59, samples=262144
   iops        : min=  429, max= 7135, avg=2964.69, stdev=1361.15, samples=161
  lat (usec)   : 50=16.55%, 100=36.97%, 250=24.44%, 500=13.54%, 750=5.57%
  lat (usec)   : 1000=0.80%
  lat (msec)   : 2=1.09%, 4=0.45%, 10=0.33%, 20=0.13%, 50=0.07%
  lat (msec)   : 100=0.04%, 250=0.01%, 500=0.01%
  cpu          : usr=2.66%, sys=30.36%, ctx=195493, majf=0, minf=1736
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
  WRITE: bw=3004MiB/s (3150MB/s), 3004MiB/s-3004MiB/s (3150MB/s-3150MB/s), io=32.0GiB (34.4GB), run=10907-10907msec
Read Test:
Code:
fio --output=128K_Seq_Read.txt --name=seqread --write_bw_log=128K_Seq_Read_sec_by_sec.csv --filename=nvme0n1p1 --rw=read --direct=1 --blocksize=128k --norandommap --numjobs=8 --randrepeat=0 --size=4G --runtime=600 --group_reporting --iodepth=128
Code:
seqread: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=128
...
fio-3.5
Starting 8 processes
seqread: Laying out IO file (1 file / 4096MiB)

seqread: (groupid=0, jobs=8): err= 0: pid=6749: Wed Jan  9 16:11:28 2019
   read: IOPS=41.8k, BW=5219MiB/s (5473MB/s)(32.0GiB/6278msec)
    clat (usec): min=40, max=6963, avg=186.03, stdev=32.19
     lat (usec): min=40, max=6964, avg=186.54, stdev=32.20
    clat percentiles (usec):
     |  1.00th=[   57],  5.00th=[  163], 10.00th=[  169], 20.00th=[  178],
     | 30.00th=[  182], 40.00th=[  186], 50.00th=[  188], 60.00th=[  190],
     | 70.00th=[  194], 80.00th=[  198], 90.00th=[  206], 95.00th=[  210],
     | 99.00th=[  231], 99.50th=[  262], 99.90th=[  314], 99.95th=[  371],
     | 99.99th=[  750]
   bw (  KiB/s): min=18823, max=3222580, per=13.67%, avg=730607.28, stdev=231692.25, samples=262144
   iops        : min= 4888, max= 5208, avg=5057.16, stdev=79.52, samples=96
  lat (usec)   : 50=0.51%, 100=1.09%, 250=97.67%, 500=0.70%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%
  cpu          : usr=3.23%, sys=52.80%, ctx=227192, majf=0, minf=1992
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: bw=5219MiB/s (5473MB/s), 5219MiB/s-5219MiB/s (5473MB/s-5473MB/s), io=32.0GiB (34.4GB), run=6278-6278msec
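
One note on the two runs above: the output shows ioengine=psync, a synchronous engine, so --iodepth=128 is effectively ignored (each job keeps only one I/O in flight). Also, --filename=nvme0n1p1 without a leading slash creates a regular file of that name in the current directory rather than addressing a device. A hedged sketch of an asynchronous variant, with hypothetical paths and posixaio as the async engine on FreeBSD:
Code:
fio --name=seqwrite-aio --directory=/mnt/tank/fio-test --rw=write --blocksize=128k \
    --ioengine=posixaio --iodepth=128 --numjobs=8 --size=4G \
    --runtime=600 --group_reporting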
 

marcoi

Well-Known Member
Apr 6, 2013
1,533
289
83
Gotha Florida
FreeNAS testing 01.09.19 - 8 x 800 GB SAS SSD drives in RAID 0. No SLOG.
Sync disabled.
4 TB iSCSI zvol mounted to ESXi; a 200 GB datastore drive added to the Win7 VM.

Trying to mimic the fio test: SEQ Q128 T8.
[screenshot: upload_2019-1-9_16-30-36.png]

I may be misunderstanding the fio results, but it seems like I should be getting around 5,400 MB/s read and 3,000 MB/s write. So what is my bottleneck?
 

Rand__

I assume one is a Linux VM and the other a Windows VM with similar hardware specs, on the same ESXi host?
Have you run iperf yet?
 

marcoi

The fio test is directly on the FreeNAS VM. The other is Win 10. Both are on the same ESXi host.

No iperf testing yet. I haven't done it before, so it's something I'd have to figure out how to do.
 

Rand__

Run fio from Windows then. My earlier link to fio examples included a Windows fio link as well.
Variances between different versions are possible, though.

iperf is simple enough; iperf2 and iperf3 are not comparable and not compatible with each other, so pick one.
iperf2 can be run multithreaded (-P <number of processes>), while iperf3 needs multiple instances but gives more info.
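
A minimal sketch of what such a run could look like with iperf2 (the server IP is hypothetical):
Code:
# on the FreeNAS VM (server side)
iperf -s

# on the Windows VM (client side): 4 parallel streams for 30 seconds
iperf -c 10.0.0.10 -P 4 -t 30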