Gotcha. I have 2 spare WD Red 8TB drives I'll put into a mirror and test with the 630 and the Optane. I just wanted to know the SLOG usability of the 630 (pretty low, I assume) to see the difference the 900p makes.
Better fio commands always depend on your specific workload, but I posted some samples in this thread: https://forums.servethehome.com/index.php?threads/vmware-vsan-performance.19308/page-2#post-187398
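Since the question is about SLOG suitability: a SLOG mostly sees small synchronous writes at queue depth 1, so a sync-write test is a closer match than the 128K sequential runs below. A rough sketch along those lines (the device path, size, and runtime are placeholders to adjust, and writing to a raw device is destructive):

fio --name=slog_syncwrite --filename=/dev/nvme0n1p1 --rw=write --blocksize=4k --sync=1 --direct=1 --numjobs=1 --iodepth=1 --size=1G --runtime=60 --time_based --group_reporting

For SLOG use the interesting numbers are the completion latencies rather than raw bandwidth, since ZFS waits on each sync write.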
Yes for Sophos UTM 9.x, no for the newer Sophos XG. I plan on moving over to XG, but I need to figure out hardware first and then work on redesigning my network, etc. From what I can see, the UTM still has a 50-IP limit. Am I looking at the wrong thing?
fio --output=128K_Seq_Write.txt --name=seqwrite --write_bw_log=128K_Seq_Write_sec_by_sec.csv --filename=nvme0n1p1 --rw=write --direct=1 --blocksize=128k --norandommap --numjobs=8 --randrepeat=0 --size=4G --runtime=600 --group_reporting --iodepth=128
seqwrite: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=128
...
fio-3.5
Starting 8 processes
seqwrite: (groupid=0, jobs=8): err= 0: pid=6118: Wed Jan 9 16:05:51 2019
write: IOPS=24.0k, BW=3004MiB/s (3150MB/s)(32.0GiB/10907msec)
clat (usec): min=21, max=313563, avg=303.38, stdev=3083.11
lat (usec): min=22, max=313565, avg=308.13, stdev=3090.24
clat percentiles (usec):
| 1.00th=[ 30], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 54],
| 30.00th=[ 71], 40.00th=[ 84], 50.00th=[ 94], 60.00th=[ 114],
| 70.00th=[ 155], 80.00th=[ 310], 90.00th=[ 474], 95.00th=[ 586],
| 99.00th=[ 2057], 99.50th=[ 4752], 99.90th=[ 23462], 99.95th=[ 50070],
| 99.99th=[162530]
bw ( MiB/s): min= 0, max= 5907, per=48.35%, avg=1452.53, stdev=969.59, samples=262144
iops : min= 429, max= 7135, avg=2964.69, stdev=1361.15, samples=161
lat (usec) : 50=16.55%, 100=36.97%, 250=24.44%, 500=13.54%, 750=5.57%
lat (usec) : 1000=0.80%
lat (msec) : 2=1.09%, 4=0.45%, 10=0.33%, 20=0.13%, 50=0.07%
lat (msec) : 100=0.04%, 250=0.01%, 500=0.01%
cpu : usr=2.66%, sys=30.36%, ctx=195493, majf=0, minf=1736
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
WRITE: bw=3004MiB/s (3150MB/s), 3004MiB/s-3004MiB/s (3150MB/s-3150MB/s), io=32.0GiB (34.4GB), run=10907-10907msec
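One caveat on the run above: the default psync ioengine is synchronous, so the requested --iodepth=128 never takes effect (the "IO depths : 1=100.0%" line shows every I/O was issued at depth 1), and --filename=nvme0n1p1 with no /dev/ prefix makes fio create a regular file of that name in the working directory rather than hitting the raw namespace. A sketch of an asynchronous variant that actually drives the queue depth (path and sizes are assumptions to adapt; targeting the raw device overwrites it):

fio --output=128K_Seq_Write_aio.txt --name=seqwrite --filename=/dev/nvme0n1p1 --rw=write --direct=1 --blocksize=128k --ioengine=libaio --iodepth=128 --numjobs=1 --size=4G --runtime=600 --group_reporting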
fio --output=128K_Seq_Read.txt --name=seqread --write_bw_log=128K_Seq_Read_sec_by_sec.csv --filename=nvme0n1p1 --rw=read --direct=1 --blocksize=128k --norandommap --numjobs=8 --randrepeat=0 --size=4G --runtime=600 --group_reporting --iodepth=128
seqread: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=128
...
fio-3.5
Starting 8 processes
seqread: Laying out IO file (1 file / 4096MiB)
seqread: (groupid=0, jobs=8): err= 0: pid=6749: Wed Jan 9 16:11:28 2019
read: IOPS=41.8k, BW=5219MiB/s (5473MB/s)(32.0GiB/6278msec)
clat (usec): min=40, max=6963, avg=186.03, stdev=32.19
lat (usec): min=40, max=6964, avg=186.54, stdev=32.20
clat percentiles (usec):
| 1.00th=[ 57], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 178],
| 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190],
| 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 210],
| 99.00th=[ 231], 99.50th=[ 262], 99.90th=[ 314], 99.95th=[ 371],
| 99.99th=[ 750]
bw ( KiB/s): min=18823, max=3222580, per=13.67%, avg=730607.28, stdev=231692.25, samples=262144
iops : min= 4888, max= 5208, avg=5057.16, stdev=79.52, samples=96
lat (usec) : 50=0.51%, 100=1.09%, 250=97.67%, 500=0.70%, 750=0.01%
lat (usec) : 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%
cpu : usr=3.23%, sys=52.80%, ctx=227192, majf=0, minf=1992
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
READ: bw=5219MiB/s (5473MB/s), 5219MiB/s-5219MiB/s (5473MB/s-5473MB/s), io=32.0GiB (34.4GB), run=6278-6278msec
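If you want to see how steady the drive stays over the run, the per-second bandwidth logs from --write_bw_log can be reduced with a one-liner. fio writes one log per job (appending a _bw.<jobnum>.log suffix to the name given, if I have the naming right), and the second comma-separated column should be bandwidth in KiB/s; adjust the glob to whatever files actually land on disk:

awk -F, '{sum+=$2; n++; if ($2>max) max=$2} END {printf "avg %.1f MiB/s, max %.1f MiB/s over %d samples\n", sum/n/1024, max/1024, n}' 128K_Seq_Write_sec_by_sec.csv_bw.*.log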