2018 - Upgrading the toys


nry

Active Member
Feb 22, 2013
I'd definitely recommend Proxmox over FreeNAS if you know what you're doing. Main benefit of FreeNAS is you have the pretty GUI to do your ZFS stuff. BSD-based is a plus too.

Proxmox isn't as beginner friendly but as long as you're comfortable with CLI you'll be fine. Just make sure you schedule scrubs, SMART tests, etc.
More than happy with the CLI; it's just the scheduled ZFS tasks that I haven't configured before. A bit of research and I'm sure I could figure it out.
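Something like this in a root cron file would presumably cover it (a minimal sketch only; pool name, device names and schedule are placeholders):

Code:
# /etc/cron.d/zfs-maintenance -- sketch; pool/device names are placeholders
# scrub the pool on the first of every month at 02:00
0 2 1 * * root /sbin/zpool scrub pool
# long SMART self-test on each data disk every Sunday at 03:00
0 3 * * 0 root /usr/sbin/smartctl -t long /dev/sda
0 3 * * 0 root /usr/sbin/smartctl -t long /dev/sdb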

That's a whole lotta storage :D what case is going to be used for your storage server?
  • 500GB - 1TB MySQL databases (used for testing/debugging)
  • Elasticsearch data store
  • Backups
  • Media storage
  • docker cache
And some other stuff I can't think of at the moment.
 

nry

Active Member
Feb 22, 2013
Storage Upgrade - Benchmark ZFS

I'll be using RAIDZ2 as I have been more than impressed with the performance of my existing array. My plan is to benchmark in the following order:
  • Sequential Read
  • Sequential Write
  • Random Read
  • Random Write
First with no cache drives, then adding various SSDs to see how the pool performs.
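Adding the cache drives afterwards shouldn't need a pool rebuild; as far as I understand it's just something along these lines (device names are placeholders):

Code:
# add an SSD as a read cache (L2ARC); device names are placeholders
zpool add pool cache /dev/nvme0n1
# add an SSD as a separate log device (SLOG) for sync writes
zpool add pool log /dev/nvme1n1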

Test system:
  • Dual Xeon 2660
  • 96GB RAM
  • Samsung 950 Evo 250GB Boot drive
  • M1015 flashed to IT mode
  • 8x Seagate IronWolf 12TB
  • Ubuntu 18.04 LTS with ZFS
Although I might be better off posting this in the relevant software forum, I figured I'd give it a try here first. Unfortunately I haven't been seeing the sort of numbers I would have expected under sequential workloads. I'm not sure if this is due to my testing methods or an overestimation of what ZFS is capable of.

To try to resolve this I started by looking for the bottleneck. The first test was to benchmark the drives individually but simultaneously with fio (please correct me if I'm doing something wrong with fio here, I haven't really used it in a long time).

Read

Code:
fio --name=seqread \
--rw=read \
--direct=1 \
--iodepth=32 \
--ioengine=libaio \
--bs=1M \
--numjobs=1 \
--size=10G \
--runtime=60 \
--group_reporting
Bandwidth: 240-250MiB/s
IOPS avg: 240-255

Write

Code:
fio --name=seqwrite \
--rw=write \
--direct=1 \
--iodepth=32 \
--ioengine=libaio \
--bs=1M \
--numjobs=1 \
--size=10G \
--runtime=60 \
--group_reporting
Bandwidth: 240-255MiB/s
IOPS avg: 240-255

fio

Screen Shot 2018-06-03 at 17.56.13.jpg

nmon

Screen Shot 2018-06-03 at 17.52.48.png

This is exactly what I expected to see based on various reviews I have read.
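For reference, the simultaneous per-drive runs can be expressed as a single fio job file, something like this (device names are just examples):

Code:
; per-drive.fio -- sketch of one read job per raw disk, run with: fio per-drive.fio
[global]
rw=read
direct=1
ioengine=libaio
iodepth=32
bs=1M
runtime=60
group_reporting

[sdb]
filename=/dev/sdb

[sdc]
filename=/dev/sdc

; ...one section per remaining disk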

ZFS

Creation of array...

Code:
zpool create pool raidz2 EIGHT_DRIVES_HERE

Details of array

Code:
$zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool    87T  1.37M  87.0T         -     0%     0%  1.00x  ONLINE  -
ashift is set to 12
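For the record, ashift can be forced at creation time and double-checked with zdb, roughly like so:

Code:
# force 4K sectors at pool creation
zpool create -o ashift=12 pool raidz2 EIGHT_DRIVES_HERE
# confirm what the pool actually got
zdb -C pool | grep ashift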

Benchmarking with fio, I feel there's something I'm missing, as it's not giving me the numbers I'm expecting to see for very simple sequential reads and writes.

Read

Increasing to 128GB test files so the working set is larger than the 96GB of RAM.
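Another option I've seen suggested is capping the ARC for the duration of the benchmark via the zfs_arc_max module parameter on ZFS on Linux; just a sketch, the 8GiB value is arbitrary:

Code:
# cap the ARC at 8GiB for the benchmark run (needs root; value is arbitrary)
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max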

Code:
fio --name=seqread \
--rw=read \
--direct=0 \
--iodepth=32 \
--ioengine=libaio \
--bs=4k \
--numjobs=1 \
--size=128G \
--group_reporting

Outputs

zpool iostat -v X, typically showing 40-60MB/s per drive

Screen Shot 2018-06-03 at 22.46.17.png

nmon, showing around 55MB/s per drive

Screen Shot 2018-06-03 at 22.44.47.png

fio

Code:
fio --name=seqread \
> --rw=read \
> --direct=0 \
> --iodepth=32 \
> --ioengine=libaio \
> --bs=4k \
> --numjobs=1 \
> --size=128G \
> --group_reporting

seqread: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
seqread: Laying out IO file (1 file / 131072MiB)
fio: native_fallocate call failed: Operation not supported

Jobs: 1 (f=1): [R(1)][100.0%][r=209MiB/s,w=0KiB/s][r=53.4k,w=0 IOPS][eta 00m:00s]
seqread: (groupid=0, jobs=1): err= 0: pid=6673: Sun Jun  3 21:54:34 2018
   read: IOPS=68.8k, BW=269MiB/s (282MB/s)(128GiB/487876msec)
    slat (usec): min=4, max=211017, avg=12.32, stdev=224.50
    clat (usec): min=2, max=211288, avg=451.67, stdev=1235.02
     lat (usec): min=7, max=211294, avg=464.25, stdev=1254.57
    clat percentiles (usec):
     |  1.00th=[  269],  5.00th=[  318], 10.00th=[  326], 20.00th=[  330],
     | 30.00th=[  343], 40.00th=[  371], 50.00th=[  383], 60.00th=[  388],
     | 70.00th=[  392], 80.00th=[  400], 90.00th=[  412], 95.00th=[  449],
     | 99.00th=[  693], 99.50th=[ 2638], 99.90th=[20317], 99.95th=[23725],
     | 99.99th=[35914]
   bw (  KiB/s): min=83464, max=376191, per=100.00%, avg=275458.64, stdev=40776.52, samples=975
   iops        : min=20866, max=94045, avg=68864.61, stdev=10194.13, samples=975
  lat (usec)   : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%
  lat (usec)   : 250=0.85%, 500=95.62%, 750=2.72%, 1000=0.22%
  lat (msec)   : 2=0.08%, 4=0.04%, 10=0.10%, 20=0.27%, 50=0.10%
  lat (msec)   : 100=0.01%, 250=0.01%
  cpu          : usr=13.53%, sys=65.10%, ctx=840988, majf=0, minf=993
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=33554432,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=269MiB/s (282MB/s), 269MiB/s-269MiB/s (282MB/s-282MB/s), io=128GiB (137GB), run=487876-487876msec
A whopping 269MiB/s read?
Yet the IOPS are way higher than I'd expect.

From what I understand, I should expect to see around 6x a single disk's sequential speed for reads, and roughly the IOPS of a single drive?
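As a rough sanity check of that expectation, using the ~240MiB/s per-disk figure from the individual drive tests above:

Code:
# 8-wide RAIDZ2 -> 6 data disks
# ideal streaming throughput: 6 x ~240MiB/s = ~1440MiB/s
# IOPS for a single RAIDZ2 vdev: roughly that of one disk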

Write

zpool iostat -v X, typically showing 40-60MB/s per drive

Screen Shot 2018-06-03 at 22.59.00.png

nmon, showing around 55MB/s per drive

Screen Shot 2018-06-03 at 22.59.04.png

fio

Code:
fio --name=seqwrite \
> --rw=write \
> --direct=0 \
> --iodepth=32 \
> --ioengine=libaio \
> --bs=4k \
> --numjobs=1 \
> --size=128G \
> --group_reporting
seqwrite: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
seqwrite: Laying out IO file (1 file / 131072MiB)
fio: native_fallocate call failed: Operation not supported
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=264MiB/s][r=0,w=67.6k IOPS][eta 00m:00s]
seqwrite: (groupid=0, jobs=1): err= 0: pid=6518: Sun Jun  3 22:05:35 2018
  write: IOPS=66.3k, BW=259MiB/s (271MB/s)(128GiB/506424msec)
    slat (usec): min=9, max=103480, avg=12.96, stdev=49.65
    clat (usec): min=2, max=105349, avg=468.82, stdev=284.34
     lat (usec): min=14, max=105383, avg=481.99, stdev=288.99
    clat percentiles (usec):
     |  1.00th=[  404],  5.00th=[  433], 10.00th=[  437], 20.00th=[  445],
     | 30.00th=[  449], 40.00th=[  449], 50.00th=[  453], 60.00th=[  457],
     | 70.00th=[  461], 80.00th=[  465], 90.00th=[  494], 95.00th=[  570],
     | 99.00th=[  742], 99.50th=[  840], 99.90th=[ 1418], 99.95th=[ 2147],
     | 99.99th=[ 5604]
   bw (  KiB/s): min=205264, max=275920, per=99.99%, avg=264995.85, stdev=6609.62, samples=1012
   iops        : min=51316, max=68980, avg=66248.97, stdev=1652.39, samples=1012
  lat (usec)   : 4=0.01%, 20=0.01%, 50=0.01%, 100=0.01%, 250=0.01%
  lat (usec)   : 500=90.62%, 750=8.44%, 1000=0.70%
  lat (msec)   : 2=0.18%, 4=0.04%, 10=0.02%, 20=0.01%, 250=0.01%
  cpu          : usr=13.98%, sys=85.10%, ctx=71376, majf=0, minf=923
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,33554432,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=259MiB/s (271MB/s), 259MiB/s-259MiB/s (271MB/s-271MB/s), io=128GiB (137GB), run=506424-506424msec

Any tips pointing me in the right direction on configuration/benchmarking here would be greatly appreciated! :)
 

nry

Active Member
Feb 22, 2013
Storage Upgrade - Benchmark ZFS

Following on from my previous post, I have come to the conclusion that ZFS on Linux simply couldn't sustain the numbers I was expecting. Things I tried:
  • ashift 9 and 12
  • Running it on my other server as a VM
  • Different HBAs
  • Striping the drives, still poor numbers
Figured I'd install FreeNAS 11.1 U5 on the server directly to keep it as simple as possible.

Some benchmarks using ashift=9 with no cache drives. All 8 drives in RAIDZ2.

These numbers make me a little bit happier!

Sequential write - 1152MiB/s - 1160 IOPS

Code:
fio --name=seqwrite \
--rw=write \
--direct=0 \
--iodepth=32 \
--bs=1M \
--numjobs=1 \
--size=128G \
--group_reporting
seqwrite: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=32
fio-3.0
Starting 1 process
seqwrite: Laying out IO file (1 file / 131072MiB)
Jobs: 1 (f=1): [W(1)][99.1%][r=0KiB/s,w=1130MiB/s][r=0,w=1130 IOPS][eta 00m:01s]
seqwrite: (groupid=0, jobs=1): err= 0: pid=9317: Mon Jun 11 13:45:36 2018
  write: IOPS=1152, BW=1152MiB/s (1208MB/s)(128GiB/113769msec)
    clat (usec): min=241, max=1238.6k, avg=826.25, stdev=4974.83
     lat (usec): min=252, max=1238.6k, avg=861.00, stdev=4976.13
    clat percentiles (usec):
     |  1.00th=[  273],  5.00th=[  355], 10.00th=[  437], 20.00th=[  611],
     | 30.00th=[  685], 40.00th=[  717], 50.00th=[  734], 60.00th=[  775],
     | 70.00th=[  938], 80.00th=[  988], 90.00th=[ 1045], 95.00th=[ 1254],
     | 99.00th=[ 1942], 99.50th=[ 2311], 99.90th=[ 3621], 99.95th=[ 4752],
     | 99.99th=[11994]
   bw (  MiB/s): min=  143, max= 2085, per=100.00%, avg=1161.05, stdev=249.76, samples=224
   iops        : min=  143, max= 2085, avg=1160.64, stdev=249.74, samples=224
  lat (usec)   : 250=0.03%, 500=12.91%, 750=43.29%, 1000=26.85%
  lat (msec)   : 2=16.03%, 4=0.82%, 10=0.06%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%, 2000=0.01%
  cpu          : usr=4.34%, sys=70.01%, ctx=905178, majf=5, minf=1
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,131072,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1152MiB/s (1208MB/s), 1152MiB/s-1152MiB/s (1208MB/s-1208MB/s), io=128GiB (137GB), run=113769-113769msec
Sequential read - 1091MiB/s - 1087 IOPS

Code:
fio --name=seqread \
--rw=read \
--direct=0 \
--iodepth=32 \
--bs=1M \
--numjobs=1 \
--size=128G \
--group_reporting
seqread: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=32
fio-3.0
Starting 1 process
seqread: Laying out IO file (1 file / 131072MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=1237MiB/s,w=0KiB/s][r=1237,w=0 IOPS][eta 00m:00s]
seqread: (groupid=0, jobs=1): err= 0: pid=9652: Mon Jun 11 13:51:27 2018
   read: IOPS=1090, BW=1091MiB/s (1144MB/s)(128GiB/120145msec)
    clat (usec): min=386, max=55047, avg=914.67, stdev=788.22
     lat (usec): min=386, max=55048, avg=914.91, stdev=788.22
    clat percentiles (usec):
     |  1.00th=[  594],  5.00th=[  685], 10.00th=[  734], 20.00th=[  783],
     | 30.00th=[  816], 40.00th=[  840], 50.00th=[  857], 60.00th=[  873],
     | 70.00th=[  889], 80.00th=[  930], 90.00th=[ 1106], 95.00th=[ 1139],
     | 99.00th=[ 1729], 99.50th=[ 2573], 99.90th=[12780], 99.95th=[14877],
     | 99.99th=[41681]
   bw (  MiB/s): min=  414, max= 1307, per=99.72%, avg=1087.93, stdev=153.88, samples=240
   iops        : min=  414, max= 1307, avg=1087.35, stdev=153.78, samples=240
  lat (usec)   : 500=0.05%, 750=12.57%, 1000=70.54%
  lat (msec)   : 2=16.09%, 4=0.40%, 10=0.23%, 20=0.10%, 50=0.02%
  lat (msec)   : 100=0.01%
  cpu          : usr=0.39%, sys=95.74%, ctx=3411, majf=0, minf=256
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=131072,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=1091MiB/s (1144MB/s), 1091MiB/s-1091MiB/s (1144MB/s-1144MB/s), io=128GiB (137GB), run=120145-120145msec
Random write - 1154MiB/s - 1151 IOPS

Code:
fio --name=randwrite \
--rw=randwrite \
--direct=0 \
--iodepth=32 \
--bs=1M \
--numjobs=1 \
--size=128G \
--group_reporting
randwrite: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=32
fio-3.0
Starting 1 process
randwrite: Laying out IO file (1 file / 131072MiB)
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=1350MiB/s][r=0,w=1350 IOPS][eta 00m:00s]
randwrite: (groupid=0, jobs=1): err= 0: pid=13010: Mon Jun 11 14:54:34 2018
  write: IOPS=1154, BW=1154MiB/s (1210MB/s)(128GiB/113572msec)
    clat (usec): min=255, max=33428, avg=838.41, stdev=346.81
     lat (usec): min=269, max=33449, avg=862.00, stdev=348.27
    clat percentiles (usec):
     |  1.00th=[  277],  5.00th=[  627], 10.00th=[  660], 20.00th=[  693],
     | 30.00th=[  717], 40.00th=[  725], 50.00th=[  742], 60.00th=[  758],
     | 70.00th=[  799], 80.00th=[  881], 90.00th=[ 1139], 95.00th=[ 1532],
     | 99.00th=[ 2147], 99.50th=[ 2507], 99.90th=[ 3064], 99.95th=[ 3294],
     | 99.99th=[ 7963]
   bw (  MiB/s): min=  253, max= 3227, per=99.83%, avg=1152.11, stdev=287.71, samples=227
   iops        : min=  253, max= 3227, avg=1151.63, stdev=287.76, samples=227
  lat (usec)   : 500=1.81%, 750=54.83%, 1000=29.79%
  lat (msec)   : 2=11.99%, 4=1.53%, 10=0.03%, 20=0.01%, 50=0.01%
  cpu          : usr=3.41%, sys=50.72%, ctx=1126468, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,131072,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1154MiB/s (1210MB/s), 1154MiB/s-1154MiB/s (1210MB/s-1210MB/s), io=128GiB (137GB), run=113572-113572msec
Random read - 135MiB/s - 132 IOPS

Code:
fio --name=randread \
--rw=randread \
--direct=0 \
--iodepth=32 \
--bs=1M \
--numjobs=1 \
--size=1G \
--group_reporting
seqread: (g=0): rw=randread, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=32
fio-3.0
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=149MiB/s,w=0KiB/s][r=149,w=0 IOPS][eta 00m:00s]
seqread: (groupid=0, jobs=1): err= 0: pid=11860: Mon Jun 11 14:29:06 2018
   read: IOPS=134, BW=135MiB/s (141MB/s)(1024MiB/7596msec)
    clat (usec): min=132, max=60771, avg=7413.53, stdev=7248.93
     lat (usec): min=132, max=60772, avg=7413.85, stdev=7248.94
    clat percentiles (usec):
     |  1.00th=[  249],  5.00th=[  355], 10.00th=[  375], 20.00th=[ 1287],
     | 30.00th=[ 1729], 40.00th=[ 2008], 50.00th=[ 3851], 60.00th=[ 9110],
     | 70.00th=[11076], 80.00th=[14484], 90.00th=[17695], 95.00th=[19530],
     | 99.00th=[25035], 99.50th=[27919], 99.90th=[35914], 99.95th=[60556],
     | 99.99th=[60556]
   bw (  KiB/s): min=101386, max=175426, per=98.76%, avg=136329.87, stdev=18807.61, samples=15
   iops        : min=   99, max=  171, avg=132.67, stdev=18.29, samples=15
  lat (usec)   : 250=1.07%, 500=15.82%, 750=0.88%, 1000=0.59%
  lat (msec)   : 2=21.09%, 4=10.74%, 10=13.18%, 20=32.23%, 50=4.30%
  lat (msec)   : 100=0.10%
  cpu          : usr=0.36%, sys=7.19%, ctx=857, majf=0, minf=256
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=1024,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=1024MiB (1074MB), run=7596-7596msec
Virtualising FreeNAS

Now, if I install ESXi 6.7 and run the same FreeNAS version/volume with the M1015 and 80GB RAM, the above numbers are nowhere near achievable!

For example, the same sequential read test above that got me 1091MiB/s and 1087 IOPS has now dropped down to 759MiB/s and 737 IOPS.

Code:
fio --name=seqread \
--rw=read \
--direct=0 \
--iodepth=32 \
--bs=1M \
--numjobs=1 \
--size=128G \
--group_reporting
seqread: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=32
fio-3.0
Starting 1 process
seqread: Laying out IO file (1 file / 131072MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=698MiB/s,w=0KiB/s][r=698,w=0 IOPS][eta 00m:00s]
seqread: (groupid=0, jobs=1): err= 0: pid=51781: Wed Jun 13 22:00:17 2018
   read: IOPS=759, BW=759MiB/s (796MB/s)(128GiB/172599msec)
    clat (usec): min=379, max=60771, avg=1312.73, stdev=901.76
     lat (usec): min=379, max=60773, avg=1313.47, stdev=901.77
    clat percentiles (usec):
     |  1.00th=[ 1123],  5.00th=[ 1188], 10.00th=[ 1205], 20.00th=[ 1221],
     | 30.00th=[ 1237], 40.00th=[ 1237], 50.00th=[ 1254], 60.00th=[ 1270],
     | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1352], 95.00th=[ 1401],
     | 99.00th=[ 1745], 99.50th=[ 2474], 99.90th=[13960], 99.95th=[24773],
     | 99.99th=[36963]
   bw (  KiB/s): min=375798, max=812478, per=97.23%, avg=756112.13, stdev=67493.60, samples=345
   iops        : min=  366, max=  793, avg=737.87, stdev=65.92, samples=345
  lat (usec)   : 500=0.01%, 750=0.05%, 1000=0.25%
  lat (msec)   : 2=99.08%, 4=0.23%, 10=0.22%, 20=0.09%, 50=0.07%
  lat (msec)   : 100=0.01%
  cpu          : usr=0.53%, sys=96.10%, ctx=14307, majf=0, minf=257
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=131072,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=759MiB/s (796MB/s), 759MiB/s-759MiB/s (796MB/s-796MB/s), io=128GiB (137GB), run=172599-172599msec
Any ideas at this point would be much appreciated. I think next I'll try an older version of ESXi and see how that performs, maybe even Proxmox!
 

Joel

Active Member
Jan 30, 2015
Can't be of much help, but it's interesting that ZoL is performing so much worse than BSD...

Running a 6x8TB RAIDZ2 array on Proxmox myself and I've been mostly satisfied with it, but I also haven't pushed it very hard. The main issue right now is the crappy N300 router that is serving as a wireless bridge.
 

nry

Active Member
Feb 22, 2013
Storage - Performance

Had a play with my 32GB Intel Optane drive and it seemed to perform quite well under very specific workloads. Then, combining an eBay 15%-off-everything voucher with a pretty cheap new Intel Optane 900p 280GB, I now have a very nice caching drive.

IMG_8956.jpg

But nothing is easy right?

Screen Shot 2018-08-21 at 21.00.22.png

For some reason the 32GB drive worked fine.

There are a few bug reports about this, but I can't really find any resolution as of yet.

Bug #26508: Intel Optane 900p will not work in ESX passthrough - FreeNAS - iXsystems & FreeNAS Redmine

226086 – [nvme] Intel Optane 900P kernel panic when device is passed through ESXi (6.5)
 

lowfat

Active Member
Nov 25, 2016
I see you have the iKVM module for the Asus Z9PE-D8 WS. Where did you buy the module? I ordered one from eBay a few months ago and have been unable to get it to work. If I install it, the board takes between 2 and 12 hours to POST. And if I try to flash the firmware on the iKVM, it doesn't show up in the flashing utility.
 

nry

Active Member
Feb 22, 2013
I see you have the iKVM module for the Asus Z9PE-D8 WS. Where did you buy the module? I ordered one from eBay a few months ago and have been unable to get it to work. If I install it, the board takes between 2 and 12 hours to POST. And if I try to flash the firmware on the iKVM, it doesn't show up in the flashing utility.
I bought mine off eBay. To be honest, I had similar issues with drastically increased boot times; in the end I sold the board and bought a dual-socket Supermicro board, which now boots in under 60 seconds.
 