Which disks for new array?

Discussion in 'Linux Admins, Storage and Virtualization' started by modder man, Aug 6, 2018.

  1. modder man

    All, I currently have 24 8TB disks across three different models, and I am trying to determine which ones best fit the needs of a home NAS. The Seagates are by far the fastest on paper but are also the most expensive. I figured I could test them in 8-disk Z2s to get a benchmark of how each disk type does in an array (a sketch of the test pool layout follows the list below). The problem is that the benchmark results from the first set of disks confused me. The three disk models I have are as follows:
    • HGST Ultrastar HUH728080ALE600
    • HGST Ultrastar HUH728080AL4200
    • Seagate Enterprise v5
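
    For anyone reproducing this, creating one of the 8-disk RAID-Z2 test pools looks roughly like the sketch below; the pool and device names are placeholders rather than the exact ones used here.
    Code:
    sudo zpool create -f testpool raidz2 sdb sdc sdd sde sdf sdg sdh sdi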

    Here are the first benchmark results, from the HGST Ultrastar HUH728080ALE600:
    virtadmin@ubuntu:/storage$ sudo fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=48G --readwrite=randrw --rwmixread=75
    [sudo] password for virtadmin:
    test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.2.10
    Starting 1 process
    test: Laying out IO file(s) (1 file(s) / 49152MB)
    Jobs: 1 (f=1): [m(1)] [99.5% done] [123.0MB/41452KB/0KB /s] [31.5K/10.4K/0 iops] [eta 00m:42s]
    test: (groupid=0, jobs=1): err= 0: pid=26123: Mon Aug 6 00:07:03 2018
    read : io=36868MB, bw=4830.1KB/s, iops=1207, runt=7814671msec
    write: io=12284MB, bw=1609.8KB/s, iops=402, runt=7814671msec
    cpu : usr=0.57%, sys=4.04%, ctx=510174, majf=0, minf=10006
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
    issued : total=r=9438086/w=3144826/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=64

    Run status group 0 (all jobs):
    READ: io=36868MB, aggrb=4830KB/s, minb=4830KB/s, maxb=4830KB/s, mint=7814671msec, maxt=7814671msec
    WRITE: io=12284MB, aggrb=1609KB/s, minb=1609KB/s, maxb=1609KB/s, mint=7814671msec, maxt=7814671msec
     
    #1
  2. EffrafaxOfWug

    What's your workload? From that fio bench, you're testing for random 4k reads and writes (75% read) - the test shows you seem to be running at about 1200 IOPS/~5MB/s reads, 400 IOPS/1.6MB/s writes.

    This is the sort of workload you might get from running a load of VMs, rather than what a home NAS typically does (i.e. mostly streaming of less than half a dozen large media files), so you need to say what you're planning and test accordingly (and say if you're planning to use any SSD caching or suchlike).
     
    #2
    Last edited: Aug 6, 2018
  3. j_h_o

    Is performance really the most important thing for you? Or power usage? Or hardware cost?
     
    #3
  4. modder man

    Fair. The primary use case is media. I plan for the array to be able to sustain ~20 streams, as it currently isn't that uncommon for me to hit 15. From time to time I would also run a few VMs on it, assuming it can handle it. In this case what threw me off is how low the throughput numbers were, though perhaps it shouldn't have. For the sake of this test I was not running with a SLOG, but I can absolutely use one ultimately.
     
    #4
  5. modder man


    I think that is always a bit of a balancing act... they are all important. The power differences between these disks have looked negligible so far. The costs are close enough that if one is significantly faster in the array, I would consider using it. I was just looking for a good way to benchmark them as a baseline, to get a direct comparison from one disk model to another.
     
    #5
  6. EffrafaxOfWug

    You should modify your fio parameters in that case - you might try using 64k blocks, 98% reads and 20 streams, and see whether the aggregate throughput you get falls short of the maximum combined bitrate of your streams.

    For running VMs off the same array (or for better general metadata performance even with just the media workload) I'd recommend adding an SSD L2ARC before considering a SLOG.
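
    Adding cache devices to an existing pool is cheap to try; a minimal sketch, with placeholder pool and device names:
    Code:
    # add an SSD as L2ARC (read cache)
    zpool add tank cache /dev/disk/by-id/ata-SSD_CACHE
    # if sync writes ever matter, a SLOG can be added the same way
    zpool add tank log /dev/disk/by-id/ata-SSD_LOG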
     
    #6
  7. modder man

    virtadmin@ubuntu:/storage$ sudo fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=64 --size=48G --readwrite=randrw --rwmixread=98
    test: (g=0): rw=randrw, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=64
    fio-2.2.10
    Starting 1 process
    Jobs: 1 (f=1): [m(1)] [100.0% done] [16192KB/320KB/0KB /s] [253/5/0 iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=26947: Mon Aug 6 10:05:31 2018
    read : io=48162MB, bw=47558KB/s, iops=743, runt=1036987msec
    write: io=990.39MB, bw=977.10KB/s, iops=15, runt=1036987msec
    cpu : usr=0.37%, sys=3.62%, ctx=94050, majf=0, minf=10720
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
    issued : total=r=770586/w=15846/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=64

    Run status group 0 (all jobs):
    READ: io=48162MB, aggrb=47558KB/s, minb=47558KB/s, maxb=47558KB/s, mint=1036987msec, maxt=1036987msec
    WRITE: io=990.39MB, aggrb=977KB/s, minb=977KB/s, maxb=977KB/s, mint=1036987msec, maxt=1036987msec
     
    #7
  8. EffrafaxOfWug

    Instead of using iodepth=64, try keeping iodepth at 1 (default) and spawning 20 separate jobs (--numjobs=20) - that'd give you a better indication of what 20 different clients all attempting to do mostly-sequential-read accesses would look like. It'll give you a per-job summary at the end, so it's easy to tell from those if the read bandwidth during the fio test is greater than the bitrate of your source files.
     
    #8
  9. modder man


    Looks like it was much faster here, though I think that all came from cache.

    virtadmin@ubuntu:/storage$ sudo fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=1 --size=48G --readwrite=read --rwmixread=98 --numjobs=20
    [sudo] password for virtadmin:
    test: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=1
    ...
    fio-2.2.10
    Starting 20 processes
    Jobs: 12 (f=12): [_(2),R(4),_(1),R(6),_(3),R(1),_(1),R(1),_(1)] [97.4% done] [20118MB/0KB/0KB /s] [322K/0/0 iops] [eta 00m:02s]
    test: (groupid=0, jobs=1): err= 0: pid=27120: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=694881KB/s, iops=10857, runt= 72432msec
    cpu : usr=2.35%, sys=56.99%, ctx=7391, majf=0, minf=195
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27121: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=695179KB/s, iops=10862, runt= 72401msec
    cpu : usr=2.32%, sys=57.10%, ctx=6822, majf=0, minf=316
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27122: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=683510KB/s, iops=10679, runt= 73637msec
    cpu : usr=2.30%, sys=57.66%, ctx=8062, majf=0, minf=196
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27123: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=685885KB/s, iops=10716, runt= 73382msec
    cpu : usr=2.51%, sys=57.40%, ctx=8251, majf=0, minf=379
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27124: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=685064KB/s, iops=10704, runt= 73470msec
    cpu : usr=2.55%, sys=57.36%, ctx=7987, majf=0, minf=220
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27125: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=685222KB/s, iops=10706, runt= 73453msec
    cpu : usr=2.40%, sys=57.40%, ctx=9161, majf=0, minf=212
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27126: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=699741KB/s, iops=10933, runt= 71929msec
    cpu : usr=2.02%, sys=57.25%, ctx=7219, majf=0, minf=313
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27127: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=686325KB/s, iops=10723, runt= 73335msec
    cpu : usr=2.46%, sys=57.31%, ctx=8024, majf=0, minf=210
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27128: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=684151KB/s, iops=10689, runt= 73568msec
    cpu : usr=2.26%, sys=57.82%, ctx=8173, majf=0, minf=192
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27129: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=686325KB/s, iops=10723, runt= 73335msec
    cpu : usr=2.44%, sys=57.51%, ctx=8525, majf=0, minf=213
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27130: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=688437KB/s, iops=10756, runt= 73110msec
    cpu : usr=2.57%, sys=57.13%, ctx=7907, majf=0, minf=221
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27131: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=681733KB/s, iops=10652, runt= 73829msec
    cpu : usr=2.48%, sys=57.61%, ctx=8723, majf=0, minf=213
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27132: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=681733KB/s, iops=10652, runt= 73829msec
    cpu : usr=2.47%, sys=57.38%, ctx=11474, majf=0, minf=218
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27133: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=698876KB/s, iops=10919, runt= 72018msec
    cpu : usr=2.01%, sys=57.35%, ctx=7899, majf=0, minf=240
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27134: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=694949KB/s, iops=10858, runt= 72425msec
    cpu : usr=2.10%, sys=57.41%, ctx=6893, majf=0, minf=281
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27135: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=695064KB/s, iops=10860, runt= 72413msec
    cpu : usr=2.66%, sys=56.70%, ctx=7990, majf=0, minf=194
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27136: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=685222KB/s, iops=10706, runt= 73453msec
    cpu : usr=2.55%, sys=57.49%, ctx=7502, majf=0, minf=188
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27137: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=695169KB/s, iops=10862, runt= 72402msec
    cpu : usr=2.10%, sys=57.42%, ctx=7668, majf=0, minf=309
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27138: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=683130KB/s, iops=10673, runt= 73678msec
    cpu : usr=2.30%, sys=57.77%, ctx=9039, majf=0, minf=167
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1
    test: (groupid=0, jobs=1): err= 0: pid=27139: Mon Aug 6 12:27:27 2018
    read : io=49152MB, bw=695064KB/s, iops=10860, runt= 72413msec
    cpu : usr=2.45%, sys=56.96%, ctx=7713, majf=0, minf=160
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=786432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1

    Run status group 0 (all jobs):
    READ: io=983040MB, aggrb=13315MB/s, minb=681732KB/s, maxb=699740KB/s, mint=71929msec, maxt=73829msec
     
    #9
  10. EffrafaxOfWug

    Assuming you picked the 48GB test file size to be bigger than your RAM cache, at least some of it must have come from the discs themselves. But 20 mostly-sequential, mostly-read operations are a considerably easier access pattern for hard drives and platter-based RAID arrays than your original tests.

    There are further refinements you can probably add to make the test more realistic - perhaps make the test space larger than 48GB, perhaps limit the size of the pseudo-files used to 2-5GB or whatever the typical size of your media files is (via the nrfiles and filesize options).
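
    A minimal sketch of that refinement (untested; the 2-5GB range is just a stand-in for your typical media file size):
    Code:
    fio --name=media --ioengine=libaio --rw=read --bs=64k \
        --numjobs=20 --nrfiles=10 --filesize=2G-5G \
        --time_based --runtime=120 --group_reporting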

    Assuming you're using Linux, you can always flush all disc buffers and empty the cache before a bench run using the command below, or else use --direct=1 to force fio to bypass the buffers entirely and rely solely on disc IO - although that's likely less of a real-world workload than most Linux servers would see, since they tend to try and cache things as intelligently as possible.
    Code:
    sync; echo 3 > /proc/sys/vm/drop_caches
     
    #10
  11. modder man

    I made some more changes to run a 384GB test and do sequential reads/writes.

    Those tests returned 800MB/s read and 700MB/s writes. Does that not seem really fast for 8 spinners in z2?

    Also, if 8 disks can do this, shouldn't 24 disks in 3x Z2 be able to do close to 2GB/s?
     
    #11
  12. EffrafaxOfWug

    Disclaimer: I'm not a ZFS expert (I'm one of those "knows enough to be dangerous" people). It does seem very fast, yes, but I don't know anything about the rest of the hardware you're running it on (specifically how much RAM), and remember that without other fio options all 20 of those jobs may be reading the same file, so there's a high likelihood of all the jobs reading the same or similar blocks at the same time. But also remember that with mostly sequential reads, especially with comparatively large block sizes, the OS can usually prefetch quite efficiently.

    You can also create lots of different test files instead of one big one - at that point you really want to start looking at jobfiles, since they make this sort of more complex testing much easier. Jens has a massive pile of pre-prepared job files on his GitHub as well if you want to have a look at those:
    axboe/fio

    For instance, here's one I use based on the seq read example, using ten different 2GB files, each accessed sequentially by two jobs (ten files, two jobs per file, twenty jobs in total):
    Code:
    [global]
    name=seq_reads
    rw=read
    bs=256K
    direct=1
    numjobs=2
    time_based=1
    runtime=180
    iodepth=2
    ioengine=libaio
    
    [file0]
    filename=file0
    size=2G
    
    [file1]
    filename=file1
    size=2G
    
    [file2]
    filename=file2
    size=2G
    
    [file3]
    filename=file3
    size=2G
    
    [file4]
    filename=file4
    size=2G
    
    [file5]
    filename=file5
    size=2G
    
    [file6]
    filename=file6
    size=2G
    
    [file7]
    filename=file7
    size=2G
    
    [file8]
    filename=file8
    size=2G
    
    [file9]
    filename=file9
    size=2G
    If you want to measure worst-case raw disc performance without letting caches interfere, you can just try running with the --direct=1 (or --buffered=0) option.
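
    Assuming the jobfile above is saved as seq_reads.fio (the name is arbitrary), running it is simply:
    Code:
    fio seq_reads.fio
    Note that direct=1 is already set in its [global] section, so the page cache stays out of the picture either way.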
     
    #12
  13. aero

    Your results are as expected, in my experience. The particulars of the access pattern matter a ton. As soon as you throw some small-block reads/writes (VMs) into the mix, though, your speeds will drop.

    24 disks can definitely get up around 2GB/s. I have a 30-disk pool (3 raidz2 vdevs of 10 disks each) that achieves around 3GB/s with that same workload: several large sequential reads.
     
    #13
  14. modder man


    Wow, my other array, 24 disks in 3 Z2 vdevs, never exceeds somewhere in the 300MB/s range.
     
    #14
  15. aero

    I'd say something is wrong with the setup. Are you using a 1M record size? It helps a lot.

    Edit: I should also mention I'm not using any caching: no L2ARC, very little RAM, caches dropped between test runs, and unique files for every read stream. Decent, brand-new SAS 7.2k spinners.
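
    For reference, record size is set per dataset and only affects files written after the change; a minimal sketch with a placeholder dataset name (1M records require the large_blocks pool feature):
    Code:
    zfs set recordsize=1M tank/media
    zfs get recordsize tank/media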
     
    #15
  16. modder man


    The fast benchmark I ran used 1MB block sizes. I am sure the array I have in use is on the default 128K record size. If the primary use case is large files, it would seem to make the most sense to bump up the default record size on the array.
     
    #16
  17. modder man

    Turns out it is quite difficult to directly compare different disk models and how each one's performance will affect the array.
     
    #17