@Rand__
Writing to the zvol first and increasing the test size gave a far more reasonable result:
Code:
tsteine@san:/SAN$ sudo fio --max-jobs=1 --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/dev/zvol/SAN/TESTVOL --bs=4k --iodepth=1 --size=12G --readwrite=randread
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=281MiB/s][r=71.9k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2216158: Wed May 20 18:45:01 2020
  read: IOPS=69.4k, BW=271MiB/s (284MB/s)(12.0GiB/45346msec)
   bw (  KiB/s): min=77336, max=400160, per=100.00%, avg=281326.07, stdev=46263.62, samples=89
   iops        : min=19334, max=100040, avg=70331.39, stdev=11565.87, samples=89
  cpu          : usr=14.14%, sys=40.82%, ctx=3145792, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=3145728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=271MiB/s (284MB/s), 271MiB/s-271MiB/s (284MB/s-284MB/s), io=12.0GiB (12.9GB), run=45346-45346msec
tsteine@san:/SAN$
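For reference, the "writing to the zvol first" step could look something like the sketch below: a sequential fio write pass over the same device before re-running the randread job, so the random reads hit allocated blocks rather than unwritten (sparse) regions. The prefill job parameters (`bs=1M`, `iodepth=16`, the job name) are assumptions for illustration, not taken from the run above:

```shell
#!/bin/sh
# Hedged sketch: prefill the zvol with a sequential write pass before
# benchmarking random reads. Device path matches the post; the prefill
# parameters are illustrative assumptions.
ZVOL=/dev/zvol/SAN/TESTVOL
PREFILL="fio --name=prefill --ioengine=libaio --direct=1 \
  --filename=$ZVOL --bs=1M --iodepth=16 --size=12G --readwrite=write"

# Show the command; only run it if the zvol actually exists.
echo "$PREFILL"
if [ -e "$ZVOL" ]; then
  sudo $PREFILL
fi
```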