Hello all,
Looking for some thoughts on some odd array performance I have seen while building out a new box. I am currently running ZoL as a VM in ESXi with the array controller passed through. The VM has 16 CPUs and 24GB of memory. I recognize 24GB is not enough for an optimal setup, but I was simply testing different array configs and how they affected performance. The first test was with an 8-disk Z2 configuration; I then added two more Z2 vdevs to make a 24-disk array of 3 Z2s.
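For reference, a rough sketch of how the two pool layouts would be built (pool name taken from the /storage mount point in the output below; device names are placeholders, not my actual disks):

```shell
# Initial test: single 8-disk RAIDZ2 vdev
zpool create storage raidz2 sda sdb sdc sdd sde sdf sdg sdh

# Second test: two more 8-disk RAIDZ2 vdevs added,
# giving 24 disks across 3 striped Z2 vdevs
zpool add storage raidz2 sdi sdj sdk sdl sdm sdn sdo sdp
zpool add storage raidz2 sdq sdr sds sdt sdu sdv sdw sdx
```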
8 disks in a Z2
virtadmin@ubuntu:/storage$ sudo fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test2 --filename=test --bs=128k --iodepth=1 --size=384G --readwrite=write --rwmixread=2
test2: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=libaio, iodepth=1
fio-2.2.10
Starting 1 process
Jobs: 1 (f=0): [W(1)] [100.0% done] [0KB/448.0MB/0KB /s] [0/3584/0 iops] [eta 00m:00s]
test2: (groupid=0, jobs=1): err= 0: pid=6784: Sun Aug 12 11:57:58 2018
write: io=393216MB, bw=433290KB/s, iops=3385, runt=929292msec
cpu : usr=1.61%, sys=21.90%, ctx=82079, majf=0, minf=491
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=3145728/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=393216MB, aggrb=433290KB/s, minb=433290KB/s, maxb=433290KB/s, mint=929292msec, maxt=929292msec
And the 24 disks in three striped Z2 vdevs only managed ~300MB/s write with the same benchmark. I would have expected throughput to scale up with the additional vdevs, not drop. Any ideas what might be going on?