RAID-5 plus NVMe filesystem terrible performance on Linux


LikesChoco

I don't believe that.
Yeah, these tests should see massive write combining for large transfers. Even if the writes were synchronous, giving it the easiest possible target, a 1 MiB write, doesn't appear to help. At the filesystem layer the write should be allocated and then sent down to the md layer in one large transfer. At the md layer it should be broken up into stripes and then write-combined before being written to the devices. A lot of splitting and coalescing should be happening for these transfers, and I don't see any evidence of it.

Based on single-device performance I believe I should conservatively be seeing >2 GB/sec, and possibly >3 GB/sec, writes for favorable workloads like 1 MiB writes.
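
As a point of reference, a write that covers a full RAID-5 stripe lets md compute the new parity from the incoming data alone, with no read-modify-write, so it is the most favorable case for the array. Below is a minimal fio sketch of that case; the device node, chunk size, and disk count are placeholders (assuming /dev/md0, a 512 KiB chunk, and 4 data disks, i.e. a 2 MiB full stripe), so substitute the real geometry:

Code:
[global]
# hypothetical device node; writing here overwrites the start of the array
filename=/dev/md0
ioengine=libaio
# O_DIRECT, so each submitted I/O reaches md at the requested size
direct=1
iodepth=16
rw=write
# one full stripe per I/O: assumed 512 KiB chunk x 4 data disks
bs=2M
size=40G
group_reporting=1

[full_stripe_write]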
 

LikesChoco

Different write types and sizes directly on the RAID-5 device. No filesystem was involved.
Queue depth 16.
I/O Type is: (b)uffered, (d)irect, (f)sync
fsync is performed at the end of the test.


Transfer Size (KiB) | I/O Type | MB/sec | IOPS
4                   | b        |    564 | 141004
16                  | b        |    656 |  40994
64                  | b        |    680 |  10618
256                 | b        |    504 |   1970
1024                | b        |    711 |    694
4                   | d        |    336 |  83879
16                  | d        |    676 |  42274
64                  | d        |   1182 |  18465
256                 | d        |   2530 |   9882
1024                | d        |   3562 |   3479
4                   | f        |      3 |    637
16                  | f        |     10 |    616
64                  | f        |     41 |    637
256                 | f        |    121 |    471
1024                | f        |    317 |    309


What strikes me is how bad buffered I/O is. Here's the fio config file. Did I do something wrong?

Code:
[global]
name=write_test
# total data written per job
size=40G
# block size for this run; the table above used 4K through 1024K
bs=16K
# sequential writes
rw=write
# 0 = buffered (through the page cache); 1 = O_DIRECT
direct=0
numjobs=1
# queue depth 16, as noted above
iodepth=16
# Linux native AIO; note it only behaves asynchronously with non-buffered I/O (direct=1)
ioengine=libaio
# fsync once at the end of the run
end_fsync=1
group_reporting=1

[job1]
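
For what it's worth, here is one way the buffered and direct variants from the table could be driven from a single job file. The mapping of (b) and (d) to direct=0 and direct=1 follows the legend above, but the exact job files used for each row aren't shown, so treat this as a sketch. Per-job options override [global], and stonewall makes the second job wait for the first to finish:

Code:
[global]
name=write_test
size=40G
bs=16K
rw=write
numjobs=1
iodepth=16
ioengine=libaio
end_fsync=1
group_reporting=1
# no filename is set, so fio creates a data file named after each job in
# the current directory; point filename= at the md device to test the
# raw array instead

[buffered_16k]
# buffered writes through the page cache
direct=0

[direct_16k]
# wait for the buffered job to complete before starting
stonewall
# O_DIRECT, bypassing the page cache
direct=1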