Pitiful NVMe performance with PM983 ZFS pool


Joel

Active Member
Jan 30, 2015
So I get that the PM983 isn't the most performant drive on the market, and ZFS isn't exactly optimized for SSD performance, but I must admit I was expecting a little better than 37MiB/s and 9k IOPS!



Code:
sudo fio --filename=test --sync=1 --rw=randwrite --bs=4k --numjobs=1 \
  --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && sudo rm test
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=4
fio-3.33
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
note: both iodepth >= 1 and synchronous I/O engine are selected, queue depth will be capped at 1
Jobs: 1 (f=1): [w(1)][100.0%][w=53.6MiB/s][w=13.7k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1449573: Tue Mar 26 14:01:08 2024
  write: IOPS=9660, BW=37.7MiB/s (39.6MB/s)(10.0GiB/271355msec); 0 zone resets
    clat (usec): min=50, max=28271, avg=102.39, stdev=158.23
     lat (usec): min=50, max=28271, avg=102.60, stdev=158.27
    clat percentiles (usec):
     |  1.00th=[   61],  5.00th=[   68], 10.00th=[   71], 20.00th=[   75],
     | 30.00th=[   80], 40.00th=[   87], 50.00th=[   96], 60.00th=[  102],
     | 70.00th=[  108], 80.00th=[  115], 90.00th=[  128], 95.00th=[  139],
     | 99.00th=[  215], 99.50th=[  347], 99.90th=[ 1352], 99.95th=[ 2343],
     | 99.99th=[ 8094]
   bw (  KiB/s): min=28168, max=65584, per=99.97%, avg=38629.22, stdev=4121.49, samples=542
   iops        : min= 7042, max=16396, avg=9657.30, stdev=1030.38, samples=542
  lat (usec)   : 100=56.74%, 250=42.46%, 500=0.51%, 750=0.12%, 1000=0.04%
  lat (msec)   : 2=0.07%, 4=0.04%, 10=0.02%, 20=0.01%, 50=0.01%
  cpu          : usr=1.63%, sys=44.73%, ctx=2661386, majf=0, minf=8
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2621440,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: bw=37.7MiB/s (39.6MB/s), 37.7MiB/s-37.7MiB/s (39.6MB/s-39.6MB/s), io=10.0GiB (10.7GB), run=271355-271355msec
ZFS attributes:
Code:
sudo zfs get all vm-ssd
NAME    PROPERTY              VALUE                  SOURCE
vm-ssd  type                  filesystem             -
vm-ssd  creation              Fri Mar 15 10:31 2024  -
vm-ssd  used                  255G                   -
vm-ssd  available             3.13T                  -
vm-ssd  referenced            104K                   -
vm-ssd  compressratio         1.37x                  -
vm-ssd  mounted               yes                    -
vm-ssd  quota                 none                   default
vm-ssd  reservation           none                   default
vm-ssd  recordsize            128K                   default
vm-ssd  mountpoint            /mnt/vm-ssd            default
vm-ssd  sharenfs              off                    default
vm-ssd  checksum              on                     default
vm-ssd  compression           lz4                    local
vm-ssd  atime                 off                    local
vm-ssd  devices               on                     default
vm-ssd  exec                  on                     default
vm-ssd  setuid                on                     default
vm-ssd  readonly              off                    default
vm-ssd  zoned                 off                    default
vm-ssd  snapdir               hidden                 default
vm-ssd  aclmode               discard                local
vm-ssd  aclinherit            passthrough            local
vm-ssd  createtxg             1                      -
vm-ssd  canmount              on                     default
vm-ssd  xattr                 on                     default
vm-ssd  copies                1                      default
vm-ssd  version               5                      -
vm-ssd  utf8only              off                    -
vm-ssd  normalization         none                   -
vm-ssd  casesensitivity       sensitive              -
vm-ssd  vscan                 off                    default
vm-ssd  nbmand                off                    default
vm-ssd  sharesmb              off                    default
vm-ssd  refquota              none                   default
vm-ssd  refreservation        none                   default
vm-ssd  guid                  2962009918326450026    -
vm-ssd  primarycache          all                    default
vm-ssd  secondarycache        all                    default
vm-ssd  usedbysnapshots       0B                     -
vm-ssd  usedbydataset         104K                   -
vm-ssd  usedbychildren        255G                   -
vm-ssd  usedbyrefreservation  0B                     -
vm-ssd  logbias               latency                default
vm-ssd  objsetid              54                     -
vm-ssd  dedup                 off                    default
vm-ssd  mlslabel              none                   default
vm-ssd  sync                  standard               default
vm-ssd  dnodesize             legacy                 default
vm-ssd  refcompressratio      1.00x                  -
vm-ssd  written               104K                   -
vm-ssd  logicalused           28.5G                  -
vm-ssd  logicalreferenced     46K                    -
vm-ssd  volmode               default                default
vm-ssd  filesystem_limit      none                   default
vm-ssd  snapshot_limit        none                   default
vm-ssd  filesystem_count      none                   default
vm-ssd  snapshot_count        none                   default
vm-ssd  snapdev               hidden                 default
vm-ssd  acltype               posix                  local
vm-ssd  context               none                   default
vm-ssd  fscontext             none                   default
vm-ssd  defcontext            none                   default
vm-ssd  rootcontext           none                   default
vm-ssd  relatime              on                     default
vm-ssd  redundant_metadata    all                    default
vm-ssd  overlay               on                     default
vm-ssd  encryption            off                    default
vm-ssd  keylocation           none                   default
vm-ssd  keyformat             none                   default
vm-ssd  pbkdf2iters           0                      default
vm-ssd  special_small_blocks  0                      default

Anything I can do to improve performance, or should I switch OS/FS?
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
 

ericloewe

Active Member
Apr 24, 2017
Also, running compression with 4k writes is utterly pointless: with ashift=12 (you did set ashift=12 when creating the pool, right?), every block will be at least 4k anyway.

But really, reconsider whether 4k sync writes are actually necessary or representative of your future workload.
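If it helps, a quick way to sanity-check both points is to read the pool's ashift property and re-run the same fio job with --sync=1 dropped, to see how much of the hit is the sync-write path. This is just a sketch using the pool name and fio job from the posts above:

Code:
# ashift as recorded on the pool (0 simply means it was auto-detected)
sudo zpool get ashift vm-ssd

# same fio job without --sync=1, to isolate the cost of synchronous writes
sudo fio --filename=test --rw=randwrite --bs=4k --numjobs=1 \
  --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && sudo rm test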
 

Joel

Active Member
Jan 30, 2015
ashift was detected as 12. This particular pool was created using the TrueNAS GUI. The main purpose of the pool will be guest VM disks and somewhat write-heavy homelab workloads (mostly downloading media files via torrent or usenet). Would setting the recordsize to 4K make sense for that use case?
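For reference, whatever value ends up making sense, I assume the change itself would be per-dataset rather than pool-wide, something like the below (16K and the vm-ssd/vms name are just placeholders; recordsize only applies to blocks written after the change, so existing data would need a rewrite):

Code:
# change the record size on the existing dataset (affects new writes only)
sudo zfs set recordsize=16K vm-ssd

# or leave the pool default alone and give the VM disks their own dataset
sudo zfs create -o recordsize=16K vm-ssd/vms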
 

Joel

Active Member
Jan 30, 2015
Also, as a point of comparison for myself (haha): the same fio command (4k random writes) on a 4-way mirror spinning-rust pool manages ~400KiB/s and ~120 IOPS... :eek: So from that perspective, 37MiB/s is still ridiculously fast.
 

Tech Junky

Active Member
Oct 26, 2023
downloading media
Why are you overcomplicating things?

From your OP you have a ~3TB disk that should hit around 3/2 GB/s (read/write). I suppose you could bump the numbers up, though, by using a RAM disk as a cache and then passing sequential data to the disk at higher speed. Another option: a higher-capacity disk will typically give better numbers.
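A rough sketch of the RAM-disk idea, assuming a tmpfs scratch directory and a download client that supports a separate "incomplete" directory (the paths and the 16G size are made up for illustration):

Code:
# scratch space in RAM for in-flight downloads
sudo mkdir -p /mnt/dl-scratch
sudo mount -t tmpfs -o size=16G tmpfs /mnt/dl-scratch

# point the client's incomplete/temp directory at the tmpfs and its completed
# directory at the pool; finished files then land on /mnt/vm-ssd as one big
# sequential copy instead of a stream of small random writes, e.g.:
mv /mnt/dl-scratch/some-finished-file /mnt/vm-ssd/media/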

[Attached image: Kioxia spec table comparing sequential and 4K speeds across drive capacities]

This is from Kioxia, for example. Apples to oranges versus Samsung, but once you hit the 8TB drive you get double the write speeds, though the 4K numbers don't change much across the various sizes. The devil is in the details across different drives in the same series, though. In my case I wanted density and decent speeds rather than chasing top speeds at the expense of capacity.