How slow is ZFS with low RAM or readcache disabled and slow disks?

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by gea, Nov 24, 2016.

  1. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,246
    Likes Received:
    743
Based on a current support discussion I ran some tests with the readcache disabled on several disks
    to check the effect of the readcache (cache for metadata and/or data) and of disk iops.

For these tests I used filebench / singlestreamread (napp-it menu Pools > Benchmark > Filebench).
    Caches can be enabled/disabled in napp-it menu Pools > Pri/Sec cache.
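    For reference: the Pri/Sec cache menu maps to the standard ZFS cache properties, so the same switches can be set from the console. A minimal sketch, assuming a placeholder filesystem tank/data:

    Code:
    # primary readcache (ARC in RAM): all | metadata | none
    zfs set primarycache=none tank/data        # disable read caching of data and metadata
    zfs set primarycache=metadata tank/data    # cache metadata only
    zfs set primarycache=all tank/data         # default

    # secondary readcache (L2ARC on SSD), same values
    zfs set secondarycache=none tank/data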

    Filer with a 60% fillrate, 23% fragmentation and 2 x 6 disk raid-z2 vdevs of HGST HUS724040AL,
    under decent load from students during the test. The term mb/s in filebench means MegaByte/s.
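    As a rough sketch of the kind of run behind these numbers (the napp-it Filebench menu automates this; the directory is a placeholder and the exact invocation depends on the filebench version):

    Code:
    filebench
    filebench> load singlestreamread
    filebench> set $dir=/tank/data    # dataset under test (placeholder)
    filebench> run 60                 # run 60 s, reports throughput in mb/s and iops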

    all caches on
    all caches off: read drops to 458 MB/s

    Now the values for single disks:
    a singlestreamread on a single HGST HE8 Ultrastar disk;
    the same but with metadata caching on (data caching off)
    helps a little. With higher fragmentation, the difference may become bigger.

    and a single Intel S3610-480 (all caches off);
    and finally a very old and slow WD Green 1TB (all caches off) with 3.1 MByte/s
    and another low-rpm disk (WD Re4 2TB enterprise).
    Scary how bad the Greens are!!
    Even the Re4 enterprise is not really good with 15 MB/s.

    Write values:
    the same WD Green with a singlestreamwrite and sync disabled (write cache on);
    the same WD Green with a singlestreamwrite and sync set to always (write cache off for sync writes).
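    The sync setting used above corresponds to the ZFS sync property; again only a sketch with a placeholder filesystem name:

    Code:
    zfs set sync=disabled tank/data   # ignore sync requests, always use the RAM writecache
    zfs set sync=always tank/data     # treat every write as a sync write
    zfs set sync=standard tank/data   # default: honour sync requests from the application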
    The result clearly shows that ZFS performance, especially on read, highly depends on its caches, probably more than on other filesystems. So with very low RAM and disks with very low iops, read performance of ZFS can be really bad. Without the readcache there is also nothing for prefetch to read ahead into, so every record must be fetched from disk on demand and throughput is roughly capped at iops x recordsize (for example, around 100 random reads/s x 128K records is only about 12 MB/s). The faster the disks, the better the values. With the readcache, performance can increase dramatically. High iops is a key value for performance.

    With the WD Green, a result of 3.1 MB/s is dramatically bad, while the results for the newer HGST 8TB disk with 85 MB/s or an Intel SSD S3610 with 295 MB/s are ok. I am myself astonished about these very bad values on a slow disk with low iops and the cache disabled, as usually you do tests with all caches on to check overall system performance.

    On writes you always use the write cache, which optimizes writes, so default write values are much better. With sync=always you see similarly weak write values on low-iops disks. Under a regular load, ZFS will mostly hide this bad behaviour with the readcache. So the result: avoid low-iops disks and use as much RAM as possible for the readcache.
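    Whether the readcache is really doing this work can be checked from the ARC statistics; a minimal sketch for illumos/OmniOS using the kstat counters:

    Code:
    kstat -p zfs:0:arcstats:hits      # ARC read hits since boot
    kstat -p zfs:0:arcstats:misses    # ARC read misses since boot
    kstat -p zfs:0:arcstats:size      # current ARC size in bytes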
     
    #1
    Last edited: Nov 24, 2016
    Danic, sth, nle and 3 others like this.
  2. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,545
    Likes Received:
    4,467
    Great post @gea !
     
    #2
  3. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,246
    Likes Received:
    743
    I had not expected that read performance without cache on low-iops disks would be so extremely bad!
     
    #3
  4. markpower28

    markpower28 Active Member

    Joined:
    Apr 9, 2013
    Messages:
    393
    Likes Received:
    98
    Gea, great post! Very interesting numbers. I know you always recommend putting in as much RAM as we can for ZFS instead of using RAM/SSD then HD. Should we start using an SSD in between to increase HD performance?
     
    #4