How slow is ZFS with low RAM or readcache disabled and slow disks?


gea

Well-Known Member
Dec 31, 2010
3,163
1,195
113
DE
Based on a current support discussion I ran some tests with the readcache disabled on several disks
to check the effect of the readcache (caching of metadata and/or data) and of disk iops.

For these tests I used filebench/singlestreamread (napp-it menu Pools > Benchmark > Filebench).
Caches can be enabled/disabled in napp-it menu Pools > Pri/Sec cache.
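Outside of the napp-it menu, these toggles are plain ZFS properties; a minimal sketch (the filesystem name tank/data is an assumption):

```shell
# Disable the ZFS read cache (ARC) completely for one filesystem:
zfs set primarycache=none tank/data

# Cache only metadata (the "metadata caching on, data caching off" case below):
zfs set primarycache=metadata tank/data

# Back to the default, caching both data and metadata:
zfs set primarycache=all tank/data

# secondarycache controls an L2ARC device the same way; verify with:
zfs get primarycache,secondarycache tank/data
```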

Filer with a 60% fillrate, 23% fragmentation, 2 x 6-disk raid-z2 vdevs of HGST HUS724040AL,
under decent load from students during the test. The unit mb/s in filebench output means MByte/s.

all caches on
IO Summary: 157739 ops, 5257.764 ops/s, (5258/0 r/w), 5256.7mb/s, 211us cpu/op, 0.2ms latency
all caches off
IO Summary: 13757 ops, 458.554 ops/s, (459/0 r/w), 458.5mb/s, 2168us cpu/op, 2.2ms latency
Read throughput drops to 458 MB/s.

Now values for single disks.
A singlestreamread on a single HGST HE8 Ultrastar (all caches off):
IO Summary: 2554 ops, 85.130 ops/s, (85/0 r/w), 85.1mb/s, 3578us cpu/op, 11.7ms latency
The same but with metadata caching on (data caching off):
IO Summary: 2807 ops, 93.565 ops/s, (94/0 r/w), 93.5mb/s, 3317us cpu/op, 10.7ms latency
This helps a little; with higher fragmentation the difference may become bigger.

and a single Intel S3610-480 (all caches off)
IO Summary: 8874 ops, 295.792 ops/s, (296/0 r/w), 295.7mb/s, 2216us cpu/op, 3.4ms latency
And finally a very old and slow WD Green 1TB (all caches off) at 3.1 MByte/s:
IO Summary: 94 ops, 3.133 ops/s, (3/0 r/w), 3.1mb/s, 67645us cpu/op, 304.0ms latency
Another slow disk (WD Re4 2TB enterprise):
IO Summary: 463 ops, 15.433 ops/s, (15/0 r/w), 15.4mb/s, 205954us cpu/op, 64.5ms latency
Scary how bad the Greens are!
Even the Re4 enterprise disk is not really good at 15 MB/s.
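The single-disk numbers above are consistent with a simple model: a single stream keeps only one request in flight, so ops/s is roughly 1/latency, and with filebench's 1 MB operations MB/s tracks ops/s almost exactly. A small sketch using the latencies and ops/s measured in this post:

```python
def single_stream_ops(latency_s: float) -> float:
    """With one outstanding request at a time, throughput is bounded by 1/latency."""
    return 1.0 / latency_s

# (latency in seconds, measured ops/s) from the filebench runs above
measured = {
    "HGST HE8, no cache":    (0.0117, 85),
    "Intel S3610, no cache": (0.0034, 296),
    "WD Green, no cache":    (0.304, 3),
}

for name, (lat, ops) in measured.items():
    print(f"{name}: model ~{single_stream_ops(lat):.0f} ops/s, measured {ops} ops/s")
```

This is why high per-op latency (low iops) translates directly into terrible uncached single-stream throughput.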

Write values:
The same WD Green with a singlestreamwrite and sync disabled (write cache on):
IO Summary: 6318 ops, 210.594 ops/s, (0/211 r/w), 210.6mb/s, 3791us cpu/op, 4.7ms latency
The same WD Green with a singlestreamwrite and sync set to always (write cache off for sync writes):
IO Summary: 673 ops, 22.432 ops/s, (0/22 r/w), 22.4mb/s, 14577us cpu/op, 44.5ms latency
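The sync setting used for these two write runs is again just a ZFS property (filesystem name tank/data is an assumption):

```shell
# Default: applications decide; async writes go through the RAM write cache
zfs set sync=standard tank/data

# Force every write to be a sync write (the slow 22 MB/s case above)
zfs set sync=always tank/data

# Ignore sync requests entirely (fast, but unsafe on power loss)
zfs set sync=disabled tank/data
```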
The results clearly show that ZFS performance, especially on reads, depends heavily on its caches, probably more than on other filesystems. So with very little RAM and disks with very low iops, read performance in particular looks really bad. The faster the disks, the better the values; with the readcache, performance can increase dramatically. High iops is a key value for performance.

With the WD Green, the result of 3.1 MB/s is dramatically bad, while the results with the newer HGST 8TB disk at 85 MB/s or the Intel S3610 SSD at 295 MB/s are ok. I am astonished by these very bad values on a slow disk with low iops and the cache disabled, since you mostly run tests with all caches on to check overall system performance.

On writes, you always use the write cache that optimizes writes, so default write values are much better. With sync=always you see similarly weak write values on low-iops disks. Under a regular load, ZFS mostly hides this bad behaviour with the readcache. So the conclusion: avoid low-iops disks and use as much RAM as possible for the readcache.
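To judge how much the readcache is actually helping on a running system, the ARC counters can be inspected directly. A sketch for illumos/Solaris (the platform napp-it typically runs on); on Linux the same counters live in /proc/spl/kstat/zfs/arcstats:

```shell
# Current ARC size plus hit/miss counters:
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:hits
kstat -p zfs:0:arcstats:misses

# Rough cache hit rate: hits / (hits + misses).
# A low hit rate with slow disks means reads behave like the
# "all caches off" numbers above.
```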
 

gea
I had not expected that read performance without cache on low-iops disks would be so extremely bad!
 

markpower28

Active Member
Apr 9, 2013
413
104
43
Gea, great post! Very interesting numbers. I know you have always recommended giving ZFS as much RAM as possible rather than relying on RAM plus SSD in front of the HDs. Should we start using an SSD in between to increase HD performance?