SSDs, ZIL, and L2ARC


weust

Active Member
Aug 15, 2014
No VMs or databases are stored on the ZFS pool.
If I make it an AiO, the VMs will be on their own datastore SSDs (ESXi).

I will have to read up (again and more) on SLOG and L2ARC.
Every time I do, it feels like the information differs per source.
One source says it's used this way, another says it's used that way.

Memory won't be a problem, I believe, with 128GB RAM.

If I understand you correctly, in my use case I shouldn't use any disk-based caching, and maybe only configure L2ARC to better suit my use case?
 

rune-san

Member
Feb 7, 2014
weust said:
If I understand you correctly, in my use case I shouldn't use any disk-based caching, and maybe only configure L2ARC to better suit my use case?
If you do not need the assurance that files hitting the AiO have actually been committed to disk, then you do not need an SLOG. If you *do* need that assurance, then you *might* need an SLOG if sync performance is too slow for your liking, as each write transaction will not be considered complete until it is committed to disk.
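As a rough sketch of what that looks like in practice (the pool name "tank" and the device path are placeholders, adjust for your own setup):

# check which sync behaviour the dataset currently has (standard, always, disabled)
zfs get sync tank
# attach a dedicated log device (SLOG); a mirrored pair is the safer option
zpool add tank log /dev/disk/by-id/nvme-example-slog
# watch whether the log device actually sees traffic under load
zpool iostat -v tank 5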

An L2ARC is a cheaper way to get additional read cache that can be faster than hard drives for small files accessed repeatedly (for sequential files, a cache isn't necessarily going to be better than just reading from disk).
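If you do want to experiment, a cache device is easy to try because it holds no unique data and can be removed again. A sketch, with pool and device names again as placeholders:

# add an L2ARC cache device to the pool
zpool add tank cache /dev/disk/by-id/nvme-example-l2arc
# take it out again if the hit rate doesn't justify it
zpool remove tank /dev/disk/by-id/nvme-example-l2arc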

With 128GB of RAM, you should analyze your read cache hit rates before adding an L2ARC. For most workloads it won't make much of a difference, and the L2ARC itself consumes some RAM for its headers.
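A quick way to check those rates first on current OpenZFS builds (tool availability varies a bit per platform):

# overall ARC size, hit ratio and L2ARC stats, if any
arc_summary
# or watch hits/misses live every 5 seconds
arcstat 5
# on illumos-based systems the raw counters are available via kstat
kstat -p zfs:0:arcstats | grep -E 'hits|misses'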
 

gea

Well-Known Member
Dec 31, 2010
DE
ARC (and L2ARC) are block-based read caches with a read-last + read-most strategy (most recently and most frequently used blocks). This means that mostly metadata (about 1% of data) and small random reads benefit.

With a lot of RAM it is quite unlikely that you will see any improvement from L2ARC. The only positive aspect of L2ARC in this situation may be its read-ahead feature, which must be activated separately and which can improve sequential workloads, e.g. for several concurrent users/media streams.
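If I read the read-ahead remark right, it corresponds to the l2arc_noprefetch tunable in OpenZFS (by default, prefetched/sequential buffers are kept out of L2ARC); a sketch of switching it on, with paths differing per platform:

# Linux: allow prefetched (sequential) reads into L2ARC for the running system
echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch
# persist across reboots
echo "options zfs l2arc_noprefetch=0" >> /etc/modprobe.d/zfs.conf

# illumos/OmniOS: add to /etc/system and reboot
set zfs:l2arc_noprefetch = 0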

Slog with sync enabled for a pure ZFS filer gives only a small security advantage, mainly with smaller files, e.g. an Excel sheet that is committed and fully in the RAM-based write cache but not yet on the pool during a power outage. The Slog would then guarantee the write. While it is not nice that a committed write is lost in nirvana, I would mostly disable sync for a filer and skip the Slog because of the performance degradation, apart from situations where you cannot ignore this risk (OK, an Intel Optane as Slog may be a game changer. I have seen over 800 MB/s sync write values, enough for a 10G network).
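For completeness, sync is a per-dataset property, so a sketch of the choices described above (the dataset name "tank/filer" is a placeholder):

# filer use: accept the small risk, skip the Slog, keep full async speed
zfs set sync=disabled tank/filer
# default: honour sync requests from clients (NFS, databases, etc.)
zfs set sync=standard tank/filer
# force everything through the ZIL/Slog, e.g. with a fast Optane log device
zfs set sync=always tank/filer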
 