How much RAM is required for ZFS?


gea
Well-Known Member
Oracle claims a minimum of 2GB RAM for 64-bit Solaris (the OS where ZFS originated), no matter the pool size.
This is enough for stable operation. On BSD or ZoL you may need a little more, as the internal RAM handling of ZFS is Solaris-like (there are efforts to make this more platform independent).

But stable operation does not mean fast. I have run some tests with read caching disabled, which is similar to a very-low-RAM situation: How slow is ZFS with low RAM or readcache disabled and slow disks? A fast SSD still gives you 300-400MB/s, while a slow WD Green can drop to a few MB/s, so slow disks combined with little RAM are not a good combination for ZFS.

So if you want a really fast server (without using dedup) you may want to (see the sketch after this list):
- add enough RAM to hold 5s of writes from the network for the RAM-based write cache
On a 1G network this means around 500MB; on a 10G network around 5GB of RAM

- add enough RAM to cache all metadata
If you estimate metadata at roughly 0.1% of all data, this gives about 1GB RAM per TB of used/active data

- add enough RAM to cache all small random reads
This depends on the usage pattern and the number of users
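As a rough illustration of the first two rules, here is a minimal Python sketch. The 5s write window and the ~1GB per TB metadata figure are the rules of thumb from this post; the function name and the example values are made up for illustration:

```python
# Back-of-the-envelope ZFS RAM sizing, following the rules of thumb above.
# These are forum rules of thumb, not official ZFS requirements.

def zfs_ram_estimate_gb(network_gbit: float, used_data_tb: float) -> dict:
    """Rough RAM estimate: write cache + metadata cache."""
    # Rule 1: hold ~5 seconds of incoming writes in the RAM write cache.
    # network_gbit Gbit/s of payload is roughly network_gbit / 8 GB/s.
    writecache_gb = (network_gbit / 8) * 5

    # Rule 2: cache all metadata, ~1 GB RAM per TB of used/active data.
    metadata_gb = used_data_tb * 1.0

    # Rule 3 (small random reads) is left out: it depends entirely on
    # the usage pattern and number of users.
    return {
        "writecache_gb": writecache_gb,
        "metadata_gb": metadata_gb,
        "total_gb": writecache_gb + metadata_gb,
    }

# Example: 10G network, 8 TB of active data
print(zfs_ram_estimate_gb(10, 8))
# -> {'writecache_gb': 6.25, 'metadata_gb': 8.0, 'total_gb': 14.25}
```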

So 2GB is enough for a slow but stable system, while 1-2GB RAM per TB of data may be desired for a very fast multiuser system where most data is read from RAM. Mostly you are somewhere in between: for a home filer, 4-8GB is ok; for a lab server with several users or VMs, 8-16GB is ok. But there are use cases where you want >128GB to serve nearly all random data from RAM, optionally with an extra L2ARC NVMe for sequential data.

If you want to use realtime dedup, add around 2-3GB RAM or L2ARC per TB of deduped data.
If you want to cache sequential data like video, allow sequential caching and add a very fast L2ARC disk, e.g. an NVMe. Count around 5% of the L2ARC size in RAM to manage the L2ARC.
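Continuing the sketch in the same spirit. The 2-3GB per TB dedup figure and the 5% L2ARC management overhead are this post's rules of thumb; the 2.5 midpoint and the example sizes are assumptions:

```python
# Dedup and L2ARC rules of thumb from this post, again only estimates.

def dedup_ram_gb(deduped_tb: float, gb_per_tb: float = 2.5) -> float:
    """~2-3 GB RAM (or L2ARC) per TB of deduped data; 2.5 is the midpoint."""
    return deduped_tb * gb_per_tb

def l2arc_mgmt_ram_gb(l2arc_gb: float) -> float:
    """~5% of the L2ARC size is needed in RAM to manage the L2ARC."""
    return l2arc_gb * 0.05

# Example: 4 TB of deduped data plus a 400 GB NVMe L2ARC
print(dedup_ram_gb(4))         # 10.0 GB for the dedup table
print(l2arc_mgmt_ram_gb(400))  # 20.0 GB to manage the L2ARC
```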
 

ttabbal
Active Member
That's some great rule-of-thumb data.

Out of curiosity, are you aware of any caching of large sequential data like video? I'm sure the metadata cache helps there, but does ARC cache those reads without an L2ARC available?
 

gea
Well-Known Member
From what I know, ARC does not cache sequential data in any larger amount (that is, it does not keep much prefetched data). If you want to cache/prefetch sequential data, you need a (larger) L2ARC with this setting in /etc/system: set zfs:l2arc_noprefetch=0
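For those on ZoL rather than Solaris, here is a minimal sketch of checking the equivalent tunable. It assumes a standard ZFS on Linux install, where the module parameter shows up under /sys/module/zfs/parameters:

```python
# Minimal sketch for ZFS on Linux (assumed standard install paths).
# l2arc_noprefetch = 1 keeps prefetched (sequential) buffers out of the
# L2ARC; 0 lets them in, matching the Solaris setting above.
from pathlib import Path

param = Path("/sys/module/zfs/parameters/l2arc_noprefetch")
print("l2arc_noprefetch =", param.read_text().strip())

# Writing "0" here enables L2ARC caching of prefetched data (needs root);
# to make it persistent, set the option in /etc/modprobe.d/zfs.conf.
# param.write_text("0")
```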

ARC works on datablocks with a combination of most-recently-read, most-frequently-read, and prefetch-next logic.