Oracle claims a minimum of 2 GB RAM for 64-bit Solaris (the origin OS of ZFS), regardless of pool size.
This is enough for stable operation. On BSD or ZoL you may need a little more, as the internal RAM handling of ZFS is Solaris-like (there are efforts to make this more platform independent).
But stable operation does not mean fast. I have made some tests with read caching disabled, which is similar to a very low RAM situation: how slow is ZFS with low RAM or the read cache disabled and slow disks? A fast SSD still gives you 300-400 MB/s, but a slow WD Green can drop to a few MB/s, so slow disks combined with little RAM are not a good combination for ZFS.
So if you want a really fast server (without using dedup), you may want to (see the sizing sketch after this list):
- add enough RAM to hold 5 s of writes from the network for the RAM-based write cache
On a 1G network this means about 500 MB; on a 10G network it means around 5 GB of RAM
- add enough RAM to cache all metadata
If you estimate metadata at roughly 0.1% of data, this gives the common rule of thumb of 1 GB RAM per TB of used/active data
- add enough RAM to cache all small random reads
This depends on the usage pattern and the number of users
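As a rough illustration, here is a minimal sizing sketch in Python that encodes the two quantifiable rules above. The 5 s window, the ~100 MB/s of payload per Gbit, and the 1 GB per TB figure are the guideline numbers from this post, not official ZFS requirements, and the function name is made up for illustration:

```python
def zfs_ram_estimate_gb(network_gbit: float, active_data_tb: float) -> float:
    """Rough RAM estimate from the rules of thumb in this post.

    Guideline figures, not official ZFS requirements:
    - write cache: 5 seconds of incoming writes, counting
      ~100 MB/s (0.1 GB/s) of payload per Gbit of network speed
    - metadata cache: ~1 GB RAM per TB of used/active data
    - base: the 2 GB minimum for stable operation
    """
    write_cache_gb = network_gbit * 0.1 * 5    # 1G -> 0.5 GB, 10G -> 5 GB
    metadata_gb = active_data_tb * 1.0         # 1 GB per TB rule of thumb
    return 2.0 + write_cache_gb + metadata_gb

# Example: 10G network, 20 TB of active data -> 27 GB,
# before any extra RAM for caching small random reads
print(zfs_ram_estimate_gb(10, 20))
```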
So 2 GB is enough for a slow but stable system.
1-2 GB RAM per TB of data may be desired for a very fast multiuser system where most data is read from RAM. Usually you are somewhere in between: for a home filer, 4-8 GB are ok; for a lab server with several users or VMs, 8-16 GB are ok. But there are use cases where you want >128 GB to serve nearly all random data from RAM, optionally with an extra L2Arc NVMe for sequential data.
If you want to use realtime dedup, add around 2-3 GB of RAM or L2Arc per TB of deduped data.
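A quick worked example of that figure (the 10 TB is just a placeholder value):

```python
deduped_tb = 10  # hypothetical amount of deduped data
print(f"dedup table: add {deduped_tb * 2}-{deduped_tb * 3} GB RAM or L2Arc")
# -> dedup table: add 20-30 GB RAM or L2Arc
```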
If you want to cache sequential data like video, allow sequential caching and add a very fast L2Arc disk, e.g. an NVMe. Count around 5% of the L2Arc size as RAM to manage the L2Arc.
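And the same kind of back-of-the-envelope math for the L2Arc overhead, assuming the ~5% figure from this post (the 400 GB device size is just an example):

```python
l2arc_gb = 400  # hypothetical NVMe L2Arc size
print(f"~{l2arc_gb * 0.05:.0f} GB RAM to manage the L2Arc")
# -> ~20 GB RAM to manage the L2Arc
```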