How much RAM does ZFS really need?


agent0

New Member
May 28, 2013
15
0
1
I would like to know how much RAM you really need in order to run ZFS. I have always stayed away from ZFS because, from what I've read, it seems to be a RAM hog.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
24GB for 18TB (RAW) sounded like a good place to be.

I assume dedupe is why you want to use it :)

With $10-11 4GB RDIMMs and an 18-DIMM ($25) HP motherboard you can do 72GB for about $180 + $25 for the mobo and really be cooking!

What do you think about vSAN? One SSD per machine plus an IT-mode HBA (no hardware RAID)!! That should really be a cool platform. If they would just add dedupe I'd switch for good!
 

sotech

Member
Jul 13, 2011
305
1
18
Australia
One of the local businesses runs ZFS on a system with 2GB RAM for 10TB of storage and it runs perfectly. His ARC is tiny but the array can easily saturate gigabit anyway, and given that he largely works with large files (raw photo and video) it doesn't make a noticeable difference for him.
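
Rough numbers for why that works (a quick Python back-of-the-envelope; the disk and network figures below are assumptions, not measurements from his box):

# Why a tiny ARC can still saturate gigabit for large sequential reads:
# even a few spinning disks stream faster than the wire.
gigabit_mbs = 1000 / 8 * 0.94    # ~117 MB/s usable after protocol overhead (assumed)
disk_stream_mbs = 120            # assumed sequential read rate of one 7200rpm disk
data_disks = 4                   # assumed number of disks read in parallel

array_mbs = disk_stream_mbs * data_disks
print(f"array sequential read ~{array_mbs:.0f} MB/s vs gigabit ceiling ~{gigabit_mbs:.0f} MB/s")
# -> the network, not the missing RAM cache, is the bottleneck for that workload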
 

agent0

New Member
May 28, 2013
15
0
1
Sotech, is he using deduplication? Also, what hardware and operating system is he running?
 

gea

Well-Known Member
Dec 31, 2010
3,140
1,182
113
DE
I would like to know how much RAM you really need in order to run ZFS. I have always stayed away from ZFS because, from what I've read, it seems to be a RAM hog.
If you are satisfied with pure disk performance, 1-2 GB of RAM is OK for a ZFS server.
But pure disk performance is poor, especially with multiple users and small files.

This is where ZFS needs RAM to cache all reads.
It is quite normal on a good ZFS server that most (>80%) of reads are delivered from the RAM cache, up to 100 times faster than from disk. This is why ZFS dynamically uses nearly all available RAM under heavy load, and why even 128GB+ of RAM can make sense.
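
A rough illustration of what that hit rate is worth (a Python sketch; the latency figures below are assumed round numbers, not benchmarks):

# Effective average read latency as a weighted mix of ARC (RAM) hits and disk misses.
ram_us  = 50      # assumed service time for a read answered from the ARC in RAM
disk_us = 8000    # assumed service time for a random read that has to go to disk

def avg_read_us(hit_ratio):
    return hit_ratio * ram_us + (1 - hit_ratio) * disk_us

for h in (0.0, 0.5, 0.8, 0.95):
    print(f"ARC hit ratio {h:.0%}: average read ~{avg_read_us(h):.0f} us")

# 0%  -> ~8000 us (pure disk speed)
# 80% -> ~1640 us, roughly 5x faster on average
# 95% -> ~450 us, roughly 18x faster -- which is why adding RAM keeps paying off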

Some people think this is bad, but it is exactly what you want:
you use all the RAM you paid for to increase performance, instead of leaving it idle just to show a high "free memory" number.
With Solaris this happens automatically; with BSD you usually need some tweaking.

But be warned:
this is only true without dedup. Never enable dedup unless you understand its RAM needs and have a good reason to.
In general, disable dedup and enable compression (the best option is LZ4 on Illumos-based systems).
 

agent0

New Member
May 28, 2013
15
0
1
I was wanting to play around with ZFS on an older box I have lying around; I think the CPU is either a Core 2 Duo 1.6 or 2.6. I would be hooking 7 to 8 2TB drives to an IBM M1015 controller. I can only get 8 gigs of RAM max in this system; hopefully that is enough for decent performance. The box will be used to serve large media files and your normal office documents. With that, should I add a solid state drive for read caching? I am still learning the basics of ZFS. Also, most likely I would be going with OI and napp-it.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
You should use SSD where it is relevant.

Honestly, SSD caching is good stuff, but at the end of the day SSDs are so cheap you can just afford to put everything on them. I noticed HP SAS 400GB SSDs were breaking the $400 mark a while ago (pulls still under warranty, I checked). That is cheap!

Large media files are usually skipped by caching algorithms! So if you want those to saturate your 10GbE or InfiniBand you will need to move them to SSD anyway :)

Also, SSD + slow drives is not as good as fast 15K SAS drives! Many folks think it is!

In the end I went all SSD and it is simple.

Now if ZFS could do tiering to avoid wear (journal to spinning drives) and write caching that uses up the SSDs more slowly, that would be awesome!
 

brutalizer

Member
Jun 16, 2013
54
11
8
You need 1GB of RAM for each TB of disk space only if you are doing dedupe. If you are not doing dedupe, then 2GB of RAM is sufficient. With that little you will not have any disk cache to speak of, so you will get disk speed, which is slow compared to a disk cache in RAM. ZFS has a very good disk cache, which is why everyone recommends lots of RAM.
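
Where rules of thumb like that come from, roughly (a Python sketch; the ~320 bytes per dedup table entry and the block sizes are commonly quoted approximations, not exact figures for every pool):

# Rough RAM needed to keep the ZFS dedup table (DDT) in memory for 1 TiB of unique data.
TIB = 1024**4
ddt_entry_bytes = 320    # commonly quoted approximate in-core size of one DDT entry

for block_kib in (128, 64, 8):
    blocks = TIB // (block_kib * 1024)
    ddt_gib = blocks * ddt_entry_bytes / 1024**3
    print(f"avg block {block_kib:>3} KiB: ~{blocks:,} blocks -> ~{ddt_gib:.1f} GiB of DDT per TiB")

# Large media (128 KiB records): ~2.5 GiB per TiB.  Small VM-style blocks (8 KiB): ~40 GiB per TiB.
# That spread is why per-TB RAM rules of thumb only apply to dedup, and vary a lot with workload.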
 

agent0

New Member
May 28, 2013
15
0
1
Thanks for all the info. I will be getting this together over the next couple of days. My main file server still runs Windows Server 2012; this system will be used for iSCSI and NFS, mainly for my VMware lab. The box is nothing special: it will have 4 gigs of RAM, Intel PT1000 NICs, an IBM M1015, a Core 2 Duo 2.6 and three 500 gig Seagate Constellation ES drives. I am debating adding a fourth drive. So as you can see, as of right now I'm talking about less than a TB of usable space. Again, my main focus as I stated is a storage box for Windows Clustering and ESXi.
 

Aluminum

Active Member
Sep 7, 2012
431
46
28
Thanks for all the info. I will be getting this together over the next couple of days. My main file server still runs Windows Server 2012; this system will be used for iSCSI and NFS, mainly for my VMware lab. The box is nothing special: it will have 4 gigs of RAM, Intel PT1000 NICs, an IBM M1015, a Core 2 Duo 2.6 and three 500 gig Seagate Constellation ES drives. I am debating adding a fourth drive. So as you can see, as of right now I'm talking about less than a TB of usable space. Again, my main focus as I stated is a storage box for Windows Clustering and ESXi.
Add a drive and do "raid10" (striped mirrors) instead of "raid5" aka raidz1; your VMware lab will thank you for it.
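
Roughly why (a Python sketch; the per-disk IOPS figure is an assumed typical 7200rpm value, not a measurement):

# Rough comparison of 4 disks as striped mirrors ("raid10") vs raidz1 for random VM I/O.
# A raidz1 vdev delivers roughly one disk's worth of random IOPS; each mirror vdev
# adds another disk's worth for writes, and both sides of a mirror can serve reads.
disk_iops = 80       # assumed random IOPS of a single 7200rpm drive
disk_tb = 0.5        # 500 GB drives
disks = 4

raidz1  = {"usable_tb": (disks - 1) * disk_tb, "write_iops": disk_iops,     "read_iops": disk_iops}
mirrors = {"usable_tb": (disks / 2) * disk_tb, "write_iops": 2 * disk_iops, "read_iops": 4 * disk_iops}

print("raidz1 :", raidz1)
print("mirrors:", mirrors)
# You give up ~0.5 TB of usable space and gain several times the random IOPS --
# exactly the trade a VMware lab wants.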
 

gea

Well-Known Member
Dec 31, 2010
3,140
1,182
113
DE
Thanks for all the info. I will be getting this together over the next couple of days. My main file server still runs Windows Server 2012; this system will be used for iSCSI and NFS, mainly for my VMware lab. The box is nothing special: it will have 4 gigs of RAM, Intel PT1000 NICs, an IBM M1015, a Core 2 Duo 2.6 and three 500 gig Seagate Constellation ES drives. I am debating adding a fourth drive. So as you can see, as of right now I'm talking about less than a TB of usable space. Again, my main focus as I stated is a storage box for Windows Clustering and ESXi.
Use RAID 10 with 4 disks, but expect very low write values with default settings over NFS with ESXi, because ESXi requests secure sync writes on NFS, where every write must be confirmed as on disk before the next can happen (this can reduce write performance to as little as 10% of the default/non-sync value).

You have two options:
Disable sync, with the danger of a corrupted VM on a power loss.
You can reduce the danger with a UPS, or

add a really fast ZIL device (an 8GB+ SSD or DRAM disk that must have a supercap).
Best is a fast ZeusRAM, the newest SLC SSDs, or the newest MLC SSDs like the Intel S3700. A cheap Intel 330 with a supercap is not great, but OK.
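
To get a feel for the numbers (a Python sketch; the commit latencies below are assumed round figures, not measurements):

# With sync writes over NFS, every ESXi write stalls until the ZIL commit
# reaches stable storage, so small sync writes run at roughly 1/commit_latency IOPS.
write_kib = 4                               # assumed typical small guest write

cases = {
    "ZIL on the data disks":       8e-3,    # assumed ~8 ms to commit to a spinning disk
    "ZIL on a fast SLOG SSD/DRAM": 0.1e-3,  # assumed ~0.1 ms for a supercap SSD or ZeusRAM-class device
}

for name, latency_s in cases.items():
    iops = 1 / latency_s
    mib_s = iops * write_kib / 1024
    print(f"{name}: ~{iops:,.0f} sync writes/s, ~{mib_s:.1f} MiB/s at {write_kib} KiB each")

# Spinning disks alone: ~125 sync writes/s (~0.5 MiB/s) -- the collapse described above.
# A fast, power-safe log device brings sync write performance back to a usable level.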
 

dswartz

Active Member
Jul 14, 2011
610
79
28
Since I run Veeam replication every hour, I'm willing to live with the risk of disabling sync on my ESXi dataset.