Best way to increase zfs read+write speeds


macki

New Member
Aug 30, 2023
2
0
3
Hi,
Previously I had a 1.5TB server made up of all SSDs (2x512GB, 2x256GB) and it was pretty fast; I was only ever limited by the network speed.

Now I have a 20TB server made up of 4x5TB drives (1 redundant), with a 512GB SSD as L2ARC, one of the 256GB drives as the boot drive, and 189GB of DDR4 RAM.

Even with the 512GB drive as L2ARC, my read speeds are only the speed of the hard drives when copying a file for the first time after it has been downloaded from qBittorrent.

Is there a way to tell ZFS to cache a folder, or to cache the most recently added files in a folder?
 

unwind-protect

Active Member
Mar 7, 2016
386
132
43
Boston
You can always bring any file you want into RAM by doing `wc myfile`.

More advanced would be to lock it into memory using the mlock(2) system call.
 

reasonsandreasons

Active Member
May 16, 2022
100
68
28
Reconfiguring the main pool as mirrors would cut your usable storage space by a third but would meaningfully increase your speed.

A special metadata device (or a Fusion Pool in TrueNAS language) might increase performance in some domains (metadata and small file reads). That might be more useful than the L2ARC in your application. If you go that route make sure the special device is redundant as it'll become a load-bearing part of your pool.
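If you go the special vdev route, the setup can be sketched roughly like this (the pool name `tank` and the device names are hypothetical placeholders; check `man zpool-add` and `man zfsprops` on your system before running anything):

```shell
# Hypothetical pool/device names -- adjust to your setup.
# Add a MIRRORED special vdev: it holds pool metadata, so losing it loses the pool.
zpool add tank special mirror /dev/sdx /dev/sdy
# Optionally route small file blocks (here, those <= 64K) to the special vdev too:
zfs set special_small_blocks=64K tank
```

Note that the special vdev only receives metadata and small blocks written *after* it is added; existing data stays where it is.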

My impression is that ARC is a bit fickle--it's hard to get it to cache the things you want as it's generally going to try to optimize itself. If you really need fast storage consider a scratch pool of SSDs, perhaps as a replacement for the L2ARC.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,589
2,015
113
RAIDZ1 on spinning disks vs. an all-SSD pool - of course it's much slower; you don't have much to work with only having 4 spinning drives... VERY high % chance that the SSD L2ARC is doing nothing for you right now.

You're just not going to get a performant system using 4 spinning disks with parity... nothing really more to say? Even mirrored isn't going to be SSD speed.
 
  • Like
Reactions: MrGuvernment

macki

New Member
Aug 30, 2023
2
0
3
You can always bring any file you want into RAM by doing `wc myfile`.

More advanced would be to lock it into memory using the mlock(2) system call.
Thanks, using wc worked. Any chance you know of a multithreaded alternative to wc? It takes a while for large files, but maxes out only 1 core.
 

gea

Well-Known Member
Dec 31, 2010
3,068
1,134
113
DE
With 189GB RAM, all cacheable reads are already in RAM (ZFS caches blocks based on access patterns, mostly small random I/O and metadata, not whole files). I would also expect L2ARC usage near zero. The ZFS write cache is always and only RAM (around 10% of RAM, max 4GB by default), and the read cache is always RAM, which can be extended by a slower but persistent L2ARC SSD.
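Whether the L2ARC is really sitting idle is easy to verify. On Linux, OpenZFS exposes its cache counters in `/proc`; on other platforms the `arcstat` or `arc_summary` tools report the same numbers. A minimal check, as a sketch:

```shell
# Print ARC/L2ARC hit and miss counters (Linux/OpenZFS only).
# Lots of "hits" with near-zero "l2_hits" means RAM serves reads and the L2ARC does little.
f=/proc/spl/kstat/zfs/arcstats
if [ -r "$f" ]; then
    awk '$1 ~ /^(hits|misses|l2_hits|l2_misses)$/ {print $1, $3}' "$f"
else
    echo "no OpenZFS arcstats on this system"
fi
```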

This means that there are mainly two options to increase performance:

1. The pool must become faster (mirrors instead of RAID-Z, faster disks/SSDs)
2. Use a tiering-like method such as a special vdev mirror (SSD/NVMe). In this case you can force all (new) data of a filesystem onto the fast special vdev instead of the slower pool by choosing a recordsize at or below the special vdev small-block threshold.

A small improvement can come from different recordsize settings, e.g. 512K-1M for large data and 16-64K for databases, zvols, or VMs.
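The recordsize tuning and the tiering trick from option 2 can be sketched as follows (pool and dataset names are hypothetical; `special_small_blocks` routes blocks at or below the threshold to the special vdev, and recordsize changes only affect newly written data):

```shell
# Hypothetical dataset names -- adjust to your setup.
# Per-workload recordsize tuning:
zfs set recordsize=1M tank/media     # large sequential files
zfs set recordsize=16K tank/vms      # databases, zvols, VMs
# Tiering trick: with a special vdev in the pool, a recordsize at or below
# special_small_blocks sends every new block of the dataset to the fast vdev.
zfs set special_small_blocks=128K tank/fast
zfs set recordsize=128K tank/fast
```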
 
  • Like
Reactions: reasonsandreasons

unwind-protect

Active Member
Mar 7, 2016
386
132
43
Boston
Thanks, using wc worked. Any chance you know of a multithreaded alternative to wc? It takes a while for large files, but maxes out only 1 core.
If you get 100% of one CPU out of
`wc myfile`
then try
`wc -c < myfile`
which will reduce CPU usage.

Multithreading won't help since it is then limited by your storage speed. Unless you have more than one array.
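One caveat worth adding: on a regular file, GNU `wc -c` typically just asks the filesystem for the size (via `fstat`/`lseek`) without reading any data, so it may not warm the cache at all. A read that actually streams every byte with minimal CPU can be sketched with `dd` or `cat` (the file name here is just a demo placeholder):

```shell
# Create a small sample file for the demo; in practice use the file you want cached.
dd if=/dev/zero of=demofile bs=1M count=8 2>/dev/null
# These actually read every byte, pulling the file into the ARC/page cache:
dd if=demofile of=/dev/null bs=1M 2>/dev/null
cat demofile > /dev/null
rm -f demofile
```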