ZFS on Linux - ARC not used


Stril

Member
Hi!

I have a strange problem:
ZoL on Ubuntu 18.04, 128 GB of memory, of which 96 GB are assigned to the ARC. sync=disabled

But:

ARC Size: 4.22% 4.05 GiB
Target Size: (Adaptive) 100.00% 96.00 GiB
Min Size (Hard Limit): 100.00% 96.00 GiB
Max Size (High Water): 1:1 96.00 GiB

options zfs zfs_arc_max=103079215104
options zfs zfs_arc_min=103079215104


--> ARC is not used.
There are about 3,500,000 files on the storage and the pool is only 1% full (1.2 of 140 TB). Do you have any idea how to force the ARC to be used to speed up directory listings?
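One way to check the live counters, and to pre-warm the metadata cache, might look like this (the share path is illustrative; the awk filter just pulls the four relevant fields):

# Actual ARC size vs. target/min/max, in bytes, straight from the kernel counters
awk '$1 == "size" || $1 == "c" || $1 == "c_min" || $1 == "c_max" {print $1, $3}' \
    /proc/spl/kstat/zfs/arcstats

# Pre-warm ARC metadata: printing each mtime forces a stat() per entry
find /zpool1/share -printf '%T@\n' > /dev/null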

Thank you for your help!

Regards,
Stril
 

dswartz

Active Member
What release of ZoL? What are the dataset parameters? (e.g. 'zfs get all XXX | sort')
 

Stril

Member
Hi!

Now the ARC is used - 10 GB/96 GB. My main problem seems to be the listing performance on directories with MANY SMALL files.

The ZoL version is 0.7.5 on Ubuntu 18.04.

ZFS params are mostly default. The only changes are:

zpool1 atime off local
zpool1 compression lz4 local
zpool1 dedup off default

zpool1/share acltype posixacl inherited from zpool1
zpool1/share logbias latency default
zpool1/share sync disabled local
zpool1/share xattr sa inherited from zpool1
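
(For reference, these non-default values would have been set roughly like this:)

zfs set atime=off zpool1
zfs set compression=lz4 zpool1
zfs set acltype=posixacl zpool1
zfs set xattr=sa zpool1
zfs set sync=disabled zpool1/share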

The pool is available through SMB.

Do you have any ideas for optimization? Anything special for prefetch?
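
A quick check whether prefetch is enabled at all on the ZoL side (module parameter; 0 means prefetch is active):

cat /sys/module/zfs/parameters/zfs_prefetch_disable   # 0 = enabled (default), 1 = disabled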

Stril
 

dswartz

Active Member
0.7.5 is pretty old. I'd consider updating; there have been bugs in older releases where the ARC collapses to the minimum. I assume primarycache=all?
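
To check (dataset name taken from earlier in the thread):

zfs get primarycache,secondarycache zpool1/share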
 

Stril

Member
Hi!

Yes, it's primarycache=all.
ZoL 0.7.5 is the latest version provided by the Ubuntu repositories. I will try to build a newer one - or do you know of any repository with newer packages?
 

dswartz

Active Member
I'm running CentOS 7 and have 0.7.12-1. I've found the distro repositories tend to lag badly. In Ubuntu 14 and earlier there was a PPA you could use, but in 15 and later you're stuck with the distro's version, it seems. I guess you could download and build from source if you wanted to. You might want to ask on the ZFS on Linux mailing list - the devs hang out there, so someone might be able to speak to this...
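
A rough outline of a source build on Ubuntu 18.04 - untested, and note that 0.7.x still needs the separate SPL layer built and installed before ZFS itself:

sudo apt install build-essential autoconf automake libtool gawk \
    uuid-dev libblkid-dev zlib1g-dev libssl-dev linux-headers-"$(uname -r)"

# SPL first (it was merged into the main ZFS repo only with 0.8)
git clone https://github.com/zfsonlinux/spl && cd spl && git checkout spl-0.7.12
./autogen.sh && ./configure && make -j"$(nproc)" && sudo make install && cd ..

# Then ZFS itself
git clone https://github.com/zfsonlinux/zfs && cd zfs && git checkout zfs-0.7.12
./autogen.sh && ./configure && make -j"$(nproc)" && sudo make install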
 

Stril

Member
Hi!

I just upgraded to 0.7.12, but performance is still very bad when scanning large directories. A robocopy run from Windows to the filer with VERY few or zero changes takes a long time.

Do you have any idea on how to speed this up?
 

Stril

Member
Hi!

Hard to say for me...
The ARC is used, but not much - though as robocopy only reads timestamps and lists folders, that could be OK.
I just see very bad performance and have no idea where to look.
 

m4r1k

Member
You assume the issue is ZFS, but how about Samba? If you export the same directory content from a non-ZFS filesystem, do you see the same bad performance?

Edit:
My environment is different, but on Solaris-ish systems I have a couple of folders with more than 100k small files each (I'm a photographer), and when exporting those through kernel SMB and NFS, listing performance is fine. From my Mac (which has a terrible SMB client) it takes maybe 30 seconds to show the whole content. Connected via WiFi ...
 

zxv

The more I C, the less I see.
If the issue is robocopy performance, perhaps it would help to separate the Samba and ZFS aspects.
Try testing Samba with a UFS filesystem or, better yet, a ramdisk.

If the issues persist, there are certainly optimizations that can be applied to Samba.
Even if robocopy only reads timestamps, that may be a stat() call regardless, which can be a bottleneck in itself.
If so, here is an option to optimize Samba for "Directories with a Large Number of Files":
Performance Tuning - SambaWiki
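
The relevant recipe from that page, roughly - it trades Windows-style case-insensitive name lookups for speed on shares where huge directory listings dominate:

[share]
    # avoid a case-insensitive scan of the whole directory on every name lookup
    case sensitive = true
    default case = lower
    preserve case = no
    short preserve case = no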
 

Stril

Member
Hi!

I just tried to separate the processes, so I added an EXT4 volume on the (unchanged) SMB server and scanned a directory with about 50,000 files.

SMB with ZFS: 3:16 min
SMB with EXT4: 0:16 min

--> Seems to be ZFS/ZoL-related.

A second scan (the scan time is what robocopy needs when nothing has changed) takes the same time, although everything should already be in the ARC.

Do you have any idea on how to solve this?
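
One more way to narrow it down would be to time the scan server-side, bypassing SMB entirely (directory path illustrative):

# readdir only
time ls -1f /zpool1/share/testdir > /dev/null

# readdir plus a stat() per entry - what a robocopy timestamp scan effectively triggers
time find /zpool1/share/testdir -maxdepth 1 -printf '%T@\n' > /dev/null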
 

gea

Well-Known Member
Try a newer ZoL and/or compare with Solaris 11.4
(or a Solaris fork like OmniOS - not as fast as Solaris with native ZFS, but free)
 

m4r1k

Member
Gea is 100% right. This issue is certainly not present in Solaris forks like OmniOSce. What you can try is another ZoL flavor (CentOS, Fedora), FreeBSD, or OmniOSce as well.
 

zxv

The more I C, the less I see.
Yep. It looks like it would take some significant effort to tune ZoL, whereas OmniOS or Solaris should work well with minimal tuning -- perhaps no tuning at all for a 1 Gb network.
 

Stril

Member
Hi!

@gea:
I already did the tests with ZoL 0.7.12.

What I tried now is FreeNAS (because it was easy to set up):

SMB with ZoL: 3:16 min
SMB with EXT4: 0:16 min
SMB with FreeNAS: 0:19 min

I would prefer ZoL, but as I do not have any idea yet...
 

Stril

Member
F***, I was wrong...

Now I installed FreeNAS on my production hardware, and the performance is as bad as before with ZoL.
I have NO idea what is happening.

It's an enterprise-class Supermicro server with an LSI 9300 SAS 12G HBA and 32 × 10 TB Seagate Exos enterprise SAS disks.

My test FreeNAS system was a VM with a small single-disk pool (and MUCH faster).

Do you have any idea how to debug this?
 

ttabbal

Active Member
What style of pool are we talking about here? Perhaps output of "zpool status"?

I'm surprised that the listing metadata doesn't appear to be cached in any case. That seems like an easy win for the cache. It wouldn't help the first go, but subsequent runs should be great.
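
Watching the hit rate live while a robocopy scan runs would show whether the metadata actually comes out of cache. The arcstat tool ships with ZoL (on 0.7.x it may be installed as arcstat.py):

# per-second ARC reads, misses and hit percentage
arcstat 1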
 

Stril

Member
Hi!

It's quite a simple pool:

16 mirrors, each with 2 drives. I will post the zpool status output later...
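
Schematically, that layout corresponds to a pool built like this (device names illustrative; only the first two of the 16 mirror vdevs shown):

zpool create zpool1 \
    mirror sda sdb \
    mirror sdc sdd
# ... and so on through the 16th mirror pair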

Very strange: the ARC is bigger than the pool data at the moment.
 

gea

Well-Known Member
Very strange: the ARC is bigger than the pool data at the moment.
The ARC generally does not cache whole files or sequential data. It caches metadata and random reads based on a read-last/read-most optimisation, so sequential reads are only improved by metadata caching.

For the L2ARC you can enable read-ahead, which may help a little.

For the rest, real pool IOPS and sequential performance are what matter.

But as said, install Solaris 11.4 (you can download it for free; only commercial use is prohibited) and check performance there with native ZFS and the kernel-based SMB server (do all tests from Windows). Then you have a reference for what is possible with ZFS on your hardware and how much slower another setup is. If it is just as slow, it's the hardware; otherwise it's the OS.
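
The L2ARC read-ahead toggle mentioned above is a module parameter; it would only matter once a cache device is added to the pool:

# allow L2ARC to also cache prefetched (sequential) reads; the default 1 skips them
echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch

# persist across reboots
echo "options zfs l2arc_noprefetch=0" >> /etc/modprobe.d/zfs.conf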
 