I've been fiddling with my Proxmox host and I'm not really "happy" with the amount of RAM ZFS is using. I was expecting it to eat up much more by default, and I think I remember it doing so before (on other platforms?).
The default setting limits the ARC to 50% of available RAM, while the min size is only 32 MiB. Would there be any benefit to setting it higher, say 16 GB min / 64 GB max? My server has 96 GB, but the LXCs aren't using much RAM.
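If it helps, this is roughly what I had in mind for /etc/modprobe.d/zfs.conf (just a sketch with my guessed limits; values are in bytes):

Code:
# /etc/modprobe.d/zfs.conf -- 16 GiB min / 64 GiB max, my guesses
options zfs zfs_arc_min=17179869184
options zfs zfs_arc_max=68719476736

followed by update-initramfs -u and a reboot (I believe zfs_arc_max can also be changed at runtime via /sys/module/zfs/parameters/zfs_arc_max).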
Also, are there any other tunables you'd suggest?
I recently rebooted the box after changing vm.swappiness to 0.
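For reference, the swappiness change was just this (applied live and persisted in /etc/sysctl.conf):

Code:
# apply immediately
sysctl -w vm.swappiness=0
# persist across reboots
echo "vm.swappiness = 0" >> /etc/sysctl.conf

Anyway, arc_summary output below: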
Code:
root@cat:~# arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
12:26:41    12     1      8     1    8     0    0     0    0    21G   21G
root@cat:~# arc_summary
------------------------------------------------------------------------
ZFS Subsystem Report Wed Jul 19 12:26:55 2017
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 638.55k
Mutex Misses: 106
Evict Skips: 106
ARC Size: 46.44% 21.93 GiB
Target Size: (Adaptive) 46.45% 21.93 GiB
Min Size (Hard Limit): 0.07% 32.00 MiB
Max Size (High Water): 1511:1 47.22 GiB
ARC Size Breakdown:
Recently Used Cache Size: 51.01% 11.19 GiB
Frequently Used Cache Size: 48.99% 10.75 GiB
ARC Hash Breakdown:
Elements Max: 753.60k
Elements Current: 53.66% 404.41k
Collisions: 41.09k
Chain Max: 3
Chains: 4.72k
ARC Total accesses: 10.76m
Cache Hit Ratio: 91.31% 9.82m
Cache Miss Ratio: 8.69% 934.75k
Actual Hit Ratio: 90.97% 9.79m
Data Demand Efficiency: 95.58% 9.53m
Data Prefetch Efficiency: 4.47% 324.53k
CACHE HITS BY CACHE LIST:
Most Recently Used: 12.63% 1.24m
Most Frequently Used: 86.99% 8.54m
Most Recently Used Ghost: 0.95% 93.23k
Most Frequently Used Ghost: 0.01% 1.11k
CACHE HITS BY DATA TYPE:
Demand Data: 92.74% 9.11m
Prefetch Data: 0.15% 14.50k
Demand Metadata: 6.88% 675.84k
Prefetch Metadata: 0.23% 22.66k
CACHE MISSES BY DATA TYPE:
Demand Data: 45.10% 421.53k
Prefetch Data: 33.17% 310.02k
Demand Metadata: 20.47% 191.34k
Prefetch Metadata: 1.27% 11.85k
File-Level Prefetch: (HEALTHY)
DMU Efficiency: 69.46m
Hit Ratio: 98.42% 68.36m
Miss Ratio: 1.58% 1.10m
Colinear: 1.10m
Hit Ratio: 0.03% 322
Miss Ratio: 99.97% 1.10m
Stride: 68.09m
Hit Ratio: 99.98% 68.08m
Miss Ratio: 0.02% 15.05k
DMU Misc:
Reclaim: 1.10m
Successes: 1.91% 21.01k
Failures: 98.09% 1.08m
Streams: 282.92k
+Resets: 0.06% 169
-Resets: 99.94% 282.75k
Bogus: 0
ZFS Tunable:
l2arc_headroom 2
zfs_free_leak_on_eio 0
zfs_free_max_blocks 100000
zfs_read_chunk_size 1048576
metaslab_preload_enabled 1
zfs_dedup_prefetch 0
zfs_txg_history 0
zfs_scrub_delay 4
zfs_vdev_async_read_max_active 3
zfs_read_history 0
zfs_arc_sys_free 0
l2arc_write_max 8388608
zfs_dbuf_state_index 0
metaslab_debug_unload 0
zvol_inhibit_dev 0
zfetch_max_streams 8
zfs_recover 0
metaslab_fragmentation_factor_enabled 1
zfs_sync_pass_rewrite 2
zfs_object_mutex_size 64
zfs_arc_min_prefetch_lifespan 0
zfs_arc_meta_prune 10000
zfs_read_history_hits 0
l2arc_norw 0
zfs_dirty_data_max_percent 10
zfs_arc_meta_min 0
metaslabs_per_vdev 200
zfs_arc_meta_adjust_restarts 4096
zil_slog_limit 1048576
spa_load_verify_maxinflight 10000
spa_load_verify_metadata 1
zfs_send_corrupt_data 0
zfs_delay_min_dirty_percent 60
zfs_vdev_sync_read_max_active 10
zfs_dbgmsg_enable 0
zio_requeue_io_start_cut_in_line 1
l2arc_headroom_boost 200
zfs_zevent_cols 80
spa_config_path /etc/zfs/zpool.cache
zfs_vdev_cache_size 0
zfs_vdev_sync_write_min_active 10
zfs_vdev_scrub_max_active 2
zfs_disable_dup_eviction 0
ignore_hole_birth 1
zvol_major 230
zil_replay_disable 0
zfs_dirty_data_max_max_percent 25
zfs_expire_snapshot 300
zfs_sync_pass_deferred_free 2
spa_asize_inflation 24
zfs_vdev_mirror_switch_us 10000
l2arc_feed_secs 1
zfs_autoimport_disable 1
zfs_arc_p_aggressive_disable 1
zfs_zevent_len_max 192
l2arc_noprefetch 1
zfs_arc_meta_limit 0
zfs_flags 0
zfs_dirty_data_max_max 25352953856
zfs_arc_average_blocksize 8192
zfs_vdev_cache_bshift 16
zfs_vdev_async_read_min_active 1
zfs_arc_num_sublists_per_state 12
zfs_arc_grow_retry 0
l2arc_feed_again 1
zfs_arc_lotsfree_percent 10
zfs_zevent_console 0
zvol_prefetch_bytes 131072
zfs_free_min_time_ms 1000
zio_taskq_batch_pct 75
zfetch_block_cap 256
spa_load_verify_data 1
zfs_dirty_data_max 10141181542
zfs_vdev_async_write_max_active 10
zfs_dbgmsg_maxsize 4194304
zfs_nocacheflush 0
zfetch_array_rd_sz 1048576
zfs_arc_meta_strategy 1
zfs_dirty_data_sync 67108864
zvol_max_discard_blocks 16384
zfs_vdev_async_write_active_max_dirty_percent 60
zfs_arc_p_dampener_disable 1
zfs_txg_timeout 5
metaslab_aliquot 524288
zfs_mdcomp_disable 0
zfs_vdev_sync_read_min_active 10
metaslab_debug_load 0
zfs_vdev_aggregation_limit 131072
l2arc_nocompress 0
metaslab_lba_weighting_enabled 1
zfs_vdev_scheduler noop
zfs_vdev_scrub_min_active 1
zfs_no_scrub_io 0
zfs_vdev_cache_max 16384
zfs_scan_idle 50
zfs_arc_shrink_shift 0
spa_slop_shift 5
zfs_deadman_synctime_ms 1000000
metaslab_bias_enabled 1
zfs_admin_snapshot 0
zfs_no_scrub_prefetch 0
zfs_metaslab_fragmentation_threshold 70
zfs_max_recordsize 1048576
zfs_arc_min 0
zfs_nopwrite_enabled 1
zfs_arc_p_min_shift 0
zfs_mg_fragmentation_threshold 85
l2arc_write_boost 8388608
zfs_prefetch_disable 0
l2arc_feed_min_ms 200
zio_delay_max 30000
zfs_vdev_write_gap_limit 4096
zfs_pd_bytes_max 52428800
zfs_scan_min_time_ms 1000
zfs_resilver_min_time_ms 3000
zfs_delay_scale 500000
zfs_vdev_async_write_active_min_dirty_percent 30
zfs_vdev_sync_write_max_active 10
zfs_mg_noalloc_threshold 0
zfs_deadman_enabled 1
zfs_resilver_delay 2
zfs_arc_max 0
zfs_top_maxinflight 32
zfetch_min_sec_reap 2
zfs_immediate_write_sz 32768
zfs_vdev_async_write_min_active 1
zfs_sync_pass_dont_compress 5
zfs_vdev_read_gap_limit 32768
zfs_vdev_max_active 1000