
Proxmox ZFS RAM reservation/config

Discussion in 'Linux Admins, Storage and Virtualization' started by kroem, Jul 19, 2017.

  1. kroem

    kroem Active Member

    Fiddling with my Proxmox host and I'm not really "happy" with the amount of RAM ZFS is using. I was hoping it would eat up much more by default - and I think I remember it did before (on other platforms?).

    The default setting limits ZFS to 50% of the available RAM, but the min size is something like 32MB. Would there be any benefit from setting it higher, say 16GB min - 64GB max? My server has 96GB and the LXCs are not using much RAM.
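    If I went that route, I assume it would look roughly like this in /etc/modprobe.d/zfs.conf (just a sketch - values are in bytes, 16 GiB min / 64 GiB max, and I assume the module only picks them up on reload/reboot):

    Code:
    # hypothetical /etc/modprobe.d/zfs.conf - 16 GiB min, 64 GiB max (bytes)
    options zfs zfs_arc_min=17179869184 zfs_arc_max=68719476736
    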

    Also, any other suggested tunables I could add?

    I recently rebooted the box since I changed vm.swappiness to 0; arcstat and arc_summary output below:

    Code:
    root@cat:~# arcstat
        time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c  
    12:26:41    12     1      8     1    8     0    0     0    0    21G   21G  
    root@cat:~# arc_summary
    
    ------------------------------------------------------------------------
    ZFS Subsystem Report                            Wed Jul 19 12:26:55 2017
    ARC Summary: (HEALTHY)
            Memory Throttle Count:                  0
    
    ARC Misc:
            Deleted:                                638.55k
            Mutex Misses:                           106
            Evict Skips:                            106
    
    ARC Size:                               46.44%  21.93   GiB
            Target Size: (Adaptive)         46.45%  21.93   GiB
            Min Size (Hard Limit):          0.07%   32.00   MiB
            Max Size (High Water):          1511:1  47.22   GiB
    
    ARC Size Breakdown:
            Recently Used Cache Size:       51.01%  11.19   GiB
            Frequently Used Cache Size:     48.99%  10.75   GiB
    
    ARC Hash Breakdown:
            Elements Max:                           753.60k
            Elements Current:               53.66%  404.41k
            Collisions:                             41.09k
            Chain Max:                              3
            Chains:                                 4.72k
    
    ARC Total accesses:                                     10.76m
            Cache Hit Ratio:                91.31%  9.82m
            Cache Miss Ratio:               8.69%   934.75k
            Actual Hit Ratio:               90.97%  9.79m
    
            Data Demand Efficiency:         95.58%  9.53m
            Data Prefetch Efficiency:       4.47%   324.53k
    
            CACHE HITS BY CACHE LIST:
              Most Recently Used:           12.63%  1.24m
              Most Frequently Used:         86.99%  8.54m
              Most Recently Used Ghost:     0.95%   93.23k
              Most Frequently Used Ghost:   0.01%   1.11k
    
            CACHE HITS BY DATA TYPE:
              Demand Data:                  92.74%  9.11m
              Prefetch Data:                0.15%   14.50k
              Demand Metadata:              6.88%   675.84k
              Prefetch Metadata:            0.23%   22.66k
    
            CACHE MISSES BY DATA TYPE:
              Demand Data:                  45.10%  421.53k
              Prefetch Data:                33.17%  310.02k
              Demand Metadata:              20.47%  191.34k
              Prefetch Metadata:            1.27%   11.85k
    
    
    File-Level Prefetch: (HEALTHY)
    DMU Efficiency:                                 69.46m
            Hit Ratio:                      98.42%  68.36m
            Miss Ratio:                     1.58%   1.10m
    
            Colinear:                               1.10m
              Hit Ratio:                    0.03%   322
              Miss Ratio:                   99.97%  1.10m
    
            Stride:                                 68.09m
              Hit Ratio:                    99.98%  68.08m
              Miss Ratio:                   0.02%   15.05k
    
    DMU Misc:
            Reclaim:                                1.10m
              Successes:                    1.91%   21.01k
              Failures:                     98.09%  1.08m
    
            Streams:                                282.92k
              +Resets:                      0.06%   169
              -Resets:                      99.94%  282.75k
              Bogus:                                0
    
    
    ZFS Tunable:
            l2arc_headroom                                    2
            zfs_free_leak_on_eio                              0
            zfs_free_max_blocks                               100000
            zfs_read_chunk_size                               1048576
            metaslab_preload_enabled                          1
            zfs_dedup_prefetch                                0
            zfs_txg_history                                   0
            zfs_scrub_delay                                   4
            zfs_vdev_async_read_max_active                    3
            zfs_read_history                                  0
            zfs_arc_sys_free                                  0
            l2arc_write_max                                   8388608
            zfs_dbuf_state_index                              0
            metaslab_debug_unload                             0
            zvol_inhibit_dev                                  0
            zfetch_max_streams                                8
            zfs_recover                                       0
            metaslab_fragmentation_factor_enabled             1
            zfs_sync_pass_rewrite                             2
            zfs_object_mutex_size                             64
            zfs_arc_min_prefetch_lifespan                     0
            zfs_arc_meta_prune                                10000
            zfs_read_history_hits                             0
            l2arc_norw                                        0
            zfs_dirty_data_max_percent                        10
            zfs_arc_meta_min                                  0
            metaslabs_per_vdev                                200
            zfs_arc_meta_adjust_restarts                      4096
            zil_slog_limit                                    1048576
            spa_load_verify_maxinflight                       10000
            spa_load_verify_metadata                          1
            zfs_send_corrupt_data                             0
            zfs_delay_min_dirty_percent                       60
            zfs_vdev_sync_read_max_active                     10
            zfs_dbgmsg_enable                                 0
            zio_requeue_io_start_cut_in_line                  1
            l2arc_headroom_boost                              200
            zfs_zevent_cols                                   80
            spa_config_path                                   /etc/zfs/zpool.cache
            zfs_vdev_cache_size                               0
            zfs_vdev_sync_write_min_active                    10
            zfs_vdev_scrub_max_active                         2
            zfs_disable_dup_eviction                          0
            ignore_hole_birth                                 1
            zvol_major                                        230
            zil_replay_disable                                0
            zfs_dirty_data_max_max_percent                    25
            zfs_expire_snapshot                               300
            zfs_sync_pass_deferred_free                       2
            spa_asize_inflation                               24
            zfs_vdev_mirror_switch_us                         10000
            l2arc_feed_secs                                   1
            zfs_autoimport_disable                            1
            zfs_arc_p_aggressive_disable                      1
            zfs_zevent_len_max                                192
            l2arc_noprefetch                                  1
            zfs_arc_meta_limit                                0
            zfs_flags                                         0
            zfs_dirty_data_max_max                            25352953856
            zfs_arc_average_blocksize                         8192
            zfs_vdev_cache_bshift                             16
            zfs_vdev_async_read_min_active                    1
            zfs_arc_num_sublists_per_state                    12
            zfs_arc_grow_retry                                0
            l2arc_feed_again                                  1
            zfs_arc_lotsfree_percent                          10
            zfs_zevent_console                                0
            zvol_prefetch_bytes                               131072
            zfs_free_min_time_ms                              1000
            zio_taskq_batch_pct                               75
            zfetch_block_cap                                  256
            spa_load_verify_data                              1
            zfs_dirty_data_max                                10141181542
            zfs_vdev_async_write_max_active                   10
            zfs_dbgmsg_maxsize                                4194304
            zfs_nocacheflush                                  0
            zfetch_array_rd_sz                                1048576
            zfs_arc_meta_strategy                             1
            zfs_dirty_data_sync                               67108864
            zvol_max_discard_blocks                           16384
            zfs_vdev_async_write_active_max_dirty_percent     60
            zfs_arc_p_dampener_disable                        1
            zfs_txg_timeout                                   5
            metaslab_aliquot                                  524288
            zfs_mdcomp_disable                                0
            zfs_vdev_sync_read_min_active                     10
            metaslab_debug_load                               0
            zfs_vdev_aggregation_limit                        131072
            l2arc_nocompress                                  0
            metaslab_lba_weighting_enabled                    1
            zfs_vdev_scheduler                                noop
            zfs_vdev_scrub_min_active                         1
            zfs_no_scrub_io                                   0
            zfs_vdev_cache_max                                16384
            zfs_scan_idle                                     50
            zfs_arc_shrink_shift                              0
            spa_slop_shift                                    5
            zfs_deadman_synctime_ms                           1000000
            metaslab_bias_enabled                             1
            zfs_admin_snapshot                                0
            zfs_no_scrub_prefetch                             0
            zfs_metaslab_fragmentation_threshold              70
            zfs_max_recordsize                                1048576
            zfs_arc_min                                       0
            zfs_nopwrite_enabled                              1
            zfs_arc_p_min_shift                               0
            zfs_mg_fragmentation_threshold                    85
            l2arc_write_boost                                 8388608
            zfs_prefetch_disable                              0
            l2arc_feed_min_ms                                 200
            zio_delay_max                                     30000
            zfs_vdev_write_gap_limit                          4096
            zfs_pd_bytes_max                                  52428800
            zfs_scan_min_time_ms                              1000
            zfs_resilver_min_time_ms                          3000
            zfs_delay_scale                                   500000
            zfs_vdev_async_write_active_min_dirty_percent     30
            zfs_vdev_sync_write_max_active                    10
            zfs_mg_noalloc_threshold                          0
            zfs_deadman_enabled                               1
            zfs_resilver_delay                                2
            zfs_arc_max                                       0
            zfs_top_maxinflight                               32
            zfetch_min_sec_reap                               2
            zfs_immediate_write_sz                            32768
            zfs_vdev_async_write_min_active                   1
            zfs_sync_pass_dont_compress                       5
            zfs_vdev_read_gap_limit                           32768
            zfs_vdev_max_active                               1000
    
     
    #1
  2. ttabbal

    ttabbal Active Member

    I haven't felt the need to set it to anything in particular. It's pretty good at managing itself; it will grow to whatever it thinks it needs over time, though you can set a max. And there is other caching going on in the Linux kernel, along with other processes using RAM. Linux doesn't let RAM sit around unused - it's always doing something with it, even if only holding large cache areas that can be freed quickly should the system need them.

    On small-memory systems I had to set a max to keep the ARC from bogging down the system, but that was also partly due to the ARC not interacting with the kernel's memory management very well. I think that's been improved in more recent releases, as my backup box (also running Proxmox) shows no signs of RAM starvation and it's only got 8GB.
     
    #2
  3. _alex

    _alex Active Member

    I think you could adjust the max size if you're comfortable with the ARC going that high.
    Not sure about the 'Target Size', how it's tunable or which parameters affect it.
    It seems to be the sweet spot ZFS tries to hit, probably evicting older entries from the ARC to stay near it.

    'modinfo zfs|grep arc' might give a first hint.

    If I remember right there are some parameters that control how aggressive the ARC / L2ARC is.
    That's probably the thing to look at if you want the ARC filled to its max over time.
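    The current values also show up at runtime under /sys/module/zfs/parameters, and I believe you can poke them there directly (a value of 0 should mean 'use the built-in default'). Something like:

    Code:
    # show the ARC-related parameters and their current values
    grep . /sys/module/zfs/parameters/zfs_arc_*
    # raise the max to 64 GiB (bytes) until the next reboot
    echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max
    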
     
    #3
  4. dlasher

    dlasher New Member

    From a performance standpoint, you want to give the ZFS ARC about as much RAM as you can spare. I generally plan on reserving 50% of the installed system RAM.

    Code:
    root@pmx:~# more /etc/modprobe.d/zfs.conf
    #32G
    #options zfs zfs_arc_max=34359738368
    #64G
    options zfs zfs_arc_max=68719476736
    
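    One caveat: on a root-on-ZFS install (where the zfs module is loaded from the initramfs) I believe the option only takes effect after rebuilding the initramfs and rebooting, roughly:

    Code:
    update-initramfs -u -k all
    reboot
    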
    Here's my arc_summary:
    Code:
    ------------------------------------------------------------------------
    ZFS Subsystem Report                            Wed Sep 13 10:48:00 2017
    ARC Summary: (HEALTHY)
            Memory Throttle Count:                  0
    
    ARC Misc:
            Deleted:                                35.39m
            Mutex Misses:                           917
            Evict Skips:                            917
    
    ARC Size:                               100.00% 64.00   GiB
            Target Size: (Adaptive)         100.00% 64.00   GiB
            Min Size (Hard Limit):          6.25%   4.00    GiB
            Max Size (High Water):          16:1    64.00   GiB
    
    ARC Size Breakdown:
            Recently Used Cache Size:       17.92%  11.47   GiB
            Frequently Used Cache Size:     82.08%  52.53   GiB
    
    ARC Hash Breakdown:
            Elements Max:                           3.52m
            Elements Current:               93.65%  3.30m
            Collisions:                             19.75m
            Chain Max:                              6
            Chains:                                 286.28k
    
    ARC Total accesses:                                     2.92b
            Cache Hit Ratio:                98.11%  2.87b
            Cache Miss Ratio:               1.89%   55.08m
            Actual Hit Ratio:               96.24%  2.81b
    
            Data Demand Efficiency:         99.78%  2.76b
            Data Prefetch Efficiency:       70.53%  70.36m
    
            CACHE HITS BY CACHE LIST:
              Anonymously Used:             1.76%   50.32m
              Most Recently Used:           7.12%   204.05m
              Most Frequently Used:         90.97%  2.61b
              Most Recently Used Ghost:     0.10%   2.76m
              Most Frequently Used Ghost:   0.06%   1.75m
    
            CACHE HITS BY DATA TYPE:
              Demand Data:                  96.16%  2.76b
              Prefetch Data:                1.73%   49.62m
              Demand Metadata:              1.73%   49.54m
              Prefetch Metadata:            0.38%   10.87m
    
            CACHE MISSES BY DATA TYPE:
              Demand Data:                  10.99%  6.05m
              Prefetch Data:                37.64%  20.74m
              Demand Metadata:              48.16%  26.53m
              Prefetch Metadata:            3.21%   1.77m
    
     
    #4
  5. T_Minus

    T_Minus Moderator

    Maybe instead of changing the max allotment you could try changing how ZFS caches?

    That could fill it up more, and maybe provide more performance for your needs.
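    For example, the primarycache / secondarycache dataset properties control what the ARC / L2ARC is allowed to cache per dataset (dataset names below are just placeholders):

    Code:
    # cache both data and metadata for the VM datasets (this is the default)
    zfs set primarycache=all rpool/data
    # only cache metadata for data you don't want competing for ARC space
    zfs set primarycache=metadata rpool/backup
    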
     
    #5