TrueNAS core special/metadata vdev underutilized?


Railgun

Active Member
Jul 28, 2018
I've just rejigged my NAS: the pool now has 11 vdevs of two-disk HDD mirrors plus a three-wide SSD mirror special vdev, with 2x HDD spares and 1x SSD spare.

I've set the metadata (small block) size to 32KiB on both of my datasets. However, after reloading all the data into the corresponding datasets, I see the following (truncated for brevity):
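For reference, here is a sketch of how that cutoff is set from the CLI. The dataset names below are hypothetical; I'm assuming the "metadata block size" setting in the TrueNAS UI maps to the ZFS special_small_blocks dataset property:

```shell
# Assumption: dataset names TEH/data1 and TEH/data2 are placeholders.
# special_small_blocks sends file blocks at or below this size to the
# special vdev (metadata goes there regardless).
zfs set special_small_blocks=32K TEH/data1
zfs set special_small_blocks=32K TEH/data2

# Verify the property took effect across the pool; note it only
# applies to blocks written after it is set.
zfs get -r special_small_blocks,recordsize TEH
```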

Code:
truenas[~]# zpool list -v

NAME                                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
TEH                                              121T  57.7T  63.2T        -         -     0%    47%  1.00x    ONLINE  /mnt
...
special                                             -      -      -        -         -      -      -      -  -
  mirror-13                                      928G  38.7G   889G        -         -     3%  4.17%      -    ONLINE
    gptid/66aaf24d-3c49-11ee-bc86-000c297d3363   932G      -      -        -         -      -      -      -    ONLINE
    gptid/66acf448-3c49-11ee-bc86-000c297d3363   932G      -      -        -         -      -      -      -    ONLINE
    gptid/66b04e38-3c49-11ee-bc86-000c297d3363   932G      -      -        -         -      -      -      -    ONLINE

Code:
truenas[~]# zdb -Lbbbs -U /data/zfs/zpool.cache TEH
...
Block Size Histogram

  block   psize                lsize                asize
   size   Count   Size   Cum.  Count   Size   Cum.  Count   Size   Cum.
    512:   192K  95.8M  95.8M   192K  95.8M  95.8M      0      0      0
     1K:   259K   299M   395M   259K   299M   395M      0      0      0
     2K:   215K   570M   965M   215K   570M   965M      0      0      0
     4K:  2.01M  8.30G  9.24G   259K  1.42G  2.37G  1.34M  5.37G  5.37G
     8K:  1.98M  22.0G  31.2G   240K  2.64G  5.01G  2.02M  17.6G  23.0G
    16K:  1.17M  25.8G  57.0G   391K  7.86G  12.9G  2.41M  54.6G  77.6G
    32K:  2.57M   117G   174G  2.49M  83.8G  96.7G  2.58M   117G   195G
    64K:  7.04M   652G   826G   334K  28.5G   125G  7.06M   653G   848G
   128K:   150M  19.0T  19.8T   158M  19.8T  19.9T   150M  19.0T  19.8T
   256K:   152M  37.9T  57.7T   154M  38.6T  58.5T   152M  37.9T  57.7T
   512K:      0      0  57.7T      0      0  58.5T      0      0  57.7T
     1M:      0      0  57.7T      0      0  58.5T      0      0  57.7T
     2M:      0      0  57.7T      0      0  58.5T      0      0  57.7T
     4M:      0      0  57.7T      0      0  58.5T      0      0  57.7T
     8M:      0      0  57.7T      0      0  58.5T      0      0  57.7T
    16M:      0      0  57.7T      0      0  58.5T      0      0  57.7T
So either I'm doing something incorrectly here, which is entirely possible/probable, or something else is off: by simple math, that special vdev should hold far more than the 38.7G it reports, since the histogram above shows roughly 195G of cumulative asize in blocks of 32K and smaller.
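To spell out that simple math (per-bucket asize figures are read straight from the zdb histogram above; the assumption that everything at or below the 32K cutoff should land on the special vdev is mine):

```shell
# Sum the asize of the buckets at or below the 32K cutoff, taken from
# the zdb histogram (values in GiB). If special_small_blocks=32K
# applied to all of this data, roughly this much should sit on the
# special vdev, not the 38.7G that zpool list reports.
awk 'BEGIN {
  total = 5.37 + 17.6 + 54.6 + 117    # 4K + 8K + 16K + 32K buckets, GiB
  printf "expected small-block allocation ~= %.0f GiB\n", total
}'
```

That matches the 195G cumulative asize shown in the 32K row of the histogram.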

That said, my assumption was that filling the special vdev is automatic, and that the simple config tweak of setting the block-size cutoff would be all that's required to utilize it.