1.6TB NVMe to use for L2ARC (ZFS)? - have 256GB RAM


james23

Active Member
Nov 18, 2014
I try very hard not to cross-post (i.e., make the same post here as over at the FreeNAS forums), but in this case I feel it's important enough to break my "rule" - there is a different perspective/experience level here on the STH forums, and I really wanted that input on this question. (I will delete this thread if users object to the cross-post, thanks.) </disclaimer></apology>

So I have an extra Intel P3605 1.6TB PCIe NVMe card and a few spare slots, so I'm considering adding it as an L2ARC, but I wanted some feedback on whether I should add an L2ARC at all. (I was all set to do it, and have even tested it on a separate FreeNAS test server I keep, but after reading quite a bit I'm having second thoughts about whether I should run an L2ARC at all.)
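(One back-of-envelope number I keep coming back to, hedged since the exact figure depends on the ZFS version: each L2ARC record keeps a header in RAM, commonly cited at something like 70-100 bytes per record. Filling the whole 1.6TB with 128K records is roughly 12 million records, so on the order of 1GB of ARC spent on headers; filled with 16K records it would be closer to 10GB. Either way that looks small next to 256GB, so header overhead alone doesn't seem like a reason to skip it - unless I'm off on those figures.)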


The server specs/pool types are at the bottom (see the spoiler "Main FN BOX SPECS"); it does have 256GB of fast ECC RAM. My use cases for the files on this FreeNAS box (a physical box, btw):


MY PLAN IS TO SPLIT THE L2ARC (i.e., ~400GB each for 3x of these pools) - please, I would like this discussion to focus mainly on whether I should add an L2ARC at all, not on the wisdom (or lack thereof) of splitting/sharing a single 1.6TB drive as L2ARC. A rough sketch of how I'd carve it up is below, just for reference. Thanks
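Just for completeness, this is roughly what the split would look like from the CLI (FreeNAS normally manages cache devices through the GUI, so treat this only as an illustration; the partition sizes, labels, and which three pools get a slice are placeholders):

Code:
# carve the P3605 into three ~400G slices with GPT labels
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 400G -l l2arc0 nvd0
gpart add -t freebsd-zfs -s 400G -l l2arc1 nvd0
gpart add -t freebsd-zfs -s 400G -l l2arc2 nvd0

# attach one slice as a cache (L2ARC) vdev per pool
zpool add he8x8TBz2 cache gpt/l2arc0
# (repeat for whichever other two pools get a slice)

# an L2ARC can be dropped again later without touching pool data
zpool remove he8x8TBz2 gpt/l2arc0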


pool name="he8x8TBz2": (has split optane slog)
SMB file access during work hours (from 4x windows pcs, here at my home office)
Backups via SMB and NFS - (ie veeam for some vms onsite, some offsite)
maybe 5x low use VMs running via NFS from it.
about 20tb of video files, that a plex VM accesses via SMB share


pool name="ssd3x6x480GBz2": (has split optane slog)
~ 15x VMs running via NFS from it. (all service type VMs, many windows 2012r2)
2x large PRTG VMs store their files here via NFS - (PRTG = snmp network motioning app)
2x heavy disk IO "torrent" VMs running 24/7 (use this ssd pool via NFS as temp/scratch download storage, then move out to other HDD pools via SMB when download completed)


pool name="red4tbENCz1": (has split optane slog)
3x milestone NVRs (cctv) archive their video to it daily (about 75gb per day)
light daily Backups via SMB - (ie some FTP / syncbackpro)
1x low use VMs running via NFS from it.


pool name="he5x8TBz1": (no slog)
ONLY ~ 20tb of video files, that a plex VM accesses via SMB share


Random output of arcstat.py:
Code:
root@freenas:~# arcstat.py
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
03:28:22   46G  2.5G      5  1.8G    3  748M   87  583M    1   182G  181G

 # arc_summary.py
System Memory:
        0.09%   225.63  MiB Active,     5.13%   12.80   GiB Inact
        92.35%  230.43  GiB Wired,      0.00%   0       Bytes Cache
        2.19%   5.46    GiB Free,       0.24%   617.35  MiB Gap
        Real Installed:                         256.00  GiB
        Real Available:                 99.97%  255.93  GiB
        Real Managed:                   97.49%  249.51  GiB
        Logical Total:                          256.00  GiB
        Logical Used:                   92.87%  237.74  GiB
        Logical Free:                   7.13%   18.26   GiB
Kernel Memory:                                  3.77    GiB
        Data:                           98.90%  3.73    GiB
        Text:                           1.10%   42.53   MiB
Kernel Memory Map:                              249.51  GiB
        Size:                           7.81%   19.48   GiB
        Free:                           92.19%  230.03  GiB
                                                                Page:  1
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
        Storage pool Version:                   5000
        Filesystem Version:                     5
        Memory Throttle Count:                  0
ARC Misc:
        Deleted:                                1.64b
        Mutex Misses:                           1.82m
        Evict Skips:                            1.82m
ARC Size:                               73.21%  181.94  GiB
        Target Size: (Adaptive)         73.23%  181.99  GiB
        Min Size (Hard Limit):          12.50%  31.06   GiB
        Max Size (High Water):          8:1     248.51  GiB
ARC Size Breakdown:
        Recently Used Cache Size:       94.96%  172.80  GiB
        Frequently Used Cache Size:     5.04%   9.18    GiB
ARC Hash Breakdown:
        Elements Max:                           7.23m
        Elements Current:               79.66%  5.76m
        Collisions:                             314.86m
        Chain Max:                              6
        Chains:                                 440.55k
                                                                Page:  2
------------------------------------------------------------------------
ARC Total accesses:                                     46.83b
        Cache Hit Ratio:                94.56%  44.28b
        Cache Miss Ratio:               5.44%   2.55b
        Actual Hit Ratio:               94.34%  44.18b
        Data Demand Efficiency:         53.34%  2.62b
        Data Prefetch Efficiency:       12.03%  844.63m
        CACHE HITS BY CACHE LIST:
          Anonymously Used:             0.19%   85.99m
          Most Recently Used:           4.48%   1.98b
          Most Frequently Used:         95.29%  42.19b
          Most Recently Used Ghost:     0.01%   3.38m
          Most Frequently Used Ghost:   0.03%   14.49m
        CACHE HITS BY DATA TYPE:
          Demand Data:                  3.15%   1.40b
          Prefetch Data:                0.23%   101.60m
          Demand Metadata:              96.60%  42.77b
          Prefetch Metadata:            0.02%   8.12m
        CACHE MISSES BY DATA TYPE:
          Demand Data:                  47.93%  1.22b
          Prefetch Data:                29.16%  743.03m
          Demand Metadata:              22.68%  578.00m
          Prefetch Metadata:            0.23%   5.84m
                                                                Page:  3
------------------------------------------------------------------------
                                                                Page:  4
------------------------------------------------------------------------
DMU Prefetch Efficiency:                        7.83b
        Hit Ratio:                      9.43%   738.60m
        Miss Ratio:                     90.57%  7.10b
                                                                Page:  5
------------------------------------------------------------------------
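(If I do end up adding it, I assume I can watch the L2ARC counters the same way, since they live in the same kstat tree; field names per arcstat.py -v on this box:)

Code:
# L2ARC size and hit/miss counters
sysctl kstat.zfs.misc.arcstats | grep -E 'l2_(hits|misses|size|asize)'

# or have arcstat print l2 columns at a 5-second interval
arcstat.py -f time,read,hit%,l2read,l2hit%,l2size 5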
zpool status:
Code:
root@freenas:~ # zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:04 with 0 errors on Sat Sep 14 03:45:04 2019
config:
        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p2  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
errors: No known data errors
  pool: he5x8TBz1
 state: ONLINE
  scan: none requested
config:
        NAME                                            STATE     READ WRITE CKSUM
        he5x8TBz1                                       ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/f9ad6c75-d748-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/fac4063a-d748-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/febf393e-d748-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/02df6e7a-d749-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/044e4834-d749-11e9-bae4-00259084f1c8  ONLINE       0     0     0
errors: No known data errors
  pool: he8x8TBz2
 state: ONLINE
  scan: scrub repaired 0 in 1 days 07:56:31 with 0 errors on Wed Sep 11 12:55:02 2019
config:
        NAME                                            STATE     READ WRITE CKSUM
        he8x8TBz2                                       ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/ab4a8be7-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/ac4d1939-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/ad4d7a28-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/ae4b5c46-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/af54390e-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/b05a4b41-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/b15cf9b3-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/b2675d33-c451-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/e1eba74e-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/e34d7ec3-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/e4a114e3-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/e608021e-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/e727453f-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/eb0401da-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/eed1a7b2-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/f013e8e4-d1bc-11e9-bae4-00259084f1c8  ONLINE       0     0     0
        logs
          gptid/ed098586-c7bc-11e9-975e-00259084f1c8    ONLINE       0     0     0
errors: No known data errors
  pool: hus9x4TBz2
 state: ONLINE
  scan: none requested
config:
        NAME                                            STATE     READ WRITE CKSUM
        hus9x4TBz2                                      ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/9a817388-c456-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/9d201743-c456-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/a077cd9e-c456-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/a800a13f-c456-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/b4bd6736-c456-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/c2588d0b-c456-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/cfea05aa-c456-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/dabb68f5-c456-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
            gptid/e76a9705-c456-11e9-bbf0-00259084f1c8  ONLINE       0     0     0
errors: No known data errors
  pool: red4tbENCz1
 state: ONLINE
  scan: resilvered 487G in 0 days 01:41:38 with 0 errors on Mon Sep  2 01:56:54 2019
config:
        NAME                                                STATE     READ WRITE CKSUM
        red4tbENCz1                                         ONLINE       0     0     0
          raidz1-0                                          ONLINE       0     0     0
            gptid/85a9cf3a-cd40-11e9-975e-00259084f1c8.eli  ONLINE       0     0     0
            gptid/7afcb936-c450-11e9-bbf0-00259084f1c8.eli  ONLINE       0     0     0
            gptid/7ca5439e-c450-11e9-bbf0-00259084f1c8.eli  ONLINE       0     0     0
            gptid/7e4df0e4-c450-11e9-bbf0-00259084f1c8.eli  ONLINE       0     0     0
            gptid/7f8f24d1-c450-11e9-bbf0-00259084f1c8.eli  ONLINE       0     0     0
        logs
          gptid/e82c363e-c7bc-11e9-975e-00259084f1c8        ONLINE       0     0     0
errors: No known data errors
  pool: ssd3x6x480GBz2
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:29:33 with 0 errors on Mon Sep 16 02:03:06 2019
config:
        NAME                                            STATE     READ WRITE CKSUM
        ssd3x6x480GBz2                                  ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/b238bbd8-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/b2e8a1de-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/b382351d-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/b421e71c-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/b4d79d37-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/b580104e-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/b670b4d0-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/b714597f-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/b7d8bca7-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/b88979e7-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/b94e1216-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/b9f0c93b-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/bb0fa8d3-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/bbb3f9cf-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/bc74e749-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/bd35b67b-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/bde00cf9-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
            gptid/be9f1fbc-d82e-11e9-bae4-00259084f1c8  ONLINE       0     0     0
        logs
          gptid/f166ac78-c7bc-11e9-975e-00259084f1c8    ONLINE       0     0     0
errors: No known data errors
root@freenas:~ #

Some stats on the P3605:

(I know diskinfo -wS is a write/SLOG-relevant test, so not really relevant for an L2ARC drive, but I figured I would post it anyway; a more read-oriented sketch follows the output.)

Code:
root@freenas:~ # diskinfo -wS /dev/nvd0
/dev/nvd0
        512             # sectorsize
        1600321314816   # mediasize in bytes (1.5T)
        3125627568      # mediasize in sectors
        131072          # stripesize
        0               # stripeoffset
        INTEL SSDPEDME016T4S    # Disk descr.
        CVMD4414004X1P6KGN      # Disk ident.
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM

Synchronous random writes:
         0.5 kbytes:     22.4 usec/IO =     21.8 Mbytes/s
           1 kbytes:     22.1 usec/IO =     44.3 Mbytes/s
           2 kbytes:     20.4 usec/IO =     95.9 Mbytes/s
           4 kbytes:     15.4 usec/IO =    253.8 Mbytes/s
           8 kbytes:     20.5 usec/IO =    380.7 Mbytes/s
          16 kbytes:     26.9 usec/IO =    581.8 Mbytes/s
          32 kbytes:     36.4 usec/IO =    858.3 Mbytes/s
          64 kbytes:     49.5 usec/IO =   1263.3 Mbytes/s
         128 kbytes:     94.9 usec/IO =   1316.8 Mbytes/s
         256 kbytes:    168.1 usec/IO =   1487.0 Mbytes/s
         512 kbytes:    316.9 usec/IO =   1577.7 Mbytes/s
        1024 kbytes:    648.5 usec/IO =   1542.0 Mbytes/s
        2048 kbytes:   1288.0 usec/IO =   1552.8 Mbytes/s
        4096 kbytes:   2586.7 usec/IO =   1546.4 Mbytes/s
        8192 kbytes:   5110.6 usec/IO =   1565.4 Mbytes/s
root@freenas:~ #
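Since -wS measures synchronous writes, something read-oriented is probably more representative of L2ARC duty. A rough sketch of what I'd run instead (assuming fio is installed from pkg; --readonly so it cannot write to the device):

Code:
# 4k random reads at queue depth 32 for 60s, straight off the raw device
fio --name=randread --readonly --filename=/dev/nvd0 \
    --rw=randread --bs=4k --iodepth=32 --ioengine=posixaio \
    --runtime=60 --time_based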


root@freenas:~ # smartctl -a /dev/nvme0
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Number:                       INTEL SSDPEDME016T4S
Serial Number:                      CVMD4414004X1P6KGN
Firmware Version:                   8DV1RA13
PCI Vendor ID:                      0x8086
PCI Vendor Subsystem ID:            0x108e
IEEE OUI Identifier:                0x5cd2e4
Controller ID:                      0
Number of Namespaces:               1
Namespace 1 Size/Capacity:          1,600,321,314,816 [1.60 TB]
Namespace 1 Formatted LBA Size:     512
Local Time is:                      Sat Sep 21 16:47:45 2019 CDT
Firmware Updates (0x02):            1 Slot
Optional Admin Commands (0x0006):   Format Frmw_DL
Optional NVM Commands (0x0006):     Wr_Unc DS_Mngmt
Maximum Data Transfer Size:         32 Pages
Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +    25.00W       -        -    0  0  0  0        0       0
Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         2
 1 -     512       8         2
 2 -     512      16         2
 3 -    4096       0         0
 4 -    4096       8         0
 5 -    4096      64         0
 6 -    4096     128         0
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
SMART/Health Information (NVMe Log 0x02, NSID 0xffffffff)
Critical Warning:                   0x00
Temperature:                        30 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    1%
Data Units Read:                    2,757,905 [1.41 TB]
Data Units Written:                 1,053,105 [539 GB]
Host Read Commands:                 6,052,800,385
Host Write Commands:                8,210,809,402
Controller Busy Time:               0
Power Cycles:                       388
Power On Hours:                     19,775
Unsafe Shutdowns:                   1
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Error Information (NVMe Log 0x01, max 64 entries)
No Errors Logged
Main FN BOX SPECS - Main system FINAL (still testing/building as of Aug 2019)
OS: FreeNAS-11.2-U5 (stable) (boot volume is a mirror of 60GB Intel 520 SSDs)
MB: X9DR3-LN4F+ rev 1.20A
CPU: 2x E5-2637 v2 (4c/3.5GHz) / 2x E5-2640 v2
RAM: 8x 32GB (256GB @ 1866MHz) Samsung M386B4G70DM0-CMA4 (ECC + on MB's QVL list)
Case: 4U 24-bay (cse-825)
PSUs: 2x 920SQ (Supermicro)
HBAs: 1x LSI 9207-8i / 2x LSI 9207-8e (all at FW 20)
10G NIC: Chelsio T520-CR (2x 10G SFP+)
NVMe: 280GB Optane 900P (as a speed-test pool or ZIL for now)


DISKS:
MANYx 8TB HGST "He8" SAS 7200rpm (HUH728080AL5200)

21x 480gb HGST SSD (HUSMR1650ASS204)
6x 200gb HGST SSD (HUSMM8020ASS201)
9x 4tb HGST SAS 7200rpm (HUS724040ALE640)
5x 4tb WD RED SATA 5400rpm
23x 3tb HGST SAS 7200rpm (HUS724030ALS640) (might not use these)


POOLs:
* 16x 8TB - 2x vDEVs of 8 disks in Z2
(on main chassis bays via LSI 9207-8i to BPN-SAS2-846el1, name="he8x8TBz2")
* 5x 8TB - single vDEV of 5 disks in Z1
(on main chassis bays via LSI 9207-8i to BPN-SAS2-846el1, name="he5x8TBz1")
* 18x 480GB SSD - 3x vDEVs of 6 SSDs in Z2
(on 2U "disk shelf 2" via LSI 9207-8e to BPN-SAS-216el1, name="ssd3x6x480GBz2")
* 9x 4TB - single vDEV of 9 disks in Z2
(on 3U "disk shelf 1" via LSI 9207-8e to BPN-SAS2-836el1, name="hus9x4TBz2")
* 5x 4TB - single encrypted vDEV of 5 disks in Z1
(on main chassis bays via LSI 9207-8i to BPN-SAS2-846el1, name="red4tbENCz1")

Disk Shelf 1: xx
Case: 3u 16bay (Supermicro)
PSUs: 2x 920sq (SM)
Expander/BP: BPN-SAS2-836el1
Other: no MB, just an on/off switch and a Noctua fan dial/knob
DISKS: 15x 3tb HGST SAS 7200rpm (HUS724030ALS640)


Disk Shelf 2: SSDs (ie pool "ssd3x6x480GBz2" disks are here)
Case: 2u 24bay 2.5" bays (Supermicro)
PSUs: 2x 920sq (SM)
Expander/BP: BPN-SAS-216el1
Other: no MB, just an on/off switch and a Noctua fan dial/knob
DISKS: 19x 480gb HGST SAS3 SSDs (HUSMR1650ASS204)

Disk Shelf 3: xx
Case: 4u 24bay (Supermicro)
PSUs: 2x 920sq (SM)
Expander/BP: BPN-SAS2-846el1
Other: no MB, just an on/off switch and a Noctua fan dial/knob
DISKS: 15x 3tb HGST SAS 7200rpm (HUS724030ALS640)


Other:
10G SW: Ubiquiti ES-16-XG (not a UniFi switch)
1g SW: D-Link DGS-1510 (24x 1g eth + 4x 10g SFP)
ePDU: 3x Eaton G3 EMA115-10 (per outlet power monitoring / switching)
UPS: 3x APC SMT-1500 (w AP9631 Network MGMT cards, FN connected/working)
Rack1: 42U StarTech Open Frame 4-Post - Adjustable Depth
Rack2: 25U StarTech Open Frame 4-Post - Adjustable Depth - on "furniture" sliders so it can slide around
 

james23

(Does my arcstats data show that I really won't benefit from adding a large NVMe L2ARC?)

As just one specific example, I notice that sometimes when I run a find command like:
find /mnt/ -type d -iname "*7.71*"
(or -type f for files)

sometimes it will take quite a while to complete (10+ minutes - understandable, as I have tons of files/folders), but other times (often after I have run the command a few times) the search completes fairly quickly (~1-3 minutes). I attribute the fast searches to a lot of that metadata happening to be in my RAM/ARC. So, as just one (very specific) use case for a large NVMe L2ARC, I was hoping something like this would be more likely to stay in the ARC/L2ARC (and thus give faster searches?).
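If metadata-heavy walks like that are the main thing I want to speed up, my understanding is that the standard secondarycache dataset property can steer the L2ARC toward metadata only (it only matters once a cache device actually exists, and whether it would help here is exactly what I'm asking; the pool name below is just the example from above):

Code:
# cache only metadata (dirents, indirect blocks) from this pool in L2ARC
zfs set secondarycache=metadata he8x8TBz2
# verify; primarycache (the RAM/ARC side) stays at the default "all"
zfs get primarycache,secondarycache he8x8TBz2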

That is just one specific example; any feedback is appreciated. Thanks!