@Rand__ I'd love your take on the bold, italicized remarks below. Nearing the end of this dilemma. I have an interest in "leaving no stone unturned" after it took two blown pools to get here.
Btw, with the newest ESXi hardware version upgrade (v14, I think), FreeNAS U6 complains about the NVMe controller, so don't do that. I haven't looked into it, tbh, since it doesn't seem to impact anything (and going back would mean a rebuild).
So ... where I'm going with this: I've been testing all possible Optane + FreeNAS configs, but on 6.7 (build 10176752, 10/2), to land on a zpool + SLOG config that I can live with until I get the P4800x. [kidding]
Briefly, my findings, and I'll report back in a more digestible / informative manner once complete ...
- 6.7 + 1 Optane + pass thru map trick = no good (no boot)
- 6.7 + 1 Optane + RDM + NVMe Controller = no good (no boot)
- 6.7 + 1 or more Optanes + RDM + SCSI Controller | LSI Logic SAS for RDM (LSI Logic SAS existing controller) = good
- 6.7 + 1 or more Optanes + vDisk + SCSI Controller | VMware Paravirtual for vDisk (LSI Logic SAS existing controller) = no good (boot, but couldn't see disks, which I guess makes sense).
- 6.7 + 1 or more Optanes + vDisk + SCSI Controller | LSI Logic SAS for VDisk (LSI Logic SAS existing controller) = good
So without introducing any further data, would you agree with the following assertions culled from your prior comments:
(1) vDisks are preferred to RDM (but essentially the same), true / false
(2) Using a vDisk + SCSI Controller | LSI Logic SAS = preferred (at least more so than the NVMe Controller, which is moot since that controller doesn't even work in 6.7), true / false
(3) Regarding how to deploy this scheme, which route would you take: (a) or (b) below, or (c) other?
(a) would you slice in ESXi, i.e.
- nvme0 = 20GB vDisk (SLOG1 for HDD pool) + 220GB vDisk for iSCSI (slightly smaller due to ESXi + FreeNAS boot);
- nvme1 = 20GB vDisk (SLOG2 for HDD pool) + 240GB vDisk for iSCSI;
- so I end up with two mirrored SLOGs for the HDD pool, but they are on different devices;
- and I end up with 460GB of striped storage for iSCSI, again across different devices; so
- provided both devices aren't taking a beating at once, and only one or the other is, I believe this is my optimal play. And if neither is getting hammered, it's definitely more performant than not striping;
(b) or would you simply attach 2 vDisks (240GB + 260GB) and slice them in FreeNAS (gpart create / gpart add, etc.) to end up with the same slices?
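For concreteness, option (b) would look roughly like the below from the FreeNAS shell. This is only a sketch: the device names (da1/da2), labels, and pool names (tank, iscsipool) are all placeholders I've made up, not anything from my actual box.

```shell
# Option (b) sketch: two whole vDisks handed to FreeNAS, sliced with gpart.
# da1/da2, tank, iscsipool, and the labels are hypothetical names.
gpart create -s gpt da1
gpart create -s gpt da2

# ~20G slice on each device for the SLOG mirror, remainder for iSCSI
gpart add -t freebsd-zfs -s 20G -l slog0 da1
gpart add -t freebsd-zfs -l iscsi0 da1
gpart add -t freebsd-zfs -s 20G -l slog1 da2
gpart add -t freebsd-zfs -l iscsi1 da2

# attach the mirrored SLOG to the existing HDD pool,
# and stripe the two big slices into an iSCSI-backing pool
zpool add tank log mirror gpt/slog0 gpt/slog1
zpool create iscsipool gpt/iscsi0 gpt/iscsi1
```

Either way the resulting layout should be identical to option (a); the difference is just whether ESXi or ZFS draws the partition lines.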
Hopefully those 3 questions are easy for you, but before I add a fourth, let me throw you for a loop:
#1 above did briefly work (physical pass-through), but the benchmark (not a tell-all, o/c) shows RDM (#3), with its ESXi overhead, outperforming it at larger sizes. How is that possible? (Rhetorical; not adding a 5th question.) And this is a "clean" bench, in that it was super controlled: no other VMs, identical config, etc. The numbers are somehow lying to us, man.
6.7 - current
[using P4800x VIB]
And now to take it a step further into bizarro world: while the above was on 6.7, here is the same command executed on 6.5 with a vDisk + NVMe Controller, suggesting a much more performant config at a 2,140 MB/s peak. This is higher than anything reported in the FreeNAS forum's data-collection thread for ANY physical pass-through device (it must be wrong).
6.5 - prior
[can't recall VIB, but I think it was 1.25]
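For anyone wanting to sanity-check numbers like these, the usual tool in the FreeNAS forum's SLOG thread is diskinfo's sync-write sweep. A rough sketch (da1 is an assumed device name; -w is destructive, so only run this against an empty test device):

```shell
# Identify the device first (da1 is a placeholder; check your own layout)
diskinfo -v da1

# Sync-write benchmark sweep across transfer sizes.
# WARNING: -w writes to the raw device and will destroy data;
# only use on a device with no pool/filesystem on it.
diskinfo -wS da1
```

Running the identical command on both the 6.5 and 6.7 configs is what makes the comparison above apples-to-apples.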
So that brings me to question #4, which is hopefully also easy: which wins here ...
(a) 6.7 using the LSI Logic SAS SCSI Controller, or (b) a reversion to 6.5 to use the NVMe Controller and obtain the stronger benchmarks (which I don't see how can be correct anyway)? I should note that as soon as I'm done testing and integrating your comments, I'm secure-erasing and starting from a clean state (clean install, etc.) with no more tinkering, so the marginal time for a 6.5 install is no trouble.
Thanks very much for your valued feedback. I don't mean to bombard you with questions; I only ask because I think this is a personal interest of yours (my apologies if I'm being a pest).
NB1: Hopefully my thoughts weren't too disjointed and were easy enough to follow
NB2: I've strayed from my normal sarcasm as I'm exhausted and don't want to be more trouble than I already am, but I believe that latter graph warrants your favorite comment on the P3700, eh?