ESXi Storage Architecture Question [Intel P3700]

svtkobra7

Active Member
Jan 2, 2017
348
74
28
I have a very simple ESXi AIO server functioning as a firewall (pfSense) and storage server (FreeNAS), which also hosts a half dozen or so VMs, and during the winter months it serves as a space heater, too. :) I hopped on a deal last year and bought 2 Sun F40 Flash Accelerators, passed them through to FreeNAS, and played around with different pool layouts of the 8 drives. I always used that pool exclusively as an iSCSI datastore, but I was never really happy with them.

So ... I upgraded and bought an 800 GB Intel P3700 (and it can't get here fast enough, because the temporary block storage is sitting on spinning disks, and, well, that is kind of s l o w). Likely irrelevant, but my motherboard, a Supermicro X9DRi-LN4F+, doesn't have U.2 / SFF-8639 connectors, so I purchased a StarTech U.2-to-PCIe adapter for 2.5" U.2 NVMe SSDs.

I have a few questions if you would be so kind as to assist ...

1. Is there any advantage to passing the Intel P3700 through to FreeNAS and then presenting it back to ESXi via iSCSI?
I don't believe there is any to speak of, only downside (previously, the reason the Sun F40s were passed through to FreeNAS was so they could be striped and mirrored).

2. Also, it looks like there are both VMware (nvme version 1.2.1.34-1vmw) and Intel VIBs (intel-nvme version 1.2.1.15) for the drive. Is one to be used instead of the other, if so, is one preferred?

3. Anything else I should be aware of as I eagerly await all of those IOPs headed my way?

Probably totally unnecessary, but I used Excel to diagram my old and future storage architectures to present the two options. If you guessed that I don't have an IT background, you would be correct (there is probably a much better tool for creating such diagrams, I'm just not aware of one, and my background being Finance, I could probably still use Excel blind, and quickly).

Thanks in advance for your time. :)

Old Config (Rightmost Column in Scope, Sun F40s have been deprecated)


New Config Option #1 (Rightmost Column in Scope, Intel P3700 = new VM datastore via iSCSI)


New Config Option #2 (Rightmost Column in Scope, Intel P3700 = new VM datastore)

 

Rand__

Well-Known Member
Mar 6, 2014
4,608
918
113
1. Is there any advantage to passing the Intel P3700 through to FreeNAS and then presenting it back to ESXi via iSCSI?
You could partition the drive and use it as SLOG and L2ARC if you pass it through and need more capacity than 800 GB. Otherwise leave it in ESX (unless you have multiple iSCSI clients, then pass it through of course).
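If you do go the pass-through route, the partitioning Rand describes might look something like this from the FreeNAS shell. This is a sketch only: the device name `nvd0`, the partition sizes, and the pool name `Tank1` are assumptions, not a recommendation.

```shell
# Sketch: carve the P3700 into SLOG + L2ARC partitions (device/pool names assumed)
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -a 1m -s 16G -l slog0 nvd0    # SLOG only needs a few GB
gpart add -t freebsd-zfs -a 1m -s 200G -l l2arc0 nvd0  # L2ARC gets a larger slice
zpool add Tank1 log gpt/slog0      # attach the SLOG partition
zpool add Tank1 cache gpt/l2arc0   # attach the L2ARC partition
```

The remaining free space on the drive could then hold a zvol for the iSCSI extent.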

2. Also, it looks like there are both VMware (nvme version 1.2.1.34-1vmw) and Intel VIBs (intel-nvme version 1.2.1.15) for the drive. Is one to be used instead of the other, if so, is one preferred?
Intel's is usually newer but might not be on the HCL yet, if you care about such things.

3. Anything else I should be aware of as I eagerly await all of those IOPs headed my way?
There were issues with the P3x00 series on ESX (especially vSAN) having bad performance, but as a native datastore it was alright when I used it.
 

svtkobra7

Active Member
Jan 2, 2017
348
74
28
@Rand__ : Thanks for your kind (and gentle) reply; I know my questions are elementary at best, but I suppose you learn by asking the stupid questions at the risk of sounding stupid, right?

You could partition the drive and use it as SLOG and L2ARC if you pass it through and need more capacity than 800 GB. Otherwise leave it in ESX (unless you have multiple iSCSI clients, then pass it through of course).
  • My storage needs as a VM datastore = not exceeding 400 GB, and if I were to pass it through to FreeNAS, ESXi would be my only iSCSI client (in reply to your remarks). As such, my interpretation of your reply is to just leave it in ESXi. But you raise an interesting point ...
  • ... as my next upgrade was to look into adding a SLOG / L2ARC. Before you read on, I caution that I'm not too knowledgeable on those topics yet. On the L2ARC point, I know the "official" FreeNAS forum guidance is to max out your RAM before considering an L2ARC, which I've done: all 24 DIMM slots are populated with 8 GB modules. Of that 192 GB total RAM on the host, I currently have 128 GB reserved / locked to the FreeNAS VM. Probably more than needed (if there is such a thing), but I certainly meet the 1 GB RAM : 1 TB HDD rule of thumb (which I know loses relevance with higher-capacity pools; mine is 16 x 6 TB raw). My point with this detail is that I've read that adding an L2ARC without enough RAM can be detrimental to performance; however, I believe I have enough and that prerequisite has been met.
  • If I followed your comment correctly, you are suggesting I can pass it through to FreeNAS and create partitions for the block storage, SLOG, and L2ARC, right? For some reason I thought it was best to have separate physical (not logical) devices for your SLOG and L2ARC, or am I mistaken?
  • Ultimately, I raise the additional query because adding a SLOG and L2ARC is next on my list, and if I can do it all with the drive I just purchased, great. If not, I can purchase another (I nearly did). I'm not sure how the math works out, as I thought for best performance block storage utilization shouldn't exceed 50% of available capacity*, so if that is the case and I pass through to FreeNAS, my 400 GB need = the 800 GB drive capacity right there. * I'm sure the P3700 is nicely over-provisioned and I'm not sure how that factors into the calculation.
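For what it's worth, the RAM cost of an L2ARC can be estimated: each L2ARC record consumes a header in ARC. A quick back-of-envelope sketch (the ~70 bytes-per-record figure is a commonly cited approximation that varies by ZFS version, and the function name is mine):

```python
def l2arc_header_ram_gb(l2arc_gb, recordsize_kb=128, header_bytes=70):
    """Rough RAM consumed by ARC headers for a given L2ARC size."""
    records = l2arc_gb * 1024 * 1024 / recordsize_kb  # number of cached records
    return records * header_bytes / 1024 ** 3

# A 400 GB L2ARC at the default 128 KiB recordsize costs only ~0.2 GB of ARC;
# at a small 4 KiB recordsize (closer to iSCSI zvol workloads) it is ~7 GB.
print(round(l2arc_header_ram_gb(400), 2))                   # ≈ 0.21
print(round(l2arc_header_ram_gb(400, recordsize_kb=4), 2))  # ≈ 6.84
```

Either way, against 128 GB of ARC the header overhead is modest, which supports the "you have enough RAM" reading.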
Intel's is usually newer but might not be on the HCL yet, if you care about such things.
  • So the reason for my question was because I thought I read something about one VIB or the other working better (but prior to posting and looking for that reference I couldn't find it).
  • I'm not sure I follow the comment about it not being on the HCL (but would like to understand).
  • The link previously provided [VMware (nvme version 1.2.1.34-1vmw) and Intel VIBs (intel-nvme version 1.2.1.15)] is to the "VMware Compatibility Guide" for the Intel SSD DC P3700 Series SSDPE2MD800G4 (800 GB, 2.5-inch), i.e. the drive I purchased.
  • The Intel VIBs all list the type as "VMware Inbox, native" and the VMware VIBs all list the type as "Partner Async, native". While I'm not sure what "Inbox, native" and "Async, native" mean, I'm assuming the Intel VIB = the Partner acceptance level and the VMware VIB = the VMware certified acceptance level, but either way, would both be considered on the HCL?
  • If my supposition is correct, I'm definitely not trying to be a smarta$$ ... I just would like to understand.
There were issues with the P3x00 series on ESX (especially vSAN) having bad performance, but as a native datastore it was alright when I used it.
  • Hopefully one day I have a vSAN cluster, but not yet. Thanks for the additional nugget of knowledge.
 

Rand__

Well-Known Member
Mar 6, 2014
4,608
918
113
You can add partitions as SLOG/L2ARC, so that would work; o/c it's not the FreeNAS-recommended way, but they are fairly conservative ;)
With 128 GB RAM allocated I don't think you need an L2ARC, unless you have a lot of files that you access regularly that would benefit from being in RAM.
O/c that raises the question of network speed - are you consuming data from FreeNAS over 10G on the client side? Otherwise 16 drives will saturate 1 Gigabit without problem.
VMs hosted on it would be another topic, but I don't see the need with your current setup. If you don't need more than one client at the moment I'd leave it with ESX - saves the hassle.

HCL is the VMware hardware compatibility list (different ones for ESX/vSAN); that's what you linked. There are also drivers available at the VMware (on HCL) and Intel (maybe not on HCL yet) webpages.
 

svtkobra7

Active Member
Jan 2, 2017
348
74
28
You can add partitions as SLOG/L2ARC, so that would work; o/c it's not the FreeNAS-recommended way, but they are fairly conservative ;)
  • Well, I suppose it depends how you look at it (as to whether the "FreeNAS way" is conservative or not).
  • If we are speaking to technical requirements, agreed, and I still chuckle at the extreme to which their recommendations are hammered into their user base. Examples: If you want to lose your data, go ahead and virtualize FreeNAS. If you don't care about your data, don't worry about not using ECC RAM. I think they have backed off the former point, and to be fair I do see some merit in both.
  • Now, if we are speaking to declaring a certain release ** cough ** Corral ** cough ** production ready, and then downgrading it to a technical preview a few weeks later, then one might suggest the "FreeNAS way" is anything but conservative. <= Meant to be a joke, although I realize it wasn't a good one.
  • Ughh, I hate to think about that. Corral was released right before I pieced my build together, and while I've built PCs before, this was my first foray into "enterprise grade" components. I installed it and it was so slow I seriously pondered whether I had made a horrible mistake going with 2x E5-2670 v1s, i.e. hardware that is several generations old. Alas, I quickly determined the hardware wasn't the issue (as I know today, they are overkill, let alone the 2x E5-2680 v2s I upgraded to).
With 128 GB RAM allocated I don't think you need an L2ARC, unless you have a lot of files that you access regularly that would benefit from being in RAM.
  • The ARC just holds the "index" of the files in the L2ARC (the files themselves are actually in the L2ARC, right?). Not seeking to correct you if so, just trying to learn.
  • I'm not sure if my ARC Hit Ratio (as reported in FreeNAS => Reporting => ZFS => ARC Hit Ratio) can serve as a proxy, but it looks pretty solid looking back 1 week, with ARC Size = 98.1 GB min | 98.2 GB avg | 98.6 GB max and ARC Hit Ratio = 44.4% min | 96.8% avg | 100% max. I'm not sure I trust those numbers as they look a little too good.
  • Maybe that is because I watch the same Linux ISOs over and over again. Wait ... did I say watch? Sorry, I meant install, of course. <= More bad humor
O/c that raises the question of network speed - are you consuming data from FreeNAS over 10G on the client side? Otherwise 16 drives will saturate 1 Gigabit without problem.
  • Yes and no. By way of physical adapter, no. By way of virtual adapter, yes (I believe).
  • Moving to 10 GigE is on my list too (seems like this might be a long list). What has stopped me to date: it can be done cheaply with fiber, right? But 10GBASE-T is pricier to implement. Unfortunately, my primary workstation isn't a meter or two from the server; it's some distance away, and my condo is wired with Cat 5e. With no basement or attic to work with, a fiber run would be damn near impossible, and while Cat 5e should support 10GigE over short distances, the specification calls for Cat 6 (@ 55 m), and I hate to buy NICs, a switch, etc. only to find out that I can't get 10GigE over Cat 5e. Go figure that the 4 drops I added behind the wall (yes, I love removing/replacing sheetrock) to my TVs (2 to each for redundancy, and in conduit) are Cat 6, but the NICs on the TVs are 100 Mbps.
  • Anyway, I digress: yes, I saturate my 1 Gbps network all day long from my workstation; however, when I need to do some heavy lifting, I'll just use a VM ... and with a VMXNET3 virtual adapter and the VM and storage residing on the same host, I'm "emulating" a 10GigE network, right? That said, I haven't ever really benchmarked my two pools of spinning disks, but I think there is some optimization to be had there ...
  • Sorry if that is less than a prescriptive answer.
VMs hosted on it would be another topic, but I don't see the need with your current setup. If you don't need more than one client at the moment I'd leave it with ESX - saves the hassle.
  • Done (no pass-through). I never truly optimized my last attempt at block storage, but of course this time around I expect performance to be much better (in my opinion, there isn't much of a comparison between 2 x Sun F40s and 1 x P3700).
HCL is the VMware hardware compatibility list (different ones for ESX/vSAN); that's what you linked. There are also drivers available at the VMware (on HCL) and Intel (maybe not on HCL yet) webpages.
  • Got it. Thanks for explaining.
I truly appreciate your time and sharing your knowledge. :):):)
 

Rand__

Well-Known Member
Mar 6, 2014
4,608
918
113
Well, let's not discuss Corral ;) I am glad I didn't jump on that one for my primary box...

Conservative as in adhering to a more or less proven setup to ensure maximum data safety (good thing) to a point of resisting alternative solutions (bad thing).

ARC should be the primary cache (RAM), and additionally for L2ARC you need to keep an index in RAM too, so in using L2ARC you take memory away from (L1)ARC.

And yes, using the same ISOs over and over will help the ARC cache hit ratio, but is that sustainable behavior for the future? ;)

Access inside ESX will use the maximum speed available (actually I am not sure whether there is an internal limit or not).

And I'd look into other options regarding speeding up your pool once you've got 10G to your workstation ... and you can test that by getting two 10G adapters and going point-to-point; should not be that expensive...
 

svtkobra7

Active Member
Jan 2, 2017
348
74
28
Well, let's not discuss Corral ;) I am glad I didn't jump on that one for my primary box...
  • Agreed.
Conservative as in adhering to a more or less proven setup to ensure maximum data safety (good thing) to a point of resisting alternative solutions (bad thing).
  • Also agreed, we are on the same page. Don't mind my poor sense of humor.
ARC should be the primary cache (RAM), and additionally for L2ARC you need to keep an index in RAM too, so in using L2ARC you take memory away from (L1)ARC.
  • Righto ... I was saying the same thing in a different way, just not as clearly.
  • Thus my earlier comment that I thought I had enough memory for adding an L2ARC to be beneficial; if I were running 16 GB, I'd take quite a performance hit.
And yes, using the same ISOs over and over will help the ARC cache hit ratio, but is that sustainable behavior for the future? ;)
  • Poor humor again. I was reading the "data hoarder" reddit a number of months ago, and these guys (presumably) were joking about storing dozens of TBs of Linux ISOs. Nobody is storing that volume of ISOs, especially not distros that are free to (legally) download. My understanding is that the term "Linux ISO" has an entirely different meaning ... rather, "Linux ISO" = a certain type of video file, if you catch my drift. Maybe the type of cinematography you'd be embarrassed about if your parents popped over while you were viewing ... ;)
  • I just found the whole thing hilarious for some reason. Maybe you caught my drift, i.e. the wink?
Access inside ESX will use the maximum speed available (actually I am not sure whether there is an internal limit or not).
  • That is a good question! And it begs two more: (1) If there is a limit, why does there have to be one? (2) Whenever someone posts a disk speed test reporting a figure higher than 10 Gbit = 1250 MB/s, is it (a) because there is no limit, (b) they are using the storage bare metal, (c) their network is faster than 10 GigE, or (d) the test is inaccurate, such as using compressible data?
  • I thought the limit was actually the speed the virtual NICs inside VMs report and every VM I've checked (FreeNAS, Ubuntu, Windows) all report 10Gbase-T. Now, not so sure.
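The arithmetic behind that 1250 MB/s ceiling is just unit conversion; a tiny helper (the name is mine) makes the sanity check explicit:

```python
def line_rate_mb_s(gbit_per_s):
    """Raw line rate in decimal MB/s, ignoring TCP/IP framing overhead."""
    return gbit_per_s * 1000 / 8

# 10 GbE tops out at 1250 MB/s on the wire, so a benchmark figure above that
# either never left the host (cache / bare metal) or measured compressible data.
print(line_rate_mb_s(10))  # 1250.0
print(line_rate_mb_s(1))   # 125.0
```

In practice protocol overhead shaves several percent off these numbers, so sustained iSCSI throughput lands somewhat below the raw line rate.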
And I'd look into other options regarding speeding up your pool once you've got 10G to your workstation ... and you can test that by getting two 10G adapters and going point-to-point; should not be that expensive...
  • 10-4. But ...
  • I'm ready to test ... feel free to send a pair right over. I'll be kind enough to pay for shipping of course. ;) Kidding.
Thanks again for the assistance, your time, and the conversation. I wish you a good weekend.
 

Rand__

Well-Known Member
Mar 6, 2014
4,608
918
113
Ah, that's why you're watching the ISO, I wondered ;)

And the ESX network limit should not be related to actual physical NIC speed, else you'd only get 1G within ESX if you did not have 10G NICs.
It will be limited by RAM/CPU cycle speed at the ultimate end; I'm just not sure what the actual vmxnet3 limit is. I ran tests with 56GbE NICs and iperf was working fine up to 40GbE iirc, but there were issues with the NICs that I found later (MLX, dropped packets on TCP offload) and I never retested.
Just wondered if vmxnet3 would be capable of 100GbE (or rather PCIe3 x16, which is lower iirc).

Wouldn't mind sending you a spare NIC, but I've got only one atm. Not sure about sending a 10G-capable board as a counterpoint though ;)
 

svtkobra7

Active Member
Jan 2, 2017
348
74
28
Ah, that's why you're watching the ISO, I wondered ;)
  • LOL. I have the worst sense of humor.
And the ESX network limit should not be related to actual physical NIC speed, else you'd only get 1G within ESX if you did not have 10G NICs.
  • Understood, otherwise I wouldn't see 10GigE virtual adapters, as I have no 10GigE physical adapters:
  • Ubuntu_VMXNET3.png
  • Win10_VMXNET3.png
  • Nor would I see speeds in excess of 1 Gbps (which I do).
  • But apparently this isn't a real "limit" anyway.
It will be limited by RAM/CPU cycle speed at the ultimate end; I'm just not sure what the actual vmxnet3 limit is. I ran tests with 56GbE NICs and iperf was working fine up to 40GbE iirc, but there were issues with the NICs that I found later (MLX, dropped packets on TCP offload) and I never retested.
Just wondered if vmxnet3 would be capable of 100GbE (or rather PCIe3 x16, which is lower iirc).
  • WOW!
Wouldn't mind sending you a spare NIC, but I've got only one atm. Not sure about sending a 10G-capable board as a counterpoint though ;)
  • I was totally kidding, but the offer is much appreciated.
 

svtkobra7

Active Member
Jan 2, 2017
348
74
28


Yeah ... it is never fun when a post starts out with the PSOD, eh? So I was super excited to receive my P3700 a day early, and then things went downhill:

  • First, my benchmarks fell well short of the rated specs.
    • ATTO v3.05: The results are odd, and I think I'm to blame, as the drive is rated at up to 2,800 MB/s read & 1,900 MB/s write; not only am I "missing" performance, I'm seeing max write (not exceeding 2,000 MB/s) exceed max read (not exceeding 1,400 MB/s). Windows 10 with a QD of 4.
    • CrystalDiskMark 5.2.2 x64: Seq Q32T1 read = 892.5 MB/s and write = 1,629 MB/s.
    • Anvil's Storage Utilities 1.10: Read: highest speed = 922.94 MB/s (seq 4MB) and highest IOPS = 67k (4K QD16). Write: highest speed = 789.51 MB/s (seq 4MB) and highest IOPS = 72k (4K QD16).
  • But that would have been OK had that been the extent of it ... I've needed to learn how to use fio and properly benchmark for some time.
  • Then FreeNAS starts whining at me, and not a slight complaint. There is absolutely no chance that I pulled a 3.5" HDD and forgot, or otherwise bumped a drive somehow; there is a bezel on the front.
Code:
The volume Tank1 state is DEGRADED: One or more devices has been removed by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state.
Device: /dev/da5 [SAT], unable to open ATA device
  • Ultimately, ESXi becomes completely unresponsive and ends up producing a PSOD, a number of times.
As much as I really want to point the finger at myself, and despite my inexperience relative to other forum members, I've installed ESXi more than a few times, and the only items that changed were passing through a small SSD as an RDM to FreeNAS and of course the installation of the P3700 as a datastore.

Any thoughts on items to look into? Thanks in advance. I'm dumbfounded. I suppose next steps (what makes sense to me, but you guys likely know a wiser approach) would be to install Win10 bare metal and see what performance looks like, and then run through reinstalling everything (ESXi).
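Since fio is on the to-learn list anyway, a starting point might look like this (a sketch only: the mount point and file name are assumptions, and `--ioengine=posixaio` suits a FreeBSD/FreeNAS guest, whereas a Linux guest would typically use `libaio`):

```shell
# Hypothetical fio sanity check: 4 KiB random reads at QD32 with direct I/O,
# against a scratch file so no real data is touched.
fio --name=randread --filename=/mnt/test/fio.dat --size=10G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=posixaio \
    --direct=1 --runtime=60 --time_based --group_reporting
```

Varying `--bs` and `--iodepth` lets you compare directly against the drive's rated 4K QD32 figures rather than the mixed results the Windows GUI tools report.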


Here was what landed me there:
  • Clean 6.5 U2 install on SATADOM.
  • Installed two vibs (which I'm nearly certain are the correct vibs). See Figure 1.
  • Shutdown, added the drive and booted back up.
  • Peeked into the BIOS and indeed it recognized the drive. The only odd thing there is that it declares the firmware as P3600, but it is a P3700 drive. I thought nothing of it considering it is a dated board. See Figure 2.
  • Using the intel_ssd_data_center_tool-3.0.11-400_signed.vib I looked at the drive properties, but didn't change anything. I don't know what I'm looking at yet (but was happy to confirm it is running at the max power draw, 25W, as I had read a lower limit negatively impacted performance). See Figure 3.
  • Created a datastore (no pass-through or anything) using the entire drive.
  • Added a couple of ISOs (pfSense / FreeNAS). Installed them both. Restored the prior config from a backup taken immediately prior to installing 6.5 U2.
  • After FreeNAS was up, I added a Windows 10 VM to the P3700 datastore, noticed performance I wasn't expecting, and then FreeNAS started thinking I had taken an HDD from it and the instability started. Note: since this system was configured (~May '17) I had never once seen that error, which remains unresolved (shouldn't be a big deal to wipe the drive, pop it back in, and let it resilver), but it can't possibly be a coincidence.
Figure 1 - VIBs installed
Code:
esxcli software vib install -v /tmp/intel_ssd_data_center_tool-3.0.11-400_signed.vib
esxcli software vib install -v /tmp/intel-nvme-1.3.2.4-1OEM.650.0.0.4598673.x86_64.vib
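For what it's worth, standard esxcli queries can confirm what actually got installed and which adapters ESXi sees (read-only; nothing here changes state):

```shell
# List installed NVMe-related VIBs and the storage adapters ESXi registered
esxcli software vib list | grep -i nvme
esxcli storage core adapter list
```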
Figure 2 - BIOS settings




Figure 3 - intel_ssd_data_center_tool output
Code:
[root@ESXi:~] /opt/intel/isdct/isdct show -all -intelssd

- Intel SSD DC P3700 Series CVFT730000A9800QGN -

AggregationThreshold : 0
AggregationTime : 0
ArbitrationBurst : 0
Bootloader : 8B1B0131
CoalescingDisable : 1
DevicePath : intel-nvme0
DeviceStatus : Healthy
DirectivesSupported : False
DynamicMMIOEnabled : The selected drive does not support this feature.
EnduranceAnalyzer : Media Workload Indicators have reset values. Run 60+ minute workload prior to running the endurance analyzer.
ErrorString :
Firmware : 8DV10171
FirmwareUpdateAvailable : Firmware=8DV101H0 Bootloader=8B1B0133
HighPriorityWeightArbitration : 0
IOCompletionQueuesRequested : 30
IOSubmissionQueuesRequested : 30
Index : 0
Intel : True
IntelGen3SATA : False
IntelNVMe : True
InterruptVector : 0
IsDualPort : False
LatencyTrackingEnabled : False
LowPriorityWeightArbitration : 0
Lun : 0
MediumPriorityWeightArbitration : 0
ModelNumber : INTEL SSDPE2MD800G4
NVMeControllerID : 0
NVMeMajorVersion : 1
NVMeMinorVersion : 0
NVMePowerState : 0
NVMeTertiaryVersion : 0
NamespaceId : 4294967295
NamespaceManagementSupported : False
NativeMaxLBA : 1562824367
NumErrorLogPageEntries : 63
NumberOfNamespacesSupported : 1
OEM : Generic
PCILinkGenSpeed : 3
PCILinkWidth : 4
PLITestTimeInterval : The selected drive does not support this feature.
PhySpeed : The selected drive does not support this feature.
PhysicalSectorSize : The selected drive does not support this feature.
PowerGovernorAveragePower : The desired feature is not supported.
PowerGovernorBurstPower : The desired feature is not supported.
PowerGovernorMode : 0 25W
Product : Fultondale
ProductFamily : Intel SSD DC P3700 Series
ProductProtocol : NVME
ReadErrorRecoveryTimer : Device does not support this command set.
SMARTEnabled : True
SMARTHealthCriticalWarningsConfiguration : 0
SMBusAddress : 106
SMI : False
SectorSize : 512
SerialNumber : REDACTED
TCGSupported : False
TempThreshold : 85
TemperatureLoggingInterval : The selected drive does not support this feature.
TimeLimitedErrorRecovery : 0
TrimSupported : True
VolatileWriteCacheEnabled : False
WriteAtomicityDisableNormal : 0
WriteCacheReorderingStateEnabled : The selected drive does not support this feature.
WriteCacheState : The selected drive does not support this feature.
WriteErrorRecoveryTimer : Device does not support this command set.
Figure Fail - Epicly
 

Rand__

Well-Known Member
Mar 6, 2014
4,608
918
113
Hi,
sorry to hear you have issues.

1. the only items that changed were passing through a small SSD as an RDM to FreeNAS and of course the installation of the P3700 as a datastore.

Can you get rid of the RDM? It should work, but I always try to avoid it, as it's not the safest way to pass a drive through.

2. You don't necessarily need the Intel nvme driver vib
3. There is a firmware update available from isdct (which seems to be able to update firmware on the Lenovo drive, which is uncommon for OEM)


Performance - while I still have not seen recent official data, I still think that the Spectre/Meltdown patches take a massive amount off of NVMe drives.
 

svtkobra7

Active Member
Jan 2, 2017
348
74
28
Hi,
sorry to hear you have issues.
Certainly not your fault, and I'm sure it will get resolved. Slightly frustrating in the interim.

1. the only items that changed were passing through a small SSD as an RDM to FreeNAS and of course the installation of the P3700 as a datastore.

Can you get rid of the RDM? It should work, but I always try to avoid it, as it's not the safest way to pass a drive through.
  • Your point is noted, and not to say prior precedent guarantees it will always be the case, but I passed a little 40 GB Intel 320 series through to FreeNAS for the system dataset writes, as a place for my scripts, and to keep a single jail. It has been there for months.
  • My point is that I actually added another one (before I had one RDM, now I have two). It was done correctly, or at least the same way as in the past; I added 2nd and 3rd SCSI controllers of course.
  • I wish I could do it correctly (the pass-through), but then I can't boot off my little SATADOM if I pass through the SATA controller (all or nothing, per my understanding), and I'd have to revert to booting via USB. Ultimately the solution is to just buy a cheap HBA to appropriately get those drives into FreeNAS (I'll actually do that today).
2. You don't necessarily need the Intel nvme driver vib
  • Yeah, I was thinking that when I have another go at it, if RDM removal doesn't fix anything, I won't bother with the VIBs.
  • Thanks for the confirm.
3. There is a firmware update available from isdct (which seems to be able to update firmware on the Lenovo drive, which is uncommon for OEM)
  • I was surprised to see the reference to Lenovo, as this is what I bought: PROVANTAGE: Intel SSDPE2MD800G401 800GB DC P3700 OEM Series 2.5" PCIe 3.0 SSD. My knowledge base is quite limited, but I do know that if I had opted for a P3605 I should fully expect to see "Oracle," right? [rhetorical] But with that part number (SSDPE2MD800G401) matching Intel's website, I was surprised to see Lenovo.
  • Is something wonky going on here? Any downside to having a drive with Lenovo firmware on it for some reason?
  • Presumably, since the Data Center Tool in ESXi is all CLI and I'm completely unfamiliar with it, it would be OK to spin up a Win10 VM, pass through the drive, and use this download, correct? Download Intel® SSD Data Center Tool
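If you stay in the ESXi shell instead, the update itself should be a short isdct sequence (a sketch: the drive index 0 is taken from the earlier `show -intelssd` output here, and a backup beforehand is prudent):

```shell
# Hedged sketch: flash the firmware the tool itself advertised as available
/opt/intel/isdct/isdct show -intelssd 0   # confirm index and current firmware
/opt/intel/isdct/isdct load -intelssd 0   # apply the available update, then reboot
```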
Performance - while I still have not seen recent official data, I still think that the Spectre/Meltdown patches take a massive amount off of NVMe drives.
  • Certainly would make sense. What do you make of the write speed exceeding the read speed though; a bit wonky, eh? Or would that align with the same explanation? Happy to post screenshots of those benchmarking attempts (admittedly I have much to learn).
Thanks again so much for all of your help. Initially, I thought I owed you a 6 pack of bier or so, but now we are trending towards a keg! Seriously, thank you!
 

Rand__

Well-Known Member
Mar 6, 2014
4,608
918
113
Hm, if you bought a genuine Intel drive and the Intel SSD tool seems to think the same, then I would assume it's fine - especially given the fact that your BIOS sees it as a P3600 anyway.
I don't think it's actually Lenovo firmware, so no reason not to perform the firmware upgrade imho. You can do that from ESX if it's stable.

Performance - I have seen this in my own tests a couple of times already and never found an explanation. Important in benchmarking seems to be to get rid of the balanced power plan and move to high performance.
 

svtkobra7

Active Member
Jan 2, 2017
348
74
28
  • I'm genuinely impressed you nailed it on the first try. How did you know?
  • Sure enough, I shut down FreeNAS, removed those RDM drives, saved the update ... and guess what I got ... a PSOD! This one I welcomed, as it means you found the issue!
  • Do you think the issue was the fact I went from 1 to 2 RDM drives, or should I stay away altogether?
  • Stupid question: I'm running a 9211-4i to a BPN-SAS2-EL1; isn't the whole point of an expander backplane that I can pop a cable into J1 or J2 (bottom of page 3, here: https://www.supermicro.com/manuals/other/BPN-SAS-836EL.pdf) and connect a few more drives via the existing HBA for only the cost of the cable? Or alternatively, again for the cost of a cable, I believe I can simply connect those two drives to the SATA II SCU on the motherboard and pass the SCU through to FreeNAS. I'm not concerned about speed in this case; SATA II more than gets the job done for a few GBs of data. Supermicro | Products | Motherboards | Xeon® Boards | X9DRi-LN4F+
  • I know it is a waste and not sure if it changes anything, but I hate the slots above the HDD bays on the 836 (love it otherwise, and that they can be re-purposed), so I have these installed, which are SAS3/SATA3: Slim DVD Size Drive [MCP-220-81506-0N] | Slim Floppy Size Drive [MCP-220-81504-0N].
Important in benchmarking seems to be to get rid of the balanced power plan and move to high performance
  • Thanks for that tidbit ... there are two settings in the BIOS (both are at max performance, or whatever the actual term is) and the ESXi power setting is max performance as well. And of course, we have already confirmed the drive is drawing the max 25W.
Thank you again. Darn you are good.
 

Rand__

Well-Known Member
Mar 6, 2014
4,608
918
113
- Glad to hear you found it - pure guesswork - it was unlikely to be the P3700, as I have passed those through endlessly, but it could have been driver or firmware too (or a combination).
- You can try with 1 RDM if that worked for you (but it might not, due to the new device in the chain now), but I personally prefer not to use them.
- Hm, I am not sure you can directly add disks to those ports, to be honest; you might need a reverse cable, as that port is intended to be connected to another backplane, iirc from my 847.
- DVD should pose no problem. Good place for a low-power SSD though ;)

Don't forget the performance setting in the guest OS (Windows, for CDM).