Thoughts on ESXi 7.0 Boot Drive (Samsung 970 EVO Plus or WD SN750), please? (for my specific use case)

Best ESXi 7.0 Boot Drive (and local datastore for me)?

  • Samsung 970 EVO Plus

    Votes: 2 33.3%
  • WD SN750

    Votes: 1 16.7%
  • Neither, you are silly and don't deserve servers

    Votes: 3 50.0%

  • Total voters
    6

svtkobra7

Active Member
Jan 2, 2017
362
87
28
I'd love some thoughts on which is the better ESXi 7.0 boot drive, the Samsung 970 EVO Plus or the WD SN750, both @ 500 GB. The specs seem quite similar, and I'd just prefer to toss the matter out to the pros (which I am NOT). Please let me know - after review of my constraints - if I'm completely barking up the wrong tree.

And a question: both should allow ESXi 7.0 to detect them and boot properly, right? I could boot ESXi from the 900p on 6.7, but never on 7.0, and I believe that is because of an older NVMe spec revision.

Before you scratch your head
  • I've never loved the idea of using a USB boot drive, but beyond that I've grown accustomed to storing VMs on the same drive I boot from.
    • I previously had Optane 900ps and used them for ESXi boot, FreeNAS boot, and SLOG (vdisk).
  • As I wind up for rev 3.0 of my home network, I want to make purchases that align with forecasted future needs, and that was part of the reason I sold the 900p AIC.
    • I think that going the M.2 route aligns with that notion.
An option, but also not,
  • Buy a used 80 GB Intel SSD and Velcro it to the case.
    • Honestly, I'm far too OCD to want to do that.
  • I'm limited on (current) placement in my 2U chassis with all 12 3.5" drive bays filled and the NVMe rear hot swap bays to be filled in short order.
  • I have used SATADOMs before (me likey), but that's not really an option, as my 900ps sold on eBay faster than I could procure replacements and I need to buy something Saturday.
On the matter of connectivity,
  • I'm quite hopeful that my board (X9DRH-7TF) supports bifurcation for this guy, to be joined by either a 905p or P4801x (both in transit).
  • I could never pass through the 900p to FreeNAS properly, and I look forward to doing it right this time (I might add).


970 EVO Plus 500 GB Specs
  • FORM FACTOR: M.2 (2280) / INTERFACE: PCIe Gen 3.0 x4, NVMe 1.3
  • SEQUENTIAL WRITE: 500GB: Up to 3,200 MB/s
  • SEQUENTIAL READ: Up to 3,500 MB/s
  • RANDOM WRITE (4KB, QD1): 500GB: Up to 60,000 IOPS
  • RANDOM WRITE (4KB, QD32): 500GB: Up to 550,000 IOPS
  • RANDOM READ (4KB, QD1): 500GB: Up to 19,000 IOPS
  • RANDOM READ (4KB, QD32): 500GB: Up to 480,000 IOPS
  • Endurance: 5 Years or 300 TBW
WD SN750 500 GB Specs
  • Sequential Read up to (MB/s) (Queues=32, Threads=1): 3,430
  • Sequential Write up to (MB/s) (Queues=32, Threads=1): 2,600
  • Random Read 4KB IOPS up to (Queues=32, Threads=8): 420K
  • Random Write 4KB IOPS up to (Queues=32, Threads=8): 380K
  • Endurance: 5 Years or 300 TBW

On a more comical note, and I will probably get flamed for this, I think the OLED display on this heatsink is hella cool:

But then again, I had to order this guy for the in-transit 905ps I bought.

And this article at least, which I cite realizing it isn't authoritative on the matter, suggests heatsinks may be needed and do make a difference.
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
Best boot option: SATADOM.
Second-best boot option: USB.

Leave the boot disk for boot only; even if it dies, your ESXi will continue to run. Keep a 16GB USB on standby in case your primary boot disk dies. It takes 5 minutes to reinstall and configure an ESXi server.
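A scripted (kickstart) install is what keeps the 5 minutes realistic; a minimal sketch, with the password, addresses and hostname below as placeholders:

Code:
# ks.cfg - minimal scripted ESXi install (all values are placeholders)
vmaccepteula
install --firstdisk=usb --overwritevmfs
rootpw ChangeMe123!
network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.1 --hostname=esx01.lab.local
reboot

%firstboot --interpreter=busybox
# post-install tweaks go here, e.g. enable SSH
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh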

Create a datastore on the entire 970 or SN750; don't mess around with partitioning it.
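If you do it from the shell rather than the host client, it's roughly this (device path, end sector and datastore label are placeholders):

Code:
# find the NVMe device name under /dev/disks/
ls /dev/disks/
# wipe whatever is on it and write a GPT label
partedUtil mklabel /dev/disks/<device> gpt
# one VMFS partition spanning the disk; the long GUID is the VMFS partition type,
# and the end sector comes from partedUtil getptbl (cylinders*heads*sectors - 1 is the usual recipe)
partedUtil setptbl /dev/disks/<device> gpt "1 2048 <end-sector> AA31E02A400F11DB9590000C2911D1B8 0"
vmkfstools -C vmfs6 -S local-nvme /dev/disks/<device>:1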
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
Best boot option: satadom.
I agree with this. PERIOD.

Leave the boot disk for boot only, even if it dies, your esxi will continue to run.
And this (even though I haven't gone this route prior).

It takes 5 min to reinstall and configure an esxi server.
Speedy Gonzalez here! It takes me longer than that just to configure networking using esxcli. Point noted.
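For context, the copy/paste pile looks roughly like this per host (vSwitch, port group, VLAN and addresses below are all made up):

Code:
# standard vSwitch plus an uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
# port group with a VLAN tag
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=Storage
esxcli network vswitch standard portgroup set --portgroup-name=Storage --vlan-id=20
# vmkernel interface on that port group
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.20.11 --netmask=255.255.255.0 --type=static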

Currently I have 7.0 installed boot-only on USB, but I burned two hours trying to figure out how to get it to recognize another USB stick to use as a temporary datastore for FreeNAS boot (I need access to my files stat). If I could manage that, I could afford to wait for a SATADOM to arrive. Any thoughts on how to achieve that? And I wish I could say I have a spare SSD lying around, but all IT assets get sold if not used.
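(The 6.x-era write-ups I found boil down to stopping the USB arbitrator so the host can claim the stick, then partitioning it like any other disk; no luck getting 7.0 to accept it so far, but for reference:)

Code:
# stop the service that reserves USB devices for VM passthrough
/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off    # keep it off across reboots
# the stick should now show up under /dev/disks/ as an mpx.vmhbaXX device
ls /dev/disks/ | grep mpx
# then the same partedUtil mklabel/setptbl + vmkfstools -C vmfs6 steps as for any local disk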

(and yes, I learned a couple lessons here)
thx for your reply.
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada

I use VDS; as long as my vCenter is up, configuring the network is just adding the physical NIC to the vSwitch. But I could script it and still finish in 5 minutes, or use a host profile. I used to manage 3,000 ESX servers.

ESXi doesn't support USB datastores, so your time was wasted. The best way is to mount the disk on another system and export it over NFS.
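Once the other box exports the share, mounting it on the host is a one-liner (host, share path and datastore name below are placeholders):

Code:
esxcli storage nfs add --host=192.168.1.20 --share=/mnt/tank/scratch --volume-name=nfs-scratch
esxcli storage nfs list    # confirm it mounted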
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
I use VDS; as long as my vCenter is up, configuring the network is just adding the physical NIC to the vSwitch. But I could script it and still finish in 5 minutes. I used to manage 3,000 ESX servers.
  • That's a lot of servers!
  • vDS here too, but I was moving from 6.7 to 7.0 and was starting from secure-erased drives.
  • So host network config (twice), a vCenter Server install and more network config, not to mention the new vDS version in 7.0 (not sure how much would carry over) ...
  • I'm aware you can import an exported vDS config, but something still has to be manually added (vmk maybe?) ...
  • Anyway, at least in that scenario, and with Server on one of the two hosts (to be corrected), I don't think it could be done in 5 minutes.
  • You probably could figure out a way, but I have a small fraction of your know-how.
  • It isn't scripted per se in my case; I just have to copy/paste all the esxcli commands (still a PITA).
ESXi doesn't support USB datastores, so your time was wasted.
This shows it to be possible??? USB Devices as VMFS Datastore in vSphere ESXi 6.5
Acknowledged that it's a different version.

Grin and bear the wait for a SATADOM is your vote, I assume? (and thx again)
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
I have 2 environments: 1 for management/storage and another for workloads, with 2 vCenters, 1 in each environment, cross-managing. This gives me the option to wipe out an entire cluster without touching vCenter.

I use NICs that support NPAR. I can create 4 physical functions from each physical port. The first physical function is used for a standard vSwitch for the management vmk0; the VDS uses the other 3 physical functions. This gets the network up in a minute after installation, since you use the DCUI to configure it. vCenter uses the standard vSwitch.

If you self-host vCenter on the same ESX server it manages, then it's a chicken-and-egg thing and a very bad idea. If you have 2 servers, try what I do: have 2 vCenters, one on each ESX host, and cross-manage them. You can link the 2 together and vMotion or clone VMs across.
 

Spearfoot

Active Member
Apr 22, 2015
111
51
28
@svtkobra7! How are you doing! Haven't talked to you in quite a while.

I recently built a new AIO based on a SuperMicro 5028D-TLN4. I used a 512GB Samsung 970 PRO for the ESXi 6.7 boot device/datastore, because:
  • I'm passing the SATA controller through to the FreeNAS VM, so no SATA SSDs for me!
  • It has higher durability than the 970 EVO model (1,200 TBW vs 600 TBW), which leads me to believe it will last a little longer, all else being equal.
To be honest, both the 970 PRO & EVO models have a 5 year warranty and either would probably serve you equally well.

I store FreeNAS, pfSense, and UniFi controller virtual machines on it with room to spare.

Good luck, and "May you choose wisely." :)
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
I have 2 environments: 1 for management/storage and another for workloads, with 2 vCenters, 1 in each environment, cross-managing. This gives me the option to wipe out an entire cluster without touching vCenter.

I use NICs that support NPAR. I can create 4 physical functions from each physical port. The first physical function is used for a standard vSwitch for the management vmk0; the VDS uses the other 3 physical functions. This gets the network up in a minute after installation, since you use the DCUI to configure it. vCenter uses the standard vSwitch.

If you self-host vCenter on the same ESX server it manages, then it's a chicken-and-egg thing and a very bad idea. If you have 2 servers, try what I do: have 2 vCenters, one on each ESX host, and cross-manage them. You can link the 2 together and vMotion or clone VMs across.
  • Don't two instances of vCenter Server (I'm assuming you are referring to HA? but maybe not, as "cross managing" contradicts HA) require a third box for a witness? Which I know you have (and I plan to add down the road).
  • I had never heard of NPAR, but cool stuff. Missed it by one MLNX gen.
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
  • Don't two instances of vCenter Server (I'm assuming you are referring to HA? but maybe not, as "cross managing" contradicts HA) require a third box for a witness? Which I know you have (and I plan to add down the road).
  • I had never heard of NPAR, but cool stuff. Missed it by one MLNX gen.
Not vCenter HA. Just not having a vCenter manage the host it runs on. But if you have 2 hosts, you have many more options.

QLogic supports NPAR, and you can get a 57810 really cheap. You will see 8 NICs in ESXi for a dual-port 57810. The Mellanox spec sheet says it supports NPAR, but it never actually has.

You can of course use NVMe for boot as well as a datastore, but I always prefer to use a separate, dedicated boot disk.
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
Not vCenter HA. Just not having a vCenter manage the host it runs on. But if you have 2 hosts, you have many more options.
I must be missing something ... or missing the leap to my environment ... I'm unsure how you could even add the same hosts to separate vCenter instances (I'll have to spend some time googling this, as it is something I hadn't heard of).

QLogic supports NPAR, and you can get a 57810 really cheap. You will see 8 NICs in ESXi for a dual-port 57810. The Mellanox spec sheet says it supports NPAR, but it never actually has.
  • Wow, they are cheap and come with fans to boot!
  • I'm going to grab one to play with; I want to see what network perf looks like with passthrough instead of the vDS (and existing NICs) and the LAGs that I can't imagine I created correctly anyway. I'm horrible with networking.
You can of course use NVMe for boot as well as a datastore, but I always prefer to use a separate, dedicated boot disk.
  • I see the merit without doubt.
  • My earlier comment was that USB can be used as a local datastore too, per the cited link (conflicting with my understanding of your remark).
 

vangoose

Active Member
May 21, 2019
326
104
43
Canada
No,
vCenter A -> DC A -> Cluster A -> esx01, esx02 -> vCenter B(VM)
vCenter B -> DC B -> Cluster B -> esx03, esx04 -> vCenter A(VM)

The 57810 from Dell has a fan; the one from HP doesn't. Make sure you update the firmware after you get it. It supports SR-IOV too, but FreeNAS doesn't have a VF driver for it; Linux and Windows have no problem.
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
@svtkobra7! How are you doing! Haven't talked to you in quite a while.
  • Howdy!!! I'm well and you? It has been a moment for sure.
  • I've gotten the upgrade bug again, so it was only a matter of time before I blew something up and reached out. ;)
I recently built a new AIO based on a SuperMicro 5028D-TLN4.
  • I plan to use the chassis in the SYS-5019D for what I had planned as a dedicated pfSense / vCenter Server host (I need the short depth and ports out front).
To be honest, both the 970 PRO & EVO models have a 5 year warranty and either would probably serve you equally well.
  • I'm sure you are correct ... it just seems that in a post-Optane world (at least once I became informed of the relevance of endurance), 600 TBW is a few days of usage (exaggerating).
  • But looking at my 940 Pro on a desktop it only has 140 TB written ...
I store FreeNAS, pfSense, and UniFi controller virtual machines on it with room to spare.
  • Remember when you got me off COTS NASes (what the heck is the proper plural for NAS?) and into the ZFS world and the mantra was still "virtualize freenas only if you dislike your data"?
  • It's refreshing to know that we can do so safely now!
Good luck, and "May you choose wisely." :)
Finance guys need all the luck they can get with this stuff, so thanks.
 

svtkobra7

Active Member
Jan 2, 2017
362
87
28
No,
vCenter A -> DC A -> Cluster A -> esx01, esx02 -> vCenter B(VM)
vCenter B -> DC B -> Cluster B -> esx03, esx04 -> vCenter A(VM)

The 57810 from Dell has a fan; the one from HP doesn't. Make sure you update the firmware after you get it. It supports SR-IOV too, but FreeNAS doesn't have a VF driver for it; Linux and Windows have no problem.
I'm still not seeing it, as for me it would look like:

vCenter A -> DC A -> Cluster A (of A) -> esx01, esx02 -> vCenter B(VM)
vCenter B -> DC A -> Cluster A (of A) -> esx01, esx02 -> vCenter A(VM)

If you have 2 servers, try what I do: have 2 vCenters, one on each ESX host, and cross-manage them.
That doesn't work with one "cluster" and two hosts, right? I can't see how. I'm sure you know what you are talking about, but I'm exhausted; I'll have to re-read in the AM (or later this AM) after I get some sleep.
 

Spearfoot

Active Member
Apr 22, 2015
111
51
28
Remember when you got me off COTS NASes (what the heck is the proper plural for NAS?) and into the ZFS world and the mantra was still "virtualize freenas only if you dislike your data"?

It's refreshing to know that we can do so safely now!
Pretty sure that's still the mantra... but the unstated caveat has always been "unless you know what you're doing, and do it right"!
 

badskater

Automation Architect
May 8, 2013
129
44
28
Canada
If you have only 2 hosts in the same cluster, you can't use 2 VCs to manage them unless you go with Linked Mode, but hosting them on the same cluster is not recommended. (It still works, just not recommended :D )

As others said, for your boot, a USB or a SATADOM is better. Please use a separate drive for your VMs, for performance and data integrity. (That's a best practice for a good reason.)

With 7.0, I started recommending 64 GB or more for the SATADOM, due to the ESX-OSData partition. (For more info, see: vSphere 7 - ESXi System Storage Changes - VMware vSphere Blog.)

64 GB is fine for smaller clusters like most home labs. I don't ever recommend over 128 GB either, for the same reasons others said above: ESXi needs its boot drive only for the boot partitions.
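If the only drive on hand is bigger than you want ESXi to consume, newer 7.0 installer builds also have (if I recall correctly) a boot option to cap the ESX-OSData footprint; press Shift+O at the installer boot prompt and append something like:

Code:
systemMediaSize=small    # accepted values, as I recall: min, small, default, max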

I personally prefer a SATA SSD if a SATADOM is impossible, though a USB key is also quite good (which I still use in most of my personal home lab).

3,000 hosts: I hope for your sake that you had some automation in place, speaking as someone who managed that many and more in the past, until I moved to full-time consulting/pre-sales design.