Have you seen the new vSphere 7 boot device guidance?


AveryFreeman

consummate homelabber
Mar 17, 2017
Near Seattle
averyfreeman.com
Hey,

I'm just coming across this, so it's still rather new to me, but I've been using USB flash drives for all my vSphere hosts since v5.5. Apparently that's no longer recommended as of v7:


I did just get some Innodisk 32 GB SATA DOMs that are powered by pin 8 (fingers crossed; I haven't tried them yet). If you're in the market for these, there was a seller on eBay taking offers on the 3SE version, which I *think* is SLC. Hopefully he still has some left. He accepted $60 for three of them in my case and shipped them promptly.

How are you meeting the new boot device requirements in vSphere 7, and what pitfalls should I be aware of going forward?

Edit: the 3SE version is powered by pin 7. Ports that supply power on pin 7 are apparently pretty rare; I made a mistake in my research and think I was looking at a newer model (3SE3 vs. 3SE). I wanted to fix this before anyone saw it and potentially made the same mistake I did.
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
Those small DOMs are good for booting a printer, nothing more. ;-) For ESXi boot we use a RAID-1 of the two smallest hot-pluggable 2.5" SSDs available, 128 GB or 256 GB. OEMs provide firmware updates, and that is important for us, because bugs like "40,000 hours of uptime and the SSD is suddenly dead" come up all the time. There is also no reason to lower the reliability of a server and its expensive components to something like a tenth by introducing a vastly less reliable block device for the hypervisor.
 

AveryFreeman

consummate homelabber
Mar 17, 2017
Near Seattle
averyfreeman.com
Those small DOMs are good for booting a printer, nothing more. ;-) For ESXi boot we use a RAID-1 of the two smallest hot-pluggable 2.5" SSDs available, 128 GB or 256 GB. OEMs provide firmware updates, and that is important for us, because bugs like "40,000 hours of uptime and the SSD is suddenly dead" come up all the time. There is also no reason to lower the reliability of a server and its expensive components to something like a tenth by introducing a vastly less reliable block device for the hypervisor.
I get what you're saying and agree in principle, but I've been booting off USB flash drives since 2015 and have only had one die on me. It used to be that you'd just add a couple of flags in BOOT.CFG so no swap was created on the flash itself; the same thing can also be done in the host's configuration in the web client (something like the sketch below). Obviously, don't create any datastores on the flash ;)
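
To make that concrete: a rough sketch from memory, assuming the modern esxcli route rather than hand-editing BOOT.CFG, and with "swap-ds" as a made-up datastore name, so verify against your own host before trusting it:

  # Send system swap to a dedicated datastore instead of letting ESXi decide
  # (same setting as the host's System Swap page in the web client)
  esxcli sched swap system set --datastore-enabled true --datastore-name swap-ds

  # Confirm what the host is actually using
  esxcli sched swap system get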

Other than that, ESXi runs entirely from memory, so there was no real reason to worry about booting from flash; once booted it's essentially 100% reads, as long as it's set up right. That was true until the new guidance, anyway.

TBH I'm kicking myself for not getting something with a more explicit power source, though, because these DOMs are not showing up as drives on my motherboard. Both their spec sheet and my motherboard's say pin 8 is used for power, so I should just be able to plug one in and have it recognized as another SATA drive, right?
 

Rand__

Well-Known Member
Mar 6, 2014
Other than that, ESXi runs entirely from memory, so there was no real reason to worry about booting from flash; once booted it's essentially 100% reads, as long as it's set up right. That was true until the new guidance, anyway.
I think that differs between standalone and vCenter-managed hosts... I had one or two drives die on me from too much log I/O (I assume)...
 

AveryFreeman

consummate homelabber
Mar 17, 2017
Near Seattle
averyfreeman.com
I think that differs between standalone and vCenter-managed hosts... I had one or two drives die on me from too much log I/O (I assume)...
Yeah, sorry, I forgot that one of the BOOT.CFG tweaks was to keep the flash from being used as log space, too. There was an official VMware document about manually editing BOOT.CFG; sorry, I'm not finding it right now. (These days the log side is easier to handle through the syslog settings; see the sketch below.)
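
A rough sketch of what I mean, from memory; the log directory path is just a placeholder, so double-check the option names against the docs for your build:

  # Point the syslog directory at a real datastore instead of the boot device
  # "/vmfs/volumes/datastore1/esxi-logs" is an example path, not a recommendation
  esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/esxi-logs

  # Reload syslog so the change takes effect
  esxcli system syslog reload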

So my SATA DOMs are pin-7 powered, which isn't compatible with my motherboard, and there's no external power connector on these modules, so it looks like I'm SOL.

I could have sworn they were pin-8 powered (an additional pin on the outside of the connector), but I think I was looking at the 3SE3 manual, and these are 3SE (no 3 at the end; an older model). Pin 8 should work; pin 7 apparently does not on my X10SRL-F.

So bummed; the DOMs are SLC, too.

Edit: It looks like my best option besides returning these is getting some SATA power adapters from Digi-Key: $24 for two of them. That puts the total price above what I really wanted to pay, but the DOMs are SLC, so they should last a good long time. It's either that or back to the drawing board.

 

AveryFreeman

consummate homelabber
Mar 17, 2017
Near Seattle
averyfreeman.com
Those small DOMs are good for booting a printer, nothing more. ;-) For ESXi boot we use a RAID-1 of the two smallest hot-pluggable 2.5" SSDs available, 128 GB or 256 GB. OEMs provide firmware updates, and that is important for us, because bugs like "40,000 hours of uptime and the SSD is suddenly dead" come up all the time. There is also no reason to lower the reliability of a server and its expensive components to something like a tenth by introducing a vastly less reliable block device for the hypervisor.
Since I had to send the pin-7-powered SATA DOMs back, I half took your advice and got some actual SSD NAND, but with a twist so I can still do the cheesy USB thumb drive thing I love so much:

128 GB Samsung PM991a drives, $15 each (in a lot of two), plus M.2 2242 NVMe-to-USB 3.1 enclosures from China ($23 each; ouch). I think the enclosures are more expensive because they're smaller than the 2280 variety (and thus rarer).

So hopefully Samsung consumer NAND won't keel over from being looked at funny. I've always had good luck with Samsung, and with most Intel and Micron drives.

I still don't see why these need to be anything too crazy, since the whole hypervisor loads into and runs from memory; RAID-1 still seems like overkill to me. But I don't have clients, so...
 

AveryFreeman

consummate homelabber
Mar 17, 2017
Near Seattle
averyfreeman.com
I think that differs between standalone and vCenter-managed hosts... I had one or two drives die on me from too much log I/O (I assume)...
I will have to look into this and see what the difference is ("free" single host vs. vCenter-managed).

There are three main things I look at (see the sketch after this list):

Coredumps and logs: does boot.cfg create partitions for kernel panic dumps and other logs on the boot drive itself, somewhere else, or omit those features entirely?

Swap: is it located with the VM, or in another specified location? (I personally keep one NVMe drive just for swap, away from the VMs.) If vSphere is left to manage this on its own, it might stick a swap partition on your boot media "just cuz".

Cache: same as swap; make sure it's configured to live somewhere other than the boot drive.

These three things are all manageable in a single-host environment, but it requires going through and turning off a lot of "auto" settings.
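
For reference, this is roughly how I check and override each of the three from the CLI. It's a sketch from memory with placeholder datastore names and paths, so verify the option names on your own build before relying on it:

  # 1. Coredumps: see where panic dumps would land, then move them to a datastore file
  esxcli system coredump partition get
  esxcli system coredump file add --datastore datastore1 --file coredump1   # placeholder names
  esxcli system coredump file set --smart --enable true

  # 2. Swap: keep system swap off the boot media ("swap-ds" is a placeholder)
  esxcli sched swap system set --datastore-enabled true --datastore-name swap-ds

  # 3. Scratch/cache: relocate the scratch location (takes effect after a reboot)
  esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s /vmfs/volumes/datastore1/.locker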

I'm not sure whether keeping a datastore on the boot media would mitigate or exacerbate these possible issues versus leaving contiguous free space, but explicitly configuring vSphere NOT to do something seems like a good idea in most cases, IMO.
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
What kind of SAN?

Any thoughts on esos for lab?
I've used these SANs previously: NetApp 3020/2240, FAS8020, AFF8040, and AFF A200; EMC VMAX3 and VNX; Pure //M20 and //X20. Boot was mostly from FC, but sometimes also iSCSI.
For iSCSI boot to work, you'd need a motherboard or NIC that supports iBFT (the iSCSI Boot Firmware Table). Unless you're using server hardware, that's unlikely to work.

ESOS? This one: ESOS - Enterprise Storage OS. It's the first time I've heard about this project, and to be honest, since I love learning all things storage, that's highly unusual for me. It sounds like a mildly interesting project, trying to replicate a larger EMC SAN at home, but I doubt the sum of so many pieces is as coherent and well executed as TrueNAS. I am also skeptical regarding its owner, Quantum Corp.