ESXi Boot Device - Rust, SSD, SATA DOM?

Discussion in 'VMware, VirtualBox, Citrix' started by RobertFontaine, Sep 24, 2016.

  1. BennyT

    BennyT Member

    Joined:
    Dec 1, 2018
    Messages:
    88
    Likes Received:
    24
    Hi,

    From reading this older thread, it sounds like installing the ESXi hypervisor to boot and run from a USB flash drive is pretty common practice. I was planning to do the same. I'm preparing to set up my very first ESXi homelab and I have no shortage of questions.

    What do you do for the ESXi scratch location if your hypervisor is on USB?

    VMware has a KB article where they recommend a persistent scratch location rather than using RAM.

    VMware Knowledge Base


    So now I'm wondering if I should purchase a 32 or 64GB SATA DOM, or perhaps even a very small SSD, to house the hypervisor + scratch. But I don't think I'd lose anything other than logs between reboots if I kept scratch in RAM.

    What are your thoughts on that? Is it no big deal to have scratch run in RAM, or should I really make it persistent as the KB article says? Thanks!
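
    The KB shows pointing scratch at a datastore directory from the command line; roughly like this sketch, where the datastore name and directory are just placeholders:

    Code:
    # create a scratch directory on a persistent datastore (name is an example)
    mkdir /vmfs/volumes/datastore1/.locker-esxi01
    # point the host at it; takes effect after a reboot
    vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esxi01
    reboot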
     
    #21
    Last edited: Dec 31, 2018
  2. Dawg10

    Dawg10 Associate

    Joined:
    Dec 24, 2016
    Messages:
    199
    Likes Received:
    91
    ESXi used to give an error message on startup if the scratch folder wasn't on persistent storage. Not sure if it still does, as I haven't seen it in a while (on 6.7 now, and on 6.5, 6.0 and 5.5 before that). I have my Dell hosts configured with ESXi on a stick and scratch set to the SSD datastore that replaces the DVD.
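
    If you want to check where scratch currently lives on a host, a quick way from the shell (a sketch, using the host's advanced options):

    Code:
    # show the active and the configured scratch locations
    vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation
    vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation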

    Startup error messages are annoying.
     
    #22
    BennyT likes this.
  3. dwright1542

    dwright1542 Active Member

    Joined:
    Dec 26, 2015
    Messages:
    355
    Likes Received:
    68
    I've actually moved back to using rust / SSDs and partitioning off a small boot drive. I've had enough issues with even quality USB flash drives that it's just not worth it. The hassle of replacing them far outweighs any benefits. Redundant SD cards are OK, but I'll never use USB again.
     
    #23
    BennyT likes this.
  4. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    11,520
    Likes Received:
    4,450
    #24
    BennyT and dwright1542 like this.
  5. AveryFreeman

    AveryFreeman ESXi + ( ILLUMOS / ZFS ) = HAPPY

    Joined:
    Mar 17, 2017
    Messages:
    147
    Likes Received:
    16
    I have had one die on me, but it was the same one I'd used from 5.5 to 6.5 - that's a lot of changes and updates. It was an 8GB SanDisk Cruzer Glide. I lost my network settings, but it wasn't really a big deal as they weren't complicated; I just made another one, imported my datastores and imported all the VMs.
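
    (If anyone ends up doing the same, the re-import part is basically a rescan plus re-registering each .vmx - roughly like this, with the path just an example:)

    Code:
    # pick up the existing VMFS volumes after a fresh install
    esxcli storage filesystem rescan
    # re-register a VM from its .vmx file (path is an example)
    vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx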

    I had never really thought about persistent storage before - I started putting 'swap' on one of my SSD datastores after I ran out of room when I was doing updates (plus, I'm sure it's important for performance, too).

    But now that I look at it closer, there's no scratch partition installed by default.

    I looked at one host that is using USB flash and one that is using an SSD, both installed from the ESXi 6.5 installer - the USB one in a VM, the SSD one bare-metal. They're both partitioned exactly the same (the SSD one is wasting a good bit of space).

    tl;dr

    Here's a quick # ls -lah from my /dev/disks:

    Code:
    [root@robotboy:/dev/disks] ls -lah
    total 3933440105
    drwxr-xr-x    2 root     root         512 Jan  3 20:53 .
    drwxr-xr-x   16 root     root         512 Jan  3 20:53 ..
    -rw-------    1 root     root      465.8G Jan  3 20:53 t10.ATA_____Samsung_SSD_860_EVO_500GB_______________S3Z1NB0K149260Y_____
    -rw-------    1 root     root      465.8G Jan  3 20:53 t10.ATA_____Samsung_SSD_860_EVO_500GB_______________S3Z1NB0K149260Y_____:1
    -rw-------    1 root     root      953.9G Jan  3 20:53 t10.NVMe____PC300_NVMe_SK_hynix_1TB_________________352C0080612EE4AC
    -rw-------    1 root     root      953.9G Jan  3 20:53 t10.NVMe____PC300_NVMe_SK_hynix_1TB_________________352C0080612EE4AC:1
    -rw-------    1 root     root      447.1G Jan  3 20:53 t10.NVMe____SAMSUNG_MZ1WV480HCGL2D000MV______________000220B520002538
    -rw-------    1 root     root      447.1G Jan  3 20:53 t10.NVMe____SAMSUNG_MZ1WV480HCGL2D000MV______________000220B520002538:1
    -rw-------    1 root     root       14.3G Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315
    -rw-------    1 root     root        4.0M Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:1
    -rw-------    1 root     root      250.0M Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:5
    -rw-------    1 root     root      250.0M Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:6
    -rw-------    1 root     root      110.0M Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:7
    -rw-------    1 root     root      286.0M Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:8
    -rw-------    1 root     root        2.5G Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:9
    lrwxrwxrwx    1 root     root          69 Jan  3 20:53 vml.0100000000303030325f323042355f323030305f323533380053414d53554e -> t10.NVMe____SAMSUNG_MZ1WV480HCGL2D000MV______________000220B520002538
    lrwxrwxrwx    1 root     root          71 Jan  3 20:53 vml.0100000000303030325f323042355f323030305f323533380053414d53554e:1 -> t10.NVMe____SAMSUNG_MZ1WV480HCGL2D000MV______________000220B520002538:1
    lrwxrwxrwx    1 root     root          68 Jan  3 20:53 vml.0100000000333532435f303038305f363132455f4534414300504333303020 -> t10.NVMe____PC300_NVMe_SK_hynix_1TB_________________352C0080612EE4AC
    lrwxrwxrwx    1 root     root          70 Jan  3 20:53 vml.0100000000333532435f303038305f363132455f4534414300504333303020:1 -> t10.NVMe____PC300_NVMe_SK_hynix_1TB_________________352C0080612EE4AC:1
    lrwxrwxrwx    1 root     root          56 Jan  3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315
    lrwxrwxrwx    1 root     root          58 Jan  3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:1 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:1
    lrwxrwxrwx    1 root     root          58 Jan  3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:5 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:5
    lrwxrwxrwx    1 root     root          58 Jan  3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:6 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:6
    lrwxrwxrwx    1 root     root          58 Jan  3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:7 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:7
    lrwxrwxrwx    1 root     root          58 Jan  3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:8 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:8
    lrwxrwxrwx    1 root     root          58 Jan  3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:9 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:9
    lrwxrwxrwx    1 root     root          72 Jan  3 20:53 vml.010000000053335a314e42304b31343932363059202020202053616d73756e -> t10.ATA_____Samsung_SSD_860_EVO_500GB_______________S3Z1NB0K149260Y_____
    lrwxrwxrwx    1 root     root          74 Jan  3 20:53 vml.010000000053335a314e42304b31343932363059202020202053616d73756e:1 -> t10.ATA_____Samsung_SSD_860_EVO_500GB_______________S3Z1NB0K149260Y_____:1
    
    It's kind of interesting - the devices themselves are listed first and are only rw by root (not executable at all), while the vml.UUID links listed below them are volume 'devices' that link to their respective physical devices and are 777 accessible (those are what you actually see in /vmfs/volumes/). If you don't look closely it kind of looks like the devices are listed twice.

    But that's kind of an aside - what's important here is this bit:

    Code:
    -rw-------    1 root     root       14.3G Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315
    -rw-------    1 root     root        4.0M Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:1
    -rw-------    1 root     root      250.0M Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:5
    -rw-------    1 root     root      250.0M Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:6
    -rw-------    1 root     root      110.0M Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:7
    -rw-------    1 root     root      286.0M Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:8
    -rw-------    1 root     root        2.5G Jan  3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:9
    
    These are the partitions on the USB drive (each one is denoted with a colon and a number, e.g. :7), and the first entry, without a :x suffix, is the whole device (the USB drive itself).

    Code:
    [root@robotboy:/dev/disks] partedUtil getptbl t10.SanDisk00Ultra_Fit000000000000004C530001100220103315
    gpt
    
    1869 255 63 30031872
    1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
    5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
    6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
    7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
    8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
    9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
    
    These should be present:
    • Partition 1: systemPartition -> Bootloader Partition (4MB) - yep
    • Partition 5: linuxNative -> /bootbank (250MB) - yep
    • Partition 6: linuxNative -> /altbootbank (250MB) - yep
    • Partition 7: vmkDiagnostic -> First Diagnostic Partition (110MB) - yep
    • Partition 8: linuxNative -> /store (286MB) - yep
    • Partition 9: vmkDiagnostic -> Second Diagnostic Partition (2.5GB) - yep
    • Partition 2: linuxNative -> /scratch (4GB) -- not there!
    So yeah, maybe the installer doesn't create a scratch partition if you install on a USB drive. Maybe it's just space constraints, or maybe it's because a lot of people use USB drives and constantly writing log files to them would kill them quickly.
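
    A quick way to see what a host is actually doing for scratch is to check where the /scratch symlink points (on a USB install without persistent scratch configured, it usually lands on a ramdisk under /tmp):

    Code:
    # /scratch is a symlink to wherever scratch actually lives
    ls -l /scratch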

    I noticed this, I have a .locker location on a VMFS volume:

    Code:
    [root@robotboy:/dev/disks] find .locker*
    find: .locker*: No such file or directory
    [root@robotboy:/dev/disks] find / -name .locker*
    /vmfs/volumes/5ad0d577-f1258bf6-6b49-003048b3b832/.locker
    
    [root@robotboy:/dev/disks] df -h /vmfs/volumes/5ad0d577-f1258bf6-6b49-003048b3b832/.locker
    Filesystem   Size   Used Available Use% Mounted on
    VMFS-6     447.0G 378.1G     68.9G  85% /vmfs/volumes/Samsung SM953 480GB
    
    That's not the only datastore on that host, so it must be created by default on the first datastore you make (that SSD was the first datastore I set up on that host).
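
    If you want to confirm which physical device a datastore (and its .locker) lives on, the VMFS extent list maps datastore names and UUIDs back to the t10.* device names from /dev/disks:

    Code:
    # map datastore names/UUIDs to their backing devices
    esxcli storage vmfs extent list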

    I have another host with a 24GB SLC SSD that I boot off of; that one doesn't have a /scratch either.

    Code:
    -rw-------    1 root     root       22.3G Jan  3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ
    -rw-------    1 root     root        4.0M Jan  3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:1
    -rw-------    1 root     root      250.0M Jan  3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:5
    -rw-------    1 root     root      250.0M Jan  3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:6
    -rw-------    1 root     root      110.0M Jan  3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:7
    -rw-------    1 root     root      286.0M Jan  3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:8
    -rw-------    1 root     root        2.5G Jan  3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:9
    
    
    But there appears to be LOTS of space left on that drive where I could make one. The locker, however, is on the VMFS volume:

    Code:
    [root@robotgirl:~] find / -name .locker*
    /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker
    
    [root@robotgirl:~] df -h /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/
    Filesystem   Size   Used Available Use% Mounted on
    VMFS-6     745.0G 659.2G     85.8G  88% /vmfs/volumes/Intel S3500 800GB
    
    [root@robotgirl:~] du -h /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/
    128.0K  /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/var/tmp
    128.0K  /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/var/core
    384.0K  /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/var
    128.0K  /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/downloads
    71.8M   /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/log
    128.0K  /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/core
    128.0K  /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/vmware/loadESX
    256.0K  /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/vmware
    128.0K  /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/packages
    72.9M   /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/
    
    
    So it doesn't look like a big deal.

    If you want to configure yours, you could look at this:

    VMware Knowledge Base

    Also there's this:

    VMware Knowledge Base

    To work around this limitation:

    1. Connect to the ESXi host via SSH. For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.x (1017910).
    2. Back up the existing boot.cfg file, located in /bootbank/, using this command:

    Code:
    cp /bootbank/boot.cfg /bootbank/boot.bkp
    3. Open the boot.cfg file using the vi editor. For more information, see Editing files on an ESX host using vi or nano (1020302).
    4. Modify the following line:

    Code:
    kernelopt=no-auto-partition
    to:

    Code:
    kernelopt=autoPartition=TRUE skipPartitioningSsds=TRUE autoPartitionCreateUSBCoreDumpPartition=TRUE
    5. Save and close the boot.cfg file.
    6. Restart the ESXi host.
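
    After the reboot, you can sanity-check that the new options stuck with something like:

    Code:
    grep kernelopt /bootbank/boot.cfg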
    
    
     
    #25
    BennyT likes this.
  6. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    693
    Likes Received:
    163
    IMO, it's getting to the point that smaller SSDs are so cheap, the only reason to look at thumb-drive booting is because you're out of SATA ports. SATA DOMs work as well but are 2-3x the cost of a smaller SATA SSD.
     
    #26
  7. BennyT

    BennyT Member

    Joined:
    Dec 1, 2018
    Messages:
    88
    Likes Received:
    24
    Patrick posted a link to an old 80GB Intel SSD someone was selling on eBay for $20 with free shipping. I bought one :) how could I go wrong? I'll try to revive that old SSD with a secure erase once it arrives.
     
    #27
    Last edited: Jan 18, 2019
  8. BennyT

    BennyT Member

    Joined:
    Dec 1, 2018
    Messages:
    88
    Likes Received:
    24
    I must say, that was a good purchase. I updated to the latest firmware, then used diskpart to clean the drive. Health shows "Good" condition and 100% life remaining. Not bad for $20. I think it will make for a fine ESXi boot drive that can also store persistent logs. Thanks Patrick!
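
    (The diskpart part is just a few commands - disk 1 below is an example number, so double-check the output of list disk before cleaning the wrong drive:)

    Code:
    C:\> diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> clean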

    -Benny

     
    #28
  9. leonroy

    leonroy Member

    Joined:
    Oct 6, 2015
    Messages:
    62
    Likes Received:
    7
    I’ve used USB sticks for years, but after a few failures (SanDisk 16GB drives) I’ve switched to small, cheap SSDs or SATA DOMs for servers in critical or remote locations.

    If you check Digi-Key for decent industrial-grade SD cards or USB sticks, you’ll see they cost upwards of $50 for a 16GB module. I don’t think consumer-grade USB sticks are a good idea in mission-critical servers.
     
    #29
    AveryFreeman likes this.
  10. AveryFreeman

    AveryFreeman ESXi + ( ILLUMOS / ZFS ) = HAPPY

    Joined:
    Mar 17, 2017
    Messages:
    147
    Likes Received:
    16
    I would agree with that. None of my stuff is mission-critical; unfortunately I'm no pro - just a homelabber trolling the interwebs (it's a series of tubes!).

    That being said, I don't think I've had to replace more than one USB key in four years of running up to three ESXi hosts off of 8-16GB SanDisk Cruzer Slider 2.0s or Ultra Fit 3.0 16GBs (I did upgrade at one point).

    The only USB key that ever died on me was the Cruzer Slider, and who knows how many OS installation images had been put on that flash drive before it was graced by the presence of ESXi.
     
    #30
    leonroy likes this.