I have had one die on me, but it was the same one I'd used from 5.5 to 6.5 - that's a lot of changes and updates. It was an 8GB SanDisk Cruzer Glide. I lost my network settings, but it wasn't really a big deal as they weren't complicated; I just made another one, imported my datastores, and imported all the VMs.
I had never really thought about persistent storage before - I started putting 'swap' on one of my SSD datastores after I ran out of room while doing updates (plus, I'm sure it's important for performance, too).
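If anyone wants to point system swap at a datastore from the shell, it's roughly this (the datastore name here is just a placeholder - check esxcli sched swap system set --help on your build):
Code:
# enable system swap on a specific datastore (datastore name is a placeholder)
esxcli sched swap system set --datastore-enabled true --datastore-name SSD-Datastore1
# check what's currently configured
esxcli sched swap system get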
But now that I look at it closer, there's no scratch partition installed by default.
I looked at one host that's using USB flash and one that's using an SSD, both installed from the ESXi 6.5 installer - the USB one in a VM, the SSD one bare-metal. They're both partitioned exactly the same (so the SSD one is wasting a good bit of space).
tl;dr
Here's a quick # ls -lah from my /dev/disks:
Code:
[root@robotboy:/dev/disks] ls -lah
total 3933440105
drwxr-xr-x 2 root root 512 Jan 3 20:53 .
drwxr-xr-x 16 root root 512 Jan 3 20:53 ..
-rw------- 1 root root 465.8G Jan 3 20:53 t10.ATA_____Samsung_SSD_860_EVO_500GB_______________S3Z1NB0K149260Y_____
-rw------- 1 root root 465.8G Jan 3 20:53 t10.ATA_____Samsung_SSD_860_EVO_500GB_______________S3Z1NB0K149260Y_____:1
-rw------- 1 root root 953.9G Jan 3 20:53 t10.NVMe____PC300_NVMe_SK_hynix_1TB_________________352C0080612EE4AC
-rw------- 1 root root 953.9G Jan 3 20:53 t10.NVMe____PC300_NVMe_SK_hynix_1TB_________________352C0080612EE4AC:1
-rw------- 1 root root 447.1G Jan 3 20:53 t10.NVMe____SAMSUNG_MZ1WV480HCGL2D000MV______________000220B520002538
-rw------- 1 root root 447.1G Jan 3 20:53 t10.NVMe____SAMSUNG_MZ1WV480HCGL2D000MV______________000220B520002538:1
-rw------- 1 root root 14.3G Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315
-rw------- 1 root root 4.0M Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:1
-rw------- 1 root root 250.0M Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:5
-rw------- 1 root root 250.0M Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:6
-rw------- 1 root root 110.0M Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:7
-rw------- 1 root root 286.0M Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:8
-rw------- 1 root root 2.5G Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:9
lrwxrwxrwx 1 root root 69 Jan 3 20:53 vml.0100000000303030325f323042355f323030305f323533380053414d53554e -> t10.NVMe____SAMSUNG_MZ1WV480HCGL2D000MV______________000220B520002538
lrwxrwxrwx 1 root root 71 Jan 3 20:53 vml.0100000000303030325f323042355f323030305f323533380053414d53554e:1 -> t10.NVMe____SAMSUNG_MZ1WV480HCGL2D000MV______________000220B520002538:1
lrwxrwxrwx 1 root root 68 Jan 3 20:53 vml.0100000000333532435f303038305f363132455f4534414300504333303020 -> t10.NVMe____PC300_NVMe_SK_hynix_1TB_________________352C0080612EE4AC
lrwxrwxrwx 1 root root 70 Jan 3 20:53 vml.0100000000333532435f303038305f363132455f4534414300504333303020:1 -> t10.NVMe____PC300_NVMe_SK_hynix_1TB_________________352C0080612EE4AC:1
lrwxrwxrwx 1 root root 56 Jan 3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315
lrwxrwxrwx 1 root root 58 Jan 3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:1 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:1
lrwxrwxrwx 1 root root 58 Jan 3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:5 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:5
lrwxrwxrwx 1 root root 58 Jan 3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:6 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:6
lrwxrwxrwx 1 root root 58 Jan 3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:7 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:7
lrwxrwxrwx 1 root root 58 Jan 3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:8 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:8
lrwxrwxrwx 1 root root 58 Jan 3 20:53 vml.01000000003443353330303031313030323230313033333135556c74726120:9 -> t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:9
lrwxrwxrwx 1 root root 72 Jan 3 20:53 vml.010000000053335a314e42304b31343932363059202020202053616d73756e -> t10.ATA_____Samsung_SSD_860_EVO_500GB_______________S3Z1NB0K149260Y_____
lrwxrwxrwx 1 root root 74 Jan 3 20:53 vml.010000000053335a314e42304b31343932363059202020202053616d73756e:1 -> t10.ATA_____Samsung_SSD_860_EVO_500GB_______________S3Z1NB0K149260Y_____:1
It's kind of interesting - the devices themselves are listed first, readable and writable only by root and not executable at all. The vml.UUID links below them are volume 'devices' that point to their respective physical devices and are 777-accessible (those are what you actually see in /vmfs/volumes/). If you don't look closely, it kind of looks like the devices are listed twice.
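If you'd rather not eyeball the symlinks, I believe esxcfg-scsidevs can dump the same device-to-vml mapping directly:
Code:
# print device name -> vml UID mappings (same info as the symlinks above)
esxcfg-scsidevs -u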
But that's kind of an aside - what's important here is this bit:
Code:
-rw------- 1 root root 14.3G Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315
-rw------- 1 root root 4.0M Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:1
-rw------- 1 root root 250.0M Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:5
-rw------- 1 root root 250.0M Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:6
-rw------- 1 root root 110.0M Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:7
-rw------- 1 root root 286.0M Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:8
-rw------- 1 root root 2.5G Jan 3 20:53 t10.SanDisk00Ultra_Fit000000000000004C530001100220103315:9
These are the partitions on the USB drive - each one is denoted with a colon and a number (e.g. :7), and the first entry, the one without a :x suffix, is the whole device (the USB drive itself).
Code:
[root@robotboy:/dev/disks] partedUtil getptbl t10.SanDisk00Ultra_Fit000000000000004C530001100220103315
gpt
1869 255 63 30031872
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
These should be present:
- Partition 1: systemPartition -> Bootloader Partition (4MB) - yep
- Partition 5: linuxNative -> /bootbank (250MB) - yep
- Partition 6: linuxNative -> /altbootbank (250MB) - yep
- Partition 7: vmkDiagnostic -> First Diagnostic Partition (110MB) - yep
- Partition 8: linuxNative -> /store (286MB) - yep
- Partition 9: vmkDiagnostic -> Second Diagnostic Partition (2.5GB) - yep
- Partition 2: linuxNative -> /scratch (4GB) -- not there!
So yeah, apparently it doesn't make a scratch partition if you install on a USB drive. Maybe it's just space constraints, or maybe it's because a lot of people boot from USB drives and constantly writing log files to them would kill them quickly.
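A quick way to see where /scratch actually landed on a host - the symlink itself, and the advanced option behind it:
Code:
# /scratch is a symlink to wherever the host put it
ls -l /scratch
# the currently active scratch location
vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation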
I noticed this: I have a .locker location on a VMFS volume:
Code:
[root@robotboy:/dev/disks] find .locker*
find: .locker*: No such file or directory
[root@robotboy:/dev/disks] find / -name .locker*
/vmfs/volumes/5ad0d577-f1258bf6-6b49-003048b3b832/.locker
[root@robotboy:/dev/disks] df -h /vmfs/volumes/5ad0d577-f1258bf6-6b49-003048b3b832/.locker
Filesystem Size Used Available Use% Mounted on
VMFS-6 447.0G 378.1G 68.9G 85% /vmfs/volumes/Samsung SM953 480GB
That's not the only datastore on that host, so it must get created by default on the first one you make (that SSD was the first datastore I set up on that host).
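You can also pull the configured value (as opposed to the currently active one) out of the advanced settings:
Code:
# show where scratch is configured to live
esxcli system settings advanced list -o /ScratchConfig/ConfiguredScratchLocation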
I have another host with a 24GB SLC SSD that I boot off of; that one doesn't have a /scratch either.
Code:
-rw------- 1 root root 22.3G Jan 3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ
-rw------- 1 root root 4.0M Jan 3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:1
-rw------- 1 root root 250.0M Jan 3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:5
-rw------- 1 root root 250.0M Jan 3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:6
-rw------- 1 root root 110.0M Jan 3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:7
-rw------- 1 root root 286.0M Jan 3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:8
-rw------- 1 root root 2.5G Jan 3 21:22 t10.ATA_____H2FW_RAID1_______________________________3NK83QZLWJ1E6WY2FUUZ:9
But there appears to be LOTS of storage left on that drive where I could have made one. The locker, however, is on the VMFS volume:
Code:
[root@robotgirl:~] find / -name .locker*
/vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker
[root@robotgirl:~] df -h /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/
Filesystem Size Used Available Use% Mounted on
VMFS-6 745.0G 659.2G 85.8G 88% /vmfs/volumes/Intel S3500 800GB
[root@robotgirl:~] du -h /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/
128.0K /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/var/tmp
128.0K /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/var/core
384.0K /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/var
128.0K /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/downloads
71.8M /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/log
128.0K /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/core
128.0K /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/vmware/loadESX
256.0K /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/vmware
128.0K /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/packages
72.9M /vmfs/volumes/5aea3a14-7bec6dec-0de5-0cc47a733796/.locker/
So it doesn't look like a big deal.
If you want to configure yours, you could look at this:
VMware Knowledge Base
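If I remember that KB right, the gist is: make a directory on a persistent datastore, point ScratchConfig.ConfiguredScratchLocation at it, and reboot. Something like this (the datastore path and directory name are placeholders):
Code:
# create a scratch directory on a persistent datastore (path is a placeholder)
mkdir /vmfs/volumes/MyDatastore/.locker-hostname
# point the host at it - takes effect after a reboot
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/MyDatastore/.locker-hostname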
Also there's this:
VMware Knowledge Base
Limitation when installing on USB flash drive or SD flash card:
Due to the I/O sensitivity of USB and SD devices the installer does not create a scratch partition on these devices. When installing on USB or SD devices, the installer attempts to allocate a scratch region on an available local disk or datastore. If no local disk or datastore is found, /scratch is placed on the ramdisk. After the installation, you should reconfigure /scratch to use a persistent datastore. For more information, see Creating a persistent scratch location for ESXi 4.x and 5.x (1033696). VMware recommends using a retail-purchased USB flash drive of 16 GB or larger so that the "extra" flash cells can prolong the life of the boot media, but high-quality parts of 4 GB or larger are sufficient to hold the extended coredump partition.
To work around this limitation:
1. Connect to the ESXi host via SSH. For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.x (1017910).
2. Back up the existing boot.cfg file, located in /bootbank/, using this command:
Code:
cp /bootbank/boot.cfg /bootbank/boot.bkp
3. Open the boot.cfg file using the vi editor. For more information, see Editing files on an ESX host using vi or nano (1020302).
4. Modify the following line:
Code:
kernelopt=no-auto-partition
to
Code:
kernelopt=autoPartition=TRUE skipPartitioningSsds=TRUE autoPartitionCreateUSBCoreDumpPartition=TRUE
5. Save and close the boot.cfg file.
6. Restart the ESXi host.
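After the reboot, you can sanity-check that the coredump partition actually got created - run partedUtil getptbl on the device again, or:
Code:
# list coredump partitions and which one is active
esxcli system coredump partition list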