Hey prt727, I have been running ESXi for a bit (at work and at home), so I'll try to help you out as best I can.
What is the recommended storage configuration to host ESX?
That really depends on your requirements & budget; there is no one answer that fits all.
Some examples I can give are:
1 Host for All
(ESXi OS boots from a thumb drive | ESXi VMs use internal HDD/SSD storage)
Requires at least one thumb drive (or dedicated SSD, HDD, or LUN) for the hypervisor OS and one hard drive or SSD (though more drives are definitely preferred) for VM storage.
1 Host + 1 Storage Back-End Server (can be a physical storage server or a NAS/appliance)
(ESXi OS boots from a thumb drive (or dedicated SSD, HDD, or LUN) | ESXi VMs are stored on separate storage that is connected to the host using iSCSI, NFS, Fibre Channel, or InfiniBand)
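As a rough sketch of the second layout, mounting an NFS export from the storage back end onto the host is just a couple of `esxcli` commands (the server IP, export path, and datastore name below are placeholders, not from my setup):

```shell
# On the ESXi host (enable SSH or the ESXi Shell first):

# Sanity check: list the datastores the host already sees
esxcli storage filesystem list

# Mount an NFS export from the storage back end as a new datastore
# -H = NFS server IP, -s = exported share, -v = datastore name (all placeholders)
esxcli storage nfs add -H 192.168.1.50 -s /export/vmstore -v nfs-vmstore

# Verify the new datastore is mounted
esxcli storage nfs list
```

Same idea for iSCSI or FC, just with the matching `esxcli` namespace and a rescan instead of a mount.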
Most people use a variation of these methods.
Do I use a single SSD drive and install ESX on the SSD, possibly wasting the unused space?
This is also a personal preference (and dictated by your requirements), but most people I know (myself included) use a thumb drive, as Patrick mentioned, for the hypervisor OS.
At work we use dual 32 GB enterprise SSDs in RAID 1 for maximum reliability. At home I use one 16 GB thumb drive for ease of use and low cost. Both have worked equally well thus far.
If you do go the SSD route, then depending on its size you can install ESXi on the SSD and use the remaining free space for the VMs (it depends on how you partition the SSD). I personally do not like that idea because if you lose the disk you lose everything, but it is possible and somewhat mitigated with backups.
Do I create a just large enough LUN on RAID for ESX and boot from the LUN?
Again, that is personal choice; some work environments boot from LUNs, some do not. The NIC or FC card will need to support booting from a LUN so that the LUN is available to ESXi before it loads.
I still recommend a thumb drive for home use.
What type of RAID, RAID1 ok, any need for higher performance or resilience?
Lol, boy, this one can open up a can of worms on this forum; many people have many different opinions on this.
I'll just comment on what I use and you can do some research on the rest.
I currently have 24 TB of HDD and 670 GB of SSD for daily VM use.
The 24 TB comprises 12 disks in a RAID 50 with 3 spans of 4 disks each.
This gives me decent performance with the ability to lose up to 3 drives, as long as they are in separate spans. Performance-wise, I can hit about 1 GB/s on the storage server itself, with slightly lower performance to the VM once you account for the overhead of the protocol (iSCSI in my case) and other things.
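To make the RAID 50 math concrete, here's a small sketch (just an illustration, not anything from my setup; the 2 TB disk size is an assumption) that works out the usable capacity and drive-loss tolerance of a stripe across RAID 5 spans:

```python
def raid50_layout(disks_per_span: int, spans: int, disk_tb: float):
    """Usable capacity and fault tolerance of a RAID 50 array
    (a RAID 0 stripe across several RAID 5 spans)."""
    # Each RAID 5 span gives up one disk's worth of capacity to parity.
    usable_tb = spans * (disks_per_span - 1) * disk_tb
    # Best case: the array survives one failed disk per span.
    best_case_losses = spans
    # Worst case: two failures in the SAME span kill the whole array,
    # so only a single loss is guaranteed safe.
    guaranteed_losses = 1
    return usable_tb, best_case_losses, guaranteed_losses

# 3 spans of 4 disks each, assuming 2 TB drives (illustrative only)
usable, best, guaranteed = raid50_layout(disks_per_span=4, spans=3, disk_tb=2.0)
print(usable, best, guaranteed)  # 18.0 TB usable, up to 3 losses, 1 guaranteed
```

That's why the "3 drives as long as they are in separate spans" caveat matters: the tolerance is per span, not per array.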
How do I recover VM's in case of ESX drive failure?
Well, you have a few options here, but I use backup software and back up my VMs daily to a Synology NAS appliance. Losing the host itself isn't too bad, as it's fairly quick to reload (less than 15 minutes) and then import the VMs back in if they were on a separate array from the one that was lost; if not, restore from backup.
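For reference, importing surviving VMs back into a reloaded host is just a matter of registering their .vmx files again; the datastore and VM names below are placeholders:

```shell
# On the freshly reloaded ESXi host (paths are placeholders):

# Find the config files of VMs still sitting on the surviving datastore
find /vmfs/volumes/ -name "*.vmx"

# Register a VM back into the host's inventory
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx

# Confirm it shows up
vim-cmd vmsvc/getallvms
```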
I use Veeam Backup & Replication with an NFR (Not for Resale) license.
Get Your NFR License
- What is the recommended way to manage RAID controllers on the host?
I use Adaptec 7-series controllers. I had to install an updated driver to get ESXi 5.5 to see the array, I still have to install maxView in ESXi, and I need to run maxView for ESXi on some other host.
Do I make a VM just for the maxView ESX mgmt software?
Is there no easier way to manage direct-attached RAID via, say, vCenter?
Sorry, I don't have experience with Adaptec cards under VMware (though I've used them plenty with other OSes).
LSI cards can generally be managed by installing a VIB on the host.
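Installing a management VIB is usually a single `esxcli` command; the filename here is a placeholder, grab the actual provider package from your card vendor:

```shell
# Enter maintenance mode first if the vendor's release notes ask for it
esxcli system maintenanceMode set --enable true

# Install the vendor-supplied management VIB (filename is a placeholder)
esxcli software vib install -v /tmp/vmware-esx-provider-lsiprovider.vib

# Some VIBs need a host reboot to take effect, then:
esxcli system maintenanceMode set --enable false
```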
Also, if you are using a separate storage server, then this can be easily mitigated (for example, a Windows Storage Server that connects back to the ESXi host using iSCSI).
Then you would not have the headache of trying to manage the array inside of VMware.
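Pointing the ESXi host at such a storage server over iSCSI looks roughly like this (the target IP and adapter name are placeholders; your vmhba number will differ):

```shell
# Enable the software iSCSI initiator on the host
esxcli iscsi software set --enabled=true

# Point the initiator at the storage server's iSCSI target
# (adapter name and target IP are placeholders)
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.50:3260

# Rescan so the new LUNs show up as devices
esxcli storage core adapter rescan --adapter vmhba33
```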
Kind of long, but I hope it helps.