ESXi boot drive on SuperMicro X10SDV-F?


JimPhreak

Active Member
Oct 10, 2013
Since this board doesn't come with an onboard USB port, I'm just wondering how others are configuring their ESXi (or other bare-metal hypervisor) boot drive. It seems my only option is to go with an mSATA drive, but that seems like a waste since ESXi takes up so little space.
I would have gone with a smaller-capacity SATA DOM, but then I wouldn't be able to populate all 4 hot-swap drive bays in addition to the RAID1 array (2 x SSDs) I'll be using for my local VM datastore.
 

JimPhreak

Active Member
Oct 10, 2013
I would use something along these lines, a small-capacity mSATA.

Edit:
SanDisk 16 GB mSATA Mini PCI E SSD Laptop Solid State Drive SDSA6FM 016G 1004 | eBay
Perfect! Thanks.

I'm going to pick one of those up, but I'm also considering a larger one, like 60GB, since I may still make the transition from ESXi to Hyper-V, in which case I'd need to do a full Windows Server install on the mSATA drive.

EDIT: I just realized the slot on the board is not mSATA; it's M.2 (PCIe 3.0).
 
  • Like Reactions: neo

strh

New Member
Mar 2, 2015
Morning Jim,

I connect up the USB headers on the motherboard, giving me 4 USB 2.0 and 2 USB 3.0 ports total, and then use a USB thumb drive. I have also been considering something like this: Amazon.com: StarTech.com 2 Port USB Motherboard Header Adapter (USBMBADAPT2): Computers & Accessories, but I haven't pulled the trigger yet.

Regarding RAID1 and those SSDs, I could be wrong, but I'd check whether that board's controller can actually do RAID1 in a way that ESXi sees a single RAID volume.

Cheers
Simon
  • Like Reactions: JimPhreak

JimPhreak

Active Member
Oct 10, 2013
Morning Jim,

I connect up the USB headers on the motherboard, giving me 4 USB 2.0 and 2 USB 3.0 ports total, and then use a USB thumb drive. I have also been considering something like this: Amazon.com: StarTech.com 2 Port USB Motherboard Header Adapter (USBMBADAPT2): Computers & Accessories, but I haven't pulled the trigger yet.

Regarding RAID1 and those SSDs, I could be wrong, but I'd check whether that board's controller can actually do RAID1 in a way that ESXi sees a single RAID volume.

Cheers
Simon
Thanks for chiming in Simon,

I didn't want to use any of my external USB ports for my ESXi boot drive, but I do like the idea of that USB header adapter you just posted. I didn't even know those existed; that looks like a great option.

I will have to look into the RAID1 compatibility like you said. I kind of just assumed it wouldn't be an issue with a board this high-end, but I shouldn't be assuming anything.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
They have some USB header adapters that split the USB too, so you might be able to run 2 in RAID1 that way.
 

JimPhreak

Active Member
Oct 10, 2013
They have some USB header adapters that split the USB too, so you might be able to run 2 in RAID1 that way.
I'm not looking to RAID my ESXi boot drive. I've got two 2.5" SSDs that I want to run in RAID1 for my local VM datastore.
 

JimPhreak

Active Member
Oct 10, 2013
Nice find. Thanks for the link.

Now my bigger concern is my SSD datastore. I don't know why I thought I could create a RAID1 array that ESXi would recognize without an add-on RAID card. I really wanted to avoid populating the only PCIe expansion slot with a RAID card; I wanted to keep that open for future expansion.

Maybe I need to re-think whether I even need to RAID my VM datastore. I already back up all my VMs nightly; I just wanted the redundancy so there would be no downtime if my datastore drive died.

Even though I use my home server for a lot of testing, I also consider my media server "production" in a sense, because there is rarely a time when a file is not being streamed from it, locally or remotely.
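
FWIW, a rough PowerCLI check (server and host names below are placeholders, not my actual setup) will show what ESXi actually makes of the two SSDs; if the onboard controller's RAID1 isn't supported, they show up as two separate disks instead of one mirrored volume:

# Rough sketch only -- "vcenter.lan" / "esxi01.lan" are placeholder names.
Connect-VIServer -Server vcenter.lan

# List the local disks the host actually sees.
Get-VMHost -Name esxi01.lan |
    Get-ScsiLun -LunType disk |
    Select-Object CanonicalName, Vendor, Model, CapacityGB
# Two same-size SSD entries here means ESXi sees bare drives, not a RAID1 volume.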
 

whitey

Moderator
Jun 30, 2014
Since this board doesn't come with an onboard USB port, I'm just wondering how others are configuring their ESXi (or other bare-metal hypervisor) boot drive. It seems my only option is to go with an mSATA drive, but that seems like a waste since ESXi takes up so little space.
I would have gone with a smaller-capacity SATA DOM, but then I wouldn't be able to populate all 4 hot-swap drive bays in addition to the RAID1 array (2 x SSDs) I'll be using for my local VM datastore.
My vote: use vSphere Auto Deploy and skip the boot disk altogether; just load the hypervisor image into host memory (stateless) and save yourself a USB slot and the cost of a disk.
 

JimPhreak

Active Member
Oct 10, 2013
My vote: use vSphere Auto Deploy and skip the boot disk altogether; just load the hypervisor image into host memory (stateless) and save yourself a USB slot and the cost of a disk.
Hmmm, never explored Auto Deploy. How hard is it to set up, and what's the recommended topology for it? My home network basically only consists of two nodes (vSphere server and unRAID server). Not sure I have the setup to run Auto Deploy efficiently without trading a reliable local boot drive for a less reliable network dependency.
 

whitey

Moderator
Jun 30, 2014
I could have SWORN I saw a vSphere Auto Deploy how-to article posted here sometime, but maybe I am imagining things. It essentially consists of:

Deploy vCenter, enable Auto Deploy, configure services (DHCP/TFTP), install vSphere PowerCLI, build ESXi boot images with the ImageBuilder cmdlets, set up Auto Deploy rule sets for first boot, cluster join, and host profiles, configure host profiles in the legacy vSphere client or web client (optional but extremely useful), boot the host, done.

If you only have a single node/hypervisor and a dedicated SAN box this may not be too beneficial; for a 3-node cluster or more it's essential in my book.

If a guide is needed I can whip one up, but there are great docs out there in the internet ether already covering vSphere Auto Deploy.
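
For the rule setup piece, here's a rough PowerCLI sketch (depot path, image profile, cluster, and host profile names are all placeholders; adjust to your environment):

# Rough sketch only -- assumes vCenter with the Auto Deploy service enabled
# and a PowerCLI session already connected (Connect-VIServer).
Add-EsxSoftwareDepot "C:\depots\ESXi-offline-bundle.zip"  # placeholder depot path
Get-EsxImageProfile | Select-Object Name                  # pick an image profile to deploy

# One rule that hands every PXE-booting host the image, joins the cluster,
# and attaches a host profile (all three names are placeholders).
New-DeployRule -Name "FirstBoot" `
    -Item "ESXi-6.0.0-standard", "Cluster01", "HostProfile01" `
    -AllHosts
Add-DeployRule -DeployRule "FirstBoot"                    # activate the rule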
 
  • Like Reactions: T_Minus

JimPhreak

Active Member
Oct 10, 2013
I could have SWORN I saw a vSphere Auto Deploy how-to article posted here sometime, but maybe I am imagining things. It essentially consists of:

Deploy vCenter, enable Auto Deploy, configure services (DHCP/TFTP), install vSphere PowerCLI, build ESXi boot images with the ImageBuilder cmdlets, set up Auto Deploy rule sets for first boot, cluster join, and host profiles, configure host profiles in the legacy vSphere client or web client (optional but extremely useful), boot the host, done.

If you only have a single node/hypervisor and a dedicated SAN box this may not be too beneficial; for a 3-node cluster or more it's essential in my book.

If a guide is needed I can whip one up, but there are great docs out there in the internet ether already covering vSphere Auto Deploy.
Thanks for the information. I'm even considering moving from a 2-node setup to a single node (all-in-one VM server + storage), so Auto Deploy wouldn't even be possible in that scenario. I'll have to see what I wind up doing and go from there.
 
  • Like Reactions: whitey

whitey

Moderator
Jun 30, 2014
I run a 3-node AIO build (ESXi, LSI HBA passed through to OmniOS as the distro for ZFS, NFS pools served back to the cluster)...works like a dream with a TON of flexibility. You can still have it all on a single AIO setup if you go nested hypervisors...just depends how big ya wanna build your single host. Hell, a D-1540 system would stack up quite nice vs. my maxed-out E3 systems, but you would certainly have to take the nested (inception, muuuhahahaha) route, and things always get a bit more complicated and less fault tolerant IMO.
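
If anyone wants the gist of the AIO wiring in PowerCLI terms, here's a rough sketch (host, VM, datastore, and export names are placeholders, and the HBA match is illustrative):

# Rough sketch only -- pass the LSI HBA through to the storage VM, then
# mount its NFS export back as a datastore. All names are placeholders.
$esx = Get-VMHost -Name "esxi01.lan"

# Find the LSI HBA among the host's PCI passthrough-capable devices.
$hba = Get-PassthroughDevice -VMHost $esx -Type Pci |
    Where-Object { $_.Name -match "LSI" }

# Attach it to the storage VM (VM must be powered off and needs a full
# memory reservation for PCI passthrough).
Add-PassthroughDevice -VM (Get-VM "omnios-san") -PassthroughDevice $hba

# Once OmniOS exports the ZFS pool over NFS, serve it back to the host:
New-Datastore -VMHost $esx -Nfs -Name "zfs-nfs01" `
    -NfsHost "192.168.1.50" -Path "/tank/vmstore"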
 
  • Like Reactions: JimPhreak

JimPhreak

Active Member
Oct 10, 2013
I run a 3-node AIO build (ESXi, LSI HBA passed through to OmniOS as the distro for ZFS, NFS pools served back to the cluster)...works like a dream with a TON of flexibility. You can still have it all on a single AIO setup if you go nested hypervisors...just depends how big ya wanna build your single host. Hell, a D-1540 system would stack up quite nice vs. my maxed-out E3 systems, but you would certainly have to take the nested (inception, muuuhahahaha) route, and things always get a bit more complicated and less fault tolerant IMO.
I'm still debating whether or not I should combine my storage into this server. I use unRAID for my storage (15TB of media) and I can virtualize it, but then I'd have to install a storage controller to pass through to my unRAID VM. I'm hesitant to do that with this build since that would take away my only PCIe slot; I was hoping to leave that free for future expansion.
 

whitey

Moderator
Jun 30, 2014
Yessir, that's the one thing missing that jumps out at me on the D-1540 mobos out so far. Call me a dedicated HBA freak for my SAN builds, but I feel like that platform should have also included an onboard LSI HBA option...preferably in a mini-SAS/mini-SAS HD variant so it can be flashed straight to IT mode and plugged into a backplane/expander. Bums me out that that last slot currently IS for an HBA pretty much only, in my world anyway...maybe a Fusion-io PCIe card, but poppa gotta have a storage controller...those onboard ports don't cut the mustard for me...I like disks/IOPS...and lots of them!

I am also still holding out for D-1540 mobo SFP+ variants since I am already invested in the 10G switch space in that form factor.
 

JimPhreak

Active Member
Oct 10, 2013
Yessir, that's the one thing missing that jumps out at me on the D-1540 mobos out so far. Call me a dedicated HBA freak for my SAN builds, but I feel like that platform should have also included an onboard LSI HBA option...preferably in a mini-SAS/mini-SAS HD variant so it can be flashed straight to IT mode and plugged into a backplane/expander. Bums me out that that last slot currently IS for an HBA pretty much only, in my world anyway...maybe a Fusion-io PCIe card, but poppa gotta have a storage controller...those onboard ports don't cut the mustard for me...I like disks/IOPS...and lots of them!

I am also still holding out for D-1540 mobo SFP+ variants since I am already invested in the 10G switch space in that form factor.
Yea, I mean I'll probably wind up just throwing in the M1015 I already have. It just feels like I'm wasting all the onboard SATA ports, since the system I'm using (SM SYS-5028D-TN4T) can only support 6 drives + one M.2 drive. But I can't complain about the case not fitting my drives, because I specifically chose this system to save space.
 
  • Like Reactions: T_Minus