Workstation (ESXi lab) to full Type 1 ESXi 6.7u3 storage issues


Gunnyp

New Member
Oct 23, 2019
Built a server/workstation for a business start-up back in August. Set up a virtual lab using VMware Workstation 15 and deployed an ESXi environment for practice. Everything was working, and it appeared ready to end the lab and go bare metal. Boy howdy, nothing works the same!

The server consists of:
SuperMicro X11DPG-QT
Dual Xeon Platinum 8260L
2TB Hynix 2933MHz RAM
VM Storage: 1TB Samsung 970 Pro NVMe
2x 128GB SATADOM RAID1 - ESXi Home
LSI 9361-8i w/ 2x Adaptec 12Gbps Expanders for all SAS 12Gbps Enterprise storage:
VM Storage: 2TB (5x 400GB) HGST SSD RAID0
Cold storage: 1TB (4x 300GB HGST SSD) CacheCade Pro 2.0
100TB (10x 10TB HGST) RAID10 - 2 Hot Spares
Offsite Backup: 32TB (4x 8TB HGST) in rotation

First, the ESXi installer ignored the Intel software RAID1 on the SATADOMs, which had appeared to work splendidly under Workstation. ESXi installed on only one of the SATADOMs and warns that the installation is in need of attention. I can force-boot from the SATADOM but cannot select it as a viable boot option. About 20 re-installs have failed to produce a working, boot-selectable SATADOM installation.

Ah, I see someone else has had this same issue. Could swear the whole idea of SATADOM RAID1 for ESXi came from an STH article. Is it just this motherboard? Well, I will remove the software RAID1 and try again.

Secondly, and likely independently, no new datastores are available beyond the capacity of the single SATADOM on which ESXi is installed. I had expected that the 1TB NVMe would at least be available, and that perhaps after installing vCenter, or maybe after upgrading the LSI drivers, the RAID0 SSDs would become available too.

My understanding is that the first VM (Server 2016 with AD, DHCP, DNS) ought to be placed on a non-SATADOM datastore along with vCenter. So at present I am stopped in my tracks, with no datastore on which to install Server 2016 and vCenter.

Are these issues due to a corrupt ESXi install, or is there some way to create datastores before even installing ESXi that I've missed by relying on the Workstation lab environment?

Or is it that vCenter has the more robust datastore feature set, and one ought to move forward with the SATADOM installation and then move the Server and vCenter VMs to safer datastores once those are created?

Thanks in advance!
 

Peanuthead

Active Member
Jun 12, 2015
In short, SATADOM RAID is not going to happen. It worked before because Windows was controlling it; now you are asking ESXi (or the motherboard) to control it directly, and they won't. As for the other datastores not being available, it sounds like the drivers for the card are not loaded in ESXi; you may have to load them. I didn't look at VMware's HCL for your card. Where you decide to move VMs is really up to you, based on the load of the VM and the load on the datastore. Lastly, I don't fully understand the "vCenter has the more robust datastore feature set" statement. What are you trying to state or ask?
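
A quick way to check from the ESXi shell whether the MegaRAID driver is even installed and loaded would be something like this (a rough sketch; lsi_mr3 is the module the 9361-8i normally uses, adjust names to what you actually have):

  # List any installed LSI driver VIBs
  esxcli software vib list | grep -i lsi
  # Check whether the lsi_mr3 kernel module is actually loaded
  esxcli system module list | grep -i lsi_mr3
  # List the HBAs ESXi sees and the driver bound to each
  esxcli storage core adapter list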
 

Gunnyp

New Member
Oct 23, 2019
Thanks for the response, Peanuthead.

As far as the VMware HCL for the LSI 9361-8i goes, it has been an issue for the past 2 years. I had held off on this build until the serious driver situation appeared sorted, as our storage is heavily dependent on LSI RAID. I intended to use the vSphere Update Manager (VUM) plugin for vCenter to install the LSI driver, which seems to have addressed the past couple of years' complaints about abandonment of LSI support in ESXi.

This notion of vCenter being more robust may be wrong, as the same can, and probably ought to, be done with esxcli commands, huh? Well, I am using Mastering VMware vSphere 6.7 as my primary guidance, and it emphasizes the vSphere functionality perhaps a bit much.
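
For my own notes, the shell route to carve a datastore out of a disk the host can see looks roughly like this; the device name, end sector, and label are placeholders I have not actually run yet:

  # Find the raw device and its usable sector range
  ls /vmfs/devices/disks/
  partedUtil getUsableSectors /vmfs/devices/disks/naa.XXXX
  # Write a GPT with one VMFS partition (the GUID is the standard VMFS type)
  partedUtil setptbl /vmfs/devices/disks/naa.XXXX gpt \
    "1 2048 LAST_USABLE_SECTOR AA31E02A400F11DB9590000C2911D1B8 0"
  # Format the new partition as VMFS6 with a label
  vmkfstools -C vmfs6 -S nvme-ds /vmfs/devices/disks/naa.XXXX:1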

Still, I did expect that the NVMe drive on the X11 motherboard would certainly be visible immediately for new VM installs once ESXi was installed. I'm not aware of this motherboard's native NVMe needing an ESXi driver, but I will check whether one is available. (I had intended to use NVMe RAID to supplement this motherboard's single NVMe slot; however, the SuperMicro VROC implementation of NVMe RAID is so bizarre and proprietary that I had to stick with the LSI RAID SSD solution.)

The NVMe drive was among the choices available for the ESXi install location. Hello, IIRC so were the LSI RAID drives, as an lsi-mr3 driver was definitely installed from the ESXi ISO. In fact, I'm fairly certain it is the latest lsi-mr3 version, dated 3/18/2019, whereas the ESXi 6.7 U3 ISO I installed from first appeared in recent weeks.

At this point, I am removing the RAID1 from the SATADOMs. Hopefully this will at least result in a bootable installation. If so, I will sally forth with installation of the Server 2016 DC and vCenter on the SATADOM for now, and move those VMs once I get the datastore issue sorted by loading whatever additional drivers turn up. If not, perhaps I will install ESXi directly to the 1TB NVMe and re-evaluate the terrain from that vantage point.
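
If it comes to installing an updated lsi-mr3 driver by hand instead of through VUM, my understanding is it would be roughly this (the bundle path is hypothetical):

  # Copy the driver offline bundle to the host, then install it
  esxcli software vib install -d /tmp/lsi-mr3-offline-bundle.zip
  # The new module is only used after a reboot
  reboot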
 

Gunnyp

New Member
Oct 23, 2019
Changing the SATADOM controller mode from RAID to AHCI and re-installing did remove the corrupt-install notification. Now the ESXi install on the SATADOM can be selected for booting.

However, only the single SATADOM datastore is available. The remaining drives are all listed under Storage > Adapters > Configure iSCSI with Status Unknown. Each drive shows as a vmhba adapter with the appropriate driver. As LSI and SuperMicro both direct one to the VMware-offered drivers, I'm fairly confident these are the correct drivers. At present it is unclear how to change their status to Known or Active to make them accessible.
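
In case anyone following along wants the shell equivalent, I believe forcing the host to take a fresh look at the adapters is something like:

  # Rescan every HBA for new devices and VMFS volumes
  esxcli storage core adapter rescan --all
  # Then list what the host now sees
  esxcli storage core device list | grep -i 'Display Name'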
 

DracoDan

Not a New Member
May 26, 2016
I suggest you read my post here and run the commands in the second link to see if it's loading the driver for the controller.

https://forums.servethehome.com/ind...controllers-and-other-hardware-in-esxi.26521/

I suspect you're being limited to vSphere native drivers; Broadcom hasn't released (and likely won't release) a native driver for LSI SAS2xxx (SAS2, 6Gb/s) cards, such as the 93xx cards. There are native drivers available for the SAS3xxx (SAS3, 12Gb/s) cards.
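
The gist of those commands (device IDs will differ on your hardware):

  # Show each storage adapter with the driver that claimed it
  esxcfg-scsidevs -a
  # Cross-check against the PCI devices the host detects
  lspci | grep -i -E 'lsi|raid'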

BTW, a much better solution would be to have a second system act as your storage server via Fibre Channel, iSCSI (using the Linux LIO target), or NFS. One bonus (of many) is that if you want to add more hosts in the future, you can use vMotion to move VMs between hosts.
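
For example, once the storage box exports NFS, attaching it to a host is a one-liner (address and share names invented for illustration):

  # Mount an NFS export from the storage server as a datastore
  esxcli storage nfs add --host=192.168.1.50 --share=/exports/vmstore --volume-name=nfs-vmstore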
 

besterino

New Member
Apr 22, 2017
In my experience, ESXi sometimes doesn't offer drives as datastores when they already contain "foreign" partitions, e.g. previously Windows/NTFS-formatted ones.

For example: in my current Windows/ESXi dual-boot machine, two NVMe drives are used by Windows: one as the bare-metal boot drive, one as storage passed through to a Windows VM. Both show up in the hardware list of PCIe devices, but neither can be selected as a native storage device for ESXi.

Try manually wiping the disks and check whether they then become available in ESXi. For example, Windows' diskpart clean should do the trick (be careful to select the correct drive(s) first, though!).
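
The diskpart sequence would be along these lines (the disk number is just an example, identify yours from list disk first):

  diskpart
  DISKPART> list disk       (identify the target by its size)
  DISKPART> select disk 2   (example number; pick YOUR disk)
  DISKPART> clean           (wipes the partition table)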
 

DracoDan

Not a New Member
May 26, 2016
besterino said:
In my experience, ESXi sometimes doesn't offer drives as datastores when they already contain "foreign" partitions, e.g. previously Windows/NTFS-formatted ones.

For example: in my current Windows/ESXi dual-boot machine, two NVMe drives are used by Windows: one as the bare-metal boot drive, one as storage passed through to a Windows VM. Both show up in the hardware list of PCIe devices, but neither can be selected as a native storage device for ESXi.

Try manually wiping the disks and check whether they then become available in ESXi. For example, Windows' diskpart clean should do the trick (be careful to select the correct drive(s) first, though!).
Great point; this is a very common problem when using used hardware. Another way to fix it is to use dd to wipe out the first sector of the disk.
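
On an ESXi host that would look something like this (the device name is a placeholder; triple-check it, dd does not ask twice):

  # Zero the first sector (the partition table) of the target disk
  dd if=/dev/zero of=/vmfs/devices/disks/naa.XXXX bs=512 count=1
  # Note GPT also keeps a backup table at the end of the disk;
  # confirm the disk now reads as blank:
  partedUtil getptbl /vmfs/devices/disks/naa.XXXX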
 

Gunnyp

New Member
Oct 23, 2019
besterino said:
In my experience, ESXi sometimes doesn't offer drives as datastores when they already contain "foreign" partitions, e.g. previously Windows/NTFS-formatted ones.

For example: in my current Windows/ESXi dual-boot machine, two NVMe drives are used by Windows: one as the bare-metal boot drive, one as storage passed through to a Windows VM. Both show up in the hardware list of PCIe devices, but neither can be selected as a native storage device for ESXi.

Try manually wiping the disks and check whether they then become available in ESXi. For example, Windows' diskpart clean should do the trick (be careful to select the correct drive(s) first, though!).
Yes, this was precisely what I had to do for the NVMe drive to appear as a datastore. The LSI 9361-8i is a SAS3xxx 12Gbps card, and luckily the latest native ESXi driver does support it. I just couldn't see the buttons: I'm visually impaired (hyper-photophobia), so I have to use the high-contrast black theme, which gives a black background with gray text. All too often the buttons to click are hidden black-on-black. Donned the welding glasses and, Bob's your uncle, with a white background the buttons were there.
 

Gunnyp

New Member
Oct 23, 2019
DracoDan said:
I suggest you read my post here and run the commands in the second link to see if it's loading the driver for the controller.

https://forums.servethehome.com/ind...controllers-and-other-hardware-in-esxi.26521/

I suspect you're being limited to vSphere native drivers; Broadcom hasn't released (and likely won't release) a native driver for LSI SAS2xxx (SAS2, 6Gb/s) cards, such as the 93xx cards. There are native drivers available for the SAS3xxx (SAS3, 12Gb/s) cards.

BTW, a much better solution would be to have a second system act as your storage server via Fibre Channel, iSCSI (using the Linux LIO target), or NFS. One bonus (of many) is that if you want to add more hosts in the future, you can use vMotion to move VMs between hosts.
Oi! Yeah, I may have screwed the pooch utterly! In a rush to become VM/HA capable, I've already converted the second system into an ESXi host and installed pfSense, so it is now the router. Currently using a Netgear XS712v2 switch to set up VLANs for VM/HA.

Thing is, I backed up everything onto 8TB SAS disks, but foolishly thought that access to the RAID10 drives' current content would still be possible. Again lulled into complacency by my Workstation lab experience, I didn't realize that converting them to bare-metal datastores meant formatting away 21TB of RAID10 content.

Further, as the Macrium Reflect backups are on SAS drives, I can't figure out how the hell to even access the backups within an ESXi environment. I had seen on TinkerTry a partial effort to use raw device mapping (RDM) to access existing data on a NAS, but as you indicate, it seems one needs an entirely separate system to pull that off.
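
If I understood the TinkerTry piece, the heart of the RDM trick is just a pointer VMDK created with vmkfstools, something like this (paths are placeholders from my reading, not tested here):

  # Create a physical-mode RDM pointer on an existing datastore,
  # passing the raw RAID10 LUN through to a VM unformatted
  vmkfstools -z /vmfs/devices/disks/naa.XXXX \
    /vmfs/volumes/datastore1/fileserver/raid10-rdm.vmdk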

I'm thinking:
1. Install a bootable Windows Server 2016 on the unused SATADOM
2. Duplicate the AD/DHCP/DNS of the bare-metal system there
3. This restores access to the NTFS RAID10 content
4. Install a Server 2016 VM as a file server on the pfSense build
5. The pfSense build has 12TB on a SAS 6Gbps LSI 9286CV-8e that ESXi does recognize
6. Convert that 12TB to datastores
7. Copy the NTFS content to the pfSense build's Server 2016 VM
8. Reboot the main system into ESXi and convert the 12Gbps RAID10s to datastores
9. Copy the Server 2016 content back to the 12Gbps drives

If there is a simpler way via RDM, please do advise.