So, a preface: I followed the steps below, but then essentially scrapped the whole thing trying to get the damn UI plugin to work. I haven't had time to fix it (it's an SSL thumbprint or trust issue). What follows is pretty much how I got it working with my VCSA. I'll try deploying to just ESXi and see if I have better results.

Getting Started with vSphere Integrated Containers

First, ensure you meet the requirements:

vSphere 6.x or ESXi 6.x (these steps take you through installing on vSphere 6.5)
vSphere Enterprise Plus or vSphere with Operations Management Enterprise Plus license (or get a free trial)

Download the latest version of VIC here: Download VMware vSphere. We have version 1.1.1.

Deploy the OVA into your environment. Select Deploy OVF Template, then browse to where it downloaded. Change the name if required, choose the datacenter and host, click Next, accept the license, and choose the storage location and network. In the Customize template section, choose a root password and enable SSH (if desired). Set either a static IP address or let DHCP take care of it. Ensure you set a hostname/FQDN, along with the additional passwords required for the registry admin and database sections. Here you also have the option of changing the default ports and supplying your own SSL certificate (note: this is something I wish I knew how to do, and I would welcome feedback). Once everything is correct, deploy it.

After deployment, I logged into the HTTPS web address and clicked "Sign up for an account." Once that was complete (I always like clicking on things), the "Management" tab presented me with the following error. I wasn't sure if I had done something wrong, so I charged ahead, rebooted the VIC VM, and honestly didn't think to look back there before the end. I'm not sure if that error about not finding Admiral is a fluke, or due to it not being properly set up until later.
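On the certificate question from the deployment step above: a minimal sketch of generating a self-signed PEM cert/key pair with openssl, assuming the appliance just wants a standard certificate and private key. The filenames and the FQDN vic.lab.local are placeholders; use the hostname/FQDN you set in the Customize template section so the CN matches.

```shell
# Generate a self-signed certificate and key for the VIC appliance.
# vic.lab.local is a placeholder FQDN -- substitute the hostname you
# configured during OVA deployment so the CN matches.
openssl req -x509 -newkey rsa:2048 -sha256 -nodes \
  -keyout vic.key -out vic.crt -days 365 \
  -subj "/CN=vic.lab.local"

# Sanity-check the result: confirm the subject and validity window.
openssl x509 -in vic.crt -noout -subject -dates
```

Whether this is enough to make the browser (and later the UI plugin's thumbprint check) happy, I can't say; it at least gives you a cert/key pair to paste into the customize-template fields.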
A docker ps on the VIC host showed it running, so either I skipped a step, or I shouldn't have gone there in the first place. But let's fix it anyway.

I downloaded and extracted the vic-machine tarball (note: on Windows you might need something like 7-Zip). Once extracted, we have executables for Linux, macOS, and Windows.

Grab the CA cert from the previously deployed VIC appliance. This is used to allow the VCH and VIC to talk to each other easily and securely. Since I deployed my VIC with a self-signed cert, I used the following command:

# scp root@vic_appliance_address:/data/harbor/cert/ca.crt ./destination_path

Next, I deployed the VCH with the following command:

./vic-machine-linux create \
  --target VCSA_FQDN_OR_IP \
  --user 'USERNAME' \
  --password 'PASSWORD' \
  --image-store Storage_Device \
  --bridge-network vic-bridge \
  --public-network VDS_PUBLIC_NETWORK \
  --force --no-tlsverify --registry-ca=./ca.crt

OK, great, that worked and used the correct VDS port group for our internet-connected side. Verify with:

# docker -H IP_Address:2376 --tls info

At this point, I installed the VIC UI into vSphere. Or at least I tried to; I got partial success, but it didn't fully install. You need to edit the config file under vic/ui/VCSA. I saw the plugin icon for VCSA show up, but nothing happens when I go to it.

Either way, let's go back to that management address, which should be working. Add a host; it should verify with the SSL cert thumbprint we already established, no password required.

At this point, I spawned a Monero instance via the command line:

# docker -H 192.168.1.135:2376 --tls run -it -e firstname.lastname@example.org servethehome/monero_cpu_minergate

Here we can see that I should have limited the threads to the number of vCPUs this instance has (2 by default). Here's the instance in Admiral (VMware's container manager). This is actually connecting to the container VM that was spawned; the output is on the console just like normal. So... let's scale that up a bit.
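A hedged sketch of what I should have done with the thread count: cap it at the container VM's default 2 vCPUs rather than letting the miner detect more. Note the threads= environment variable is an assumption on my part; check the image's documentation for whatever variable it actually honours. The echo makes this a dry run, so nothing launches.

```shell
# The default container VM gets 2 vCPUs; cap the miner's threads to match
# instead of letting it spawn one per detected CPU. nproc here is just a
# stand-in for however many CPUs the script's host reports.
THREADS=$(nproc)
if [ "$THREADS" -gt 2 ]; then
  THREADS=2   # cap at the container VM's default vCPU count
fi

# Dry run: print the invocation rather than launching it. The endpoint IP
# and -e email value mirror the command above; "threads=" is an assumed
# variable name, not confirmed against the image.
echo "docker -H 192.168.1.135:2376 --tls run -itd -e threads=$THREADS -e firstname.lastname@example.org servethehome/monero_cpu_minergate"
```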
Go back to your VIC Admiral interface and select Templates. Docker Hub is already configured as a registry, so search for servethehome/monero_cpu_minergate (note: you can search for the other STH containers if you want, like I should have done for the nproc image). Click the arrow next to Provision, and for some reason, you'll have to add the command manually here. Go over to the environment section and add the username and email address. Save and hit Back until you're at the screen where you can provision it. (You may have to change the view on the right side to "Templates," as the default is to show all.)

You'll see the container provision. If everything is working correctly, it'll spawn the container VM in the vApp and start using 100% CPU. It only has 2 vCPUs to start; if you can change that, I didn't look into it very much. I'd actually prefer it had only 1, then spawn extra instances. In any case, we're going to expand that a bit. Hit the Scale button. Now we have more!

Is this easier? I'm certain it probably is not. However, I already had VMware up and running, with many VMs that are critical to wife acceptance, and I paid for it through VMUG Advantage. Plus, I know a lot about VMware and nothing about Proxmox. This should scale out very easily with more hosts available.
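For what it's worth, Admiral's Scale button is just asking the VCH for more copies of the container, so the rough CLI equivalent is a loop against the Docker endpoint. A sketch under my lab's values (the endpoint IP and count are assumptions for your environment); the echo keeps it a dry run, so drop it to actually launch the instances:

```shell
# Rough CLI equivalent of Admiral's Scale button: ask the VCH endpoint
# for SCALE more copies of the miner. Endpoint IP and the -e value are
# from the walkthrough above -- substitute your own.
VCH="192.168.1.135:2376"
SCALE=3

for i in $(seq 1 "$SCALE"); do
  # -d detaches so the loop doesn't block on each miner's console output;
  # echo makes this a dry run -- remove it to really provision.
  echo docker -H "$VCH" --tls run -d \
    -e firstname.lastname@example.org \
    servethehome/monero_cpu_minergate
done
```

Each iteration should land as its own container VM in the vApp, the same way the Scale button fans them out.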