
Deploying vSphere Integrated Containers

Discussion in 'Guides' started by Ellwood, Jun 16, 2017.

  1. Ellwood
    So, a preface: I followed the steps below, but then essentially scrapped the whole thing trying to get the damn UI plugin to work. I haven't had time to fix it (it's an SSL thumbprint or trust issue). What follows is pretty much how I got it to work with my VCSA. I'll try deploying to just ESXi and see if I have better results.

    Getting Started with vSphere Integrated Containers:


    First, ensure you meet the requirements:

    vSphere 6.X or ESXi 6.X (this guide walks through installing on vSphere 6.5)

    vSphere Enterprise Plus or vSphere Operations Management Enterprise Plus license (or Get Free Trial)


    Download the latest version of VIC here: Download VMware vSphere

    We have version 1.1.1
    upload_2017-6-16_23-30-17.png


    Deploy the OVA into your environment

    Select Deploy OVF Template, then browse to the downloaded OVA. Change the name if required, choose the datacenter and host, click Next, accept the license, then choose the storage location and network.
    In the Customize template section, choose a root password and enable SSH if desired. Set either a static IP address or let DHCP take care of it. Ensure you set a hostname/FQDN, plus the additional passwords required for the registry admin and database sections. Here you also have the option of changing the ports from the defaults and supplying your own SSL certificate (note: this is something I wish I knew how to do, and I'd welcome feedback). After everything is correct, deploy it.
    upload_2017-6-16_23-32-20.png
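For what it's worth, the same deployment could be scripted with ovftool instead of the wizard. This is only a sketch: the --prop key and the names/paths below are my assumptions, not something I verified against this OVA. Running ovftool against the OVA with no target prints its real property names and networks.

```shell
# Sketch only -- the property name and all values here are placeholders.
# Run `ovftool vic-v1.1.1.ova` (no target) to list the OVA's actual
# properties and networks before building a command like this.
ovftool \
  --acceptAllEulas \
  --name=vic-appliance \
  --datastore=datastore1 \
  --network='VM Network' \
  --prop:root_pwd='CHANGEME' \
  vic-v1.1.1.ova \
  'vi://administrator@vsphere.local@VCSA_FQDN_OR_IP/Datacenter/host/Cluster'
```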

    After deployment, I logged into the https web address, and clicked “Sign up for an account”
    upload_2017-6-16_23-33-1.png

    Once that was complete, since I always like clicking on things, I clicked the “Management” tab, which presented me with the following error.

    upload_2017-6-16_23-35-19.png

    I wasn’t sure if I’d done something wrong, so I charged ahead, rebooted the VIC VM, and honestly didn’t think to look back there until the end. I’m not sure if that error about not finding Admiral is a fluke, or due to it not being properly set up until later. A docker ps on the VIC host showed it running, so either I skipped a step, or I shouldn't have gone here in the first place. But let's fix it anyway.
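For reference, that docker ps check is just run over SSH to the appliance (the address is a placeholder, and the exact service/container names will vary by deployment):

```shell
# List what's running on the VIC appliance itself (the harbor/admiral services).
# vic_appliance_address is a placeholder for your appliance's address.
ssh root@vic_appliance_address "docker ps --format '{{.Names}}\t{{.Status}}'"
```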

    upload_2017-6-16_23-35-41.png

    I downloaded and extracted the tarball (note: on Windows you might need something like 7-Zip).


    Once extracted, we have executables for Linux, OS X, and Windows.

    Grab the CA cert from the previously deployed VIC appliance. This lets the VCH and the VIC registry talk to each other easily and securely. Since I deployed my VIC with a self-signed cert, I used the following command:

    # scp root@vic_appliance_address:/data/harbor/cert/ca.crt ./destination_path
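Once it's copied down, it doesn't hurt to sanity-check the cert before handing it to vic-machine. This is standard openssl x509 usage against the ca.crt fetched above:

```shell
# Show who issued the cert and when it expires
openssl x509 -in ca.crt -noout -subject -enddate
# Show the SHA-1 thumbprint (useful when something asks you to confirm trust)
openssl x509 -in ca.crt -noout -fingerprint -sha1
```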


    Next, I deployed the VCH with the following command:

    ./vic-machine-linux create \
      --target VCSA_FQDN_OR_IP \
      --user 'USERNAME' \
      --password 'PASSWORD' \
      --image-store Storage_Device \
      --bridge-network vic-bridge \
      --public-network VDS_PUBLIC_NETWORK \
      --force --no-tlsverify --registry-ca=./ca.crt

    upload_2017-6-16_23-36-20.png

    OK, great: that worked, and it used the correct VDS port group for our internet-connected side.

    Verify with:

    # docker -H IP_Address:2376 --tls info

    upload_2017-6-16_23-36-40.png

    At this point, I installed the VIC UI into vSphere. Or at least I tried to; I had partial success, but it didn’t fully install. You need to edit the configs file under vic/ui/VCSA. The plugin icon showed up in the VCSA web client, but nothing happens when I go to it.
    upload_2017-6-16_23-38-1.png

    upload_2017-6-16_23-38-37.png
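The config edit above, roughly, looks like this. The VCENTER_IP variable name matches the 1.1.x tarball I used, but verify it against your own copy of vic/ui/VCSA/configs before running anything:

```shell
# Assumed layout from the 1.1.x tarball: vic/ui/VCSA/configs holds a
# VCENTER_IP variable that must be filled in before running the installer.
cd vic/ui/VCSA
sed -i 's/^VCENTER_IP=.*/VCENTER_IP="192.168.1.50"/' configs
grep '^VCENTER_IP=' configs   # confirm the edit took
./install.sh                  # prompts for vCenter credentials and thumbprint
```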

    Either way, let's go back to that management address which should be working.
    upload_2017-6-16_23-39-3.png

    Add a host; it should verify against the SSL cert thumbprint we already established, so no password is required.

    At this point, I spawned a Monero instance via command line:
    # docker -H 192.168.1.135:2376 --tls run -it -e username=email@email.com servethehome/monero_cpu_minergate
    upload_2017-6-16_23-40-9.png
    Here we can see that I should have limited the threads to the number of vCPUs this instance has (2 by default).
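As a sketch of what I should have done: derive the thread count from the vCPUs the container VM actually reports, rather than hard-coding it. nproc is standard coreutils; the flag the miner itself takes would depend on the image's entrypoint, which I didn't dig into, so that part stays hypothetical:

```shell
# Inside the container VM, nproc reports the vCPUs it was given (2 by default
# here), so cap the miner's threads at that instead of the host's core count.
THREADS=$(nproc)
echo "would start the miner with ${THREADS} threads"
```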

    upload_2017-6-16_23-40-51.png
    Here's the instance in Admiral (VMware's container manager)
    upload_2017-6-16_23-41-41.png

    This is actually a console connection to the container VM that was spawned. The output shows up there just like normal.
    upload_2017-6-16_23-42-22.png

    So.... let's scale that up a bit.

    Go back to your VIC Admiral interface, and select Templates.

    upload_2017-6-16_23-43-3.png

    Docker Hub is already configured as a registry, so search for servethehome/monero_cpu_minergate (note: you can search for the other STH containers if you want, like the nproc image I should have used). Click the arrow next to Provision, and for some reason you’ll have to add the command manually in here.

    upload_2017-6-16_23-43-45.png
    upload_2017-6-16_23-43-53.png
    Go over to the environment section and add the username and email address.
    upload_2017-6-16_23-44-0.png

    Save and hit Back until you’re at the screen where you can provision it. (You may have to change the view on the right side to “Templates”, as the default is to show all.)
    upload_2017-6-16_23-44-39.png

    upload_2017-6-16_23-45-15.png

    You’ll see the container provision.


    If everything is working correctly, it’ll spawn the container VM in the vApp and start using 100% CPU.

    upload_2017-6-16_23-45-48.png

    Now, it only has 2 vCPUs to start; I didn’t look into whether you can change that. I’d actually prefer each instance to have just 1, and to spawn extra instances instead. In any case, we’re going to expand that a bit. Hit the Scale button.

    upload_2017-6-16_23-46-11.png

    Now we have more!
    upload_2017-6-16_23-46-26.png

    Is this easier? Probably not. However, I already had VMware up and running, with many VMs that are critical to wife acceptance, and I paid for it through VMUG Advantage. Plus, I know a lot about VMware and nothing about Proxmox.

    This would/should scale out very easily with more hosts available.
     
  2. Dean
    Very nice! Thank you, this will help.
     
