Joyent Triton/SDC/SmartOS discussion


JayG30

Active Member
Feb 23, 2015
232
48
28
38
Hi guys,

I was wondering if any of you out there are using Joyent Triton (formerly SmartDataCenter) and SmartOS in production/homelab/whatever? I'm testing it out and know I'm going to run into some questions once I get past the basics.

My biggest complaint is that there aren't many places to discuss it. They have no community forums, and very few conversations about learning it show up in forums/reddit/whatever. That's unfortunate, because from what I've seen the philosophy/engineering behind it seems really solid.

I've just started messing with it, really. It was rather simple to get the basic head node up and running and add compute nodes (booted via PXE). I created some Ubuntu LX containers and such, but haven't tried KVM yet or looked into how to create a Windows VM. I had to work past some differences between my setup and the typical tutorials, where they set it up isolated from the rest of an existing network. That just meant configuring some VLAN settings on my switches so the admin network could sit on an untagged VLAN without conflicting with my existing network.

One thing I find annoying is that I haven't seen any way (in the GUI) to leverage ZFS to migrate VMs from CN to CN when you need to take a CN down for maintenance, or to zfs send a VM to another CN as simple backup insurance in case a CN craps out. Beyond that, it would be cool if the GUI had a way to specify datacenter-level disaster recovery to another datacenter. From what I gather, Joyent takes the approach that redundancy should be handled at the application level. I guess you could manually set up zfs send/receive on SmartOS (at least I assume), but it would be way better if it were built into the operations portal.
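Just to illustrate what I mean, a manual one-off copy from a CN's global zone would presumably look something like this (the UUID, snapshot name, and target host are placeholders, and for KVM instances the disks are separate zvols named like <uuid>-disk0, which would need the same treatment):
Code:
# snapshot the instance's dataset and stream it to another CN over the admin network
zfs snapshot -r zones/<vm-uuid>@backup1
zfs send -R zones/<vm-uuid>@backup1 | ssh <other-cn-admin-ip> zfs receive -u zones/<vm-uuid>
Receiving the dataset alone wouldn't register the instance with vmadm/VMAPI on the target node, so that's only the data-movement half of what a proper GUI feature would have to do.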
 

JayG30

Active Member
Feb 23, 2015
232
48
28
38
Thanks for the links. I'm going to give them a read. I still think it would be pretty cool if someone took ZFS replication and hid it behind a GUI where you simply select the number of replicas and the frequency per VM, and it handled the rest in the background.

So I'm reading through how to run Docker on Triton/SDC.

I've run the post-setup for cloudapi and docker:
Code:
sdcadm post-setup cloudapi
sdcadm post-setup docker
I believe I need to install the Docker Toolbox on my laptop/desktop so I can connect to Triton from the Docker CLI.
I'm also trying to figure out whether I have to set up the Triton CLI (or is it called the CloudAPI?).
And finally, the tutorials I'm reading want me to run an sdc-docker-setup.sh script. My local laptop runs Windows and I was planning on connecting to Triton from it. I don't think I'll be able to run that script, right? I should still be able to do this though?
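From what I've read so far, that script mainly generates a client certificate and prints the environment variables the Docker client needs, so in principle any docker CLI (Windows included) should be able to talk to Triton's Docker endpoint once those are set. Something along these lines, though the hostname, account, and paths here are just placeholders from my notes:
Code:
# values produced by sdc-docker-setup.sh; adjust account/endpoint to your setup
export DOCKER_CERT_PATH=~/.sdc/docker/myaccount
export DOCKER_HOST=tcp://docker.mytriton.example.com:2376
export DOCKER_TLS_VERIFY=1
docker info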
 
  • Like
Reactions: Patrick

JayG30

Active Member
Feb 23, 2015
232
48
28
38
It was way easier to just set up an Ubuntu container in Triton and use that for the Docker tooling. Install nodejs, json, the triton CLI, and docker, then run the setup scripts. Took about 15-20 minutes once I decided trying to set it up from Windows was too difficult.
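Roughly what I ran inside that container, for anyone following along (package names and the script URL are from memory, so double-check them against the Joyent docs):
Code:
# inside the Ubuntu LX container
apt-get update && apt-get install -y nodejs npm docker.io curl
npm install -g json triton
# grab the sdc-docker setup script and point it at CloudAPI (placeholder hostname/account/key)
curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh
bash sdc-docker-setup.sh cloudapi.mytriton.example.com myaccount ~/.ssh/id_rsa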

Got to figure out how to provision a Windows Server using KVM.
 
  • Like
Reactions: Patrick

JayG30

Active Member
Feb 23, 2015
232
48
28
38
I'm actually walking through setting up a Windows KVM image using this: How to create a KVM image
I'm using the SDC Guest Tools ISO from the download link. It has the sysprep config and the VirtIO SCSI and Ethernet drivers.

I mounted a remote SMB share (running on FreeNAS) on the headnode to copy the ISO images over into the blank KVM zone. I had to follow this to do it: Mounting an SMB share on a SmartOS Instance

Code:
svcadm enable rpc/bind #already running
svcadm enable idmap
svcadm enable smb/client
mkdir /smbshare

#Add to /etc/vfstab
//USER:PASS@IP_OF_SMB_SERVER/sharename - /smbshare smbfs - yes -

mount /smbshare
My FreeNAS server is connected to Active Directory, so I needed to specify the username/password for a domain account or else I would just get permission denied or a failure to mount. I found how to do that in the Solaris documentation.
Code:
mount -F smbfs //[workgroup;][user[:password]@]server/share mount-point
So the trick is to specify it in /etc/vfstab like this (note the domain name followed by a semicolon):
Code:
//DOMAIN;USER:PASS@IP_OF_SMB_SERVER/sharename - /smbshare smbfs - yes -
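Once the share was mounted, copying the ISOs into the blank KVM zone and booting it from CD went roughly like this (the UUID and ISO names are placeholders; the vmadm boot options are what the KVM image guide uses, so verify against it):
Code:
# copy the installer and guest tools ISOs into the zone's root filesystem
cp /smbshare/win2012r2.iso /zones/<vm-uuid>/root/
cp /smbshare/sdc-guesttools.iso /zones/<vm-uuid>/root/
# boot once from the Windows ISO (the cdrom path is relative to the zone root)
vmadm start <vm-uuid> order=cd,once=d cdrom=/win2012r2.iso,ide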
 

JayG30

Active Member
Feb 23, 2015
232
48
28
38
So, I'm a bit at a loss right now, following the guide to create a Windows KVM image.

At the point where you VNC to the VM, I can connect without issue (VNC runs on the admin network). However, the defined external NIC doesn't get an IP address (it's set to DHCP). I figured it might just be an issue with running it on the headnode, and that once I finished creating the image and deployed it, it would work. Not the case.

If I set the NIC to a static IP address it works, so it seems like a DHCP issue. It should be getting the external IP address from Triton.

Anyone have an idea?
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
Does networking work with a static address?

I'm also thinking maybe you should be making separate threads for some of these.
 

JayG30

Active Member
Feb 23, 2015
232
48
28
38
Well, it seems I was wrong. The NIC assigned to the external network IS getting a DHCP address, and I can remote desktop into the Windows VM created from the custom image I built.

The custom Windows image works when provisioned on the head node, but not on the compute nodes. I believe the issue is in the networking on the compute nodes vs. the head node and some "non-optimal" choices I made for testing. I attempted to use the same NIC for both the admin and external networks, as shown in THIS TUTORIAL.
Add a compute node
Most of the configuration is automated, but one step that isn't is adding all the NIC tags. I want this compute node to have access to the external network, so I've added that tag.
This is not recommended for production deployments.

The head node sets up its network interfaces during the SmartDataCenter/Triton USB key installation and creates a "virtual" interface with the defined VLAN for the external network. Adding the external NIC tag to an interface on a compute node doesn't create a vNIC; it just adds the tag. So the compute nodes (unlike the head node) aren't able to route in/out of the external network. You can see the difference in the images below.

head node: [screenshot: headnode_net.PNG]
CN01: [screenshot: CN01_net.PNG]
CN02: [screenshot: CN02_net.PNG]

I thought this was a problem, but when I provisioned SmartOS containers on those compute nodes, they were able to route in/out on the external network. At that point I was confused and figured the behavior was "by design".

So, I THINK the issue is that since the compute nodes don't have a vNIC for the external network like the head node does, it causes a problem specifically with KVM/virtio (I'm seeing some discussion of issues with virtio drivers over bridged networks, for instance). If I could figure out how to add a vNIC on the compute nodes, it would probably work. OR, use a dedicated NIC for the external network like they tell you to. :)

I only did this because I'm lazy and didn't want to reconfigure more switch ports or run more patch cables (this is just a test environment right now, after all).

So if anyone knows how to create the vNIC on the compute nodes, that would be cool.
I don't think it's as easy as with standalone SmartOS, because these compute nodes PXE boot, so I don't know if you can just run some commands on a compute node and have them persist across reboots.
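For the record, what I had in mind is something along these lines from the CN's global zone; the physical link name, VLAN ID, and address are placeholders from my setup, and I haven't confirmed this is actually supported on a Triton CN (as opposed to standalone SmartOS):
Code:
# create a vNIC on the physical link that carries the external NIC tag, tagged with the external VLAN
dladm create-vnic -l ixgbe0 -v 20 external0
# plumb it with an address on the external subnet
ifconfig external0 plumb 192.168.20.250 netmask 255.255.255.0 up
Since a CN's root filesystem comes from the PXE image on every boot, the usual SmartOS trick would be to persist something like this as a script or SMF manifest under /opt/custom on the zones pool.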


PS: I guess maybe I should have called this thread "My journey through learning Triton/SDC".
 
Last edited:
  • Like
Reactions: MiniKnight

cperalt1

Active Member
Feb 23, 2015
180
55
28
43
Have you asked on the SmartOS IRC? Those guys are pretty good at helping work through issues like these. There are ways to persist config on SmartOS through the /opt directory and /usbkey. On IRC, just be ready to provide the JSON you used to create the VM.
 

JayG30

Active Member
Feb 23, 2015
232
48
28
38
Have you asked on the SmartOS IRC? Those guys are pretty good at helping work through issues like these. There are ways to persist config on SmartOS through the /opt directory and /usbkey. On IRC, just be ready to provide the JSON you used to create the VM.
I asked in IRC about adding a vNIC to a CN. The response was:
"the CN will not plumb an interface to external in the GZ by default; instances you spin up on the CN can have external interfaces, but we don't plumb an interface on the CN for external."

The CNs themselves don't show an external IP and you can't SSH to them directly; you can get to them by SSH'ing from the head node, though. So it seems the behavior of my CNs' networking is normal.

Every other instance I create on a CN (Joyent SmartOS, Linux LX, FreeBSD/Linux KVM) connects to the external network fine. So it must be related specifically to the image I created, yet it boots fine on the headnode. Perhaps it's the virtio network driver (a bug).

What I find more surprising, though, is that I couldn't get VNC to the Windows KVM instance working either. I expected that to work, and it doesn't have anything to do with the external network (VNC is provided on the admin network).

FYI, I followed the guide from joyent exactly to do this.
Code:
{
  "brand": "kvm",
  "vcpus": 1,
  "autoboot": false,
  "ram": 4096,
  "disks": [
    {
      "boot": true,
      "model": "virtio",
      "size": 40960
    }
  ],
  "nics": [
    {
      "nic_tag": "external",
      "ip": "dhcp",
      "primary": "true",
      "model": "virtio"
    }
  ]
}

Code:
{
  "v": "2",
  "uuid": "b930d4f2-a822-11e6-9228-7b47adbaf3f0",
  "owner": "930896af-bf8c-48d4-885c-6573a94b1853",
  "name": "w2012r2",
  "description": "Windows 2012R2 1.0.0 KVM image",
  "version": "1.0.0",
  "state": "active",
  "disabled": false,
  "public": true,
  "os": "windows",
  "type": "zvol",
  "files": [
    {
      "sha1": "4795f47875cbc061d763a124599a1558988bd8bf",
      "size": 4341748896,
      "compression": "gzip"
    }
  ],
  "requirements": {
    "networks": [
      {
        "name": "net0",
        "description": "public"
      }
    ],
    "ssh_key": true
  },
  "generate_passwords": "true",
  "users": [
    {
      "name": "administrator"
    }
  ],
  "image_size": "40960",

  "disk_driver": "virtio",
  "nic_driver": "virtio",
  "cpu_type": "host"
}
 
Last edited:

JayG30

Active Member
Feb 23, 2015
232
48
28
38
So there has to be something wrong with the virtio drivers in this setup.

I'm recreating the image again to test some things out. If I set a static IP address in W2012R2, it works; DHCP just fails. Mind you, SmartOS handles DHCP for these interfaces.

I tried the virtio drivers that are signed by Joyent (an older version) and the newest drivers downloaded per the instructions in the article posted by cperalt1 (HERE). I figure it must be OK to use non-Joyent drivers, since it sounds like others have done it.

I'm not sure how to fix this.
 

JayG30

Active Member
Feb 23, 2015
232
48
28
38
So, I figured out where my issues are, just not why or how to solve them yet. It's not virtio's fault after all.

After getting Windows installed, I ran sysprep to make the "golden image" for provisioning. Turns out the reason I couldn't connect to the VM (over RDP) using the external network's IP address is that I had to log in to the machine once for it to run Windows setup. On login it asks the question about "set up network sharing on this NIC" and all that. I'm not sure if there is a way to bypass that while still using sysprep (which is nice, by the way: Joyent generates a random admin password on provisioning and records it in the instance's internal metadata).

So the easy solution is to just VNC into the machine the first time, right? Well, the only thing I can reach on the admin network is the headnode; I can't ping/SSH/etc. into any of the compute nodes. VNC attaches to the admin network, and I can't reach the CNs via the admin network from my desktop PC. On the headnode (for some reason) I can connect to the admin network for VNC.
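The workaround I'm using for now is to tunnel through the headnode, since it can reach the CNs on the admin network. Something like this (IPs, port, and UUID are placeholders; as far as I can tell, vmadm info reports the VNC details for a KVM instance from the CN's global zone):
Code:
# from the headnode, find the VNC host/port of the instance on the CN
ssh <cn-admin-ip> vmadm info <vm-uuid> vnc
# from my desktop, forward a local port to it through the headnode
ssh -L 5901:<cn-admin-ip>:<vnc-port> root@<headnode-ip>
# then point a VNC client at localhost:5901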

I checked my switch and there is no difference between the ports connecting the head node and the CNs.

What confuses me is why it works on the headnode and not on the CNs. This is probably my fault and something in my networking setup.


FYI: I was able to get the RDP stuff sorted out. Turns out you need to add some settings to your sysprep XML file to allow RDP by default. Once I did that and used it to build the image, I could RDP in. I still haven't determined why I can use VNC against the headnode but not the compute nodes, but I suspect it has something to do with my VLAN and firewall settings.
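For reference, the sysprep additions I'm talking about were along these lines in the specialize pass of the answer file (reconstructed from memory, so treat it as a starting point rather than a drop-in; the wcm namespace has to be declared at the top of the unattend.xml):
Code:
<settings pass="specialize">
  <!-- allow incoming Remote Desktop connections -->
  <component name="Microsoft-Windows-TerminalServices-LocalSessionManager"
             processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <fDenyTSConnections>false</fDenyTSConnections>
  </component>
  <!-- open the built-in Remote Desktop firewall rule group -->
  <component name="Networking-MPSSVC-Svc"
             processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <FirewallGroups>
      <FirewallGroup wcm:action="add" wcm:keyValue="RemoteDesktop">
        <Active>true</Active>
        <Group>Remote Desktop</Group>
        <Profile>all</Profile>
      </FirewallGroup>
    </FirewallGroups>
  </component>
</settings>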
 
Last edited: