Triton - the new command line experience

It was March when we announced Triton, the latest version of the container-native data center automation software suite (formerly called SmartDataCenter) that powers our public cloud. Triton is used by individuals, enterprises, and academic institutions world-wide to power their private data centers (it's open source, so you can try it yourself), and we've been hard at work bringing new and improved tools to it as well. The big news with Triton was the introduction of support for Docker, including the ability to use Docker tools to securely manage and deploy containers on multi-tenant bare metal throughout the data center. Recently, we introduced a preview of Terraform support for Triton. Today, I'd like to give you a preview of our new CLI tool for managing Triton-hosted infrastructure.

Our new triton tool provides the power of the sdc-* commands in node-smartdc with a cleaner interface to help you work faster. This project began as an internal hackathon idea, and since then Joyent has been hard at work bringing this improved experience to your terminal.

We're going to take a tour through triton and demonstrate how you can use it to create and destroy infrastructure containers and hardware VMs (you can create and manage Docker containers on Triton using the Docker Engine).


To run the examples below, you will need to have a Joyent account.

Before you can install the triton command line tool, you'll first need to install Node.js. You can download an installer or use a package manager for your platform.


Once you have Node.js installed, you can use the npm command to install Node.js applications, like our new triton CLI tool:

$ npm install -g triton
...

Let's confirm that it's installed by checking the version:

$ triton --version
triton 4.3.1

As of this writing, I'm using version 4.3.1, but don't be surprised if you get a newer version. We're iterating on this quickly.

Next, you'll need to configure triton to access a Triton data center. The triton CLI uses "profiles" to store access information. Profiles contain the data center URL, your login name, and SSH key fingerprint so that you can switch between them conveniently. You may need to connect to different data centers, or connect to the same data center as different users, and that won't be a problem with profiles.

Let's create our first profile for us-sw-1:

$ triton profile create
A profile name. A short string to identify a CloudAPI endpoint to the
`triton` CLI.
name: us-sw-1
The CloudAPI endpoint URL.
url: https://us-sw-1.api.joyent.com
Your account login name.
account: jill
The fingerprint of the SSH key you have registered for your account. You may
enter a local path to a public or private key to have the fingerprint
calculated for you.
keyId: ~/.ssh/joyent.id_rsa
Fingerprint: 2e:c9:f9:89:ec:78:04:5d:ff:fd:74:88:f3:a5:18:a5
Saved profile "us-sw-1"

Let's also ensure that this new profile is our default from now on:

$ triton profile set-current us-sw-1
Set "us-sw-1" as current profile

For the CloudAPI endpoint URL you can select any of our global data centers, or use a Triton-powered data center of your own (remember: it's open source). Later on we'll set up a profile for each data center.

You can also configure bash completions with this command:

# Mac OSX
$ triton completion > /usr/local/etc/bash_completion.d/triton

# Linux
$ triton completion > /etc/bash_completion.d/triton

# Windows bash shell
$ triton completion >> ~/.bash_completion

To test the installation and configuration, let's use triton info:

$ triton info
login: jill
name: Jill Example
email: jill@example.com
url: https://us-sw-1.api.joyent.com
totalDisk: 65.8 GiB
totalMemory: 2.0 GiB
instances: 2
running: 2

The triton info output above shows that Jill's account already has two instances running.

Quick start: create an instance

With triton installed and configured, we can jump right into provisioning instances. Here's an example of provisioning an infrastructure container running Ubuntu. Think of infrastructure containers as virtual machines, only faster and more efficient. Let's run triton instance create, and we'll talk through the pieces afterward:

$ triton instance create -w --name=server-1 ubuntu-14.04 t4-standard-1G
Creating instance server-1 (e9314cd2-e727-4622-ad5b-e6a6cac047d4, ubuntu-14.04@20160114.5, t4-standard-1G)
Created instance server-1 (e9314cd2-e727-4622-ad5b-e6a6cac047d4) in 22s

Now that we have an instance, we can run triton ssh to connect to it. This is an awesome addition to our tools because it means we don't need to copy SSH keys or even look up the IP address of the instance.

$ triton ssh server-1
Welcome to Ubuntu 14.04 (GNU/Linux 3.19.0 x86_64)

 * Documentation:  https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-| `-' '  ' `-'
                   /  ;  Instance (Ubuntu 14.04 20151105)
                   `-'

Instance creation options and details

In our quick start example, we ran triton instance create -w --name=server-1 ubuntu-14.04 t4-standard-1G. That command has four parts:

  1. We gave our instance a name using --name=server-1
  2. We used -w to wait for the instance to be created
  3. We used ubuntu-14.04 as our image
  4. We set t4-standard-1G as our package

Let's look at each of those in detail to see how you can set the options that will work best for your needs.

Specifying the instance name

Instance names can be up to 189 characters long and may include any alphanumeric character plus _, -, and . characters.
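As a quick local sanity check before provisioning, that rule can be sketched as a shell function. The regex below is my approximation of the rule as stated; the server-side validation is authoritative.

```shell
# Approximate check of the naming rule: up to 189 characters,
# alphanumerics plus _, -, and . (server-side validation is authoritative).
valid_name() {
    printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_.-]{1,189}$'
}

valid_name server-1 && echo "server-1: ok"
valid_name 'bad name!' || echo "bad name!: rejected"
```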

Selecting an image

Finding our Ubuntu image is pretty easy. We use triton images to list the images, adding name=~ubuntu to do a substring search for Ubuntu. The list is sorted by publish date, so usually we'll pick the most recent. Today we'll choose 14.04 because it has wider support.

$ triton images name=~ubuntu type=lx-dataset
SHORTID   NAME          VERSION   FLAGS  OS     TYPE        PUBDATE
...
c8d68a9e  ubuntu-14.04  20150819  P      linux  lx-dataset  2015-08-19
52be84d0  ubuntu-14.04  20151005  P      linux  lx-dataset  2015-10-05
ffe82a0a  ubuntu-15.04  20151105  P      linux  lx-dataset  2015-11-05
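Because the listing is date-sorted, scripts can grab the newest matching image by taking the last line, the same idiom we'll use later with triton images -Ho id | tail -1. A minimal sketch of that idiom, run here against sample "id pubdate" data rather than a live data center:

```shell
# Pick the newest image: the listing is date-sorted, so the last line wins.
# Sample "id pubdate" lines stand in for real `triton images -Ho` output.
printf 'c8d68a9e 2015-08-19\n52be84d0 2015-10-05\n' | tail -1 | awk '{ print $1 }'
```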

Selecting a package

Next we'll use triton packages to search for packages with 1 gigabyte of RAM. We'll pick t4-standard-1G because it's the newest.

$ triton packages memory=1024
SHORTID   NAME                   DEFAULT  MEMORY  SWAP  DISK  VCPUS
d9396ca5  Small 1GB              true     1G      2G    30G   1
11a01166  g3-standard-1-smartos  false    1G      2G    33G   1
85284e54  g3-standard-1-kvm      false    1G      2G    33G   -
20e583d5  t4-standard-1G         false    1G      4G    25G   -

I've been trying to convince you of the magic of the command line. However, we're still missing an API that can fetch pricing details for our different packages, so you'll have to look up the prices on our public pricing page. I recommend the public pricing page because you can click on a box to learn its API name. Today we'll use t4-standard-1G, which costs $0.026 per hour.
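At that rate, a quick back-of-the-envelope estimate of the monthly cost (assuming roughly 730 hours in a month) looks like this:

```shell
# Rough monthly cost of t4-standard-1G at $0.026/hour, assuming ~730 hours/month.
awk 'BEGIN { printf "%.2f\n", 0.026 * 730 }'   # about $18.98/month
```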

Bootstrapping an instance with a script

Our quick start example didn't include one of the most useful options for automating infrastructure on Triton: specifying a script for containers to run at startup.

We'll show how to use triton to run the examples from Casey's blog post on setting up Couchbase in infrastructure containers. Here I only want to show what the equivalent triton commands look like, so we'll skip over the details; you can read the original post to learn more.

The command below sets up a 16GB CentOS infrastructure container and installs Couchbase. The --script option points to a local file that installs Couchbase, and the triton ssh command runs cat /root/couchbase.txt to show the address of the Couchbase dashboard.

curl -sL -o couchbase-install-triton-centos.bash <install-script-url>
triton instance create \
    --name=couch-bench-1 \
    $(triton images name=~centos-6 type=lx-dataset -Ho id | tail -1) \
    'Large 16GB' \
    --wait \
    --script=./couchbase-install-triton-centos.bash
triton ssh couch-bench-1 'cat /root/couchbase.txt'

Working with instances

Of course, infrastructure management isn't just about creating instances, and triton offers some of its biggest improvements in this space.

List instances

$ triton instances
SHORTID   NAME           IMG                    STATE    PRIMARYIP  AGO
1fdc4b78  couch-bench-1  8a1dbc62               running             3m
8367b039  server-1       ubuntu-14.04@20151005  running             3m

Wait for tasks

By default, the triton tool does not wait for tasks to finish, which is great because it means your commands return control to you quickly. Sometimes, however, you'll need a task to complete before starting the next one. When that happens, you can wait using either the --wait or -w flag, or the triton instance wait command. In the example above, we used --wait so that the instance would be ready by the time the triton ssh command ran.
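Conceptually, waiting is just polling the instance's state until it reaches the target. Here's a toy simulation of that loop; poll_state is a stand-in for querying the real instance state (e.g. via triton instance get), not the actual implementation:

```shell
# Toy simulation of "wait until running": poll a state function until it
# reports "running". In real use, --wait or triton instance wait does this.
i=0
state=provisioning
poll_state() {                 # stand-in for querying the real instance state
    i=$((i + 1))
    if [ "$i" -ge 3 ]; then state=running; fi
}
until [ "$state" = "running" ]; do
    poll_state
done
echo "instance running after $i polls"
```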

Show instance details

Use triton instance get -j to view your instance's details as a JSON blob. To parse fields out of the blob, I recommend using json although there are many other great tools out there.

$ triton instance get -j couch-bench-1
{
    "id": "1fdc4b78-62ec-cb97-d7ff-f99feb8b3d2a",
    "name": "couch-bench-1",
    "type": "smartmachine",
    "state": "running",
    "image": "82cf0a0a-6afc-11e5-8f79-273b6aea6443",
    "ips": [
        "",
        ""
    ],
    "memory": 16384,
    "disk": 409600,
    "metadata": {
        "user-script": "#!/bin/bash\n...\n\n",
        "root_authorized_keys": "ssh-rsa ..."
    },
    "tags": {},
    "created": "2015-12-18T03:44:42.314Z",
    "updated": "2015-12-18T03:45:10.000Z",
    "networks": [
        "65ae3604-7c5c-4255-9c9f-6248e5d78900",
        "56f0fd52-4df1-49bd-af0c-81c717ea8bce"
    ],
    "dataset": "82cf0a0a-6afc-11e5-8f79-273b6aea6443",
    "primaryIp": "",
    "firewall_enabled": false,
    "compute_node": "44454c4c-4400-1059-804e-b5c04f383432",
    "package": "t4-standard-16G"
}

In the output above, you can see that the user-script we supplied is part of the instance metadata.

You can pull out individual values by piping the output to json KEYNAME. For example you could get the IP address of an instance like this:

$ triton instance get -j couch-bench-1 | json primaryIp
165.225.136.140
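If you don't have the json npm tool handy, any JSON-aware tool works. For example, python3 can pull the same field; the heredoc below is a trimmed sample of the output, not live data:

```shell
# Extract primaryIp with python3 instead of the `json` npm tool.
# The heredoc is a trimmed sample of `triton instance get -j` output.
python3 -c 'import json, sys; print(json.load(sys.stdin)["primaryIp"])' <<'EOF'
{"name": "couch-bench-1", "state": "running", "primaryIp": "165.225.136.140"}
EOF
```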

Clean up

Let's clean up these containers. We'll delete them using the triton instance delete command:

$ triton instance delete server-1 couch-bench-1
Delete (async) instance server-1 (8367b039-759b-c6f5-a6c2-a210e1926798)
Delete (async) instance couch-bench-1 (1fdc4b78-62ec-cb97-d7ff-f99feb8b3d2a)

For something a bit more dangerous you can delete all your instances using this command:

$ triton instance delete $(triton instances -Ho shortid)

Be careful: this will delete all of your instances, regardless of whether they are running or stopped. If you use Docker, you'll notice that this is the equivalent of docker rm -f $(docker ps -aq), which forcefully deletes all of your containers, though triton might be faster since it deletes the machines in parallel.
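If wiping everything is too drastic, you can filter the listing before passing shortids to delete. Here's a sketch of selecting only stopped instances, run against sample -Ho shortid,state output rather than a live account:

```shell
# Select only stopped instances from sample `triton instances -Ho shortid,state`
# output; with a live account you would pipe in the real command instead.
printf '1fdc4b78 stopped\n8367b039 running\n' |
    awk '$2 == "stopped" { print $1 }'
```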


If you work with multiple data centers or accounts, you'll love this last feature: you can create triton profiles to easily switch between each data center and account that you use.

Let's view our current profiles using triton profiles. So far you'll have the us-sw-1 profile that we created above. You may also see an env profile if you've set up your environment variables.

$ triton profiles
NAME     CURR  ACCOUNT  USER  URL
env            jill     -     https://us-sw-1.api.joyent.com
us-sw-1  *     jill     -     https://us-sw-1.api.joyent.com

Creating profiles for each data center

Next, let's make a profile for each data center. To do this, we'll use triton commands to make a copy of the us-sw-1 profile for each data center URL. Copy the snippet below to add the new profiles:

triton datacenters -H -o name | while read -r dc; do
    triton profile get -j us-sw-1 | sed "s/us-sw-1/$dc/g" | triton profile create -f -
done
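The trick in that snippet is textual: sed rewrites every occurrence of us-sw-1 in the profile JSON to the new data center name before re-importing it. The substitution itself can be seen on a sample profile (this JSON is illustrative; a real profile from triton profile get -j has more fields, e.g. keyId):

```shell
# Demonstrate the clone-and-rename substitution on a sample profile JSON.
# (Illustrative only; a real profile has more fields.)
profile='{"name": "us-sw-1", "url": "https://us-sw-1.api.joyent.com"}'
for dc in eu-ams-1 us-east-1; do
    echo "$profile" | sed "s/us-sw-1/$dc/g"
done
```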

Okay, let's run triton profiles again to check to see that it worked. We should have a new profile for each data center listed in triton datacenters:

$ triton profiles
NAME       CURR  ACCOUNT  USER  URL
env              jill     -     https://us-sw-1.api.joyent.com
eu-ams-1         jill     -     https://eu-ams-1.api.joyent.com
us-east-1        jill     -     https://us-east-1.api.joyent.com
us-east-2        jill     -     https://us-east-2.api.joyent.com
us-east-3        jill     -     https://us-east-3.api.joyent.com
us-sw-1    *     jill     -     https://us-sw-1.api.joyent.com
us-west-1        jill     -     https://us-west-1.api.joyent.com

Using profiles

Let's try switching profiles to create instances in two different data centers. To switch profiles we can use --profile=NAME or -p NAME. Now we're ready to create and destroy instances in us-east-1 and us-west-1:

$ triton -p us-east-1 instance create --name=east-example ubuntu-14.04 t4-standard-1G
Creating instance east-example (756de80e-8378-cb55-bcdb-b4ab63012d97, ubuntu-14.04@20151005, t4-standard-1G)

$ triton -p us-west-1 instance create --name=west-example ubuntu-14.04 'Small 1GB'
Creating instance west-example (370f6835-0140-ee51-edf3-dede33f9cb9e, ubuntu-14.04@20151005, Small 1GB)

Run triton instances to verify that the instances exist where we expect them to:

$ triton -p us-east-1 instances
SHORTID   NAME          IMG                    STATE    PRIMARYIP  AGO
756de80e  east-example  ubuntu-14.04@20151005  running             4m

$ triton -p us-west-1 instances
SHORTID   NAME          IMG                    STATE    PRIMARYIP  AGO
370f6835  west-example  ubuntu-14.04@20151005  running             5m

Run triton delete to clean up and remove the instances:

$ triton -p us-east-1 delete east-example
Delete (async) instance east-example (756de80e-8378-cb55-bcdb-b4ab63012d97)

$ triton -p us-west-1 delete west-example
Delete (async) instance west-example (370f6835-0140-ee51-edf3-dede33f9cb9e)

The next step is yours

We've been hard at work improving Triton and the tooling to manage infrastructure, including the new triton CLI and Node.js library, and now we're sharing it with you so you can try it out. I hope you'll find triton a valuable improvement, or at least an easy tool to use, but we need your feedback. The triton tool is fresh and new, and we know there are some rough bits still hiding inside. If you want to follow along with development and help improve triton, please leave feedback on the project's issue tracker.

Post written by Drew Miller