Expanded container service preview

Concurrent with our launch of Triton, our next-generation cloud infrastructure management platform, we're expanding access to the preview of Triton Elastic Container Service for Docker. We received far more applicants to our preview program than we expected. We're retooling and accelerating our plans to try to accommodate everybody, but despite those efforts and our expansion today, this is still a limited preview.

We want to make sure that the service meets our (high) standards and that everything works exactly as it should. Our preview testers so far have been incredibly helpful, providing feedback and enduring more frequent maintenance windows while we iterate quickly.

We already know the container-native infrastructure is blazingly fast, but it's the variety and sophistication of applications this new service will support that excites us most.

Getting started

Joyent's Triton Elastic Container Service for Docker uses the native Docker API and is easy to control via the Docker CLI. Be sure you have Docker 1.4 or newer installed on your Mac or Linux environment, then execute the command below to download a helper script that will configure the Docker CLI to use the Triton remote Docker API endpoint:

<code class="language-bash">curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh && chmod +x sdc-docker-setup.sh</code>

Once the script is downloaded, you can execute it. Be sure to substitute the correct values for <code><ACCOUNT></code> and <code><PRIVATE_KEY_FILE></code>, and then follow the instructions on screen to complete the setup.

<code class="language-bash">./sdc-docker-setup.sh -k us-east-3b.api.joyent.com <ACCOUNT> ~/.ssh/<PRIVATE_KEY_FILE></code>

Note: the script generates a TLS certificate using your SSH key and writes it to a directory in your user account. The TLS certificate is what's used by the Docker client to identify and authenticate your requests to the Docker API endpoint. You may also need to unset DOCKER_TLS_VERIFY if you get errors when trying to connect.
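When the script finishes, it prints the environment variables that point the Docker CLI at the Triton endpoint. The exact values depend on your account and data center; the hostname and path below are illustrative only:

```shell
# Illustrative only: the setup script prints the exact values for your account.
export DOCKER_CERT_PATH=~/.sdc/docker/<ACCOUNT>             # where the script wrote your TLS certificate
export DOCKER_HOST=tcp://us-east-3b.docker.joyent.com:2376  # hypothetical Triton Docker endpoint
unset DOCKER_TLS_VERIFY                                     # avoids the verification errors noted above
```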

After that, running your first container is a simple docker run. The default container size is 1GB of RAM and 1/2 vCPU, but you can specify how much RAM and CPU each container gets with the -m and -c flags at docker run. More information about how to use the service is available in our detailed walkthrough.
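A first run might look like the following. The image and container name here are just examples; -m requests 2GB of RAM for the container:

```shell
# Run an example image in the background, requesting 2GB of RAM.
# "nginx" and "web" are illustrative; any Docker Hub image works.
docker run -d -m 2g --name web nginx

# Confirm the container is running.
docker ps
```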


The Triton Elastic Container Service is charged by the minute. All usage in excess of Joyent’s free tier will be charged according to the pricing below.

The following container sizes are available in the preview:

RAM  | vCPUs | Disk  | Package name     | Price
128M | 1/16  | 3G    | t4-standard-128M | $0.000055/minute, $0.0032/hour
256M | 1/8   | 6G    | t4-standard-256M | $0.000109/minute, $0.0065/hour
512M | 1/4   | 12G   | t4-standard-512M | $0.000219/minute, $0.0131/hour
1G   | 1/2   | 25G   | t4-standard-1G   | $0.000438/minute, $0.0262/hour
2G   | 1     | 50G   | t4-standard-2G   | $0.000875/minute, $0.0525/hour
4G   | 2     | 100G  | t4-standard-4G   | $0.001750/minute, $0.105/hour
8G   | 4     | 200G  | t4-standard-8G   | $0.003500/minute, $0.210/hour
16G  | 8     | 400G  | t4-standard-16G  | $0.007000/minute, $0.420/hour
32G  | 16    | 800G  | t4-standard-32G  | $0.014000/minute, $0.840/hour
64G  | 32    | 1600G | t4-standard-64G  | $0.028000/minute, $1.680/hour

The ratio of RAM to CPU is fixed at this time, so if you specify both RAM and CPU, the smallest container size that satisfies the larger of the two requests will be provisioned.
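To sanity-check a bill, multiply the per-minute rate by the minutes in the billing period. A quick sketch for a t4-standard-1G container left provisioned for 30 days (awk is used here only for the floating-point arithmetic):

```shell
# 1G container at $0.000438/minute, provisioned for 30 days (43,200 minutes).
awk 'BEGIN { printf "$%.2f\n", 0.000438 * 60 * 24 * 30 }'
# Prints $18.92
```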

Please note: resources are preserved for all containers, including stopped containers, until they are explicitly removed or destroyed by the customer. This includes the logs, IP address(es), and filesystem contents, so they can be easily and seamlessly restarted if desired. Because those resources are not available for other customers to use, charges are assessed for all provisioned containers, not just those that are actively running. To avoid charges for stopped containers, be sure to review all provisioned containers using docker ps -a and remove containers using docker rm $uuid_or_container_name.
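One way to find and remove stopped containers that are still accruing charges (the filter flag is standard Docker CLI; review the list before removing anything, since removal discards the container's filesystem and logs):

```shell
# List every container, running or stopped.
docker ps -a

# Remove all containers in the "exited" state; -q prints just their IDs.
docker rm $(docker ps -a -q --filter status=exited)
```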

Triton compared to our first generation Docker container service

Joyent will continue to offer our first generation Docker container service during the preview of our Triton Elastic Container Service for Docker.

There are a number of differences between the first generation service and our new, container-native Triton container service.

Where do the containers run?
First generation Docker service: Docker containers run in Docker host VMs.
Triton Elastic Container Service for Docker: Docker containers run on bare metal.

How do I secure each container?
First generation Docker service: Weak isolation between containers on the same Docker host VM requires consideration of the security requirements and implications of each container sharing each VM.
Triton Elastic Container Service for Docker: Each Docker container is individually secured in a trusted execution environment.

How do I manage and scale container performance?
First generation Docker service: The performance of each Docker container is limited by the underlying Docker host VM. Provision a larger Docker host VM and re-provision containers to it to increase Docker container performance.
Triton Elastic Container Service for Docker: Resources are assigned and reserved for each Docker container using the -m and -c flags in the docker run command. Specify resources as needed for performance requirements.

How do I connect to containers on the network?
First generation Docker service: To connect to a container from another container on the same Docker host VM, use the IP shown in docker inspect for that container. To connect to a container on a different host VM, use the IP address for that VM and the port mapping specified in docker run.
Triton Elastic Container Service for Docker: Use docker inspect to see the primary IP address for each Docker container, or sdc-listmachines to see all the IP addresses (including the public IP address, if requested at docker run time). Public IP addresses are accessible from the public internet; private IP addresses are accessible only to other containers owned by the same customer.

How do I access the API?
First generation Docker service: Each Docker host VM presents as a separate API endpoint.
Triton Elastic Container Service for Docker: The entire data center, with multiple compute nodes, is controlled via a single API endpoint.

How are fees assessed?
First generation Docker service: Fees accrue by the hour for each provisioned Docker host VM, even those without any running Docker containers.
Triton Elastic Container Service for Docker: Fees are charged by the minute for each Docker container.
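The network lookup described above can be done with Docker's template formatting; for example (the container name is illustrative):

```shell
# Print the primary IP address of a container named "web".
docker inspect --format '{{ .NetworkSettings.IPAddress }}' web
```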

Learn more

Post written by Casey Bisson