Expanded container service preview
Concurrent with our launch of Triton, our next-generation cloud infrastructure management platform, we're expanding access to the preview of Triton Elastic Container Service for Docker. We received far more applicants to our preview program than we expected. We're retooling and accelerating our plans to try to accommodate everybody, but despite those efforts and our expansion today, this is still a limited preview.
We want to make sure that the service meets our (high) standards and that everything works exactly as it should. Our preview testers have been incredibly helpful so far, providing feedback and enduring more frequent maintenance windows while we iterate quickly.
Joyent's Triton Elastic Container Service for Docker uses the native Docker API and is easy to control via the Docker CLI. Be sure you have Docker 1.4 or newer installed on your Mac or Linux environment, then execute the command below to download a helper script that will configure the Docker CLI to use the Triton remote Docker API endpoint:
<code class="language-bash">curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh && chmod +x sdc-docker-setup.sh</code>
Once the script is downloaded, you can execute it. Be sure to substitute the correct values for <code class="language-bash">&lt;ACCOUNT&gt;</code> and <code class="language-bash">&lt;PRIVATE_KEY_FILE&gt;</code>:
<code class="language-bash">./sdc-docker-setup.sh -k us-east-3b.api.joyent.com <ACCOUNT> ~/.ssh/<PRIVATE_KEY_FILE></code>
Note: the script generates a TLS certificate using your SSH key and writes it to a directory in your user account. The Docker client uses that TLS certificate to identify and authenticate your requests to the Docker API endpoint. You may also need to <code class="language-bash">unset DOCKER_TLS_VERIFY</code> if you get errors when trying to connect.
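As a rough sketch, the environment the script configures looks something like the following. The endpoint hostname and certificate path here are placeholders for illustration, not real values; the setup script prints the exact lines for your account.

```bash
# Illustrative only: run sdc-docker-setup.sh to get the real values.
export DOCKER_CERT_PATH=$HOME/.sdc/docker/<ACCOUNT>       # TLS cert derived from your SSH key
export DOCKER_HOST=tcp://<TRITON_DOCKER_ENDPOINT>:2376    # Triton remote Docker API endpoint
unset DOCKER_TLS_VERIFY                                   # work around TLS verification errors
```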
After that, running your first container is a simple <code class="language-bash">docker run</code>. The default container size is 1GB of RAM and 1/2 vCPU, but you can specify how much RAM and CPU each container gets with the <code class="language-bash">-m</code> flag at <code class="language-bash">docker run</code>. More information about how to use the service is available in our detailed walkthrough.
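For example (the image, container names, and sizes here are illustrative):

```bash
# Uses the default size: 1GB of RAM and 1/2 vCPU
docker run -d --name web nginx

# Requests 2GB of RAM; CPU scales with the container size
docker run -d -m 2g --name web-large nginx
```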
The Triton Elastic Container Service is charged by the minute. All usage in excess of Joyent’s free tier will be charged according to the pricing below.
The following container sizes are available in the preview:
The ratio of RAM to CPU is fixed at this time, so if you specify both RAM and CPU, the smallest container size that fits the larger of the two requests is what will be provisioned.
Please note: resources are preserved for all containers, including stopped containers, until they are explicitly removed or destroyed by the customer. This includes the logs, IP address(es), and filesystem contents, so containers can be easily and seamlessly restarted if desired. Because those resources are not available for other customers to use, charges are assessed for all provisioned containers, not just those that are actively running. To avoid charges for stopped containers, be sure to review all provisioned containers using <code class="language-bash">docker ps -a</code> and remove containers using <code class="language-bash">docker rm $uuid_or_container_name</code>.
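A sketch of that cleanup workflow (the container name is illustrative, and the <code class="language-bash">--filter</code> option assumes a reasonably recent Docker client):

```bash
# List all containers, including stopped ones that still reserve resources
docker ps -a

# Remove a specific stopped container by name or UUID
docker rm web-large

# Remove every exited container in one pass
docker rm $(docker ps -aq --filter status=exited)
```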
Triton compared to our first generation Docker container service
Joyent will continue to offer our first generation Docker container service during the preview of our Triton Elastic Container Service for Docker.
There are a number of differences between the first generation service and our new, container-native Triton container service.
|  | First generation Docker service | Triton Elastic Container Service for Docker |
| --- | --- | --- |
| Where do the containers run? | Docker containers run in Docker host VMs. | Docker containers run on bare metal. |
| How do I secure each container? | Weak isolation between containers on the same Docker host VM means the security requirements and implications of every container sharing that VM must be considered together. | Each Docker container is individually secured in a trusted execution environment. |
| How do I manage and scale container performance? | The performance of each Docker container is limited by the underlying Docker host VM. To increase Docker container performance, provision a larger Docker host VM and re-provision containers to it. | Resources are assigned and reserved for each Docker container using the <code class="language-bash">-m</code> and <code class="language-bash">-c</code> flags in the <code class="language-bash">docker run</code> command. Specify resources as needed to meet performance requirements. |
| How do I connect to containers on the network? | To connect to a container from another container on the same Docker host VM, use the IP shown in <code class="language-bash">docker inspect</code> for that container. To connect to a container on a different host VM, use the IP address of that VM and the port mapping specified at <code class="language-bash">docker run</code> time. | Use <code class="language-bash">docker inspect</code> to see the primary IP address for each Docker container, or <code class="language-bash">sdc-listmachines</code> to see all of its IP addresses (including the public IP address, if requested at <code class="language-bash">docker run</code> time). Public IP addresses are accessible from the public internet; private IP addresses are accessible only to other containers owned by the same customer. |
| How do I access the API? | Each Docker host VM presents as a separate API endpoint. | The entire data center, with its multiple compute nodes, is controlled via a single API endpoint. |
| How are fees assessed? | Fees accrue by the hour for each provisioned Docker host VM, even those without any running Docker containers. | Fees are charged by the minute for each Docker container. |
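For example, assuming a container named <code class="language-bash">web</code>, its primary IP address can be pulled from <code class="language-bash">docker inspect</code> with a Go template (this template path is standard Docker, not Triton-specific):

```bash
# Print only the primary IP address of the container named "web"
docker inspect --format '{{ .NetworkSettings.IPAddress }}' web
```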
- Additional questions about the Triton Elastic Container Service are answered in our FAQ.
- Understand container-native infrastructure.
- Compare the performance and convenience advantages of container-native infrastructure to previous-generation Docker services.
- Learn how to run Triton, including the Container Service for Docker, on your own hardware.
- Learn more about virtualization and the benefits of OS virtualization underlying containers, and how Triton can run Docker containers securely in Bryan Cantrill's Docker and the Future of Containers in Production talk from January.
- April 6: Removed mention of CPU shares related to package sizes. CPU shares are not portable across compute nodes with differing performance, and the community is split on how to solve this problem (some suggest representing CPU capacity as a decimal count of vCPUs for each container, others as an integer count of vCPUs, or as a share count out of 1000 or 1024). We're participating in these discussions and will amend our API support as a standard emerges.
Post written by Casey Bisson