October 16, 2015 - by Casey Bisson
In the same way that Docker has proven to be an outstanding way to package applications from development to deployment, Mesos is becoming the de facto standard for scheduling those Dockerized applications. How much of a standard? Disney, GE, Netflix, Orbitz, and PayPal each demonstrated their Docker + Mesos workflows and solutions at DockerCon recently.
The problem is, running those applications in the cloud requires VMs that sap performance and add cost and complexity. Using Mesos on-premise, on the other hand, has complexity of its own and typically requires dedicated hardware resources. In both cases, operations teams must dedicate resources for a Mesos implementation based on maximum anticipated load, not current need.
At Joyent, we've been looking at Mesos' scheduling features and at how we can leverage those without requiring dedicated resources. We take this seriously because we've seen how the cost and complexity of trying to manage containers inside hardware virtual machines can stop Docker efforts cold. Bare metal container security, on the other hand, opens the door to using Mesos with shared resources and consumption model pricing. Joyent's Triton offers exactly the container-native security that's needed to do that.
Joyent's Triton container-native infrastructure offers a number of advantages for Mesos
We've made this work by changing the relationship between the Mesos agent and the cloud. Rather than running the agent inside a VM or on dedicated bare metal, the agent interacts with our cloud to provision Docker containers on bare metal across an entire data center. This eliminates the need to provision and pay for resources, either VMs or bare metal, that may go unused or be used inefficiently, and allows you to simply schedule tasks and pay for them as you run them.
Mesos typically runs with an agent on every virtual machine or bare metal server under management.
That architecture is required in environments that are not container-native. That is, environments where the unit of compute is a virtual machine, or unmanaged environments without any infrastructure services.
In many ways, Triton's architecture looks similar, though there are significant differences in what's provided by the underlying host and what the Triton agents can do. For example, the network virtualization in Triton's container hypervisor gives every container its own network interface, eliminating network complexity. Triton itself automates data center management and handles everything from the moment you rack and stack a compute node and plug it in. This keeps the infrastructure secure and up to date and eliminates the need for custom and fragile patch distribution, a real win for data centers of all sizes.
Running Mesos + Triton can be as simple as plugging the Mesos agent into the Docker remote API for each data center. This allows Mesos frameworks to execute Dockerized tasks on bare metal throughout your data center. Indeed, you can easily run tasks in both environments, making it easy to scale from a private data center into the public cloud.
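As a sketch of what that connection looks like from the client side: any Docker client configured for Triton's endpoint schedules containers across the whole data center. The endpoint and certificate path below are illustrative, not authoritative; use the values that sdc-docker-setup.sh prints for your account.

```shell
# Point a Docker client (the same remote API the Mesos agent uses)
# at a Triton data center. Illustrative values; sdc-docker-setup.sh
# prints the correct ones for your account.
export DOCKER_HOST=tcp://us-east-1.docker.joyent.com:2376
export DOCKER_CERT_PATH=$HOME/.sdc/docker/<ACCOUNT>   # substitute your account
export DOCKER_TLS_VERIFY=1
```

With those variables set, ordinary `docker` commands, and anything built on the Docker remote API, operate against the entire data center rather than a single host.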
Because Triton is built for secure multi-tenancy, you're not limited to running a single Mesos environment or using the data center just for Mesos tasks. Any number of tenants/customers can run their own Mesos environment. And workloads that aren't ready for Mesos can run on the same hardware for maximum utilization and efficiency.
Now, if you've gotten this far in this post, you might want to give Mesos + Triton a test drive. What follows are instructions to get a Mesos + Marathon environment, with three sample applications, up and running on Triton in 30 minutes or less. These instructions are aimed at the Triton Elastic Container Service, but you can easily modify the steps to try this out in your own data center using the Triton Elastic Container Infrastructure software.
The instructions here work well for Unix-like environments such as Mac OS X and Linux.
You'll need Docker and Docker Compose (docker-compose) on your laptop or other environment, along with the Joyent CloudAPI CLI tools. Then point Docker at Triton with the sdc-docker-setup.sh script:
<code>curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh && chmod +x sdc-docker-setup.sh
./sdc-docker-setup.sh -k us-east-1.api.joyent.com <ACCOUNT> ~/.ssh/<PRIVATE_KEY_FILE></code>
With all the prerequisites taken care of, you should be able to have Mesos + Triton running in moments.
<code>cd</code> into the cloned or downloaded directory
<code>bash start.sh</code> to start everything up
The <code>start.sh</code> script will automatically pull the correct images and start them via Docker Compose, then output the connection details for the Mesos master and Marathon dashboards, as well as Consul. On a Mac, it will even open your browser to them.
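Put together, the quickstart above is only a few commands. This is a sketch: REPO_URL is a stand-in for the demo repository, which isn't named in this excerpt, so substitute the one linked from this post.

```shell
# Quickstart sketch of the steps above. REPO_URL is a stand-in for
# the demo repository linked from this post (not named here).
git clone "$REPO_URL" mesos-triton
cd mesos-triton

# Pulls the right images, starts them via Docker Compose, and prints
# connection details for the Mesos master, Marathon, and Consul.
bash start.sh
```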
At that point, you've got a working Mesos + Marathon environment, ready for you to start some tasks. To see how it works, go ahead and try the example tasks the script outputs. It includes a trivial Nginx example, as well as a more complex, composed application that includes a Couchbase cluster and a simple load-generating client.
To make your exploration easier, there's also a simple script to set some environment variables for Mesos and Marathon:
<code>eval "$(bash env.sh mesos)"</code>
Run that and you'll be able to more easily curl Mesos, Marathon, and Consul for details or to register new apps.
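For example, the following queries use the standard HTTP endpoints these services expose: Mesos reports cluster state at /master/state.json, Marathon lists and accepts apps at /v2/apps, and Consul lists services at /v1/catalog/services. The host variable names here are hypothetical; substitute whatever env.sh actually exports in your shell.

```shell
# Hypothetical host variables; use the ones env.sh sets in your shell.
curl -s "http://$MESOS_HOST:5050/master/state.json"       # Mesos cluster state
curl -s "http://$MARATHON_HOST:8080/v2/apps"              # apps Marathon is running
curl -s "http://$CONSUL_HOST:8500/v1/catalog/services"    # services known to Consul

# Registering a new app is a POST of an app definition to Marathon:
curl -s -X POST "http://$MARATHON_HOST:8080/v2/apps" \
     -H 'Content-Type: application/json' \
     -d '{"id": "/hello", "cmd": "sleep 3600", "cpus": 0.1, "mem": 32, "instances": 1}'
```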
This is an environment for easy experimentation with Mesos; a few aspects of it are less than ideal for actual production use just now. I'll demonstrate a production-ready environment in an upcoming post; this one is intended both to open up experimentation and to demonstrate a simple implementation.
We had to make a few changes to the Mesos code you just deployed to adapt it to Triton. The changes were relatively trivial, and they can all be seen in two pull requests: https://github.com/joyent/mesos/pull/1 and https://github.com/joyent/mesos/pull/3. In time, we hope to find compatible ways to upstream those changes to Mesos.
While Triton supports any number of scheduling tools, including Kubernetes, Nomad, and Docker tools, our interest in Mesos stems from Joyent's strong belief that scheduling is firmly in the domain of the application. How, for example, can a single scheduling framework address the needs of every application? Can every application scale up and down or recover from task failure in exactly the same way? Mesos understands this, and the large number of Mesos frameworks stands as strong evidence for it.
As excited as we are about Mesos, we are even more excited about Mesos + Triton. We believe Triton can be the best infrastructure for Mesos, and Triton's container-native features can offer the best runtime environment for Mesos tasks, both in your data center and in the cloud. We're actively working internally and with the broader Mesos community to make Mesos + Triton even better. This includes first-class support for new and developing features, like maintenance primitives and volumes, as well as a more convenient user experience for Mesos + Triton. We look forward to what you can build with Mesos + Triton.