Dynamic nginx upstreams with Containerbuddy
Updated example available
Our updated autopilot pattern example application offers a more complete guide to building autopilot pattern applications with automated service discovery and configuration.
The video below, however, is still a useful overview of how to automate application discovery and configuration for autopilot operations with Containerbuddy.
Containerbuddy simplifies service discovery in Docker and provides a workaround for applications that weren't designed from the start for container-native discovery. Today I'd like to walk through an example of using Containerbuddy with my favorite web server, Nginx. You can follow along with the code on GitHub.
Updated version available
I've expanded on this blog post and updated its content. Please see my post on running applications on autopilot. The original example is still available at the link above.
An architecture for load-balancing
In this application, an Nginx node acts as a reverse proxy for any number of upstream application nodes, which we'll call `app`. Nginx is configured with an `upstream` directive to run a round-robin load balancer. The backend instances are Node.js applications serving static assets, but that's just for illustration; we could be running any application server here. We're going to use Consul as a service registry, and application nodes will register themselves when they come online.
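As a rough sketch, the kind of virtualhost configuration Nginx ends up with might look like the following. The upstream name `backend`, the addresses, and the port are all illustrative; in this example the server entries are regenerated dynamically rather than written by hand:

```nginx
# Hypothetical virtualhost config: round-robin load balancing across app nodes.
# These server entries are placeholders; in the example they are regenerated
# from the Consul service catalog whenever app nodes come or go.
upstream backend {
    server 10.0.0.10:3000;
    server 10.0.0.11:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

Because the `upstream` block carries no default weighting, Nginx distributes requests round-robin across the listed servers.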
The Nginx service's Containerbuddy is configured with an `onChange` handler that calls out to `consul-template` to write out a new virtualhost configuration file based on a template that we've stored in Consul. It then fires an `nginx -s reload` signal to Nginx, which causes it to gracefully reload its configuration.
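A minimal sketch of what that Containerbuddy configuration might look like is below. The service names, ports, polling intervals, and script path here are illustrative assumptions, not taken verbatim from the example repo:

```json
{
  "consul": "consul:8500",
  "services": [
    {
      "name": "nginx",
      "port": 80,
      "health": "curl --fail -s http://localhost/health",
      "poll": 10,
      "ttl": 25
    }
  ],
  "backends": [
    {
      "name": "app",
      "poll": 7,
      "onChange": "/opt/containerbuddy/reload-nginx.sh"
    }
  ]
}
```

The `backends` block is the piece doing the work here: Containerbuddy polls Consul for changes to the `app` service and runs the `onChange` handler whenever the set of healthy nodes changes.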
Nginx's signal handler is a great example of the kind of behavior we want from a container-native application; the application provides a control mechanism that allows us to change our topology without having to redeploy the service or restart it in a way that creates downtime.
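For reference, the consul-template template mentioned above might look something like this sketch (the service name `app` comes from the application nodes described earlier; the upstream name is an assumption):

```
upstream backend {
    {{ range service "app" }}server {{ .Address }}:{{ .Port }};
    {{ end }}
}
```

When rendered, the `range` loop emits one `server` line per healthy `app` instance registered in Consul, which is what makes the reloaded Nginx configuration track the live topology.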
To run this example on your own:
- Get a Joyent account and add your SSH key.
- Install the Docker Toolbox (including `docker-compose`) on your laptop or other environment, as well as the Joyent CloudAPI CLI tools (including the `smartdc` tools).
- Configure Docker and Docker Compose for use with Joyent:
```shell
curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh && chmod +x sdc-docker-setup.sh
./sdc-docker-setup.sh -k us-east-1.api.joyent.com
```
You can run the example on Triton:
```shell
cd ./examples
./start.sh -p example
```
Or in your local Docker environment:
```shell
cd ./examples
curl -Lo containerbuddy-0.0.1-alpha.tar.gz \
     https://github.com/joyent/containerbuddy/releases/download/0.0.1-alpha/containerbuddy-0.0.1-alpha.tar.gz
tar -xf containerbuddy-0.0.1-alpha.tar.gz
cp ./build/containerbuddy ./nginx/opt/containerbuddy/
cp ./build/containerbuddy ./app/opt/containerbuddy/
./start.sh -p example -f docker-compose-local.yml
```
At this point you'll see the Consul console and a web page that says what application server you've been proxied to (there's only one right now) and what nodes are marked as available in Consul. For purposes of illustration our web page automatically refreshes itself every 5 seconds so that we can see changes.
Let's scale up the number of `app` nodes:

```shell
docker-compose -p example scale app=3
```
As the nodes launch and register themselves with Consul, you'll see them appear in the Consul UI. The web page that the start script opens refreshes itself every 5 seconds, so once you've added new application containers you'll start seeing the "This page served by app server" message change as Nginx round-robins requests across the new nodes.
This was just a simple example of container-native service discovery. In an upcoming post I'll demonstrate a multi-tier application including pushing service discovery data into an external DNS provider so we can have zero-downtime deploys of every piece of the stack including the load balancing tier.
The following video offers a walkthrough of how to automate application discovery and configuration using Containerbuddy and demonstrates the process in the context of a complete, Dockerized application.
Post written by Tim Gross