Autopilot Pattern Nginx with automatic SSL certificates

Nginx is a popular web server with a well-deserved reputation for performance and reliability. You'll often find Nginx as the front-end load balancer or reverse proxy for applications of all sizes, from small websites to some of the largest name-brand web apps. The autopilotpattern/nginx image streamlines the integration of Nginx into applications that follow the Autopilot Pattern. In addition to core Nginx functionality, the image simplifies securing an application by automating SSL certificate acquisition from Let's Encrypt. To demonstrate, we'll start by taking a look at how the autopilotpattern/wordpress image makes use of our Nginx image.

Using Nginx in your applications

Our Autopilot Pattern WordPress implementation uses Nginx as a reverse proxy, and stands as a good example for how to use and extend this Nginx image for your own applications.

In the nginx directory of the autopilotpattern/wordpress repo you'll see the Dockerfile used to extend the image, which looks something like:

    FROM autopilotpattern/nginx:1-r6.1.0
    COPY etc /etc

Simple enough; we use the autopilotpattern/nginx:1-r6.1.0 image and copy configurations specific to our application to form our own image. We're copying two configuration files: etc/containerpilot.json and etc/nginx/nginx.conf. For each of these, we copied the corresponding file from the autopilotpattern/nginx repo and modified it to suit. In etc/containerpilot.json we've edited the backends section to add wordpress, so that our Nginx reload.sh script is executed whenever there are changes to the WordPress service we proxy to. Here's the excerpt:

"backends": [    {      "name": "wordpress",      "poll": 7,      "onChange": "/usr/local/bin/reload.sh"    }  ]

In the etc/nginx/nginx.conf configuration file, we've added a section which conditionally defines our WordPress upstream:

    {{ if service "wordpress" }}
    upstream wordpress {
        # write the address:port pairs for each healthy WordPress node
        {{range service "wordpress"}}
        server {{.Address}}:{{.Port}};
        {{end}}
        least_conn;
    }
    {{ end }}

We're using Consul Template to generate our Nginx configuration file from the reload.sh script executed by ContainerPilot, both when the Nginx container starts and whenever there are changes to the WordPress service (for example, when it becomes healthy or when it scales up or down). Consul Template gives us the ability to use Go templating to dynamically generate portions of the config based on the service data in Consul, which is maintained by ContainerPilot.
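
The reload.sh script itself ships with the image, so you normally won't need to touch it, but it helps to know roughly what it does. As a minimal sketch, assuming the Consul address is available in a CONSUL environment variable and the template source is a .ctmpl file rendered to /etc/nginx/nginx.conf (the exact paths inside the image may differ), it boils down to a single consul-template invocation:

    #!/bin/sh
    # Render the Nginx config from its Consul Template source; if the
    # rendered file changes, ask the running Nginx to reload it.
    # CONSUL and the template/output paths below are illustrative assumptions.
    consul-template \
        -once \
        -consul "${CONSUL}:8500" \
        -template "/etc/nginx/nginx.conf.ctmpl:/etc/nginx/nginx.conf:nginx -s reload"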

Nginx image life cycle

In addition to defining the WordPress upstream within the Nginx configuration file, we also define the proxy to this upstream:

        {{ if service "wordpress" }}
        rewrite ^/wp-admin/?(.*) /wordpress/wp-admin/$1;
        location ^~ / {
            proxy_pass http://wordpress;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
        }
        {{end}}

Automatic SSL certificates via Let's Encrypt

Let's Encrypt is a free certificate authority which implements the ACME protocol for automating interactions between itself and web servers. This greatly reduces the friction in operating a secure site. As discussed in WordPress on Autopilot, the Nginx image offers ACME/Let's Encrypt automation as a feature. You'll notice a section in the image's nginx.conf that looks like:

        location /.well-known/acme-challenge {
            alias /var/www/acme/challenge;
        }

This allows Nginx to respond to ACME challenge requests, so that the rest of the included ACME machinery can automatically acquire and enable Let's Encrypt SSL certificates. All that's required to enable this feature is to set two environment variables for the autopilotpattern/nginx image. For example, your docker-compose.yml may look like:

    image: autopilotpattern/nginx
    restart: always
    mem_limit: 512m
    env_file: _env
    environment:
        - ACME_DOMAIN=mydomain.com
        - ACME_ENV=staging
    ports:
        - 80
        - 443
    labels:
        - triton.cns.services=nginx

Here, we have set the ACME_DOMAIN and ACME_ENV environment variables, which instruct the image to acquire certificates for mydomain.com from the Let's Encrypt staging environment. Staging issues untrusted test certificates, which gives you the opportunity to make sure everything's working as expected without counting toward the Let's Encrypt API rate limits. Once you're satisfied, change ACME_ENV=staging to ACME_ENV=production to go live with a browser-trusted SSL certificate.
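
One way to confirm which environment issued the certificate Nginx is serving is to inspect its issuer with standard OpenSSL tooling (this isn't specific to the image; mydomain.com is the placeholder domain from the compose file above):

    # Print the issuer of the certificate currently served on port 443.
    # A staging certificate will show a fake/staging Let's Encrypt issuer,
    # while a production certificate will show a real Let's Encrypt intermediate.
    echo | openssl s_client -connect mydomain.com:443 -servername mydomain.com 2>/dev/null \
        | openssl x509 -noout -issuer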

Note: When updating the ACME configuration of an Nginx image, you will need to clear the old ACME state from Consul before re-deploying; otherwise, new certificates will not be acquired immediately as expected. You can do that with the following command:

    docker exec -it $CONSUL_CONTAINER_NAME curl -X DELETE localhost:8500/v1/kv/nginx/acme?recurse=1
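
If you'd like to see what's stored under that prefix before deleting it, Consul's KV API can list the keys. This is a hypothetical check using the same Consul container; the nginx/acme prefix matches the delete command above:

    # List the ACME-related keys the image has stored in Consul's KV store
    docker exec -it $CONSUL_CONTAINER_NAME curl -s localhost:8500/v1/kv/nginx/acme?keys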

Applying this to your application

When modifying and extending the autopilotpattern/nginx image for use in your application, you'll want to modify the same configuration files, in much the same way as we've done in this example:

  1. Add your backend services to containerpilot.json
  2. Define an upstream for each backend service in nginx.conf
  3. Define appropriate proxy behavior for each backend service in nginx.conf

You can proxy any number of backend services, which makes the Nginx image a great component in an application that follows the microservices pattern. Consider the following example, in which we proxy two different routes to two different sets of upstream hosts.

In the http section of nginx.conf:

    # Define upstream service1 hosts
    {{ if service "service1" }}
    upstream service1 {
        {{range service "service1" }}
        server {{.Address}}:{{.Port}};
        {{end}}
        least_conn;
    }
    {{ end }}

    # Define upstream service2 hosts
    {{ if service "service2" }}
    upstream service2 {
        {{range service "service2" }}
        server {{.Address}}:{{.Port}};
        {{end}}
        least_conn;
    }
    {{ end }}

For clarity, here's what the above snippet will become after Consul Template renders it (assuming both services are registered and healthy in Consul):

    upstream service1 {
        server 172.17.0.3:5000;
        least_conn;
    }

    upstream service2 {
        server 172.17.0.5:5000;
        least_conn;
    }

Next, in the appropriate server section of http within nginx.conf we add:

        # When service1 is ready, proxy + load balance /service1 requests to its upstream hosts
        {{ if service "service1" }}
        location /service1 {
            proxy_pass http://service1;
            proxy_redirect off;
        }
        {{ end }}

        # When service2 is ready, proxy + load balance /service2 requests to its upstream hosts
        {{ if service "service2" }}
        location /service2 {
            proxy_pass http://service2;
            proxy_redirect off;
        }
        {{ end }}

The above example illustrates how you can assemble multiple microservices behind Nginx while leveraging its features, including streamlined ACME/Let's Encrypt support. Don't forget to add service1 and service2 to your containerpilot.json backends section to complete the circle.
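
That backends section mirrors the WordPress excerpt shown earlier; a sketch (the poll interval and reload script path are simply carried over from that example and may differ in your setup) could look like:

    "backends": [
      {
        "name": "service1",
        "poll": 7,
        "onChange": "/usr/local/bin/reload.sh"
      },
      {
        "name": "service2",
        "poll": 7,
        "onChange": "/usr/local/bin/reload.sh"
      }
    ]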

Final thoughts

In this post we've covered how the autopilotpattern/wordpress image leverages the autopilotpattern/nginx image, and how to do the same within your own application. We also explained how to take advantage of the ACME/Let's Encrypt feature of the Nginx image, which greatly simplifies the process of getting a secure application up and running. Hopefully this has helped to demonstrate the value of the pattern and its ability to create autonomous application clusters. We would love to hear your feedback and questions!



Post written by Jason Pincin