Matthew McKeen

Software and Systems Engineer

Docker Import And Push Support For Packer


Over the last few weeks I’ve put together a pull request for Packer that should ship soon as part of version 0.5.2.

In my use of Packer and Docker, I’ve always found it an annoyance to have to import the Packer-built Docker image separately with docker import, rather than having Packer handle the import itself with a post-processor.

Once 0.5.2 is released, deploying a Packer-built Docker image can be streamlined with the docker-import and docker-push post-processors, using a build template much like the following:

{
  "type": "docker-import",
  "repository": "mmckeen/packer",
  "tag": "0.5.2"
},
"docker-push"

These post-processors automatically import the generated Packer artifact into the Docker daemon and push it to a remote repository, replacing most of the deployment script from my previous blog post.
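For reference, here is a minimal sketch of how those post-processors might sit inside a complete template, assuming the Docker builder from the template in my previous post and chaining docker-import into docker-push as a single sequence:

{
  "builders": [
    {
      "type": "docker",
      "image": "mmckeen/opensuse-13-1",
      "export_path": "mmckeen.net.tar"
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-import",
        "repository": "mmckeen/packer",
        "tag": "0.5.2"
      },
      "docker-push"
    ]
  ]
}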

Advanced Provisioning With Packer For Docker And Vagrant


When Dockerfiles Aren’t Enough

Dockerfiles are a limited solution to a complex problem: the provisioning of Docker images. At their core, Dockerfiles operate much like a simple shell script: a list of instructions for getting from one state, a base image, to a final state, the output image. These instructions do not adapt easily, cannot express branching logic, and are, frankly, ugly to look at. Given all the advancement that modern configuration management systems provide, Dockerfiles are a huge roadblock when considering Docker as a serious contender for the repeatable deployment and configuration of complex application stacks.

It is possible to hack around these limitations by installing Puppet or Chef from a Dockerfile, copying modules or cookbooks into the running container, and doing configuration via existing, well-proven configuration management methods, but even then we still rely on the Dockerfile language itself to do much of the provisioning work. The work one does to get Dockerfile provisioning working does not translate to building out virtual and bare metal machines, making it harder to use the same Puppet or Chef code base to configure all of the various types of machines one may rely on across development, testing, and production. That is, until we introduce the completely awesome tool that is Packer.

Packer

Packer is a tool from Mitchell Hashimoto, the same guy who brought us Vagrant, designed to let the same provisioners, whether Puppet, Chef, or plain shell scripts, spit out anything from images for Digital Ocean or EC2 to Vagrant boxes and, you’ve guessed it, Docker images.

The benefit is obvious: with a small amount of additional configuration in our Packer templates, and with the same provisioning code, I can create an application image that runs anywhere from Vagrant for testing and development to EC2 scaled out to thousands of users. It is truly an awesome automation tool.

Packer and Docker Together At Last

Using Packer, we can improve how we configure and deploy code to the Nginx Docker image from my previous blog post. In that post, we gave the Nginx container access to the files it serves via a folder shared between the container and the host system. In a perfect world, the image we run as a container encompasses all of the dependencies the application needs to run. That should include the application code being served, and as an added bonus of packaging the code with the image, each image iteration becomes a release of the software being served. Rollbacks become dead simple: to run an old version of the code base, we simply replace the currently running container with another based on an older image. The code is contained within the image itself, so whenever we export that image or host it on a Docker index, the code goes along with it. Moving the code into the image could easily have been done with a Dockerfile, but I am going to show you how, using Packer, I can take the same application and deploy it two ways, in a Vagrant box and in a Docker container, with more deployment platforms only a couple of lines of code away.
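As a rough sketch of what such a rollback could look like with plain Docker commands (the container name and image tags here are hypothetical, just to illustrate the idea):

# stop and remove the container running the current release
docker stop mmckeen.net
docker rm mmckeen.net

# start a replacement container from an older image tag
docker run -d --name=mmckeen.net -p 80:80 mmckeen_net_prod:release-41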

We will start with the Docker container, and you’ll be amazed at how easy it is.

This is the Packer template for a Docker image that will run this very website that you are now browsing:

{
  "builders": [
    {
      "type": "docker",
      "image": "mmckeen/opensuse-13-1",
      "export_path": "mmckeen.net.tar"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
                  "zypper -n in nginx",
                  "echo 'daemon off;' >> /etc/nginx/nginx.conf"
                ]
    },
    {
      "type": "file",
      "source": "/srv/www/mmckeen.net/",
      "destination": "/srv/www/htdocs/"      
    }
  ]
}

A Packer template is a simple JSON document; this particular one has two sections: builders and provisioners. The builders section describes the outputs of the Packer build, and the provisioners describe the steps that configure the machine during the build. The template above uses the simplest provisioner, which just runs a set of shell commands, in much the same way as a Dockerfile’s RUN instructions, but the provisioners section can also use Puppet, Chef, and various other tools, allowing for far more advanced provisioning than shell commands or scripts. Also notice that the template uses the file provisioner to copy the website code base into the final build artifact, allowing for the easy deployment of our code directly into the built Docker image.
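If the provisioning were managed with Puppet instead of shell commands, the shell provisioner above could, for example, be swapped for Packer’s puppet-masterless provisioner. A hypothetical sketch (the manifest and module paths are made up for illustration):

{
  "type": "puppet-masterless",
  "manifest_file": "puppet/manifests/mmckeen_net.pp",
  "module_paths": ["puppet/modules"]
}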

So you can see how this template is used in action, below is a sample deployment script that builds a new website image and replaces the currently running Docker container with one based on the new image.

#!/bin/bash

# Do a Packer build

rm -f mmckeen.net.tar

packer build ./packer-build-templates/docker/mmckeen.net/mmckeen.net.json

# Get the latest base image ID
LAST_IMPORT=`cat last_mmckeen_net`

# Import build artifact as Docker image
echo "Importing the newly built image into docker..."
docker import - mmckeen_net_packer < mmckeen.net.tar > last_mmckeen_net

# Delete the previous base image
echo "Removing the old base image..."
docker rmi $LAST_IMPORT

# Build a new production Docker image using metadata only Dockerfile
echo "Add in Docker metadata"
docker build -rm -t="mmckeen_net_prod"  dockerfiles/mmckeen.net/
echo "Final mmckeen_net_prod image built, ready for docker run"

echo "Restarting mmckeen.net through supervisorctl"
supervisorctl restart mmckeen.net

The script is fairly simple: it runs the Packer build, imports the resulting tar archive into Docker as an image, adds Docker-specific metadata like exposed ports to the final application image (a step that should become unnecessary once Packer supports this natively), and uses supervisorctl to restart the mmckeen.net container, putting the new build into service.
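The metadata-only Dockerfile under dockerfiles/mmckeen.net/ isn’t shown in this post, but a minimal sketch of what it might contain, assuming it builds from the freshly imported mmckeen_net_packer image, would be:

FROM mmckeen_net_packer

# keep Nginx in the foreground when the container starts
ENTRYPOINT /usr/sbin/nginx -c /etc/nginx/nginx.conf

# expose HTTP
EXPOSE 80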

Of course, all of the above could have been done just as simply by using Docker and Dockerfiles directly, but here comes the amazing thing about Packer: its flexibility.

Vagrant

Vagrant is the quintessential tool these days for isolating developer environments. I also use it for testing software projects, particularly when a full VM is easier to use or provision than a Docker image, or when I am working with others on Windows or Mac systems. Say my website were a much more substantial software project: wouldn’t we want the same application configuration that goes into the Docker image to be testable in a virtual machine by developers or QA personnel on non-Linux desktops and laptops? With Packer, it is a simple addition to our original build template.

{
  "builders": [
    {
      "type": "docker",
      "image": "mmckeen/opensuse-13-1",
      "export_path": "mmckeen.net.tar"
    },
    {
      "type": "virtualbox-ovf",
      "source_path": "./openSUSE_13.1_Packer_Base-1.0.0/openSUSE_13.1_Packer_Base.x86_64-1.0.0.ovf",
      "ssh_username": "root",
      "ssh_password": "",
      "ssh_wait_timeout": "2m",
      "vboxmanage": [
        ["modifyvm", "", "--nic1", "nat"]
      ],
      "shutdown_command": "shutdown -P now"     
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
                  "zypper -n ref",
                  "zypper -n up",
                  "zypper -n in nginx"
                ]
    },
    {
      "type": "shell",
      "only": ["docker"],
      "inline": [
                  "echo 'daemon off;' >> /etc/nginx/nginx.conf"
                ]
    },
    {
      "type": "shell",
      "only": ["virtualbox-ovf"],
      "inline": [
                  "useradd vagrant",
                  "echo 'vagrant    ALL = NOPASSWD: ALL' > /etc/sudoers",
                  "mkdir -p /home/vagrant/.ssh",
                  "wget --no-check-certificate -O authorized_keys 'https://github.com/mitchellh/vagrant/raw/master/keys/vagrant.pub'",
                  "mv authorized_keys /home/vagrant/.ssh/",
                  "chown -R vagrant /home/vagrant/.ssh"
                ]
    },
    {
      "type": "file",
      "source": "/srv/www/mmckeen.net/",
      "destination": "/srv/www/htdocs/"      
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "only": ["virtualbox-ovf"],
      "output": "mmckeen_net_virtualbox.box",
      "vagrantfile_template": "./Vagrantfile.template"
    } 
  ]
}

Of course, “simple” might not be a 100% accurate term, but if you look beyond the setup code required for basically any Vagrant box (creating the vagrant user, its SSH key, and sudo access), the difference between the two templates comes down to two pieces: the second builder, of type virtualbox-ovf, shown first below, and the post-processors section that follows it.

{
  "type": "virtualbox-ovf",
  "source_path": "./openSUSE_13.1_Packer_Base-1.0.0/openSUSE_13.1_Packer_Base.x86_64-1.0.0.ovf",
  "ssh_username": "root",
  "ssh_password": "",
  "ssh_wait_timeout": "2m",
  "vboxmanage": [
    ["modifyvm", "", "--nic1", "nat"]
  ],
  "shutdown_command": "shutdown -P now"     
}
  "post-processors": [
    {
      "type": "vagrant",
      "only": ["virtualbox-ovf"],
      "output": "mmckeen_net_virtualbox.box",
      "vagrantfile_template": "./Vagrantfile.template"
    } 
  ]

Using an OVF virtual machine image that I created with the SUSE Studio service (http://susestudio.com/a/ZNpZV4/opensuse-13-1-packer-base), Packer imports the machine image into VirtualBox, logs in over SSH, and executes the commands in the provisioners section of the build template. After that, the Vagrant post-processor takes over, bundling the VirtualBox VM with a template Vagrantfile into a Vagrant box containing all the code, system services, and configuration needed to run mmckeen.net in its own dedicated virtual machine under Vagrant.
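Once the box is built, using it locally is just the usual Vagrant workflow; a quick sketch (the box name here is arbitrary):

vagrant box add mmckeen_net mmckeen_net_virtualbox.box
vagrant init mmckeen_net
vagrant up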

Conclusion

At this point we have one Packer template that can build both a Docker image and a Vagrant box to run mmckeen.net, either concurrently or separately. On my Digital Ocean VPS I use the Docker builder to generate each release of this website; on my local machine I use the VirtualBox builder and Vagrant post-processor to generate a Vagrant box for local testing and for systems where I don’t have Docker handy. Of course, the configuration needed to run this website is extremely minimal, and I don’t release new versions very often, but the real power of Packer’s approach shows with complex configuration driven by configuration management and continuous integration tooling. I think it is a completely awesome way to fuel a single approach, or several, to deploying, testing, and developing many types of applications.
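Building just one of the targets from a shared template is a matter of the -only flag on packer build; for example (the template path here is illustrative, reusing the one from the deployment script above):

packer build -only=docker ./packer-build-templates/docker/mmckeen.net/mmckeen.net.json
packer build -only=virtualbox-ovf ./packer-build-templates/docker/mmckeen.net/mmckeen.net.json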

As always, all the code used in this post, with the exception of the deployment script, is available from my Github packer-build-templates repository.

Docker All The Things: Nginx And Supervisor


The Beginning

Recently the DevOps community has been abuzz about the potential of Docker to change the way we deploy applications, and its potential to streamline LXC into a more grokable package, much like Vagrant did for virtualized developer environments. With finals finally over and the Winter break upon me, I have begun to take the time to rearchitect some of the server applications that I use every day: mainly my web hosting, ownCloud, and perhaps mail and database servers if I feel the need. To do this, I have decided to apply the concepts and capabilities of Docker to change the way I administer my personal applications, and to gain some understanding that I can take into my professional life. Here begins my adventure into using Docker to build my infrastructure, one application at a time, in a way that I believe will make my future administration of these services a much easier task.

Digital Ocean

I have chosen Digital Ocean to host my applications. Though the following steps should apply to any server host, Digital Ocean provides a fast and cheap VPS option that you might want to try yourself in order to replicate the steps that I have done. If you do choose Digital Ocean as your hosting provider, please do use my referral link.

Setting Up The Docker Host

Before we can start building and deploying Docker containers, we need a system with Docker installed to host and manage our running containers. Though Docker since version 0.7 (I am using 0.7.1 at the moment) should work on almost any Linux distribution, it is still officially supported only on Ubuntu releases, and it works best on systems with AUFS support. Since Ubuntu already ships AUFS support in its kernel, it is the natural base system to choose. I will not go into the installation process in detail; simply read the Docker guide here. Once you have Docker installed and working, continue to the next section.
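Before moving on, a quick sanity check that the daemon is working never hurts (the ubuntu image here is just the stock example image from the Docker index, unrelated to this setup):

sudo docker info
sudo docker run -i -t ubuntu /bin/bash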

Building And Running An Nginx Image

As I am an openSUSE user on the desktop, and think that it is a very well put together Linux distribution for both server and personal use, I have decided to base all of my Docker containers off of an openSUSE base. Searching the Docker index for openSUSE images initially brought no results, so I decided to build my own base image and publish it on the index, so that I and others can easily build new application images based on openSUSE. Using the distribution build tool KIWI, I built a JeOS image based on openSUSE 13.1, which also happens to be a long term support Evergreen release, and should provide for a constantly patched base to use for hosting applications.

This JeOS image contains the bare minimum of packages, not even including programs such as wget and curl. This makes it perfect for running a single application per container, like Nginx or MariaDB, so that each container contains only the bare system necessary to serve one purpose and no extra packages. Each container then presents the smallest possible attack surface for the application it contains, and provides an easy way to bundle all the system dependencies necessary to run one, and only one, application in the same way on any Docker host.

Update (2013-12-17): As of today, the mmckeen/opensuse-13-1 image of mine on the index has been cut down to a third of its size by removing all of the graphics related packages that are still part of the JeOS KIWI template. It should now create much smaller images, and be of better use with Docker.

In order to build the Nginx image, I used the Dockerfile below:

FROM mmckeen/opensuse-13-1  
MAINTAINER Matthew McKeen <matthew@mmckeen.net>

RUN zypper -n in nginx

# tell Nginx to stay foregrounded
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

# run
ENTRYPOINT /usr/sbin/nginx -c /etc/nginx/nginx.conf

# expose HTTP
EXPOSE 80

This Dockerfile builds an image based on my openSUSE 13.1 base image with Nginx preinstalled, configures Nginx not to run as a daemon, and starts Nginx automatically when the container is run. It also exposes port 80, allowing you to forward traffic from the host system to the Nginx server inside. The mmckeen/nginx image on the Docker index is auto-built directly from my Github dockerfiles repository, currently using the above Dockerfile. To pull this image for your own use, simply run docker pull mmckeen/nginx.

At this point, we are ready to spin up the container and run Nginx with a simple

docker run -p 80:80 mmckeen/nginx

Using the -p option to forward the container port 80 to the host port 80, simply visiting http://localhost:80 gives us the expected Nginx 403 Forbidden page, since we have not given the server any files to serve. To do this, we can use the docker run -v option to mount a host local directory to the default Nginx document root inside the container.

docker run -v /srv/www/mmckeen.net:/srv/www/htdocs -p 80:80 mmckeen/nginx

This mounts my website code at /srv/www/mmckeen.net on the Ubuntu host into the container, where it is served by Nginx running on openSUSE. Visiting http://localhost:80 now serves whatever files are stored on the host machine from within the isolated environment of the container.

Supervisor And Docker = Awesome

Now that we have Nginx running in our container, we need some kind of process control to manage the running container, respawn it automatically if it exits, and make our Nginx container behave like an Upstart, systemd, or standard SysVinit service. If I were running on an openSUSE, Arch Linux, or Fedora server, I would write a systemd unit to control the container. Because I hate Upstart with a passion, and an init script doesn’t really allow for decent autostart and dependability functions without the addition of Puppet or other services, I decided to use the simple Supervisor service to control the running of my Docker containers. On Ubuntu, the package installs its configuration in /etc/supervisor/supervisord.conf, with user configuration in the /etc/supervisor/conf.d directory. Making the Nginx container autostart is a simple matter of putting the following into a .conf file in the conf.d directory, in my case /etc/supervisor/conf.d/mmckeen.net.conf.

[program:mmckeen.net]
command=/usr/bin/docker run -a stdout -rm --name=mmckeen.net -v /srv/www/mmckeen.net:/srv/www/htdocs -p 80:80 mmckeen/nginx
autorestart=true

Notice the addition of several options to our original command. -a stdout directs the container output to stdout so that Supervisor can detect and log the output from the container. -rm deletes the container once it exits, and --name=mmckeen.net gives the container a fixed name, so that two containers from the same command cannot exist at the same time and a failed container leaves no name conflict for its replacement. Together with autorestart=true, this automatically restarts the container if it fails and starts it at boot. We are now successfully serving a website via Nginx from within a Docker container.
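After dropping the file into conf.d, Supervisor needs to be told to pick it up; the standard supervisorctl commands for that are:

supervisorctl reread
supervisorctl update
supervisorctl status mmckeen.net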

Security Considerations

Running a website this way does not isolate your site from the security considerations of running it directly via Nginx on the host; in fact, it adds more security considerations. The most important of these is that the Docker daemon’s API does not require any kind of authentication, so it is important to have a good firewall in place to isolate the host machine from the outside world, or anyone might be able to control your Docker daemon. Also take into account that Docker’s bridged networking and mounted filesystem support allow for possible security holes into and out of the container itself, just as these features do with traditional virtualization. Of course, the topic of container security is a massive subject, but keep in mind not to take the security of LXC and Docker for granted.
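If you have configured the Docker daemon to also listen on a TCP socket (out of the box it listens only on a local Unix socket), one rough sketch of fencing it off with iptables might look like the following; port 4243 is the old default API port and is an assumption about your configuration:

# allow the Docker remote API from localhost only, drop everything else
iptables -A INPUT -p tcp --dport 4243 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 4243 -j DROP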

Conclusion, For Now…

This is of course just a simple use of Docker. In this case, we are serving a static website and don’t have to worry much about changing application state, but it introduces the basic concepts you will need to start using Docker to serve your own applications. As I continue to build out more services, I will add more blog posts on my exploits. Hopefully, you will find them useful as well.