2015-06-01

I've used Vagrant to manage local development servers for several years. Vagrant is, according to its official website, a tool to "create and configure light-weight, reproducible, and portable development environments." Basically, Vagrant helps me create and provision virtual machines with unique combinations of software and configuration customized for each of my projects. This approach accomplishes three important things:

Vagrant isolates project environments to avoid software conflicts.

Vagrant provides the same software versions for each team member.

Vagrant builds a local environment that is identical to the production environment.

However, Vagrant has one large downside—it relies on hardware virtualization. This means each project runs atop a full virtual machine, and each virtual machine has a complete operating system that demands a large overhead in system resources (e.g., CPU, memory, and gigabytes of disk space). I can't tell you how often I have seen this warning message when I have too many concurrent projects:

Your startup disk is almost full.

The logical solution is to run multiple projects on a single Vagrant virtual machine. Laravel Homestead is a popular Vagrant box that uses this strategy. You lose project isolation, and you can't customize your server software stack for each project. However, you regain local system resources and have less infrastructure to install and manage. If you rely on Vagrant to manage local projects, I highly recommend the one virtual machine, many projects strategy adopted by Laravel Homestead.

There is another solution, though. Have you heard of Docker? I first heard this word a year ago. It's all about containers, I was told. Awesome. What are containers?, I thought. I dug deeper, and I read all about containerization, process isolation, and union filesystems. There were so many terms and concepts flying around that my head started spinning. At the end of the day, I was still scratching my head asking what is Docker? and how can Docker help me? I've learned a lot since then, and I want to show you how Docker has changed my life as a developer.

Hello, Docker

Docker is an umbrella term that encompasses several complementary tools. Together, these tools help you segregate an application's infrastructure into logical parts (or containers). You can piece together only the parts necessary to build a portable application infrastructure that can be migrated among development, staging, and production environments.

What is a Docker Container?

A container is a single service in a larger application. For example, assume we are building a PHP application. We need a web server (e.g., nginx). We need an application server (e.g., PHP-FPM). We also need a database server (e.g., MySQL). Our application has three distinct services: a web server, an application server, and a database server. Each of these services can be separated into its own Docker container. When all three containers are linked together, we have a complete application.

What is a Docker Image?

Separating an application into containers seems like a lot of work with little reward. Or so I thought. It's actually quite brilliant for several reasons. First, containers are actually instances of a platonic, canonical image. Imagine a Docker image as a PHP class. Just like a single PHP class can be used to instantiate many unique PHP objects, so too can Docker images be used to instantiate many unique Docker containers. For example, we can reuse a single PHP-FPM Docker image to instantiate many unique PHP-FPM containers for each of our applications.

Although we can build custom Docker images, it's easier to find and share Docker images on Docker Hub. For example, we could download and use the sameersbn/mysql Docker image to create MySQL database containers for our application. Why re-invent the wheel if someone has already built a Docker image that solves our problem?

How is Docker Different from Vagrant?

How is Docker better than Vagrant if we are running multiple containers for each project? Isn't this worse than running a virtual machine with Vagrant? No, and here's why.

Docker Images Are Extendable

Docker images further resemble PHP classes because they extend parent images. For example, an Nginx Docker image might extend the phusion/baseimage Docker image, and the phusion/baseimage image extends the top-most ubuntu Docker image. Each child image includes only the content that is different from its parent image. This means the top-most ubuntu image contains only a minimal Ubuntu operating system; the phusion/baseimage image includes only tools that improve Docker container maintenance and operation; and the Nginx image includes only the Nginx web server and configuration.

Unrelated Docker images may extend the same ancestor image, too. This practice is actually encouraged because Docker images are downloaded only once. For example, if an Nginx and a PHP-FPM image both extend the same ubuntu:14.04 Docker image, we download the common Ubuntu image only once.

Docker Containers Are Lightweight

Docker containers are lightweight and require a nominal amount of local system resources to run. In fact, once you download the necessary Docker images, instantiating a Docker image into a running Docker container takes a matter of seconds. Compare that to your first vagrant up --provision command that often requires 15-30 minutes to create and provision a complete virtual machine. This efficiency is possible for two reasons. First, each Docker container is just a sandboxed system process that does only one thing. Second, all Docker containers run on top of a shared Docker host machine—either your host Linux operating system or a minuscule Linux virtual machine.

Docker containers are also ephemeral and expendable. You should be able to destroy and replace a Docker container without affecting the larger application.

If containers are ephemeral, how do we store persistent application data? I asked the same question. We persist application data on the Docker host via Docker container volumes. We'll discuss Docker container volumes when we instantiate MySQL containers later in this article.

Same Host, Parallel Universes

Unlike Vagrant, which requires a complete virtual machine, filesystem, and network stack for each project, Docker containers run on a single shared Docker host machine. How is this possible? Would Docker containers not collide when using the same file system and system resources? No, and this is Docker's pièce de résistance.

Docker is built on top of its own low-level Linux library called libcontainer. This library sandboxes individual system processes and restricts their access to system resources using Linux kernel features such as namespaces and control groups. Docker's libcontainer library lets multiple system processes coexist on the same Docker host machine while each accesses its own sandboxed filesystem and system resources.

That being said, Docker is not just libcontainer. Docker is an umbrella that encapsulates many utilities, including libcontainer, that enable Docker image and container creation, portability, versioning, and sharing.

Docker is the best of both worlds. Whereas Vagrant is a tool primarily concerned with hardware virtualization, Docker is more concerned with process isolation. Both Vagrant and Docker achieve the same goals for developers—they create environments to run applications in isolation. Docker does so with more efficiency and portability.

Let's Build an Application

Enough jibber-jabber. Let's use Docker to build a PHP application that sits behind an Nginx web server and communicates with a MySQL database. This is not a complex application by any stretch of the imagination. However, it is a golden opportunity to learn:

How to build a Docker image

How to instantiate Docker containers

How to persist data with Docker volumes

How to aggregate container log output

How to manage related containers with Docker Compose

First, create a project directory somewhere on your computer and make this your current working directory. All commands in the remainder of this tutorial occur beneath this project directory.

Game Plan

Before we do anything, let's put together a game plan. First, we need to install Docker. Next, we need to figure out how our application will break down into individual containers. Our application is pretty simple: we have an Nginx web server, a PHP-FPM application server, and a MySQL database server. Ergo, our application needs three Docker containers instantiated from three Docker images. We'll need to figure out if we want to build the Docker images ourselves or download existing images from Docker Hub.

We should also decide on a common base image from which all of our Docker images extend. If our Docker containers all extend the same base image, we save disk space and reduce the number of Docker image dependencies. We'll use the Ubuntu 14.04 base image in this tutorial.

After we build and/or download the necessary Docker images, we'll instantiate and run our application's Docker containers with Docker Compose. If everything goes according to plan, we'll have an on-demand PHP development environment up and running in a matter of seconds with only one bash command.


Finally, we'll discuss how to aggregate container log output. Remember, containers are expendable, and we should never store data on the container itself. Instead, we'll redirect logs to their respective container's standard output and standard error file descriptors so that Docker Compose can collect and manage our containers' log data in aggregate.

Install Docker

Docker requires Linux, and it supports many different Linux distributions: Ubuntu, Debian, CentOS, CRUX, Fedora, Gentoo, RedHat, and SUSE. Take your pick. Your local Linux operating system is the Docker host on which you instantiate and run Docker containers.

Many of us run Mac OS X or Windows, but we're not out of luck. There is something called Boot2Docker. This is a tool that creates a tiny Linux virtual machine, and this virtual machine becomes the Docker host instead of our local operating system. Don't worry, this virtual machine is really small and boots in about 5 seconds.

After you install Boot2Docker, you can either double-click the Boot2Docker application icon, or you can execute these bash commands in a terminal session:
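With a standard Boot2Docker install, the commands look like this:

```bash
boot2docker init
boot2docker up
eval "$(boot2docker shellinit)"
```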

These three commands create the virtual machine (if it is not already created), start the virtual machine, and export the necessary environment variables so that your local operating system can communicate with the Docker host virtual machine.

If you're like me and dislike typing more than necessary, you can create a bash alias. Add this line to your ~/.bash_profile file:
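Assuming the same three Boot2Docker commands as above, the alias might look like this:

```bash
alias dockup='boot2docker init && boot2docker up && eval "$(boot2docker shellinit)"'
```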

This creates a bash alias named dockup (an abbreviation for the completely arbitrary phrase Docker Up). Now you can simply type dockup in a new terminal session to create, start, and initialize your Docker host virtual machine.

The Nginx Docker Image

Our first concern is the Nginx web server. I'm sure we can find a suitable image on Docker Hub that extends the Ubuntu 14.04 base image. However, this is an opportunity to build our own Docker image. Create a new directory at images/nginx/, and add Dockerfile and start.sh files in this directory. Your project directory should look like this:
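```
images/
    nginx/
        Dockerfile
        start.sh
```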

Open the Dockerfile file in your preferred text editor and add this content:
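A Dockerfile along these lines fits the walkthrough below (the baseimage tag, maintainer placeholder, and exact package commands are assumptions):

```Dockerfile
FROM phusion/baseimage:latest
MAINTAINER Your Name <you@example.com>
CMD ["/sbin/my_init"]
RUN apt-get update
RUN add-apt-repository -y ppa:nginx/stable
RUN apt-get update && apt-get install -y nginx
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
RUN mkdir -p /etc/service/nginx
COPY start.sh /etc/service/nginx/run
RUN chmod +x /etc/service/nginx/run
EXPOSE 80

RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
```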

The Dockerfile uses commands defined in the Docker documentation to construct a new Docker image. Let's walk through this file line-by-line and see what each command does.

Line 1 begins with FROM, and it specifies the name of the parent Docker image from which this new image extends. We extend the phusion/baseimage Docker image because it provides tools that simplify Docker container operation.

Line 2 begins with MAINTAINER, and it specifies your name and email. If you share this Docker image on Docker Hub, other developers will know who created the image and where they can ask questions.

Line 3 initiates built-in house-keeping tasks provided by the phusion/baseimage base image.

Lines 4-6 install the latest stable Nginx version from the Nginx community PPA (personal package archive), which is a quick way to install Nginx without building from source.

Lines 7-9 update the Nginx configuration file so that Nginx does not run in daemonized mode. These lines also symlink the Nginx access and error log files to the container's standard output and standard error file descriptors. We direct Nginx logs to the container's standard output and standard error file descriptors so that Docker can manage our application's log data in aggregate. We never want to persist any information on the container itself.

Lines 10-12 copy the start.sh file into the container. This file is invoked by the phusion/baseimage base image to start the Nginx server process.

Line 13 tells Docker to expose port 80 on all containers instantiated from this image. We need to expose port 80 so that inbound HTTP requests can be received by Nginx and handled appropriately.

Line 15 performs final house-keeping tasks provided by the phusion/baseimage base image.

Next, open the start.sh file and append this content:
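The phusion/baseimage init system expects an executable run script for each service; a minimal start.sh might be:

```bash
#!/bin/bash

# Run Nginx in the foreground (daemon mode is disabled in nginx.conf)
# so the phusion/baseimage service supervisor can manage the process
exec /usr/sbin/nginx
```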

This file contains the bash commands to prepare and start the Nginx web server process. Now we're ready to build our Nginx Docker image. Navigate into the images/nginx/ project directory and execute this bash command:
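This builds the image and tags it tutorial/nginx:

```bash
docker build -t tutorial/nginx .
```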

Docker build command

You'll see some output in your terminal as Docker builds your Nginx Docker image based on the commands in the Dockerfile file. You'll also notice that Docker downloads any parent image dependencies from Docker Hub. When the build completes, you can execute the docker images bash command to output a list of available Docker images. You should see tutorial/nginx in that list.

Docker image list

Congratulations! You've built your first Docker image. But remember, this is only a Docker image. It's helpful only if we use it to instantiate and run Docker containers. Before we do that, let's acquire Docker images for PHP-FPM and MySQL.

The PHP-FPM Docker Image

Our next concern is PHP-FPM. We won't build this Docker image. Instead, I've prepared a PHP-FPM image named nmcteam/php56, and it's available on Docker Hub. Execute this bash command to download the PHP-FPM Docker image from Docker Hub.
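```bash
docker pull nmcteam/php56
```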

The MySQL Docker Image

Our last concern is MySQL. I searched Docker Hub for a MySQL Docker image that extends the Ubuntu 14.04 base image, and I found sameersbn/mysql. Execute this bash command to download the MySQL Docker image from Docker Hub.
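```bash
docker pull sameersbn/mysql
```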

Application Setup

We now have all of the Docker images required to run our application. It's time we instantiate our Docker images into running Docker containers. First, create this directory structure beneath your project root directory.
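```
src/
    public/
        index.html
```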

The src/ directory contains our application code. The src/public/ directory is our web server's document root. The index.html file contains the text "Hello World!"

Our application will be accessible at the docker.dev domain. You should map this domain to your Docker host IP address. If you run Docker natively on your Linux operating system, use the IP address of your own computer. If you rely on Boot2Docker, find your Docker host IP address with the boot2docker ip bash command. Let's assume your Docker host IP address is 192.168.59.103. You can map the docker.dev domain name to the 192.168.59.103 IP address by appending this line to your local computer's /etc/hosts file:
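```
192.168.59.103 docker.dev
```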

The Nginx Docker Container

Before we instantiate an Nginx Docker container, we need a virtual host configuration file. Create the src/vhost.conf file beneath your project root directory with this content:
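A minimal virtual host matching that description might look like this (the index directive is an assumption):

```nginx
server {
    listen 80;

    server_name docker.dev;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    root /var/www/public;
    index index.html;
}
```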

This is a rudimentary Nginx virtual host that listens for inbound HTTP requests on port 80. It answers all HTTP requests for the host name docker.dev. It sends error and access log output to the designated file paths (these are symlinks to the container's standard output and standard error file descriptors). It defines the public document root directory as /var/www/public. We'll copy this virtual host configuration file into our Docker containers during instantiation.

Execute the following bash command from your project root directory to instantiate and run a new Nginx Docker container based on our custom tutorial/nginx Docker image.
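The command looks like this (the container-side filename for the virtual host configuration is an assumption):

```bash
docker run -d \
    -p 8080:80 \
    -v "$(pwd)/src/vhost.conf":/etc/nginx/sites-enabled/vhost.conf \
    -v "$(pwd)/src":/var/www \
    tutorial/nginx
```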

Start Nginx Docker container

We use the -d flag to run our new Docker container in the background.

We use the -p flag to map a Docker host port to a container port. In this case, we ask Docker to map the Docker host (port 8080) to the Docker container (port 80).

We use two -v flags to map local assets into the Nginx Docker container. First, we map our application's Nginx virtual host configuration file into the container's /etc/nginx/sites-enabled/ directory. Next, we map our project's local src/ directory to the Nginx container's /var/www directory. The Nginx virtual host's document root directory is /var/www/public. Coincidence? Nope. This lets us serve our project's local application files from our Nginx container. The final argument is tutorial/nginx—the name of the Docker image to instantiate.

You can verify the Nginx Docker container is running with the docker ps bash command. You should see the tutorial/nginx container instance in the resultant container list. Open a web browser and navigate to http://docker.dev:8080. You should see "Hello World!"

Nginx Docker website

Find the Nginx Docker container ID with the docker ps command. Then stop and destroy the Nginx Docker container with these bash commands:
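```bash
docker stop <container-id>
docker rm <container-id>
```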

Docker Compose

Unless you live and breathe the command line, the docker run ... bash command above is probably a magical incantation. Heck, even I have trouble remembering the necessary Docker bash command flags. There's an easier way. We can manage our application's Docker containers with Docker Compose.

Instead of writing lengthy and confusing bash commands, we can define our Docker container properties in a docker-compose.yml YAML configuration file. After you install Docker Compose, create a docker-compose.yml file in your project root directory with this content:
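A docker-compose.yml along these lines matches the container we ran by hand (this uses Compose's original single-level format; the container-side vhost path is an assumption):

```yaml
nginx:
  image: tutorial/nginx
  ports:
    - "8080:80"
  volumes:
    - ./src:/var/www
    - ./src/vhost.conf:/etc/nginx/sites-enabled/vhost.conf
```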

Our docker-compose.yml file defines an Nginx Docker container identical to the Docker container we ran earlier: it instantiates the tutorial/nginx image, it maps host port 8080 to container port 80, and it mounts the src/ directory and src/vhost.conf file into the container's filesystem. This time, however, we define the Nginx Docker container properties in an easy-to-read configuration file.

Let's start a new Nginx Docker container using Docker Compose. Execute this bash command from your project root directory:
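```bash
docker-compose up -d
```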

This instructs Docker Compose to instantiate the containers defined in our docker-compose.yml configuration file, and it detaches the containers so they continue running in the background. You can execute the docker ps bash command to see a list of running Docker containers.

Docker Compose Nginx container

Refresh http://docker.dev:8080 in your web browser, and you'll again see "Hello World!" Keep in mind, Docker Compose is overkill for a single Docker container; Docker Compose is designed to manage a collection of related containers. And this is exactly what we explore next.

The PHP-FPM Docker Container

Let's prepare our PHP-FPM Docker container. Append these properties to the docker-compose.yml configuration file.
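For example (the container-side php-fpm.conf path assumes the nmcteam/php56 image keeps its configuration in the stock Ubuntu location, and the src/ mount assumes PHP-FPM reads application files from /var/www like Nginx does):

```yaml
php:
  image: nmcteam/php56
  volumes:
    - ./src/php-fpm.conf:/etc/php5/fpm/php-fpm.conf
    - ./src:/var/www
```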

First, we define a new Docker container identified by the php key. This container instantiates the nmcteam/php56 Docker image that we downloaded earlier. We map a local src/php-fpm.conf file into the Docker container. If you want to provide a custom php.ini file, you can also map a local src/php.ini file the same way.

Create the local src/php-fpm.conf file beneath your project directory with the content from this example PHP-FPM configuration file. This file instructs PHP-FPM to listen on container port 9000 and run as the same user and group as our Nginx web server.

If we were to run docker-compose up -d right now, we'd have an Nginx container and a PHP-FPM container. However, these containers would not know how to talk with each other. Docker Compose lets us link related containers. Update the Nginx container properties in the docker-compose.yml file so they look like this:
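The updated definition keeps the earlier properties and adds a links section (container-side paths are assumptions carried over from before):

```yaml
nginx:
  image: tutorial/nginx
  ports:
    - "8080:80"
  volumes:
    - ./src:/var/www
    - ./src/vhost.conf:/etc/nginx/sites-enabled/vhost.conf
  links:
    - php
```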

The last two lines are new, and they let us reference the PHP-FPM Docker container from the Nginx Docker container. These two lines instruct Docker to append new entries to the Nginx container's /etc/hosts file so we can reference the linked PHP-FPM Docker container with "php" (the container key specified in the docker-compose.yml configuration file) instead of an exact (and dynamically assigned) IP address.

Let's update our Nginx web server's configuration file to proxy PHP requests to our new PHP-FPM Docker container. Update the src/vhost.conf file with this content:
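The updated virtual host might look like this (the try_files directives and fastcgi parameters are a conventional Nginx + PHP-FPM setup, not necessarily the exact original):

```nginx
server {
    listen 80;
    server_name docker.dev;
    root /var/www/public;
    index index.php index.html;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```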

Notice how the second location {} block's fastcgi_pass value is the URL php:9000. Docker lets us reference the linked PHP-FPM container with the "php" name thanks to the Docker-managed /etc/hosts entries.

Next, create a new src/public/index.php file beneath your project directory with this content:
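```php
<?php
phpinfo();
```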

Run docker-compose up -d again to create and run new Nginx and PHP-FPM Docker containers. Open a web browser and navigate to http://docker.dev:8080. If you see this screen, your Nginx and PHP-FPM Docker containers are running and communicating successfully:

PHP Info

The MySQL Docker Container

The last part of our application is the MySQL database. This will be a linked Docker container, just like the PHP-FPM container. Unlike the PHP-FPM container, the MySQL container persists data using Docker volumes.

A Docker volume is a wormhole between an ephemeral Docker container and the Docker host on which it runs. Docker effectively mounts a persistent filesystem directory from the Docker host machine to the ephemeral Docker container. Even if the container is stopped, the persistent data still exists on the Docker host and will be accessible to the MySQL container when the MySQL container restarts.

Let's create our MySQL Docker container to finish our application. Append this MySQL Docker container definition to the docker-compose.yml configuration file:
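For example (the DB_NAME, DB_USER, and DB_PASS variable names are those documented by the sameersbn/mysql image):

```yaml
db:
  image: sameersbn/mysql
  volumes:
    - /var/lib/mysql
  environment:
    - DB_NAME=demoDb
    - DB_USER=demoUser
    - DB_PASS=demoPass
```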

This defines a new MySQL container with key db. It instantiates the sameersbn/mysql Docker image that we downloaded earlier. Our Nginx and PHP-FPM definitions use the volumes property to mount local project directories and files into Docker containers (you can tell because they use a : separator between local and container filesystem paths). The MySQL container, however, does not use a : separator. This means this particular MySQL container path contains data that is persisted on the Docker host filesystem. In this example, we persist the /var/lib/mysql data directory so that our MySQL configuration and databases persist across container restarts.

The environment property is new, and it lets us specify environment variables for the MySQL docker container. The sameersbn/mysql Docker image relies on these particular environment variables to create a MySQL database and user account in each instantiated Docker container. For this tutorial, we create a new MySQL database named "demoDb", and we grant access to user "demoUser" identified by password "demoPass".

We must link our PHP-FPM and MySQL Docker containers before they can communicate. Add a new links property to the PHP-FPM container definition in the docker-compose.yml configuration file.
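The PHP-FPM definition now ends with a links section (container-side paths are assumptions carried over from before):

```yaml
php:
  image: nmcteam/php56
  volumes:
    - ./src/php-fpm.conf:/etc/php5/fpm/php-fpm.conf
    - ./src:/var/www
  links:
    - db
```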

After we run docker-compose up -d again, we can establish a PDO database connection to our MySQL container's database in our project's src/public/index.php file:
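A minimal connection using the database name and credentials defined above might look like this:

```php
<?php
// "db" resolves to the MySQL container via the Docker-managed /etc/hosts entry
$pdo = new PDO('mysql:host=db;dbname=demoDb', 'demoUser', 'demoPass');
```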

Notice how we reference the MySQL container by name courtesy of the Docker-managed /etc/hosts entries. I know many of you are probably asking how do I load my database schema into the container? You can do so programmatically via the PDO connection in src/public/index.php, or you can log into the running Docker container and load your SQL schema via the MySQL CLI client. To log into a running Docker container, you'll need to find the container's ID with the docker ps bash command. When you know the Docker container ID, use this bash command to log into the running Docker container:
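The docker exec subcommand (available in Docker 1.3 and newer) opens an interactive shell inside a running container:

```bash
docker exec -it <container-id> bash
```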

Now you can execute any bash commands within the running Docker container. It may be helpful to mount your local project directory into the MySQL Docker container so you have access to your project's SQL files inside the container.

Docker Logs

Once your Docker containers are running, you can review an aggregate feed of container log data with this bash command:
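```bash
docker-compose logs
```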

This is an easy way to keep tabs on all of your application containers' log files in an aggregate real-time feed. This is also why we direct each container's log files to their respective standard output or standard error file descriptors. Docker intercepts each container's standard output or standard error and aggregates that information into this feed.

Summary

That's all there is to it. We know how to build a unique Docker image. We know how to find and download pre-built Docker images from Docker Hub. We know how to manage a collection of related Docker containers with Docker Compose. And we know how to review aggregate container log data. At the end of the day, we have an on-demand PHP development environment with a single command.

You can replicate this setup in new projects, too. Just copy the docker-compose.yml configuration file into another project and docker-compose up -d. If you are running multiple applications simultaneously, be sure you assign a unique Docker host port to each project's Nginx container.

This tutorial only scratches the surface. Docker provides many more features than those mentioned in this tutorial. The best resource is the Docker documentation and CLI reference. Start there. You can also find Docker-related talks at popular development conferences or on YouTube.

https://docs.docker.com/

http://boot2docker.io/

https://hub.docker.com/

https://github.com/phusion/baseimage-docker

https://twitter.com/docker

If you have any questions, please leave a comment below and I'll try my best to help.
