1   Introduction

The goal of this tutorial is to introduce you to Docker, show you what it can do, explain how to get it up and running on your system, and show you how to use it to make your life better.

This guide is open source and available on github.com. If you would like to add to it or fix something, please fork it and submit a pull request.

Table of Contents


1   Introduction

2   What is Docker?

2.1   How are Docker's Containers Different from Virtual Machines?

3   Installing Docker

3.1   Requirements

3.2   Package Manager

3.3   Binaries

3.4   From Source

4   Docker Daemon

4.1   Starting the daemon

4.2   Configuration

4.3   Logs

5   Testing Docker install

6   Terminology

6.1   Image

6.2   Container

6.3   Index / Registry

6.4   Repository

7   Getting Help with Docker

8   Part 1. Getting Started

9   Part 2. Building an image

10   Part 3: Docker Index/registry

10.1   Creating an Account on the Docker Index

10.2   Search

10.3   Pulling

10.4   Pushing

10.5   Repository Description

10.6   Deleting a Repository

11   Part 4: Docker Buildfiles

12   Part 5: Advanced Usage

13   Part 6: Using a Private Registry

13.1   Using Push and Pull with a Private Registry

13.2   Installing Your Own Registry

14   Part 7: Automating Docker

14.1   Remote API

14.2   Docker Web UIs

14.3   Docker Libraries

15   What can I do to help?

16   Tips and Tricks

16.1   Remove all Docker images

16.2   Remove all Docker containers

17   Docker Commands

17.1   attach

17.2   build

17.3   commit

17.4   diff

17.5   export

17.6   history

17.7   images

17.8   import

17.9   info

17.10   inspect

17.11   kill

17.12   login

17.13   logs

17.14   port

17.15   ps

17.16   pull

17.17   push

17.18   restart

17.19   rm

17.20   rmi

17.21   run

17.22   search

17.23   start

17.24   stop

17.25   tag

17.26   version

17.27   wait

2   What is Docker?

Docker is a tool created by the folks at dotCloud to make Linux Containers (LXC) easier to use. Linux Containers are basically lightweight virtual machines (VMs). A Linux container runs Unix processes with strong guarantees of isolation across servers. Your software runs repeatably everywhere because its container includes all of its dependencies.

If you still don't understand what Docker is, and what it can do for you, don't worry, keep reading and it will become clear soon enough.

2.1   How are Docker's Containers Different from Virtual Machines?

Docker, which uses Linux Containers (LXC), runs in the same kernel as its host. This allows it to share many of the host's resources. It uses AuFS for the file system, and it manages the networking for you as well.

AuFS is a layered file system, so you can have a read-only part and a write part, which it merges together. You could make the common parts of the file system read only and shared amongst all of your containers, and then give each container its own mount for writing.

So let's say you have a container image that is 1 GB in size. If you wanted to use a full VM, you would need 1 GB times the number of VMs you want. With LXC and AuFS you can share the bulk of that 1 GB, and if you have 1000 containers you still might only need a little over 1 GB of space for the containers' OS, assuming they are all running the same OS image.

A fully virtualized system gets its own set of resources allocated to it and does minimal sharing. You get more isolation, but it is much heavier (it requires more resources).

With LXC you get less isolation, but the containers are more lightweight and require fewer resources. So you could easily run thousands of them on a host, and it doesn't even blink. Try doing that with Xen; unless you have a really big host, I don't think it is possible.

A fully virtualized system usually takes minutes to start; LXC containers take seconds, and often less than a second.

There are pros and cons for each type of virtualized system. If you want full isolation with guaranteed resources then a full VM is the way to go. If you just want to isolate processes from each other and want to run a ton of them on a reasonably sized host, then LXC might be the way to go.

For more information, check out this set of blog posts, which do a good job of explaining how LXC works: http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part

3   Installing Docker

Before you can install Docker, you need to decide how you want to install it. There are three ways: you can install from source, download a compiled binary, or install via your system's package manager.

For detailed instructions on how to install Docker on your system using each of these methods, check out the official Docker documentation: http://docs.docker.io/en/latest/installation/

3.1   Requirements

In order for Docker to run correctly on your server, you need a few things. For more details on the kernel requirements, see http://docs.docker.io/en/latest/installation/kernel/

Kernel version greater than 3.8, with cgroups and namespaces enabled.

AUFS: AUFS is included in the kernels built by the Debian and Ubuntu distributions, but it is not built into the standard kernel, so if you are using another distribution you will need to add it to your kernel.

LXC: This is most likely already supported by your system and kernel; you might just need to install a system package or two. See the install instructions for your distribution for a list of packages.

3.1.1   Kernel version

Docker needs to run on a kernel version of 3.8 or greater because older versions contain kernel bugs that cause problems in some cases. Some people have run Docker fine on lower kernel versions, so if you can't run 3.8, do so at your own risk. There is talk of an effort to backport the bug fixes to the older kernel trees, so that in the future they will be available on older kernel versions. For more information, see https://github.com/dotcloud/docker/pull/1062

3.1.2   AUFS

Currently AUFS is the standard file system for Docker, but there is an effort underway to make the file system more pluggable, so that different file systems can be used with Docker. AUFS will most likely not be available in future Ubuntu releases, and UnionFS doesn't look like it will be added to the kernel anytime soon, so it can't serve as a replacement. The current replacement candidate looks like Btrfs.

3.2   Package Manager

The most common way to install Docker is via your server's package manager. On Ubuntu it is as simple as running sudo apt-get install lxc-docker. This is an easy way to install Docker and keep it up to date.

The package will also install an init script so that the docker daemon will start up automatically.

If you are installing on a production server, this is the recommended way to install.

3.2.1   Upgrading:

To upgrade, you upgrade the same way you would any other package on your system. On Ubuntu you would run sudo apt-get upgrade.

3.3   Binaries

If a Docker package isn't available for your package manager, you can download the binaries directly. When a new version of Docker is released, the binaries are uploaded to http://get.docker.io so that you can download them directly from there. Here is an example of how to download the latest Docker release.
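Something like this should do it (the exact URL layout on get.docker.io is an assumption here and may vary by release and architecture):

```shell
# Download the latest 64-bit Linux Docker binary and make it executable
wget -O docker http://get.docker.io/builds/Linux/x86_64/docker-latest
chmod +x docker
```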

This just downloads the Docker binary; to get it to run you still need to put the binary in a good location and create an init script so that it will start on system reboots.

3.3.1   Init script examples:

Debian init: https://github.com/dotcloud/docker/blob/master/packaging/debian/lxc-docker.init

Ubuntu Upstart: https://github.com/dotcloud/docker/blob/master/packaging/ubuntu/docker.upstart

3.3.2   Upgrading:

To upgrade you would need to download the latest version, make a backup of the current docker binary, replace the current one with the new one, and restart your daemon. The init script should be able to stay the same.

3.3.3   More information:


3.4   From Source

Installing from a package manager or from a binary is fine if you only want to install released versions. But if you want to be on the cutting edge and use features that are on a feature branch, or something that isn't released yet, you will need to compile from source.

Compiling from source is a little more complicated because you will need Go 1.1 and all the other dependencies installed on your system, but it isn't too bad.

Here is what you need to do to get it up and running on Ubuntu:
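A rough sketch of the process follows; the exact package names and workspace layout are assumptions for an Ubuntu system, so check the official build instructions for your release:

```shell
# Install build dependencies (package names assumed for Ubuntu)
sudo apt-get update
sudo apt-get install -y git golang lxc aufs-tools curl

# Set up a Go workspace and fetch the Docker source
export GOPATH=~/go
export PATH=$GOPATH/bin:$PATH
mkdir -p $GOPATH/src/github.com/dotcloud
cd $GOPATH/src/github.com/dotcloud
git clone https://github.com/dotcloud/docker.git
cd docker

# Fetch Go dependencies and compile the docker binary into $GOPATH/bin
go get -v github.com/dotcloud/docker/...
go install -v github.com/dotcloud/docker/...
```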

Then run the docker daemon:
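Assuming the build above put the binary in your Go workspace, something like:

```shell
# Run the freshly built daemon as root, in the background
sudo $GOPATH/bin/docker -d &
```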

If you make any changes to the code, run the go install command (above) to recompile Docker. Feel free to change the git clone command above to point at your own fork, to make pull requests easier.

Docker requires Go 1.1; if you have an older version it will not compile correctly.

4   Docker Daemon

The Docker daemon needs to be running on your system to control the containers. The daemon needs to run as root so that it has access to everything it needs.

4.1   Starting the daemon

There are two ways to start the daemon: using an init script so that it starts on system boot, or manually starting the daemon and sending it to the background. The init script is the preferred way. If you installed Docker via a package manager, you already have the init script on your system.

To start it manually you need to use a command like this.
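For example:

```shell
# Start the Docker daemon as root and send it to the background
sudo docker -d &
```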

When Docker starts, it will listen on 127.0.0.1 to allow only local connections, but you can set it to 0.0.0.0 or a specific host IP to give access to everybody.

To change the host and port that docker listens to you will need to use the -H flag when starting docker.

-H accepts host and port assignment in the following formats: tcp://[host][:port] or unix://path. For example:

tcp://host -> tcp connection on host:4243

tcp://host:port -> tcp connection on host:port

tcp://:port -> tcp connection on 127.0.0.1:port

unix://path/to/socket -> unix socket located at path/to/socket

When you do this, you also need to let the Docker client know which daemon you want to connect to. To do that, pass the -H flag to the client with the ip:port of the daemon to connect to.

You can use multiple -H flags, for example if you want to listen on both tcp and a unix socket.
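A sketch of both sides, assuming the default tcp port of 4243 and the default socket path:

```shell
# Daemon: listen on both a tcp port and the default unix socket
sudo docker -d -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock &

# Client: point at the tcp daemon explicitly
docker -H tcp://127.0.0.1:4243 ps
```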

4.2   Configuration

Currently, if you want to configure the Docker daemon, you can either pass command switches to the daemon on startup, or set ENV variables that the daemon will pick up. I have proposed a better approach for configuring Docker: the idea is to use a docker.conf file so that configuration is easier to set and more obvious. Details can be found here: https://github.com/dotcloud/docker/issues/937

There are two ENV variables that you can set today; there may be more added in the future.

4.2.1   DEBUG

This tells the Docker daemon that you want more debug information in your logs.

Defaults to DEBUG=0; set DEBUG=1 to enable.


4.2.2   DOCKER_INDEX_URL

This tells Docker which Docker index to use. You will most likely never use this setting; it is mostly used by Docker developers when they want to try things out against the test index before they release code.

defaults to DOCKER_INDEX_URL=https://index.docker.io

4.2.3   Example

This is how you would set it if it was in an init file:
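A minimal sketch; the file location (/etc/default/docker here) is an assumption and depends on your init system:

```shell
# e.g. in /etc/default/docker, sourced by the init script
export DEBUG=1
export DOCKER_INDEX_URL=https://index.docker.io
```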

4.3   Logs

There is no official Docker log file right now; I have opened an issue requesting one: https://github.com/dotcloud/docker/issues/936. In the meantime, if you are using Upstart you can look at /var/log/upstart/docker.log, which has some information, but not as much as I would like.

5   Testing Docker install

Now that you have Docker running, you can issue some Docker commands to see how things are working. The very first commands I always run are docker version and docker info. These quickly tell me whether everything is working correctly.

Notice that I have two warnings in my docker info output. If you use Debian or Ubuntu kernels and want to enable memory and swap accounting, you must add the cgroup_enable=memory and swapaccount=1 command-line parameters to your kernel.

On Debian or Ubuntu systems, if you use the default GRUB bootloader, you can add those parameters by editing /etc/default/grub and extending GRUB_CMDLINE_LINUX. Then run update-grub and reboot the server.
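The change to /etc/default/grub looks like this (a sketch assuming the line was previously empty):

```shell
# /etc/default/grub -- before:
GRUB_CMDLINE_LINUX=""
# after:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"

# then apply the change and reboot
sudo update-grub
sudo reboot
```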

6   Terminology

You will hear some terms throughout this tutorial; to make sure you understand what we are talking about, I'll explain a few of them here.

6.1   Image

An image is a read-only layer used to build a container. Images never change.

6.2   Container

A container is basically a self-contained runtime environment that is built using one or more images. You can commit your changes to a container and create an image from it.

6.3   Index / Registry

These are public or private servers where people can upload their repositories so they can easily share what they made.

6.4   Repository

A repository is a group of images located in the Docker registry. There are two types of repositories: top-level and user repositories. Top-level repositories don't have a '/' in the name and are usually reserved for base images. These top-level repositories are what most people build their repositories on top of, and they are controlled by the maintainers of Docker. User repositories are repositories that anyone can upload to the registry and share with other people.

7   Getting Help with Docker

If you have a question or problem when using Docker, there are a number of ways to get help. Here is a list; pick the one that works best for you.

IRC: #docker on Freenode. There are normally a bunch (250+) of people in this channel; come on in and ask your question. We are very friendly and we don't bite, and newbie questions are welcome.

Email: There is a google group called docker-club. Join the list, and ask any questions you might have. https://groups.google.com/d/forum/docker-club

Twitter: http://twitter.com/getdocker/ Follow along, if you aren't already, lots of great info posted every day.

StackOverflow: We love Stack Overflow, if you also enjoy it, feel free to post a question using the docker tag, and one of the many Docker fans will get back to you quickly. If you love getting points, feel free to answer questions as well.

Bugs and feature requests: If you have a bug or feature request, submit it on GitHub: http://www.github.com/dotcloud/docker

8   Part 1. Getting Started

Now that we have the boring stuff out of the way, let's start playing with Docker. The very first example is a very simple one: we will spin up a container and print hello world to the screen.
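Something like the following, using the base image:

```shell
# Run /bin/echo inside a new container built from the base image
docker run base /bin/echo hello world
```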

If this is your first Docker command, you will notice that it needs to download the base image first. It only needs to do this once, and it caches the image locally so it doesn't have to download it again. We could have broken this out into two commands, docker pull base and then the docker run command, but I was lazy and put them together, and Docker is smart enough to know what I want to do and do it for me.

Now you might be wondering what exactly Docker is doing here. It doesn't look like much because we picked such a simple example, but here is what is happening. Docker:

Generated a new LXC container

Created a new file system

Mounted a read/write layer

Allocated a network interface

Set up an IP

Set up NATing

Executed the process in the container

Captured its output

Printed it to the screen

Stopped the container

All in under a second!

If we run the docker images command we should see the base image in our list.

Notice that you see the same image more than once; that is because there is more than one tag for the same image.

If we want to see the container we just ran, we can run the docker ps command. Since it isn't running anymore, we need to use the -a flag to show all of the containers:
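```shell
# List all containers, including stopped ones
docker ps -a
```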

Let's do something a little more complicated. We are going to do the same thing, but instead of having the container exit right after it starts, we want it to keep running in the background and print hello world every second:
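A sketch using a shell loop inside the container; -d detaches it and prints the container ID:

```shell
# Run a looping shell command in the background (daemonized)
docker run -d base /bin/sh -c "while true; do echo hello world; sleep 1; done"
```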

There we go. Now let's see what the container is doing by looking at its logs:
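```shell
# Show the captured output of the container (substitute your own container ID)
docker logs f684fc88aec3
```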

Now let's attach to the container and see the results in real time:
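```shell
# Attach to the running container's output (substitute your own container ID)
docker attach f684fc88aec3
```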

OK, enough fun for this container; let's stop it.

$ docker stop f684fc88aec3

$ docker ps

Another thing we could have done to examine the container is inspect it; we can do this while it is running or after it has stopped:
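```shell
# Dump the container's low-level configuration and state as JSON
docker inspect f684fc88aec3
```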

There is a lot of information there, you might not need it now, but you may need it in the future, so it is nice to have it available.

Now that you know the basics go to part 2, and learn how to build an image.

9   Part 2. Building an image

Our goal for this part is to create our own Redis server container. The first thing we need to do is decide which image to build on. I usually pick the ubuntu image, but sometimes it is nice to start from something a little higher level so that I don't have to recreate steps, and can build on the shoulders of others.

We are going to run /bin/bash with the -i and -t flags. -i tells Docker to keep stdin open even if not attached, and -t allocates a pseudo-tty. Once we run the command, we will be connected to the container, and all commands from this point on run inside the container.
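```shell
# Start an interactive shell inside a new container built from the ubuntu image
docker run -i -t ubuntu /bin/bash
```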

OK, it looks like we are in, and things are working well, now lets get to work.

We are going to update apt and then install redis:
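Inside the container (the redis-server package name is the Ubuntu one; adjust for your base image):

```shell
# Refresh the package lists and install Redis
apt-get update
apt-get install -y redis-server
```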

Now we have a container with Redis installed. Let's see what we did to the container:

It should show you which files have changed (C) and which were added (A). Let's save our work so we can reuse it in the future. To do this we need to docker commit the container to create an image. In order to commit changes you need your container ID. If you don't remember it, don't worry; you can get it from docker ps -a:
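A sketch from the host, where f684fc88aec3 stands in for your container ID and "myname/redis" is a placeholder repository name:

```shell
docker ps -a              # find the container ID
docker diff f684fc88aec3  # C = changed files, A = added files
docker commit f684fc88aec3 myname/redis   # save the container as an image
```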

It returns an image ID. If we run docker images we should see it listed:

Let's run our new image and see if it works:

The -d tells Docker to run it in the background, just like our hello world daemon from the last part. -p 6379 says to use 6379 as the port for this container.
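A sketch, again using the placeholder image name (the redis-server binary path is the usual Ubuntu location, but verify it in your container):

```shell
# Run our Redis image in the background, exposing port 6379
docker run -d -p 6379 myname/redis /usr/bin/redis-server
```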

Test 1: Connect to the container with the redis-cli.

Test 2: Connect to the public IP with the redis-cli.

We just proved that it is working as it should, and we can now stop the container using docker stop. You have now created your first Docker image. Continue to the next part to learn how to use that image on another host and share it with the world.

10   Part 3: Docker Index/registry

When you create an image, it is only available on that server. In the past, if you wanted to use the same image on another server, you would need to recreate it there, which isn't ideal because there is no way to guarantee the two images are the same. To make moving images around and sharing them easier, the Docker team created the Docker Index.

The Docker Index is a public registry where people can upload their custom images and share them with others. This is also where the base images are located, and where you pull from when doing a docker pull. There are two parts to the Docker Index: a web component that makes it easier to manage your images and account with a graphical interface, and the API, which is what the Docker client uses to interact with the Index. This allows you to do tasks from either the command line or the web UI.

The Docker Registry is the server that stores all of the images and repositories. The Index holds only the metadata about the images, repositories, and user accounts; the images and repositories themselves are stored in the Docker Registry.

10.1   Creating an Account on the Docker Index

There are two ways to create an account on the Docker Index. Either way requires that you enter a valid email address and confirm it before you can activate the account. So make sure you enter a valid email address, then check your email after registering so that you can click the confirmation link and confirm the account.

10.1.1   Command Line

If you want to register for an account from the command line, use the docker login command. It will either register an account for you, or, if you already have an account, log you into the Index.

When you register via the command line, it registers you and logs you in at the same time. Remember to click the activation link in the confirmation email, or your account won't be fully active.

10.1.2   Web site

If you prefer to register from a web browser, go to https://index.docker.io/account/signup/, fill out the form, and then click the activation link sent in the confirmation email.

Once you are activated, you will still need to log in to the Docker Index from the Docker client on your server, so that the two are linked.

10.1.3   Credentials

When you log in to the Docker Index from the Docker client, it stores your login information so you don't have to enter it again. Depending on which Docker client version you are using, it is located at either ~/.dockercfg or /var/lib/docker/.dockercfg. If you are having issues logging in, you can delete this file, and you will be re-prompted for your username and password the next time you log in. Running docker login should do the same thing, so try that first and use this as a last resort.

10.2   Search

There are a lot of Docker images in the Index, with more being added every day. Before you go ahead and create your own, you should see if someone has already created what you want. The best way to find images is via the docker search command on the command line, or via the Docker Index website.
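For example, to look for Redis images:

```shell
# Search the Docker Index for repositories matching "redis"
docker search redis
```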

10.3   Pulling

When you find an image that you want to pull down and try out, use the docker pull command. It will connect to the Docker Index, find the repository that you want, and let the Docker client know where in the Docker Registry it can be downloaded.
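```shell
# Pull the ubuntu repository down to the local host
docker pull ubuntu
```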

10.4   Pushing

If you have a repository that you want to share with someone, you need to push it to the Docker Index/Registry using the docker push command. When you do a push, Docker contacts the Index and makes sure you are logged in, have permission to push, and that the same repository doesn't already exist. If everything looks good, the Index returns a special authorization token that the Docker client uses when pushing the repository up to the Docker Registry.

Since the Docker Registry doesn't have any concept of authorization or user accounts, it relies on authorization tokens to manage permissions. The nice thing is that Docker hides all of this from you; you don't even need to worry about it, it will just work, assuming you have permission to push.

Let's push the repository that we created in the last part, so that others can use it.
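Using the placeholder name from the last part:

```shell
# Push the repository to the Docker Index/Registry
docker push myname/redis
```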

Now that it is up on the registry, we can use it on any Docker host; we just need to do a docker pull to get it onto the host, and we know it is going to be the same every time.

10.5   Repository Description

If you want to add a description to your repository to let people know what it does, you can log in to the website and edit the description there. There are two descriptions: a short one, which is plain text and shows up in search results, and a full description, which allows Markdown and is used to give more detailed information.

10.6   Deleting a Repository

If you made a mistake and need to delete a repository, you can do so by logging in to the Docker Index website, opening the repository settings, and clicking the delete button. Make sure this is what you want to do, because there is no turning back.

11   Part 4: Docker Buildfiles


Go over what a Docker Buildfile is, and how to make your own.

With examples

12   Part 5: Advanced Usage


docker run

limiting memory, cpu

detached vs attached

volume/bind mounting


13   Part 6: Using a Private Registry

One of the things that makes Docker so useful is how easy it is to
pull ready-to-use images from a central location, Docker's Central
Registry. It is just as easy to push your own image (or collection of
tagged images as a repository) to the same public registry so that
everyone can benefit from your newly Dockerized service.

But sometimes you can't share your repository with the world because
it contains proprietary code or confidential information. Today we are
introducing an easy way to share repositories on your own registry so
that you can control access to them and still share them among
multiple Docker daemons. You can decide if your registry is public or
private.

You'll need the latest version of Docker (>=0.5.0)
to use this new feature, and you must run this version as both the
daemon and the client. You'll also need the Docker registry code.

13.1   Using Push and Pull with a Private Registry

You've already seen how to push and pull from the Central Registry. To
push to or pull from your own registry, you just need to add the
registry's location to the repository name. It will look like
my.registry.address:port/repositoryname.

Let's say I want to push the repository "ubuntu" to my local registry,
which runs on my local machine on port 5000:

Obviously, the push will fail if no registry server answers locally on
port 5000. We'll briefly show how to start your own registry server in
the next subsection.


The punctuation in the repository name is important! Docker looks
for either a "." (domain separator) or ":" (port separator) to
learn that the first part of the repository name is a location and
not a user name. If you just had localhost without either
.localdomain or :5000 (either one would do) then Docker
would believe that localhost is a username, as in
localhost/ubuntu or samalba/hipache. It would then try to
push to the default Central Registry. Having a dot or colon in the
first part tells Docker that this name contains a hostname and that
it should push to your specified location instead.

13.2   Installing Your Own Registry

Docker-Registry is an open source Python application available on GitHub: https://github.com/dotcloud/docker-registry

You can use the Docker-Registry to provide a private or public
registry service for Docker repositories. Since it is your host, you
can control access to it by putting it on a private network or
otherwise protecting its service port. You'll want to choose the DNS
name of the host carefully, since that name will become a permanent
part of each repository's name
(e.g. my.registry.name/myrepository).

You can test out the Docker-Registry first on your local machine
(presuming you have a Python environment set up).

That sets up the Docker-Registry to listen on all your network
interfaces on port 5000. You're using the dev flavor configuration
by default, which uses local storage for the repositories. The
configuration file (config.yml) also allows you to specify other
flavors, like production, and to use other storage backends, like S3.

There is currently no authentication built into the Docker-Registry,
so if you want to keep it private, you'll need to keep the host on a
private network. We'd recommend running a production Docker-Registry
behind an Nginx server which supplies chunked transfer encoding.

14   Part 7: Automating Docker

Running Docker commands on the command line is a good way to start, but if you need to automate what you are doing, it isn't ideal. To make this better, Docker provides a REST-based remote API. The remote API allows you to do everything that the command line does; in fact, the command line is just a client for the REST API.

14.1   Remote API

Docker provides a remote API for the Docker daemon so that you can control it programmatically. For documentation on how it works, check out the Docker Remote API docs.

14.2   Docker Web UIs

Docker is a completely command-line experience, which is fine for hackers, but some people prefer a more graphical experience. For those folks, I would recommend checking out these projects that people have started.

14.2.1   Dockland

A ruby based Docker web UI

Code: https://github.com/dynport/dockland

14.2.2   Shipyard

A python/django based Docker web UI

Code: https://github.com/ehazlett/shipyard

14.2.3   DockerUI

An Angular.js based Docker web UI

Code: https://github.com/crosbymichael/dockerui

14.3   Docker Libraries

If you want to write code that interacts with Docker, there is most likely already a binding for your programming language. Check the link in the documentation to see what is available. If there isn't one for your language of choice, feel free to create your own, and let us know so we can update the documentation.

Docker Library list in the Docker Docs

15   What can I do to help?

If you are a big fan of Docker and want to know how to help out, look at the list below and see if any of the items are things you can do.

Contribute to Docker; it could be as small as a bug fix, a documentation update, or a new feature. Look through the Docker issues and see if anything tickles your fancy.

Tweet about how much you love Docker

Write a blog post about how you use Docker, and how others can do what you have done.

Talk at a conference or meetup. This is a good way to introduce docker to a new set of potential Docker lovers.

Create a product that uses Docker, and let everyone know how Docker made your life easier.

Make a video showing how you use Docker, and upload to YouTube/Vimeo.

Answer questions on Stack Overflow or the mailing list.

Attend the Docker hack days and meet other Docker users, and let us know how we can make Docker even better.

Get a Docker sticker, and display it proudly.

Wear your Docker shirt around town all day.

16   Tips and Tricks

This section includes some helpful tips and tricks that will make using Docker even easier and more fun.

16.1   Remove all Docker images
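A common one-liner for this, feeding every image ID into docker rmi:

```shell
# Remove every image known to the local Docker daemon
docker rmi $(docker images -q)
```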

16.2   Remove all Docker containers
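The container equivalent, using -a to include stopped containers:

```shell
# Remove every container, running or stopped
docker rm $(docker ps -a -q)
```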

17   Docker Commands

Here is a list of all of the current Docker commands, the different parameters they take, and an example or two of how to use them.

17.1   attach

Attach to a running container. To disconnect press Ctrl+P, Ctrl+Q.

17.1.1   Parameters

CONTAINER_ID: The ID of the container you want to attach to.

17.1.2   Usage

17.1.3   Example
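```shell
# Usage: docker attach CONTAINER_ID
docker attach f684fc88aec3   # substitute your own container ID
```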

17.2   build

Build a container from a Dockerfile

17.2.1   Parameters

PATH: Build a new container image from the source code at PATH

URL: When a single Dockerfile is given as URL, then no context is set. When a git repository is set as URL, the repository is used as context


-t="" : Tag to be applied to the resulting image in case of success.

17.2.2   Usage

17.2.3   Examples

Read the Dockerfile from the current directory

This will read the Dockerfile from the current directory. It will also send any other files and directories found in the current directory to the Docker daemon. The contents of this directory will be used by ADD commands found within the Dockerfile.

This can send a lot of data to the Docker daemon if the current directory contains a lot of data. If an absolute path is provided instead of '.', only the files and directories required by the ADD commands in the Dockerfile will be added to the context and transferred to the Docker daemon.

Read a Dockerfile from standard input (stdin) without context

This will read a Dockerfile from stdin without context. Due to the lack of a context, no contents of any local directory will be sent to the Docker daemon. ADD doesn't work in this mode, because without a context there are no source files to copy into the container.

Build from a git repository

This will clone the GitHub repository and use it as the context. The Dockerfile at the root of the repository is used as the Dockerfile. Note that you can specify an arbitrary git repository by using the 'git://' scheme.
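The three cases above look like this (the git repository is just an illustration):

```shell
docker build .                                # Dockerfile and context from the current directory
docker build - < Dockerfile                   # Dockerfile from stdin, no context
docker build github.com/creack/docker-firefox # clone a git repository and use it as the context
```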

17.3   commit

Save your container's state to a container image, so the state can be reused.

When you commit your container, only the differences between the image the container was created from and the current state of the container are stored (as a diff). To see which images you already have, use docker images.

In order to commit to a repository, you need to commit your container to an image named with your namespace.

17.3.1   Parameters

CONTAINER_ID: The container ID for the container you want to commit

REPOSITORY: The name for your image that you will save to the repository <your username>/<image name>

TAG: The tag you want to give to the commit.


-m="": Commit message

-author="": Author (eg. "John Hannibal Smith <hannibal@a-team.com>"

-run="": Config automatically applied when the image is run. "+`(ex: {"Cmd": ["cat", "/world"], "PortSpecs": ["22"]}')

17.3.2   Usage

17.3.3   Examples

Basic commit

This will commit a container with a message and author.
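A sketch reusing the doc's example author, an illustrative container ID, and a placeholder repository name:

```shell
docker commit -m "Installed redis" -author "John Hannibal Smith <hannibal@a-team.com>" f684fc88aec3 myname/redis
```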
