Docker is hot. This open-source project demonstrated the power of software containerization to the world. From Wikipedia, “Docker uses the resource isolation features of the Linux kernel … to allow independent containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.” What does that really mean? It means OS-level virtualization is now easier and more practical, and that we can build applications in a more intuitive, manageable, and cost-effective way.

Microservices. These are small, independent application components, each dedicated to a single process, loosely coupled with other microservices via well-defined APIs, and combined to form a composite system that is easy to scale and manage. For cloud-native applications composed of microservices, Docker is usually at the core because it offers an excellent way to build, ship, and run Linux containers. The notion that I can build an application component on a typical workstation, then move that same component to production with no fundamental changes, is very attractive. Datadog recently published this study about Docker adoption, with some results I found interesting. Even at IBM, several teams are now using Docker for their own purposes.
Traditional VM hypervisors compared to Linux Containers
From an architectural standpoint, microservices are simple, but in reality the approach is a complexity tradeoff. Widespread use of container platforms means more metadata to manage and more variables to control (persistent storage, port mapping, container names, networking, and more). The challenges become governance and visibility rather than integration and maintenance. Furthermore, most organizations do not have the luxury of going all-in on containers, simply because they have so much invested elsewhere. Going back to the drawing board is too costly when you have an age-old, monolithic application on your hands. Many development shops want to adopt Docker now, but only as an extension of their current architectures, since their main focus is enabling existing applications for the cloud as non-intrusively as possible. These organizations all face the challenge of extending their existing processes and flows to accommodate containers while carefully transitioning their architecture to a cloud-native one over time.
A single microservice is simple, but the architectural approach as a whole leads to complexities such as management, visibility, and coordination.
As I said, microservices are the future, but they also shift complexity from the overall architecture of a system into the configuration and data surrounding it. Managing all of those microservices is where organizations will start to feel pain, and that pain is compounded when they have to integrate with traditional architectures and delivery pipelines. One of the core principles of the Twelve-Factor App is strict separation of configuration from code. If you adhere to this principle, as you should, the need for container orchestration is that much greater.
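For example (image name and variables here are hypothetical), the same immutable image can move from one environment to the next with only the injected configuration changing:

    # Identical image everywhere; configuration is injected per environment
    docker run -d -e DB_HOST=db.dev.example.com  myapp:1.2.3    # development
    docker run -d -e DB_HOST=db.prod.example.com myapp:1.2.3    # production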
Container Orchestration Tools
In general, there are two schools of container orchestration tools: runtime orchestration tools, which deal with the operational aspects of container execution, and deployment orchestration tools, which move containers through a promotion model. Runtime orchestration tools include the likes of Kubernetes and Apache Mesos, which offer canary deployments, scaling in both directions, and rolling updates with no downtime. These are great tools that are widely used. Deployment orchestration tools for containers are less common. Docker Compose mitigates a lot of pain, but some organizations require more. Rancher looks promising, but it is not geared toward traditional IT architectures at all. What if some of your application components are not running in a container? What about organizations looking to evolve over time? This describes the majority of “enterprise” development organizations today, and surely they need orchestration tools too. In fact, they need tools that support both microservices AND legacy architectures, along with a strategy for transitioning at an appropriate pace.
UrbanCode Deploy is the ultimate DevOps framework. The tool has the potential to consolidate disparate automation from across the enterprise and govern all of it centrally. UrbanCode Deploy complements the value of Docker Datacenter with centralized deployments, separation of duties, visibility into environment inventory, and rapid rollback. UrbanCode Deploy also manages properties and environment variables across target runtimes, which alleviates the headache of governing varied configurations for each microservice in a promotion model.

In March 2015, the first set of Docker plugins for UrbanCode Deploy was released. With the Docker automation plugin for Deploy, a Docker container is just like any other application component. This is the simplest and most natural way to model containers in UrbanCode Deploy, and it is also the right approach for systems that are a mix of containers and traditional IT. Recently, our team of geniuses at IBM also released a Docker Compose plugin. With the Docker Compose plugin, a component in UrbanCode Deploy maps one-to-one to a Docker Compose file that represents your application (see the sketch below). This means better support for applications composed solely of microservices, and less repetitive work. Lest we forget, there are myriad other plugins for UrbanCode Deploy that allow organizations to build automation the same way for every platform. Whether or not a given component is a Docker container is transparent to Deploy inside an application process. As I said, UrbanCode Deploy is the ultimate DevOps framework.
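To illustrate what the Compose plugin consumes, here is a minimal sketch of a Compose file for the two-container WordPress application used later in this post (service names and values are my assumptions):

    # docker-compose.yml (sketch)
    wordpress:
      image: wordpress
      links:
        - db:mysql          # the wordpress image expects a linked container aliased "mysql"
      ports:
        - "8080:80"
    db:
      image: mysql
      environment:
        MYSQL_ROOT_PASSWORD: password

A single docker-compose up -d then brings up the entire application, which is exactly the kind of unit the Compose plugin treats as one component.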
My plan is to write a series of blog posts about “microservices management” that articulate the value of using Docker Datacenter with IBM UrbanCode Deploy. IBM itself is doing a lot with Docker now, so I am likely to have plenty of interesting things to write about. In this initial commentary, I will focus on the basics: specifically, the Docker source and Docker automation plugins for IBM UrbanCode Deploy. Install these now.
A Simple Tutorial with WordPress
We start by modeling our application in UrbanCode Deploy. There is an official Docker image for WordPress on Docker Hub. We will use that image, along with MySQL, which will also run in a container (there is an official image for that too).
First, create the components library/wordpress and library/mysql using the Docker Template component template that is installed with the Docker automation plugin. This is a standard naming convention for components that represent Docker containers in UrbanCode Deploy, i.e. namespace/repository. Set the Source Configuration Type to Docker Importer. Here is a screenshot of my component configuration for library/mysql:
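Incidentally, this mirrors how Docker Hub itself addresses official images, which live in the library namespace:

    # "library" is the namespace for official images on Docker Hub
    docker pull library/mysql       # equivalent to: docker pull mysql
    docker pull library/wordpress   # equivalent to: docker pull wordpress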
Import versions of these two components. Unlike most component versions in UrbanCode Deploy, versions for Docker images are not copied to CodeStation (the checkbox will be ignored). The Docker source plugin will poll the registry and import all version tags using statuses. Click Import New Versions on the Versions subtab for both components, and view the output of the import process. It should look something like this:
Several versions should be listed on the Versions subtab now for each component. Each version corresponds to a version tag in the Docker image repository:
Great! We have defined the components and created some versions. Now let’s create the application in UrbanCode Deploy, as well as its environments and environment resources. Create a new application called WordPress with our two components and several environments as follows:
My resource hierarchy for the LOCAL environment looks like this:
Create a similar hierarchy for the other environments. We can use a single Docker daemon for all environments, or we can have the daemons distributed across multiple agents. Once the resources have been created for a particular environment, add those as base resources for the associated application environment:
If I click on the LOCAL resource group above, I am brought to the resource group itself. If I then switch to the Configuration subtab, I can set properties specific to resources in the LOCAL environment. For example:
The docker.opts property is referenced by the component template processes. Since I am using Docker Machine with the boot2docker VM on my Mac, I have to send several options to the Docker client in order to reach the daemon properly (docker-machine config <machine-name> will output these). The other properties are referenced in the component configuration as you may recall. Note that deployment processes may fail if these properties are not defined.
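On my machine, that output looks roughly like this (the machine name, certificate paths, and IP address will differ on yours):

    $ docker-machine config default
    --tlsverify
    --tlscacert="/Users/me/.docker/machine/machines/default/ca.pem"
    --tlscert="/Users/me/.docker/machine/machines/default/cert.pem"
    --tlskey="/Users/me/.docker/machine/machines/default/key.pem"
    -H=tcp://192.168.99.100:2376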
The two components in this application must also be linked using container links. Since I generally prefer not to modify out-of-the-box template processes, I recommend copying the Deploy process under the library/wordpress component and pasting it right back as a copy; you can then rename that copy to something like Deploy w/Link to MySQL. Modify the design of this copied process by editing the properties of the Run Docker Container step and adding a link directive to the Run Options field, as follows:
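Assuming the MySQL container is named mysql (as in the manual commands later in this post), the addition to the Run Options field is simply:

    --link mysql:mysql

The Run Docker Container step passes these options along to the docker run command it issues.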
Now, take a look at the descriptions of both Docker image repositories on Docker Hub, and notice the environment variables used by these images. I can create Environment Property Definitions to correspond to these, flag them as required where the image requires them, and even set default values. For example, in the library/mysql component, I created the following Environment Property Definition within the component configuration:
This property has to be fed to the docker run command for the library/mysql component. Similar to how we copied and edited the Deploy process for library/wordpress, make a copy of the Deploy process under library/mysql, rename it, then edit the Run Options field for the Run Docker Container step to include this environment variable as an option:
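A sketch of what that option might look like, using UrbanCode Deploy's property reference syntax (the property name matches the definition created above):

    -e MYSQL_ROOT_PASSWORD=${p:environment/MYSQL_ROOT_PASSWORD}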
We are almost there. The final piece is to build and test the application process. If I were to launch these containers manually, the commands would look something like this:
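    # Container names, the password, and the 8080->80 port mapping are
    # assumptions consistent with the rest of this tutorial
    docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=password mysql
    docker run -d --name wordpress --link mysql:mysql -p 8080:80 wordpress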
After running these commands, I should be able to hit WordPress at http://localhost:8080 (substituting the address of the machine hosting the Docker engine for localhost). We will use these commands as the basis for building our application process. Create a new application process called Deploy WordPress and navigate to the process designer. Drag the Install Component step over from the palette and release it, change the component to library/mysql, the component process to Deploy w/Password (or whatever name you chose), and the name of the step to Install MySQL before clicking OK. Repeat this for library/wordpress as pictured:
Finally, connect the steps from Start to Finish and save the process. This is a relatively simple application process that should look like this:
And that’s it! Now, request the Deploy WordPress process against one of your environments. One caveat I noticed: the “fpm” versions of library/wordpress work a bit differently, so avoid those for now. Otherwise, if all goes well, you should have a running WordPress instance to toy with now:
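For a quick command-line check before opening a browser (same host and port as above):

    # A 200 or a redirect to the WordPress install page means the container is serving
    curl -I http://localhost:8080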
Please leave comments and questions if you have any.
ADDITIONAL LINKS
IBM DevOps Services project for UCD Docker Plugins
Hybrid Deployment of Docker Containers Using IBM UrbanCode Deploy
Robbie Minshall’s Blog – Be sure to check out Part 2 as well!
YouTube Video
UrbanCode Deploy and Docker Containers Connect the Dots …