A variety of companies are already using containers at scale in production. In the previous chapter, we looked at how and why Spotify, DramaFever, built.io, and the Institute of Investigation, Innovation and Postgraduate Studies in Education (IIIEPE) use containers. Now let’s dive in and take a closer look at each organization’s workflows.
Building the Application and Managing Pull Requests
One of the appeals of using containers in production is the ability to create a seamless development-to-production pipeline: an environment that originates on a developer's laptop can be moved wholesale through testing and into deployment without errors creeping in from differences in infrastructure.
What IIIEPE Uses
Luis Elizondo, lead developer at IIIEPE, says that before Docker, moving applications from development to production was one of the institute's biggest issues. Now developers build base images and publish them publicly to Docker Hub. Every application has a standard structure, including application, log and files subdirectories. These are the subdirectories developers mostly work in, while a Dockerfile, a YAML file for Docker Compose and a Makefile hide the complexities of the application's container environment that developers don't need to know about.
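A minimal sketch of what such a Docker Compose file might look like follows; the image details, paths and port are illustrative, not IIIEPE's actual configuration:

```yaml
# docker-compose.yml (sketch): describes the container environment so developers
# don't have to remember docker run flags.
web:
  build: .
  ports:
    - "8080:80"
  volumes:
    - ./application:/var/www/html   # the code developers actually edit
    - ./files:/var/www/files        # uploaded files
    - ./log:/var/log/app            # logs surfaced outside the container
```

A Makefile in the same directory can then wrap commands such as docker-compose up -d and docker-compose logs, so developers never have to call Docker directly.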
What DramaFever Uses
Video streaming site DramaFever uses Python and the Django framework for its main site, and Go for microservices handling functions like image resizing, player bookmarking and DRM. In both cases, the versioned, deployable artifacts are stored and deployed as container images.
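In practice, a versioned, deployable artifact usually means an image tagged with something traceable back to source control. A hedged sketch of the idea, with a placeholder registry name rather than DramaFever's:

```sh
# Build and push an image tagged with the short git SHA, so every artifact
# maps back to an exact commit.
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/www:$TAG .
docker push registry.example.com/www:$TAG
```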
For Bridget Kromhout, former operations engineer at DramaFever, using containers helped ensure a standard development environment for a team of developers located around the world. She said containers “help with the distributed nature of my team. It helps us to have fewer problems, as it reduces the number of inconsistencies that slow down our velocity.”
What Spotify Uses
At Spotify, developers are encouraged to use Java, ZooKeeper and VirtualBox on their local machines during builds. Engineers use GitHub to store and share their code and pull requests, and use Bintray to store container images.
Continuous Integration Workflow Configuration
Once developers have written code or merged pull requests, a new base image needs to be built and then tested through various stages of QA. For many of the application environments using containers in production, this is done using continuous integration tools, notably Jenkins, to automate as much of this process as possible.
What IIIEPE Uses
Elizondo is adamant that developers should not be developing outside the container environment and then building the image for deployment. “Let Jenkins do it for you,” he says.
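What that looks like inside a Jenkins job can be as simple as a shell build step; the following sketch is illustrative rather than IIIEPE's actual configuration, with a placeholder registry address and test command:

```sh
# Jenkins shell build step (sketch): build the image, run the tests inside it,
# and push only if the tests pass. GIT_COMMIT is the commit hash Jenkins exposes.
IMAGE=registry.example.com/myapp:${GIT_COMMIT}

docker build -t "$IMAGE" .
docker run --rm "$IMAGE" make test
docker push "$IMAGE"
```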
As explained in the previous chapter, IIIEPE runs two servers: one running MaestroNG, which manages the institute's Docker containers, and another running Jenkins to automate repetitive tasks.
“Jenkins also allows you to have a history of failures to replace your cron jobs, and I programmed Jenkins so it is running cron on each of the applications, so you have a log and a history of everything that goes wrong,” Elizondo says.
What DramaFever Uses
Like Elizondo, Kromhout used Jenkins for all container image builds. “We have weekly base builds created, as Python-Django builds can take forever,” she said. With a weekly base image already built, continuous integration builds for the development branch are much quicker: the Dockerfile for the main www project simply starts with “FROM www-base”. When working locally, developers pull the latest code from master on GitHub and mount it into the container they have docker pulled, so they always have the current build environment. Kromhout says this is faster than using Vagrant. “Jenkins just builds the one image for all of the platform, it just builds and tags one image,” she says. “It is really nice for consistency.”
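Concretely, that pattern amounts to a thin application Dockerfile layered on the weekly base image; the sketch below assumes a copy-the-code layout and is not DramaFever's actual file:

```dockerfile
# Dockerfile for the main www project (sketch): the slow-to-install dependencies
# live in the weekly www-base image, so this build only adds the application code.
FROM www-base
COPY . /app
WORKDIR /app
```

For local work, a developer would then mount the current checkout over that code path, for example with docker run -v $(pwd):/app, so the container always runs what is on disk.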
What Spotify Uses
Goldin says Spotify takes the images stored in Bintray and deploys them to AWS. Helios can then be used for service discovery, but it is also pluggable with more specific, full-featured service discovery tools, including SkyDNS and Consul.
Orchestration and Management
Orchestration and management challenges increase as companies find more uses for containers. Technically, there are three processes here: orchestration, which refers to how containers are connected to one another; scheduling and placement, which decides which hosts those containers should run on; and management, which covers noticing when containers go down, declaring the state of services when they are deployed, and similar functions. In practice the terms blur, and orchestration is often used as a catch-all for all three.
What IIIEPE Uses
Many of Elizondo's potential orchestration choices were ruled out by his institution's requirement for an open source solution that would also work with Drupal, the framework in which many of IIIEPE's web applications are built. Shortcomings in individual feature sets limited the choices further, so Elizondo stitched together features from several tools. For example, while Shipyard lacks the capacity to manage containers automatically, it is useful as a viewer: IIIEPE uses it to monitor the status of its containers and Docker services, to identify when a container crashes, and to restart dead containers.
MaestroNG was chosen as IIIEPE's core orchestration tool because it has multi-host capabilities, a command-line interface and a single YAML file that describes everything. As a security precaution, MaestroNG is installed on its own server and is the only service that connects to Docker. Once Jenkins has finished testing a new image build, the image is pushed to IIIEPE's private registry; Jenkins then connects to the MaestroNG server over SSH and completes the deployment to production.
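MaestroNG's YAML describes the ships (Docker hosts), the services and where each instance of a service runs. A trimmed-down sketch follows, with invented host names and an image path standing in for IIIEPE's private registry:

```yaml
# maestro.yaml (sketch): one file describing hosts, services and placement.
name: iiiepe-apps            # placeholder environment name
ships:
  web1: {ip: 10.0.0.11}      # Docker hosts MaestroNG connects to
  web2: {ip: 10.0.0.12}
services:
  drupal-site:
    image: registry.example.com/drupal-site:latest
    instances:
      drupal-site-1: {ship: web1, ports: {http: 80}}
      drupal-site-2: {ship: web2, ports: {http: 80}}
```

A deployment then roughly amounts to Jenkins running MaestroNG's pull, stop and start commands against that file on the MaestroNG server.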
What DramaFever Uses
DramaFever's orchestration is currently very simple. Because DramaFever runs entirely on AWS, it relies on DNS and Elastic Load Balancers (ELBs): instances in a given autoscaling group launch, do a docker pull to run the necessary containers, and answer behind a specific ELB.
New images are built by Jenkins and, after automated and manual testing, are tagged for staging and then production via Fabric, a Python-based deployment tool. The same Docker image that passed QA is what is pushed out to production from the distributed private Docker registry.
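The promotion step Fabric handles can be expressed as a short task. The following is a sketch of the idea rather than DramaFever's fabfile; the registry address, image name and task name are invented:

```python
# fabfile.py (sketch): re-tag the QA-approved image for an environment and push it,
# so the exact bytes that passed QA are what reach staging and production.
from fabric.api import local, task

REGISTRY = "registry.example.com"   # placeholder private registry
IMAGE = "www"

@task
def promote(build_tag, env="staging"):
    src = "%s/%s:%s" % (REGISTRY, IMAGE, build_tag)
    dst = "%s/%s:%s" % (REGISTRY, IMAGE, env)
    local("docker pull %s" % src)           # fetch the tested build
    local("docker tag %s %s" % (src, dst))  # mark it for the target environment
    local("docker push %s" % dst)           # hosts in that environment pull this tag
```

A run such as fab promote:abc123,env=prod would then re-tag and push the already-tested build.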
What Spotify Uses
Orchestration is at the core of why Spotify built and uses Helios. Goldin says that Helios solves one problem and solves it really well: deciding where to spin up container instances across a cluster of cloud hosts. While Helios is open source, Spotify also has an internal helper tool. To aid continuous delivery, Spotify is currently extending Helios to show visualizations of what was deployed, by whom and where.
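The day-to-day Helios workflow is driven from its CLI: a job is created from a container image and deployed to specific agent hosts. A rough sketch with invented job and host names (exact flags vary between Helios versions):

```sh
# Create a Helios job from an image, deploy it to an agent, then check its state.
helios create myservice:v42 registry.example.com/myservice:v42
helios deploy myservice:v42 agent1.example.net
helios status --job myservice:v42
```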
Other Alternatives
Built.io also created its own orchestration and management layer for its mobile-backend-as-a-service (MBaaS) container architecture. This management layer uses a REST API as the main communication channel for setting up new containers, and can also review customer profiles and create larger containers for higher-paying customers. Because the management layer is driven through an API, built.io can also expose some orchestration and management functions to customers directly from the built.io product's user interface.
Service Discovery and the Load Balancer
Maintaining applications in production requires service discovery: insight into the current state of the containers running across an application's infrastructure. A load balancer then distributes application traffic across a number of servers to keep the application performing well in production.
What IIIEPE Uses
Elizondo says IIIEPE uses Consul for service discovery, as etcd still requires a lot of additional learning. Consul stores the IP address, port and state of each application. When Consul registers a change in an application, a trigger Elizondo set up runs consul-template, which in turn regenerates the load balancer's configuration.
Elizondo has been using NGINX as the load balancer but plans to switch to the more complex HAProxy. “NGINX is great,” he says. “I know how to solve about 80 percent of the problems that NGINX throws at you. But NGINX is a web server. It can act as a load balancer, but that is not what it does best.”
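The consul-template step works by watching Consul and re-rendering a load balancer config from a template whenever the registered services change. A sketch of an NGINX upstream template, with an invented service name:

```
# /etc/consul-templates/app.ctmpl (sketch): consul-template fills in the upstream
# servers from Consul's current view of the "drupal-site" service.
upstream drupal_site {
  {{ range service "drupal-site" }}
  server {{ .Address }}:{{ .Port }};
  {{ end }}
}
```

consul-template is pointed at the template with an argument of the form source:destination:command, for example rendering this file to /etc/nginx/conf.d/app.conf and running nginx -s reload whenever it changes; that reload is the reconfiguration step Elizondo describes.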
What DramaFever Uses
Kromhout used a Chef client run, via Packer, in a Jenkins job to generate Amazon Machine Images for host instances, baked with the upstart template needed so that the right Docker images would be pulled and run on instances launched from a specific autoscaling group as members of a specific ELB. Using NGINX to proxy_pass to different upstream ELBs per microservice, Kromhout said, allowed DramaFever to defer the more complex problem of dynamic service discovery. “We used HAProxy for some database connection pooling, but ELBs + NGINX simplify a lot in the traffic-routing space. Location blocks work well when you can predict where the containers are launching,” she explained. “Of course, that meant our utilization wasn’t optimal. Today, Peter Shannon, the current head of ops at DramaFever, is looking into more dynamic service discovery options. Application container clustering is how to realize highly efficient utilization.”
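The routing side of that setup can be expressed in a handful of static NGINX location blocks, each handing a URL path to the ELB fronting one microservice. The hostnames below are invented, and a real config would also need a resolver, since ELB addresses change over time:

```nginx
# nginx.conf fragment (sketch): one location per microservice, each proxying to
# that service's ELB; the ELB balances across whichever containers sit behind it.
location /images/ {
    proxy_pass http://images-elb.example.elb.amazonaws.com;
}
location /bookmarks/ {
    proxy_pass http://bookmarks-elb.example.elb.amazonaws.com;
}
```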
Tying It All Together
It’s too early to codify best practices, but these examples of emerging methodologies show what a container production workflow can look like. Some of the tools mentioned in this article are not container-specific technologies; containers simply make deploying code with them almost trivial.
In every case, the final workflow combines containers with emerging open source tools from the ecosystem, alongside private workarounds and home-grown solutions that address each organization's programming languages, data infrastructure, security policies and continuous delivery commitments.
Docker is a sponsor of The New Stack.