2015-07-14

A container that is used in test and development could also be used in production. But that approach carries risks, because it removes certain checks, or roadblocks if you will, along the way.

Some of those roadblocks should not be removed at all, argues Kevin Fishner, director of sales and marketing for application lifecycle management tool producer HashiCorp, in an interview with The New Stack. Fishner draws a comparison between virtual machines and containers: just as it is fundamentally unsafe for developers’ VMware virtual machines to be carried through to production, there is a risk in carrying containers through. “We don’t recommend that developers build Docker containers that they use in production. If a developer doesn’t realize there’s a vulnerability, and they build a Docker container with a wrong version, that’s extremely dangerous.”

Last week, HashiCorp released its Atlas ALM tool, which it categorizes as “infrastructure management.” HashiCorp already produces a tool called Packer, which creates VM images for deployment to the AWS public cloud, as well as to OpenStack private clouds and VMware vSphere. Packer has recently been adapted to build Docker containers as well. Now, Atlas automates the use of Packer to produce containers that can then be monitored through Consul, HashiCorp’s service discovery tool, enabling hybrid environments of VMs and containers.
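To make that concrete, a Packer template for a Docker build looks something like the minimal sketch below. The base image, the provisioning step and the repository name are placeholders of our own, not a configuration HashiCorp prescribes:

```
{
  "builders": [{
    "type": "docker",
    "image": "ubuntu:14.04",
    "commit": true
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["apt-get update", "apt-get install -y nginx"]
  }],
  "post-processors": [{
    "type": "docker-tag",
    "repository": "example/webapp",
    "tag": "0.1"
  }]
}
```

Running packer build against a template like this produces a tagged Docker image in much the same way other builders produce VM images, which is what lets Atlas treat both artifact types with one pipeline.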

What Atlas does not do is use Packer to produce a persistent container that transcends lifecycle phases, and Fishner is an outspoken opponent of such an approach.

“We think that the golden image that you’re putting into production should be driven by an operator, someone who is essentially managing that,” he tells us. “However, if you do feel, as a developer, you need to make a change to a Docker container, you can certainly do that locally and then submit a pull request to change the Docker configuration. At which point, the operator team — or however you organize your company — can review that pull request.”

If that request is approved, Fishner continues, the container is rebuilt at the production level and then phased back toward the developer level for consistency. While developers should be given the authority to build containers, he says, golden images of any kind of virtual machine should be centrally stored and managed by an independent operator. This way, when an API enters production, everyone from developers to customers can be assured they’re using the same functions.
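In practice, that review loop could look something like the following sketch. The branch name, file names and rebuild command are hypothetical; this is one way to wire up the process Fishner describes, not a prescribed Atlas workflow:

```
# Developer: propose a change to the container configuration locally
git checkout -b update-openssl
# ...edit the Dockerfile or Packer template, then verify it still builds...
docker build -t webapp:dev .
git commit -am "Bump OpenSSL to patched version"
git push origin update-openssl    # open a pull request for operator review

# Operator: after approving the pull request, rebuild the golden image
# centrally, so the same artifact flows back to developers and on to production
packer build webapp.json
```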

Freedom from Discrimination

There are different schools of thought on this issue, brought face-to-face by the sudden rise of containerization. In systems built on Apache Mesos, the orchestration system inspired by Google’s Borg, old and new versions of containers coexist in production. This is done on purpose, as Twitter engineer Bill Farner told us some months back, so that the behavior of new code can be examined carefully, and updates can be rolled back if that behavior degrades.

HashiCorp takes an opposing view.

If quality of customer experience is key, it argues, then consistency is paramount. You can’t run multiple versions of the same API function provider in production without blowing up consistency entirely, Fishner said.

If developers want to have their fun with multiple versions, they should use Packer to produce them locally, but keep them safe within the development sandbox.

A development-to-production workflow for containers could lead to a situation where one active container utilizes a patched version of OpenSSL, and another an unpatched version, Fishner said. Only a properly managed deployment environment ensures that the most recently patched version is in use throughout the production phase.
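The point is easiest to see in a Dockerfile. In the hypothetical sketch below, the base image and the OpenSSL package are pinned to explicit versions (the version string is illustrative), so every centrally managed rebuild yields the same patched library rather than whatever happens to be cached on a developer’s machine:

```
# Hypothetical example: pinning the base image and the package version means
# a central rebuild always produces the same, patched OpenSSL
FROM ubuntu:14.04
RUN apt-get update && \
    apt-get install -y openssl=1.0.1f-1ubuntu2.15 && \
    rm -rf /var/lib/apt/lists/*
```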

Recently, Microsoft started using the pets versus cattle analogy, symbolizing the difference between how an administrator optimally treats a virtual machine image, and how she treats a container image. At DockerCon, advocates of a complete wave of change for the data center argued that administrators should learn to treat containers as ephemeral, that they should stop bestowing them with reverence and pet names, and instead see them as temporary delivery units for small quantities of functionality.

HashiCorp does not see this model as realistic.

“We believe that VMs and containers should be treated the same,” says Fishner. “The way we’ve built our tools, we are completely infrastructure- as well as technology-agnostic, in the way you get your application from development code to running in production.”

The cattle versus pets analogy came up somewhat humorously at a dinner with a group of bankers that RedMonk’s James Governor recently wrote about:

Anyway I talked about microservices of course, and my theory that drawbridges are more important than moats. We also had fun talking about the cattle vs pets microservices distinction. While most cattle is somewhat disposable, not all of it is — think prize bulls…

As for Fishner, his implication is a serious one. For him, it is impossible for a truly open source management platform to be agnostic about the virtual components it manages, and about the cloud platforms it deploys those components to, if it must treat containers any differently from VMs. By using terms such as “agnostic” to describe the policies adhered to by HashiCorp’s provisioning tool Terraform, Fishner implies that partitioning VMs from containers is a form of discrimination.

“Additionally, we’re agnostic to the way you package your applications and deploy them to production,” he adds. “So if you want to be building VMs, whether those are Amazon or VMware or Google Cloud images, completely cool. If you want to be building containers, completely great. If you want to have a hybrid infrastructure, in terms of both containers and VMs — which is going to be the vast majority of people for this transition period — again, amazing, super-happy to support that.”
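Packer expresses that agnosticism directly: one template may list several builders, so the same provisioning steps can yield both a VM image and a Docker image. Below is a minimal sketch under our own assumptions; the credentials, AMI ID and names are placeholders:

```
{
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "...",
      "secret_key": "...",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "webapp-{{timestamp}}"
    },
    {
      "type": "docker",
      "image": "ubuntu:14.04",
      "commit": true
    }
  ],
  "provisioners": [{
    "type": "shell",
    "inline": ["apt-get update", "apt-get install -y nginx"]
  }]
}
```

A single run would produce an Amazon Machine Image and a Docker image from the same steps, which is the hybrid scenario Fishner describes.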

Microservice as a Myth

I mentioned to Fishner that the emerging best practice for containerization, as outlined during the last DockerCon, involves a much finer granularity for containers than for VMs. A container may include something as simple as a single service, packaged with the minimum library code necessary to make that service functional wherever it’s transported. So the situation where two containers utilize mismatched OpenSSL libraries would be mitigated, I argued, by containing OpenSSL separately and networking it to the other containers that need it.
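One hypothetical way to realize that idea is to isolate TLS handling in a single container, so only one image carries OpenSSL at all. In the Docker Compose sketch below (service names and images are invented for illustration), the app container behind the TLS front end needs no TLS library of its own:

```
# Only the "tls" container carries OpenSSL; the app behind it
# speaks plain HTTP over an internal link
tls:
  image: example/tls-frontend   # e.g., an nginx or stunnel image terminating TLS
  ports:
    - "443:443"
  links:
    - app
app:
  image: example/webapp         # no TLS library inside this image
  expose:
    - "8080"
```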

While Fishner conceded that focusing containers on individual functions may be a laudable goal, it’s not something he sees companies actually doing.

“It’s going to depend a lot on the corporate culture,” he said. “Where in these older, larger organizations, there’s no getting around having central control of building Docker containers, and making sure they’re following the right spec. I’d be shocked. In all our conversations, that’s never really a consideration.”

In short, from HashiCorp’s perspective, if an organization has no intention of switching to an all-containerized deployment environment in one fell swoop, it will find it easier to manage containers following the best practices of VMs than to alter its practices, and perhaps its culture, in order to manage VMs following the new standards of containers.

Docker is a sponsor of The New Stack.

Feature image: “cow and dog” by Tom Maloney is licensed under CC BY-SA 2.0.

