2016-09-30

A critical aspect of any cloud-based deployment is managing the networking between the various containers. In researching our newest eBook, “Networking, Security & Storage with Docker & Containers,” we identified three styles of integrated container networking by way of plugins. In a previous article, we introduced the Container Network Model (CNM) and the Container Network Interface (CNI). Here, we’ll discuss the origin of these models, as well as a third area that includes the Apache Mesos ecosystem.

Container Network Model and Libnetwork

Docker’s extensibility model adds capability to the daemon the way a library adds capability to an operating system. It does involve a code library, one similar to Docker’s runtime but used only as a supplement. That library is called libnetwork; it was originally produced as a third-party project by the development team SocketPlane, which Docker Inc. acquired in March 2015.




Essentially, libnetwork provides a platform on which developers may write network drivers. The binding principle of libnetwork is called the Container Network Model (CNM), which was conceived as a kind of bill of rights for containers. One of those rights is equal access to all other containers in a network; partitioning, isolation, and traffic segmentation are achieved by dividing network addresses. A service discovery model provides a means for containers to contact one another.
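To make that concrete: libnetwork exposes its driver surface to out-of-process plugins as a small JSON-over-HTTP protocol, served on a Unix socket that the Docker daemon discovers under /run/docker/plugins. Below is a minimal sketch of such a “remote” network driver in Go; the plugin name “demo” and the do-nothing CreateNetwork handler are illustrative assumptions, not a real driver.

```go
// A minimal sketch of a libnetwork "remote" network driver, assuming the
// plugin name "demo". Docker discovers the plugin through a Unix socket
// in /run/docker/plugins and speaks JSON over HTTP to it.
package main

import (
	"encoding/json"
	"log"
	"net"
	"net/http"
)

func reply(w http.ResponseWriter, v interface{}) {
	w.Header().Set("Content-Type", "application/vnd.docker.plugins.v1+json")
	json.NewEncoder(w).Encode(v)
}

func main() {
	mux := http.NewServeMux()

	// Handshake: tell the daemon which plugin APIs this process implements.
	mux.HandleFunc("/Plugin.Activate", func(w http.ResponseWriter, r *http.Request) {
		reply(w, map[string][]string{"Implements": {"NetworkDriver"}})
	})

	// Scope "local" means the driver manages networks on this host only;
	// multi-host drivers report "global".
	mux.HandleFunc("/NetworkDriver.GetCapabilities", func(w http.ResponseWriter, r *http.Request) {
		reply(w, map[string]string{"Scope": "local"})
	})

	// Called when a network is created with this driver; a real driver
	// would allocate bridges, tunnels, or routes here.
	mux.HandleFunc("/NetworkDriver.CreateNetwork", func(w http.ResponseWriter, r *http.Request) {
		var req struct{ NetworkID string }
		json.NewDecoder(r.Body).Decode(&req)
		log.Printf("create network %s", req.NetworkID)
		reply(w, struct{}{})
	})

	// Listen where the daemon looks for plugins (requires root).
	l, err := net.Listen("unix", "/run/docker/plugins/demo.sock")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(l, mux))
}
```

With the plugin running, a command such as `docker network create -d demo mynet` would route the CreateNetwork call, and the subsequent CreateEndpoint and Join calls for each attached container, to this process.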

The intention is for libnetwork to implement and use any kind of networking technology to connect and discover containers; it does not prescribe one preferred methodology or overlay scheme. Project Calico, for example, is an independent, open source project developing a vendor-neutral Layer 3 networking scheme, and its developers have recently made its calicoctl component addressable by way of a Docker plugin.

ClusterHQ was one of the first companies to implement a persistent container system for databases, called Flocker. It uses libnetwork, and is addressable using Weaveworks’ Weave Net overlay. As ClusterHQ Vice President of Product Mohit Bhatnagar told us, “I think we are at a point where customers who initially thought of containers for stateless services need to realize both the need and the potential for stateful services. And we are actually very pleasantly surprised about the number of customer engagements … regarding stateful services.”

The critical architectural distinction for the Docker scheme concerns just what part is being extended. In Docker architecture, the daemon of Docker Engine runs on the host server where the applications are being staged. Docker Swarm reconfigures Docker Engine’s view of the network, replacing it with an amalgamated view of servers running in a cluster. Swarm is effectively the orchestrator, but plugins can extend Docker Engine at a lower layer.

Container Network Interface

The Kubernetes project published guidelines for implementing networked extensibility: a container should be capable of addressing other containers’ IP addresses without resorting to network address translation (NAT), and should permit itself to be addressed the same way. Essentially, as long as a component is addressable with IP, Kubernetes is fine with it. In that context, theoretically anything could extend what you do with Kubernetes, but nothing had to be bound to it.

“We looked at how we were going to do networking in Kubernetes,” explained Google Engineering Manager Tim Hockin, “and it was pretty clear that there’s no way that the core system could handle every network use case out there. Every network in the world is a special snowflake; they’re all different, and there’s no way that we can build that into our system. We had to externalize it, and plugins are the way we’re going to do that.”

Then CoreOS produced its Container Network Interface (CNI). It’s more rudimentary than CNM, in that it only has two commands: one to add a container to a network, and one to remove it. Configuration files written in JSON describe the network and set the container up with an IP address. But since that address follows Kubernetes’ guidelines, Google decided it’s fine with CNI. As a result, Flannel and Weave Net have been implemented as Kubernetes plugins using CNI.
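The interface really is that small. A runtime drives a CNI plugin by executing a binary, passing the verb and container details in environment variables and the network configuration as JSON on standard input; the plugin prints its result, including the assigned IP, on standard output. Here is a minimal sketch of that handshake in Go, using the reference “bridge” and “host-local” plugins; the plugin path, container ID, and namespace path are assumptions for illustration.

```go
// A minimal sketch of how a runtime invokes a CNI plugin: the plugin is an
// executable that reads a JSON network configuration on stdin and takes its
// verb and container details from environment variables. Paths and IDs below
// are assumed for illustration.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// The network configuration, as a CNI JSON document. "bridge" and
// "host-local" are reference plugins shipped with CNI.
const netConf = `{
  "cniVersion": "0.2.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}`

func main() {
	cmd := exec.Command("/opt/cni/bin/bridge") // assumed plugin location
	cmd.Stdin = bytes.NewBufferString(netConf)
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD", // the only other verb is DEL
		"CNI_CONTAINERID=example-container",
		"CNI_NETNS=/var/run/netns/example", // network namespace to wire up
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("plugin failed: %v\n%s", err, out)
	}
	// On success, the plugin prints a JSON result with the IP it assigned.
	fmt.Printf("%s", out)
}
```

Running the same binary with CNI_COMMAND=DEL and the same configuration tears the attachment down; that is the entire surface of the interface.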

Hockin acknowledged that while such extensions enable new and flexible forms of networking into a Kubernetes environment, they also incur some costs. “The general Kubernetes position on overlays is, you should only use them if you really, really have to. They bring their own levels of complexity and administration and bridging. We’re finding that more and more users of Kubernetes are going directly to L3 routing, and they’re having a better time with that than they are with overlays.”

After re-evaluating the current state of extensibility frameworks, ClusterHQ concluded that which model you choose will depend, perhaps entirely, on how much integration you require between containers and pre-existing workloads.

“If your job previously ran on a VM, and your VM had an IP and could talk to the other VMs in your project,” explained ClusterHQ Senior Vice President of Engineering and Operations Sandeepan Banerjee, “you are probably better off with the CNI model and Kubernetes and Weave.” Banerjee then cited Kubernetes’ no-NAT stipulation as the key reason.

“If that is not a world that you are coming from,” he continued, “and you want to embrace the Docker framework as something you see as necessary and sufficient for you across the stack — including Docker’s networking library, Swarm as an orchestration framework, and so on — then the Docker proposal is powerful, with merits, with probably a lot more tunability overall.”

Mesosphere and Plugins from the Opposite End

Mesosphere has produced perhaps the most sophisticated commercial implementation of Mesos in DC/OS, and has built on top of it an effective competitor to Kubernetes in the form of its Marathon orchestrator.

Because Mesos is a scheduling platform, the job of extending its reach has historically been done from the opposite side of the proverbial bridge: enabling scheduling for big data processes in Hadoop, job management processes in Jenkins, and container deployment in Docker has all been handled from within those respective platforms.

But in the summer of 2016, Mesosphere took a different stance, paving the way for properly interfaced containers to extend Mesos by way of CNI. At the time of this writing, Mesosphere had published a document stating its intent to implement CNI support in the near term of the DC/OS roadmap.


“We’re in a world today where there’s enough different vendors out there, with varying interfaces and implementations for networking and storage,” said Ben Hindman, founder and chief architect at Mesosphere, “that the means of doing plugins, I think, is a pretty important part. What I think is not so clear now, is whether or not the plugins that were defined by Docker will become the universal plugins. And I think what you’re seeing in the industry already today is, that’s not the case.”

Currently, DC/OS uses an open source load balancing and service discovery system called Minuteman to connect containers to one another. It works by intercepting packets as they’re being exchanged from a container on one host to a container on another, and rewriting them with the proper destination IPs. This accomplishes the cross-cloud scope that distinguishes DC/OS from other implementations. Alternatively, DC/OS offers a mechanism for setting up a virtual extensible LAN (VXLAN) and establishing routing rules between containers in that virtual network. Rather than reinventing the wheel here, Mesosphere gives users their own choice of overlay schemes, based on performance or other factors.
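Conceptually, what Minuteman does is destination rewriting keyed on a virtual IP: a lookup against a service-discovery table, followed by an address rewrite. The sketch below illustrates only that idea in Go; the types, addresses, and random backend choice are hypothetical, and the real system does this per connection in the kernel’s packet path while tracking backend health.

```go
// A conceptual sketch of the load-balancing idea behind Minuteman: traffic
// addressed to a service's virtual IP has its destination rewritten to a
// backend container on some host. Types and addresses are illustrative only.
package main

import (
	"fmt"
	"math/rand"
)

type packet struct {
	srcIP, dstIP string
	dstPort      int
}

// Virtual IP -> backends, as the service-discovery layer would populate it.
var backends = map[string][]string{
	"11.0.0.5": {"10.0.1.7", "10.0.2.9", "10.0.3.4"},
}

// rewrite picks a backend for a packet addressed to a virtual IP and
// rewrites the destination so it can be routed to the real container.
func rewrite(p packet) packet {
	if pool, ok := backends[p.dstIP]; ok {
		p.dstIP = pool[rand.Intn(len(pool))]
	}
	return p
}

func main() {
	p := packet{srcIP: "10.0.4.2", dstIP: "11.0.0.5", dstPort: 80}
	fmt.Printf("%+v\n", rewrite(p))
}
```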

Hindman told us he sees value in how Flannel, Weave, and other network overlay systems solve the problem of container networking, at a much higher level than piecing together a VXLAN. The fact that such an alternative would emerge, he said, “is just capturing the fact that we, as an industry, are sort of going through and experimenting with a couple of different ways of how we might want to do stuff. I think that we’ll probably settle on a handful of things, and overlays are still going to be there. But there are going to be some other ways in which people link together and connect up containers that are not using pre-existing, SDN-based technologies.”

Integration Towards the Future

Today, containerization is not often found as a line item in IT and data center budgets; integration is. When the people signing the checks don’t quite understand the concepts behind the processes they are funding, integration often provides them with as much explanation as they require to invest both their faith and their capital expense. Some might think this is a watering down of the topic. In truth, integration is an elevation of the basic idea to a shared plane of discussion. Everyone understands the basic need to make old systems coexist, interface, and communicate with new ones. So even though the methodologies may seem convoluted or impractical in a few years, the inspiration behind working toward a laudable goal will have made it all worth pursuing.

CoreOS, Docker, and Mesosphere are sponsors of The New Stack.

