2016-09-22



Vivek Juneja

Vivek Juneja, an engineer based in Seoul, is focused on cloud services and microservices. He started working with cloud platforms in 2008 and was an early adopter of AWS and Eucalyptus. He’s also a technology evangelist and speaks at various technology conferences in India. He writes at www.cloudgeek.in and www.vivekjuneja.in, and loves grooming technology communities.

So you have an application that is composed around containers. You have lightweight base images, a centralized container registry, and integration with the deployment and continuous integration (CI) pipeline: everything needed to get containers working at full scale on your hardware. For running a multitier application, you have set up a service discovery mechanism for your application containers. You have a logging mechanism that pulls the information from each container and ships it to a server to be indexed. Using a monitoring tool that is well suited for this era of disposable machines, you see an aggregate of your monitoring data, grouped around container roles. Everything falls nicely into place.

You’re ready to take this to the next level by connecting your pipeline to production. The production environment is where the containers will see the most entropy. Rolling containers into production requires that you spend time building a canary release system to implement a rolling upgrade process. Every change travels neatly from the development environment to your production environment, shutting down one container at a time and replacing it with a brand new version of your code. This is what usually comes to mind when we talk about adopting containers at a high level.

However, to the true practitioner, this is the tip of the iceberg. Doing everything mentioned earlier still does not guarantee a perfect environment for your containers. There’s still potential to have your plans derailed, and worse, create conditions that may shake your confidence in containers. We’ll explore these issues around container networking, storage and security.

Container Networking

Containers do not live in isolation; they need to be discoverable by, and available for connection to, other services. Irrespective of where a container lives in a given fleet of machines, the goal is to reach the destination container reliably and quickly. Networking in the container realm is often intertwined with service discovery. While networks change across development, testing and production environments, service discovery must remain consistent; the service discovery mechanism has to stay common across the varied networks where containers are deployed.

If you have just started using containers in production, there are some key questions that need answering to help stabilize your approach:

How do you select the right network configuration for a given scenario? Should you use a bridge, overlay, underlay or another networking approach?

How does service discovery integrate with the various container network configurations?

How do you monitor a container network and identify bottlenecks with its performance?

How do you visualize a network topology running across multiple hosts?

How do you secure container networks?

How do you isolate networks when running containers belonging to varied tenants on the same physical or virtual hosts?

We will address each of these concerns before moving on to other misunderstood aspects of containers in production.

Network Configuration

It is recommended to have containers use the host network, instead of a bridged network, if the services running in the container need to be exposed to outside users. This is primarily because the bridged network adds latency through its virtual Ethernet (vEth) connection. When containers use the host network, port number conflicts can be a cause for concern. To resolve that, the application service in the container is configured to run on a dynamic port provided at runtime, rather than on a default port. For example, when running a Tomcat container for a Java application, the server and Apache JServ Protocol (AJP) port numbers can be supplied at runtime using operating system (OS) environment variables.

The environment (ENV) variables SERVER_HTTP_PORT and SERVER_AJP_PORT are used as references, since the Tomcat image is modified to run the Tomcat server on the supplied ports. This prevents every container from binding to the same fixed port on the host, and allows multiple instances of containers built from the same image to run at the same time on the same host.
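As a minimal sketch, assume a customized Tomcat image (here called my-tomcat-app, an illustrative name) whose startup script reads SERVER_HTTP_PORT and SERVER_AJP_PORT and adjusts server.xml before starting Tomcat:

```sh
# First instance on the host network, using one pair of ports.
docker run -d \
  --net=host \
  -e SERVER_HTTP_PORT=8081 \
  -e SERVER_AJP_PORT=8010 \
  my-tomcat-app:1.0

# A second instance of the same image can run on the same host,
# as long as it is given a different pair of ports.
docker run -d \
  --net=host \
  -e SERVER_HTTP_PORT=8082 \
  -e SERVER_AJP_PORT=8011 \
  my-tomcat-app:1.0
```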

The host network also avoids the constant changes to iptables that are common with the bridge network; you would not want those changes in a production environment where iptables may be used for firewall configuration. The bridged network is commonly used in development and testing environments to allow multiple concurrent containers of the same kind to run on a set of shared hosts. Port mappings are the way bridged containers are exposed to end users.
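For comparison, a hedged example of the bridged pattern used in development and testing: each container keeps its default internal port, and the host publishes a different external port for each instance (image name and ports are illustrative).

```sh
# Both containers listen on 8080 internally; the host maps
# a distinct external port to each one.
docker run -d -p 8081:8080 my-tomcat-app:1.0
docker run -d -p 8082:8080 my-tomcat-app:1.0
```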

Container orchestration platforms, like Kubernetes, also offer a pod model, in which the containers in a pod share the same IP address. This is useful for grouping application services that usually work together.

Docker features overlay networks that enable easy creation of multi-host networks with per-container internet protocol (IP) addresses. Other solutions, like Calico, Flannel and Weave, can integrate with Docker as network plugins. This space is rapidly developing, so the advice is to test the performance and reliability of these technologies before adopting them in production.
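A sketch of what this looks like with Docker's built-in overlay driver, assuming the daemons in the cluster are already configured to share a key-value store for network state; the network name, subnet and image name are illustrative:

```sh
# Create a multi-host overlay network; each container attached to it
# gets its own IP on the shared subnet, regardless of which host it runs on.
docker network create -d overlay --subnet=10.10.0.0/24 app-tier

# Attach a container to the overlay network.
docker run -d --net=app-tier --name api my-api-image:1.0
```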

Service Discovery and Container Networking

Service discovery is usually an infrastructure concern; it allows applications deployed as containers to transparently reach each other, typically by means of the domain name system (DNS). If containers are deployed on the host network in production, then a proxy must exist that can route incoming requests to the containers. Early on, a service discovery solution based on Consul and Registrator was an easy way to set up discoverable containers.
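A hedged sketch of that pattern: Registrator watches the Docker socket and registers each container's published services with a Consul agent running on the same host, after which services can be resolved through Consul's DNS interface. The service name is illustrative; verify image tags and flags against the current Consul and Registrator documentation.

```sh
# Run a Consul agent on the host.
docker run -d --name=consul --net=host consul agent -server -bootstrap -client=0.0.0.0

# Registrator watches the Docker socket and registers containers in Consul.
docker run -d --name=registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest consul://localhost:8500

# Registered services can then be resolved via Consul's DNS port.
dig @127.0.0.1 -p 8600 my-service.service.consul
```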

This has evolved, thanks to the introduction of overlay networks and an array of third-party plugins like Calico and Weave, whose implementations require a key-value store to coordinate network updates across hosts. In some solutions, DNS is the basis of service discovery. However, DNS has its own pitfalls, as local caching may affect the discovery process when containers are frequently changing or moving across hosts. Other solutions include services like HAProxy or Traefik, which can work as reverse proxies in front of different orchestration backends.

Port-based service discovery is often hard to use, but will work for small-scale clusters with only a few applications. Managing the lifecycle of ports while maintaining proxy configurations for discovery can spiral out of control and is difficult to debug. If you choose a network mode that provides an IP per container, use a service discovery mechanism that is integrated into the network provider. This reduces the number of moving parts in the solution and simplifies the relationship between networking and service discovery.

Monitoring and Visualizing Container Networks

When dealing with containers in production, it is important to understand how they interact with each other. This is vital to help diagnose issues and alleviate the chance of misconfiguration. Thankfully, the container ecosystem actively supports this requirement. For instance, Weave Scope provides an overview of a containerized application’s interconnections across a given set of hosts. This is crucial to understanding how containers communicate with each other and with other uncontained services. Container-native monitoring tools, like Sysdig, offer a view of real-time traffic movement across container instances. When the container count increases and individual container monitoring becomes difficult, they can be grouped together.
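As an illustration, Weave Scope can be launched directly on the hosts being observed; the commands below follow the Weave Scope documentation of the time and should be verified against the current docs:

```sh
# Install the scope launcher script and start the probe and UI on this host.
sudo curl -L git.io/scope -o /usr/local/bin/scope
sudo chmod a+x /usr/local/bin/scope
scope launch
```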

Monitoring and performance management in container networks is the primary topic of the next and final book in The Docker and Container Ecosystem series, Monitoring & Management with Docker and Containers. We’ll go more in-depth on this topic then, as the monitoring space itself is complicated and extremely important for showing the value of containers in production.

Isolating Networks

When running services belonging to different tenants on shared infrastructure, isolation is needed to protect the network connections between related containers. Docker addresses this need with user-defined networks: containers belonging to the same tenant are connected by a network that is separate from other tenants' networks. Docker also provides overlay networks that span multiple hosts. There can be any number of overlay networks, each ideally dedicated to one set of related containers.
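A minimal sketch of this isolation, with illustrative tenant and image names; containers attached to one user-defined network cannot reach containers attached to the other:

```sh
# One user-defined network per tenant.
docker network create tenant-a
docker network create tenant-b

# Containers attached to tenant-a cannot reach tenant-b's containers.
docker run -d --net=tenant-a --name tenant-a-db postgres:9.5
docker run -d --net=tenant-b --name tenant-b-db postgres:9.5
```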

Container Storage

Containers have become an essential component of the continuous delivery (CD) story for most organizations. Moreover, containers are the best packaging means for microservices. Running containers in a production environment entails pulling layers of filesystem changes and stacking them, one over the other, to form the container, a pattern popularized by Docker's container runtime.

Some adopters have their applications bundled as containers, but still rely on local container storage or some form of host-mounted volume. While this works on a small scale, the practice quickly runs out of steam as it scales across tens, hundreds or thousands of hosts. Hence, it is not surprising that most container advocates recommend steering away from ephemeral container storage and moving state outside of the container.

Lightweight kernels and minimal base images have emerged over the last few years, purpose-built for the demands of applications. This rise is because a lightweight container image contributes to faster deployment, which in turn leads to rapid feedback loops for developers. Teams running containers in production environments often find themselves doing general housekeeping, which entails getting rid of old container images and volumes left behind by continuous deployments of new image versions. Regular clean-up ensures the hosts never run out of storage space and keeps the filesystem drivers performing well.

Similar to networking, we want to go over some key questions that need answering before adopting storage for containers:

How do you select the right filesystem driver for a given deployment case?

How do you select the right persistent storage backend for containers?

How do you reduce the size of the container images?

How do you retire old containers to keep the filesystem under control?

Addressing the Filesystem Drivers

Many filesystem drivers are available for use with Docker. Advanced multi-layered unification filesystem (AUFS) is very common; it is stable and has had success in deployments. AUFS mounts are very fast, and for small files it offers near-native read and write speeds. However, it can add latency to large write operations, primarily because of its copy-up behavior: the first write to a file copies the entire file into the container's top layer. For large files, it is better to use a Docker data volume.

OverlayFS is a relatively new driver and offers faster read and write speeds than AUFS; however, it is important to test it for a certain period before moving to production, primarily because it is a recent addition to the Linux kernel. Docker offers OverlayFS as a driver, and the rkt container runtime also uses it on newer Linux kernels.

If you choose to use Device Mapper and are running multiple containers on a host, it is preferable to use real block devices for data and metadata, rather than the default loopback files.
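A hedged example of inspecting and changing the storage driver; the thin-pool device path is illustrative and assumes an LVM thin pool has already been created on a real block device:

```sh
# Check which filesystem driver the daemon is currently using.
docker info | grep "Storage Driver"

# Point the devicemapper driver at a real thin-pool device instead of
# the default loopback files (device path is illustrative).
dockerd --storage-driver=devicemapper \
  --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool
```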

Persistent Storage for Containers

The ideal case for a containerized application is to have its state managed completely outside the realm of its execution; this means the container leverages an external data source for all of its state requirements. However, you can run stateful services as containers. In that case, you can choose between a container-managed volume and mounting a directory from the host into the container. You could also use a Docker volume plugin, like Flocker, to allow containers to access shared storage. Utilizing plugins makes it much easier for containers to move across hosts while still accessing persistent block storage.
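A short sketch of the three options discussed above, with illustrative names; the plugin example assumes Flocker's volume driver is installed on the host:

```sh
# 1. Named, container-managed volume.
docker volume create --name app-data
docker run -d -v app-data:/var/lib/app my-app:1.0

# 2. Host-mounted directory.
docker run -d -v /srv/app-data:/var/lib/app my-app:1.0

# 3. Volume backed by a plugin such as Flocker, so the volume
#    can follow the container across hosts.
docker run -d --volume-driver=flocker -v app-data:/var/lib/app my-app:1.0
```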

It is also possible to build data-only containers. However, that practice is now discouraged for production environments because named volumes provide better volume management as well as the ability to use other drivers for volume storage. Data-only containers also create the problem of identifying the right data container to use, which becomes harder if you have multiple data containers in your production environment.

One of the issues with host-managed volumes is permission conflicts. Files on the host with different ownership privileges can create problems when containerized applications access them, forcing you to change the ownership of the host-managed files to match the user the application runs as inside the container. This can be painful to manage if the shared host volume keeps changing as new files arrive from a different source.

A better practice is to check how often the persistent data store changes outside the context of the containerized application. If the persistent store is purely managed by the container, meaning all files are created and changed by the containerized application, then it is prudent to use named volumes. If the persistent data store is also changed outside the container, then it is better to use a data volume backed by host-mounted storage. In this case, conflicting file permissions can pose challenges, so the file permissions need to be aligned before the application inside the container accesses the files.
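A hedged sketch of aligning permissions on a host-mounted volume; the UID, paths and image name are illustrative:

```sh
# Give the host directory to the UID the containerized application runs as,
# then run the container under that same UID so reads and writes succeed.
sudo chown -R 1000:1000 /srv/app-data
docker run -d -v /srv/app-data:/var/lib/app --user 1000:1000 my-app:1.0
```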

Size of Container Images

The general rule of thumb is to keep the container image size as small as possible. This reduces the amount of data the container host has to fetch when pulling the image from the registry. One way to avoid overgrown and bloated container images is to avoid installing unnecessary packages, especially through the package manager built into the container. For container runtimes that use copy-on-write (COW) mechanisms, this discipline in creating Dockerfiles also helps create lightweight images. Avoid unnecessary layers, which increase the container image size, by combining RUN commands into one line, so that they become a single layer when the image is built.
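For example, a single combined RUN instruction that installs packages and clears the package cache in the same layer (package names are illustrative):

```dockerfile
FROM debian:jessie
# One RUN instruction produces one layer; clearing the apt cache in the
# same instruction keeps the deleted files out of the final image.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*
```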

When using lightweight base images like Alpine for Docker, you need to make sure your build instructions work with the apk package manager, which may use different package names from those available through popular package managers like APT.
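The same idea on Alpine, using apk; the package names are illustrative, and --no-cache keeps the package index out of the image:

```dockerfile
FROM alpine:3.4
# apk package names can differ from their APT equivalents.
RUN apk add --no-cache curl ca-certificates
```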

Retiring Old Container Images, Volumes and Instances

One common issue when deploying containers is running out of disk space as different containers and their associated volumes accumulate. Host housekeeping needs to watch out for old container images that have been around for a while and no longer need to be on the host. If your continuous delivery pipeline is active, putting out tens or hundreds of changes a day in production, it is likely that old images are piling up. Deleting unused container images is one tactic available through the container runtime tooling. Note that a container runtime like Docker does not delete container volumes when container instances are deleted, so volumes linger unless they are removed explicitly.

However, it is often the cleanup process that creates the most confusion. Data-only containers behave like ordinary containers and can be mistakenly flagged for deletion during cleaning activities. Moving to named volumes prevents this mix-up with ordinary application containers. The named volumes themselves are not affected when a container is removed, so they can be reused across the lifecycle of multiple containers.
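A hedged housekeeping sketch using filters that ship with the Docker CLI; in practice this would run on a schedule or as part of the deployment tooling:

```sh
# Remove stopped containers.
docker rm $(docker ps -a -q -f status=exited)

# Remove dangling (untagged) images left behind by repeated builds.
docker rmi $(docker images -q -f dangling=true)

# Remove volumes no longer referenced by any container.
docker volume rm $(docker volume ls -q -f dangling=true)
```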

Container Security

When containers first started becoming popular, one of the main concerns was whether containers were secure. This concern became more prominent when considering whether to use containers on a shared host running varied tenants. With the granular permission models now available in container runtimes, and with isolated user, network and process namespaces, the state of container security has greatly stabilized. There are still some areas of debate and concern, however; how containers should be given secrets like credentials or access keys is still in contention.

Security concerns also affect the way container storage and networks are implemented. Vulnerability scans and signed container images are becoming well-known practices among developers. Container marketplaces, like Docker's Hub and Store, have already implemented some of these practices, which benefits users who may have limited or no understanding of these problems.

Here are some key questions that need to be answered when considering the security of container deployments in production environments:

How do you propagate critical security patches to container images in the production environment?

How do you validate container images obtained from external sources?

How do you enable verification of the container images before deploying them in production?

How do you allow containers to have access to secure keys and credentials without exposing them to the host or other co-located containers?

How do you prevent a compromised container from disturbing the host or other co-located containers?

How do you test for secure container deployments?

Integrating Security Patches Into Container Base Images

Containerized applications are composed of a base image and application-level changes, which are baked into the final image for deployment. Usually, the base image changes less often, while the application changes are constantly baked in through continuous integration. When security patches are issued for the operating environment, the change is applied to the base image.

The base image is rebuilt with the planned change and is then used as the new base to rebuild the application containers. It is important to have a consistent image across all environments. A change in the base image is like any other change and is passed to the production environment through the continuous delivery pipeline.
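A hedged sketch of that flow, with illustrative registry paths and tags: the patched base image is rebuilt and pushed first, then application images that reference it in their FROM line are rebuilt and promoted through the pipeline as usual.

```sh
# Rebuild and publish the patched base image.
docker build -t registry.example.com/base/java:8u102 ./base-image
docker push registry.example.com/base/java:8u102

# Application Dockerfiles reference the patched base in their FROM line,
# so a normal CI rebuild picks up the security fix.
docker build -t registry.example.com/apps/orders:2.4.1 ./orders-service
docker push registry.example.com/apps/orders:2.4.1
```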

Trusted Container Images and Deployment

Docker introduced Content Trust from version 1.8 onwards. It uses the developer or publisher’s private key to sign the image before pushing the image to the registry. When fetching the same image, the public key of the developer is used for verification. This mechanism can be integrated into the continuous integration and deployment pipeline. Docker also offers an enterprise-grade image storage solution called Docker Trusted Registry. CoreOS’ rkt also offers trusted builds and deployment, and can be configured to obtain the keys either from the local store or a remote metadata store.
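For example, with content trust enabled in the shell driving the pipeline, pushes are signed and pulls of unsigned or tampered images fail; the image name is illustrative.

```sh
# Enable content trust for this shell session.
export DOCKER_CONTENT_TRUST=1

# Push is signed with the publisher's key; pull verifies the signature.
docker push registry.example.com/apps/orders:2.4.1
docker pull registry.example.com/apps/orders:2.4.1
```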

Accessing Secrets from Containers

Secrets include any information that you would not want to leak to unauthorized agents, but that the container requires to access external resources. Common ways to inject these secrets have been to use environment variables or to bundle them in a build manifest.

Both approaches have inherent flaws that can expose the secrets to unauthorized agents. Containers deployed on AWS that need secret keys can take advantage of identity and access management (IAM) roles instead of having keys passed around insecurely. The EJSON library from Shopify allows you to encrypt secret information so it can be committed to the source code repository. Keywhiz from Square and Vault from HashiCorp act as dedicated secret stores.
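A short sketch of why environment variables leak, and a hedged file-based alternative; the variable name, secret path and image are illustrative, and the secret file would be populated by a store such as Vault or Keywhiz:

```sh
# Anti-pattern: anyone with access to the Docker API on the host can read
# environment variables back out of a running container.
docker run -d -e DB_PASSWORD=s3cret my-app:1.0
docker inspect --format '{{.Config.Env}}' <container-id>   # exposes DB_PASSWORD

# Alternative: mount the secret as a read-only file with tight permissions.
docker run -d -v /run/secrets/db_password:/run/secrets/db_password:ro my-app:1.0
```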

Separating Compromised Containers

A compromised container can potentially harm the container host. One way to address this is to enable SELinux and sVirt. sVirt works with Docker to prevent container processes from gaining access to the host system. A detailed account has been shared by Project Atomic.

CoreOS has also integrated sVirt with the rkt runtime; it is enabled by default whenever a new container is run. This is especially critical if a developer uses an untrusted container image from the public web without inspection. Alternatively, for Docker, the --cap-drop flag allows Linux capabilities to be revoked from a given container.
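A hedged example of the capability-dropping approach, with an illustrative image name: drop everything and add back only the capability the service actually needs, such as binding to a privileged port.

```sh
# Remove all Linux capabilities, then grant back only NET_BIND_SERVICE.
docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE my-web:1.0
```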

Secure Container Deployments

The Docker Bench program provides an automated tool that checks against the best practices advocated in the Center for Internet Security (CIS) Docker Benchmark v1.11 report. This will help identify hotspots in your infrastructure that need attention before putting services into production. Another important step is to have a secure container host. The Docker security deployment guidelines are a good resource for strengthening your Docker container runtime environment.
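Docker Bench for Security can be run directly from its repository, as its documentation describes; the script inspects the host, daemon configuration and running containers against the benchmark's checks:

```sh
# Fetch and run the benchmark script on the container host.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```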

Summary

Successfully adopting containers is a difficult task for many. Adopting containers in production environments is not possible without giving a great deal of consideration to container networking, storage and security. As more organizations practice this art, new patterns and practices will emerge, making container deployments the de facto standard for applications built in the future. The goal is to rethink how security, storage and networking will evolve as container counts in production grow, running not just tens or hundreds of nodes, but thousands at a time.

CoreOS, Docker, Sysdig, and Weaveworks are sponsors of The New Stack.
