2016-03-11



In 2016, the concepts and advantages of Continuous Integration (CI) should be well known to most software developers. There are plenty of tools, books and blog articles that cover the topic. One of the seminal articles, written by Martin Fowler in 2006, covers good practices of Continuous Integration. All of these best practices are still valid today, and I will go through each of them later in this article.

Update: Marcel will be a featured panelist on the “Continuous Integration Using Docker” webinar, March 16.

Every once in a while a customer asks us for help improving their existing CI infrastructure or getting started with the concepts and the right tools. At other times we give demo presentations, hold workshops or organize OpenSpaces. In all of these cases we typically need a Continuous Integration platform that is ready to compile, test, deploy and run a piece of software.

For that reason I decided to set up a handful of Docker containers that are up and running with a single command, without having to install every single tool manually. Another nice advantage of using Docker containers is that you can easily try out new versions of the tools. All the tools I am using are Open Source, and each has commercial support available if your company requires that.

Get me started right away

In case you cannot wait and want to try out the CI tools right away, simply follow the next steps. You need to have Git and Docker installed on your computer:

Git (Distributed Version Control System)

Docker (Lightweight Container Virtualization)

After that, you can clone the GitHub repository and start all containers using docker-compose.
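A minimal sketch of those two steps, assuming the repository lives at marcelbirkner/docker-ci-tool-stack (adjust the URL if yours differs):

```sh
# Clone the repository that contains all Dockerfiles and the docker-compose.yml
git clone https://github.com/marcelbirkner/docker-ci-tool-stack.git
cd docker-ci-tool-stack

# Build and start all containers; add -d to run them in the background
docker-compose up
```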

If you are running Docker for the first time on your computer, it might take some time to download all images. While you are waiting, this might be a good time to read up on some Docker basics. I suggest the article by Lukas Pustina – Lightweight Virtual Machines Made Simple with Docker.

Once all Docker images have been downloaded, you will be able to access the tools locally on your machine. The IP depends on your local settings; use docker-machine to figure out your Docker IP.
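With Docker Toolbox, the lookup is a one-liner (assuming your Docker machine is named default):

```sh
# Print the IP address of the Docker machine named "default"
docker-machine ip default
```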


| Tool | Link | Credentials |
| --- | --- | --- |
| Jenkins | http://${docker-machine ip default}:18080/jenkins/ | no login required |
| SonarQube | http://${docker-machine ip default}:19000/ | admin/admin |
| Nexus | http://${docker-machine ip default}:18081/nexus | admin/admin123 |
| GitLab | http://${docker-machine ip default}:10080/ | root/5iveL!fe |
| Selenium Grid | http://${docker-machine ip default}:4444/grid/console | no login required |

Here is an overview of the tools. Each tool runs in a separate Docker container.



GitLab is used for storing the source code in Git repositories and is a great alternative to GitHub.com. GitLab uses a PostgreSQL database for storing user information and Redis for queuing background tasks. For more details on the Redis and PostgreSQL Docker containers, have a look at the GitLab architecture documentation.

Jenkins is used for automating the software development process. The Jenkins Docker container contains several example Maven build jobs that run unit tests, execute the static source code analysis and deploy the build artifacts to Nexus.

Nexus is a typical Maven artifact repository. The Maven build uses Nexus as a proxy repository for third-party libraries. After the build, the packaged artifacts are deployed to the Nexus release repository.

SonarQube is one of the most widely used source code quality management tools. As part of the CI build, Jenkins triggers a static source code analysis and the results are stored in SonarQube. Under the hood it uses typical code analysis frameworks like FindBugs, Checkstyle, PMD and others.

The Selenium Grid is used for managing different browser types. The Docker CI stack contains one Docker container with Firefox installed and one with Chrome, so you can run your UI tests against different browsers.

Practices of Continuous Integration

Let’s have a look back at the best practices of Continuous Integration that Martin Fowler described in his article and check which tool is used for which purpose.

Practice 1 – Maintain a Single Source Repository

Everything you need to build your source code and run your software should be kept in a version control system (VCS). According to a simple Google Trends analysis, Git is the most popular VCS around. GitHub is the most popular Git repository hosting service. If you want to host your Git repositories on your own servers, GitLab might be the right tool for you.

I have installed and used GitLab for several customers and so far we have been quite happy with it. The software runs stably on moderate hardware (e.g. 200 users and 800 projects on 2 CPUs with 4 GB RAM), and upgrades run smoothly. On top of that, GitLab provides a full REST API that can be used for automating the CI process. We have used the REST API heavily together with the Jenkins Job DSL plugin, but that is a topic for a different blog article: https://blog.codecentric.de/en/2015/10/using-jenkins-job-dsl-for-job-lifecycle-management.
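To give a flavor of that kind of automation, here is a hedged sketch that lists all projects via GitLab's REST API (v3 at the time of writing; the access token is a placeholder, the IP matches the tables in this article):

```sh
# List all projects the token owner can see, e.g. as input for generating Jenkins jobs
curl --header "PRIVATE-TOKEN: <your-private-token>" \
  "http://192.168.99.100:10080/api/v3/projects"
```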



Practice 2 – Automate the Build

That’s where Jenkins comes into play. In the early days, developers either used their IDE to package artifacts or used their local command line. That caused several problems, since builds were not reproducible for other developers. For that reason, a central, impartial instance that takes care of compiling and packaging the build artifacts is crucial. Jenkins (formerly known as Hudson) was created by Kohsuke Kawaguchi in 2005. Since then, Jenkins has become the most widely used Continuous Integration server, with about 1,000 plugins and a vibrant community. Jenkins manages all build, test and deploy jobs for all software artifacts. Every developer should have access to Jenkins to be able to access log files, test reports and the job configuration. This gives software teams full transparency of the build process.

Practice 3 – Make Your Build Self-Testing

Once you have a central automated build with Jenkins in place, testing becomes easy and reproducible. Mike Cohn introduced the Test Pyramid in his book “Succeeding with Agile”. The pyramid covers the different types of tests that improve the overall quality of the software. Agile software development processes strive to deploy software not just once a quarter, but every other week or even more frequently. In order to reduce the deployment cycle time, manual testing needs to be automated as much as possible. Unit tests help with testing single modules/classes. Service/integration tests make sure that all modules work together and with other systems (i.e. databases, queues, web services, …). UI tests are more business-centric and test features that provide actual business value. For each layer of the pyramid there are various Open Source frameworks available.

Here is just a short excerpt of tools and frameworks:

UI tests: Selenium, Robot Framework, Protractor

Service tests: JBehave, FitNesse, SoapUI, JMeter (performance tests)

Unit tests: JUnit, TestNG, Mockito (Mock Framework)

I have configured build jobs in the Jenkins Docker container that use Selenium for UI testing and JUnit for unit testing.

Practice 4 – Everyone Commits To the Mainline Every Day

In order to integrate all code changes continuously, it is necessary that every developer commits their code to a Git repository daily. In case a developer's computer crashes, the code changes in the main repository are at least up to date. Additionally, the CI build can verify right away whether new changesets break existing code. If problems arise, it is much easier to roll back the latest changeset than to hunt for the problem in a couple of weeks' worth of changes.

Practice 5 – Every Commit Should Build the Mainline on an Integration Machine

As long as every developer commits to the master branch in Git, this is easy: the CI job is simply triggered using VCS post-commit hooks. Alternatively, in Jenkins you can configure build triggers that poll your VCS regularly for changes.
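As a sketch, a server-side post-receive hook in GitLab can notify the Jenkins Git plugin about new commits via its notifyCommit endpoint (the repository URL is a placeholder and must match the URL configured in your Jenkins jobs):

```sh
#!/bin/sh
# post-receive hook: ask Jenkins to schedule all jobs that watch this repository
curl --silent "http://192.168.99.100:18080/jenkins/git/notifyCommit?url=http://192.168.99.100:10080/root/conference-app.git"
```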

Since branching and merging have become easier with Git, various workflows for developing software with Git have emerged. The most popular one is called GitFlow; Vincent Driessen blogged about it in 2010. In this workflow every feature, bugfix and release has its own branch, which the developer merges back into the develop branch regularly. The develop branch is used for the Continuous Integration builds. The master branch is identical to the source code in production and only gets updated when a new version of the software is deployed to production. The problem with keeping source code separated in multiple branches is that you are no longer doing “continuous” integration of your source code. The longer your branches live, the more complicated it becomes to merge them or to find problems in large changesets.
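A minimal sketch of the GitFlow branch handling with plain Git commands (the feature name is made up):

```sh
# Start a new feature branch off develop
git checkout develop
git checkout -b feature/speaker-profile

# ... work and commit ...

# Merge the finished feature back into develop, which triggers the CI build
git checkout develop
git merge --no-ff feature/speaker-profile
git branch -d feature/speaker-profile
```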

Here is a list of various Git workflows and comparisons that you can examine, in case you want to learn more on the topic:

Git Flow, Vincent Driessen

GitHub Flow

GitLab Flow

Atlassian Git workflow comparison

Practice 6 – Fix Broken Builds Immediately

As part of every Jenkins job you can define post-build actions. These can be used to send emails or instant messages to the developer who broke the build, or to the whole team. That is part of the transparency you get with a CI tool like Jenkins. There are various plugins for integrating Jenkins with modern team communication tools like Slack or HipChat.

Jobs are displayed as broken on the Jenkins dashboard until they are fixed by your developers. It has become a hobby of many developers to build extreme feedback devices such as lamps, traffic lights, illuminated bears and USB missile launchers to catch the attention of the whole development team whenever a build breaks.

Practice 7 – Keep the Build Fast

There are a couple of things you should keep in mind to keep your build fast:

Please use decent hardware for Jenkins, SonarQube, Nexus and whatever other tools make up your CI platform. Do not make the mistake of using an old PC that you found under a table.

Keep your unit tests fast.

Split your build into separate jobs and chain them into a deployment pipeline.

Parallelize as many build steps as possible, e.g. unit tests can run in parallel to improve the overall build time (see the sketch after this list).

Mock slow systems in order to speed up your tests.

In case the previous steps cannot be improved any further, run long-running test suites in nightly builds.
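As an example for the parallelization point above, here is a hedged sketch of a Maven Surefire configuration for your pom.xml that runs unit tests in parallel forks (the version number is just an example):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.19.1</version>
  <configuration>
    <!-- Spawn one forked test JVM per CPU core and reuse the forks -->
    <forkCount>1C</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
```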

Practice 8 – Test in a Clone of the Production Environment

Most of the time it is easy to create a copy of your production environment: it is simply a matter of duplicating a couple of servers. But if your production environment runs on expensive hardware, requires expensive licenses or runs in a cluster with hundreds of servers, creating an identical clone can turn out to be a tough challenge. If that is the case in your company, try to get as close as possible to your production environment, so that you can detect as many problems as possible in your production clone before deploying a new version to production.

Practice 9 – Make it Easy for Anyone to Get the Latest Executable

Every artifact that is built using Jenkins gets versioned and deployed to an artifact repository. The most widely used artifact repositories are Nexus from Sonatype and Artifactory from JFrog. They are both Open Source and provide a REST API for integrating them with other build tools. I am using Nexus in my Docker container here.

Just like Maven Central, your company's Maven repository should be accessible to every developer, since it contains all the third-party libraries and other dependencies your developers need (i.e. JAR/WAR/EAR/ZIP files).
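A hedged sketch of how a developer points Maven at the Nexus instance via ~/.m2/settings.xml (the path assumes the default public group of Nexus 2; replace the IP with your Docker IP):

```xml
<settings>
  <mirrors>
    <mirror>
      <!-- Route all repository requests through the Nexus proxy -->
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://192.168.99.100:18081/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
</settings>
```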

Practice 10 – Everyone can see what’s happening

All CI tools like Jenkins, GitLab, Nexus and SonarQube should be accessible to all team members. All of these tools provide LDAP integration and customizable permission schemes. This adds full transparency to all of your software projects and keeps teams motivated to fix broken builds and failing unit tests.

GitLab provides a nice dashboard which displays the latest code changes

Jenkins provides a customizable dashboard with all jobs results

SonarQube provides a customizable dashboard with the latest static code analysis results

Practice 11 – Automate Deployment

Jenkins is not just great for compiling source code; it can also be used to execute shell and Bash scripts on remote servers, or to run Puppet and Ansible scripts that provision servers. Over the last years, configuration management tools like Ansible, Puppet, Chef and CFEngine have come a long way. It has become good practice to store these configuration management scripts in Git repositories, just like your source code: you should treat your infrastructure as code. That way the operations team can use the full power of a modern version control system and does not need to keep its scripts on network shares.

To deploy your software you should use a single deployment script that is used for deploying to every environment. As a result, the script is automatically tested on the earlier stages before it is used for an actual production deployment.
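A minimal, hypothetical sketch of such a script; the artifact coordinates, host names and the Tomcat setup are placeholders, not part of the tool stack:

```sh
#!/bin/sh
# deploy.sh <environment> <version> -- one script for dev, staging and production
ENVIRONMENT=$1
VERSION=$2

# Fetch the released artifact from the Nexus release repository
curl -o conference-app.war \
  "http://192.168.99.100:18081/nexus/content/repositories/releases/com/example/conference-app/${VERSION}/conference-app-${VERSION}.war"

# Ship it to the target server of the given environment and restart the app server
scp conference-app.war deploy@${ENVIRONMENT}.example.com:/opt/tomcat/webapps/
ssh deploy@${ENVIRONMENT}.example.com "sudo service tomcat restart"
```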

As you can see, all the practices Martin Fowler described in his article still hold true almost 10 years later. The tools have also come a long way since then. Let's now have a closer look at the Docker containers themselves.

Let’s get back to the Docker CI Tool Stack

There are certain best practices for Docker containers that you should keep in mind. For example, each container should only contain one running process. The docker-compose.yml is used to define all images that are built and started when running docker-compose up. As you can see there, the jenkins container is linked with the nexus, gitlab and sonar containers, and each container runs exactly one process. You will find the matching Docker configuration in the respective Dockerfiles.

Here is the docker-compose.yml that I am using for the Docker CI tool stack.
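The full file lives in the repository; the following is only a condensed sketch in Compose v2 syntax to illustrate the linking described above (the image build paths and internal ports are assumptions, the published ports match the tables in this article):

```yaml
version: '2'

services:
  jenkins:
    build: ./jenkins          # custom image with preconfigured example jobs
    ports:
      - "18080:8080"
    links:                    # Jenkins talks to GitLab, Nexus and SonarQube
      - gitlab
      - nexus
      - sonar

  gitlab:
    build: ./gitlab
    ports:
      - "10080:80"

  nexus:
    build: ./nexus
    ports:
      - "18081:8081"

  sonar:
    build: ./sonar
    ports:
      - "19000:9000"
```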

Prerequisites (MacOS)

In this last section I will go into more detail on how to run the Docker containers and which problems you might run into. Running several Docker containers that each host a JVM process requires a certain amount of memory. After you have installed the Docker Toolbox and cloned the Git repository, you are ready to go. Please follow the next steps to increase the memory of your VirtualBox image.

Step 1 – Stop the Docker VM
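Assuming your Docker machine is named default:

```sh
# Stop the VirtualBox VM that hosts the Docker daemon
docker-machine stop default
```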

Step 2 – Increase Memory via VirtualBox UI

I am using 6000 MB for my VM, which is enough for the Docker CI tool stack. You can increase the memory in the VirtualBox UI under System → Motherboard → Base Memory.

Step 3 – Start VM

After you have increased the memory size you can start the VirtualBox image again.
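Again assuming the machine is named default:

```sh
# Boot the VM again with the new memory settings
docker-machine start default
```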

Step 4 – Configure your shell
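The usual Docker Toolbox step, so that your local Docker client talks to the VM:

```sh
# Export DOCKER_HOST and friends for the current shell session
eval "$(docker-machine env default)"
```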

Step 5 – Start all containers

To get all Docker containers up and running, clone the repository and run docker-compose up (the same two commands shown at the beginning of this article).

Done.

Once all Docker images have been downloaded, you will be able to access the tools locally on your machine. My Docker containers are available via 192.168.99.100; use docker-machine ip default if you do not know your Docker IP.

| Tool | Link |
| --- | --- |
| Jenkins | http://192.168.99.100:18080/jenkins/ |
| SonarQube | http://192.168.99.100:19000 |
| Nexus | http://192.168.99.100:18081/nexus |
| GitLab | http://192.168.99.100:10080 |

Screenshots of containers

Jenkins Jobs

There are several jobs preconfigured in Jenkins. The jobs cover the following typical CI tasks:

Continuous Integration Build jobs with Maven

Unit tests with JUnit

Static source code analysis with the Maven Sonar plugin

JaCoCo Test Coverage

Deployment to Nexus

Jenkins Job DSL examples

I am using the conference-app source code from the JavaLand 2015 #OpenSpace for the Jenkins jobs.

SonarQube Dashboard

Once the conference-app CI jobs have run, the static source code analysis results are stored in SonarQube. The dashboard contains several project details:

Code Violations for each project

Unit test results

JaCoCo unit test code coverage

Code duplication

Lines of code

Technical debt

Nexus Repository

Nexus is used as a proxy repository for third-party libraries and contains all released artifacts in the release repository.

Selenium Grid

The Selenium Grid contains Docker containers with Firefox and Chrome preconfigured. You can configure which browser your Selenium tests should run against, which is very useful when testing your web application against multiple browsers and browser versions.

Summary

I hope these Docker containers help you get started with the various CI tools. If you have ideas for improving the Dockerfiles, feel free to fork the repository and send me merge requests.

This article was originally published on codecentric, by Marcel Birkner.
