2016-01-01

The number of options for organizations seeking to implement faster delivery of software to production through containerization platforms, such as Docker, grew substantially in 2015 — that much is not in doubt. But as anyone who’s used Jenkins to build a pipeline will tell you, faster delivery is not the same as continuous integration and/or continuous deployment.

Nearly every core concept of computing whose meaning stays disputed for longer than a year soon gets promoted to a principle or set of principles, and thereafter becomes capitalized (not to mention capitalized upon). Perhaps you’ve noticed that when organizations fail to improve their software productivity, even after decomposing their development departments into smaller teams, folks will say they may be more agile but are not “doing Agile” (capital “A”). Programming languages have included concepts of objects for decades, but veterans admit only some to the ranks of capitalized “OOP,” and argue about the rest. DevOps has the luxury of two permanent capital letters.

Proper Capitalization

Capitalization is happening to CI/CD. When Chef Vice President Jez Humble and consultant Dave Farley authored the defining work on “Continuous Delivery” a mere six years ago, they actually began by re-introducing Martin Fowler’s ideal of continuous integration (lower-case, at least back then). They made the case that the goal of expediting software delivery can only be achieved by perceiving the delivery process as a pattern, one perhaps best modeled as a chain of pipelines.

Originally, CI/CD was not a software architecture, but a means to achieve speed. Because it was framed as a goal rather than a skill, responsibility for its implementation was delegated to the CIO. And since CIOs were believed to be capable of following only one agenda at a time, developers advised them to treat CI/CD as though it were the successor to Agile.

But some of the first institutions to actually achieve CI/CD as a common way for everyone to work were those that could apply it on a huge scale, rather than a small, experimental one. Apcera CEO Derek Collison told The New Stack, “The first time I saw it was at Google.”

“CI/CD and having the developers collectively own the deployment to production is a fairly new thing,” said Collison.

“Most enterprise shops have deep-rooted cultural biases that say these two things, dev/test and production, should always be separated via personnel, processes, etc.”

It should be no surprise to regular New Stack readers that Apcera produces a software deployment platform designed to be integrated with Docker, Git and CloudBees’ Jenkins CI/CD pipelining system. “Trusted platforms can help further this cause,” remarked Collison, “where the platform becomes the source of truth to blend the execution of policies and rules that represent both camps.”

Apcera is one of a growing number of vendors making the case that a platform is necessary for software architectures and engineering policies to coalesce, and for true CI/CD, with all its capital letters, to come about. In 2015, that case became so strong that Chef — perhaps IT’s most trusted configuration management automation system — began a monumental shift toward embracing not only containers, but Kubernetes orchestration.

This leads to the argument (as yet unproven) that true CI/CD requires not only configuration management automation, not only delivery automation, and not only containerization, but also microservices. Or to shamelessly steal from Frank Sinatra, you can’t have one without the other.

“Microservices is not new, it’s essentially SOA 3.0,” said Apcera’s Collison. “What is new is the wave of platforms and technologies that are enabling this — technologies that take on the undifferentiated heavy lifting of deploying and monitoring and managing multiple things instead of just one.

“That being said,” he continued, “hygiene around system interfaces and APIs, in the face of upgrades and deployments, has always been a challenge. Microservices expose issues and shortfalls in this hygiene in a shorter amount of time, but these have always existed. There are basic rules of engagement around how you can change and upgrade an API, which are very similar to the rules for database tables and messaging systems. These rules existed long before the term ‘microservices.’ Automated testing environments, as well as these basic rules, are crucial for overall system health.”

Manual Labor

At the heart of Farley and Humble’s initial argument in “Continuous Delivery” was this:

To the degree that manual processes tend to be inconsistent, their probability for failure increases.

Eliminating inconsistencies produces a completely regular process that is best suited for automation. “If the build, deploy, test, and release process is not automated, it is not repeatable,” they wrote. “Since the steps are manual, they are error-prone, and there is no way to review exactly what was done.”

While the authors’ end goal is continuous delivery, the means to that end is continuous integration, which Humble and Farley define as “the practice of building and testing your application on every check-in.” That means the application is tested in its entirety, through automation, at each segment of the pipeline.

“With fully automated processes,” their introduction continues, “your only constraint is the amount of hardware that you are able to throw at the problem.”

“If you have manual processes, you are dependent on people to get the job done. People take longer, they introduce errors, and they are not auditable. Moreover, performing manual build, test, and deployment processes is boring and repetitive — far from the best use of people.”
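
For readers who want to see the principle rather than just read about it, here is a minimal sketch of what “building and testing on every check-in” might look like as a Jenkins declarative pipeline. The Maven commands and the polling trigger are illustrative assumptions; any build tool, and a webhook-driven trigger, would serve the same purpose.

    // Jenkinsfile: a minimal build-and-test pipeline, run on every check-in.
    pipeline {
        agent any
        triggers {
            // Poll source control every few minutes; a push webhook is the more common setup.
            pollSCM('H/5 * * * *')
        }
        stages {
            stage('Build') {
                steps {
                    // Compile and package the application (Maven assumed for illustration).
                    sh 'mvn -B clean package -DskipTests'
                }
            }
            stage('Test') {
                steps {
                    // Run the full automated test suite; any failure stops the pipeline here.
                    sh 'mvn -B test'
                }
            }
        }
        post {
            failure {
                // Fail fast: flag the broken build immediately so the team can fix it.
                echo 'Build broken on this check-in; fix before integrating further changes.'
            }
        }
    }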

Farley and Humble characterized as ludicrous the age-old notion that, for a process to be truly auditable (you sense there would be a capital “A” there, if it were allowed), it must be broken down into manual checkpoints. As early as 1980, there were warnings that automating manual accounting processes removed them from human control, and therefore introduced the likelihood of errors where none existed before. If this sounds like old-world thinking to you, realize that to this day, BMC Software — an organization that actively promotes the adoption of Docker by enterprises — continues to define an auditable process as one that requires signature-bound handoffs.

The first true sign that an idea in computing has come to life everywhere is that someone declares it dead. Luckily for continuous integration, that happened way back in 2014. The argument against CI, at that time, was that it forced all stakeholders to concentrate their efforts on fixing each and every broken build the moment it breaks (an argument whose premise is now easily disproven by a simple demo of Jenkins at work).

The counter-argument to the greatly exaggerated death of CI is something we’ve printed here in The New Stack: a tying of CI to Agile, and a reassertion of CI as an interpersonal communications process, as opposed to an automation principle.

“Integration is primarily about communication,” wrote Martin Fowler in 2006, in his introduction of CI with capital letters. “Integration allows developers to tell other developers about the changes they have made. Frequent communication allows people to know quickly as changes develop.”

So which is it? Is CI about taking everyone out of the process, or dumping everyone into the process? You’d think it can’t be both.

This is the split which opponents of the ideal, in both capital and lower-case, managed to exploit to some extent in 2015. It begins with fear of failure.

“If we are to stick to our aim of identifying errors early,” wrote Farley and Humble in 2010, “we need to focus on failing fast, so we need the commit stage to catch most of the errors that developers are likely to introduce into the application.”

Fast Failure

“You hear everyone wandering around saying, ‘Fail Fast.’ You know, the people who say, ‘Fail Fast,’ are usually the ones who don’t have to deal with the consequences of these failures,” said Paul Miller, Hewlett Packard Enterprise’s vice president for strategic marketing, during a briefing for industry influencers at the HPE Discover 2015 London conference in December.

Miller was telling a story about the feedback he says HPE was receiving from its customers, some of whom, he said, characterize “digital disruption” as something that looks good on paper but is difficult, if not impossible, to actually achieve.

“All the bright-eyed people are running around saying, ‘Yea, we should disrupt stuff.’ Yeah, but if you disrupt the National Health Service, people start dying. You need to be smart about how you innovate. A lot of these organizations that think about disruption aren’t as heavily regulated or haven’t had to deal with the regulatory burden that many established organizations already have to deal with.”

So what’s all this resistance to the notion of embracing failure? That’s the question The New Stack’s Alex Williams put to Miller directly.

Miller responded by telling a story of how he asked a room full of about 75 IT executives how the “idea economy” (HPE’s term for the global commoditization of intellectual assets) affects them. The answer he received from these IT executives was that it’s putting pressure upon them to innovate faster. New competitors are appearing out of nowhere, he said, and businesses are being defined by the service levels they’re able to maintain. That maintenance takes place in the wake of more and more government regulations.

“If you fail in a public sphere, and you cause a life to be lost, an aircraft to land in the wrong place or not land at all, [or] the release of personally identifiable information, there are consequences that are both legal and life threatening. This notion of moving fast and breaking things works to a point, depending on the use case or the application of the technology.”

Pressed further by Williams, Miller acknowledged that there were viable ways to safely automate the testing of ideas. He noted the usefulness of virtualization to that end, for effectively providing a sandbox within which production environments can be simulated or emulated. But he warned that there may not be an obvious, direct customer benefit for enterprises investing in such environments.

The real challenge, Miller said, happens when the IT department convinces the business end that it needs to invest real money in any such environment where, for instance, an A/B test of four different versions of an application will yield a guarantee of three failures. From a risk management perspective, that’s a guarantee of 75 percent negative outcomes.

“Think of going to a business person and saying, ‘I’m going to bake 75 percent failure in, or even 50 percent failure, into the discovery [development] process.’ A lot of traditional executives will look at you and say, ‘Are you crazy? We don’t have the budget to do these experiments! I want you to be pinpoint perfect every single time,’” HPE’s Miller told our Alex Williams. “So there’s another tension point between business and IT: Business wants to be innovative. Somebody said to me, ‘Everyone’s for change until it comes to themselves.’”

Containment

For the IT managers with whom Paul Miller has spoken, CI/CD carries a kind of stigma: stick a number on it like “minus 75 percent,” and risk managers will handily reject it.

What some developers who read this publication may already know is that Jenkins, with its cute butler logo and its safe-sounding metaphor of pipelines, may already be instituted in businesses whose IT managers have yet to see such a risk number, let alone reject it. But the real danger could be that, once managers figure out what’s going on, they could stop CI/CD innovation dead in its tracks.

This danger may already be addressed by, of all people, consultants who have an interest in establishing CI/CD both as a platform and a principle — for endowing their clients with both policies and Docker.

At DockerCon Europe in Barcelona last November, Martin Croker, the managing director for DevOps at enterprise consultant Accenture Technology, gave a room full of developers a peek at CI/CD using Jenkins coupled with a deployment platform built by Croker and his Accenture team. Croker perceives CI/CD and DevOps (with both its native capital letters) as essentially the same ideal. In telling stories about how he pitches the ideal to clients — the same ideal articulated by Fowler, Humble, and Farley — he calls it “DevOps.” And he makes clear that a core component of DevOps is Docker.

Croker’s clients may be some of HPE’s customers: enterprises that characterize their mission-critical elements as life-and-death services. Yet Croker advocates a very strict system of automation, using the same A/B testing regimen that Miller said his customers find “crazy.” Under this regimen, negative feedback from users automatically triggers a rollback process on the few servers where new builds have been deployed. Further deployment is enabled only when certain “quality gates” are passed, and these gates are represented by pipeline segments in Jenkins.
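
Croker did not share the Accenture platform’s actual pipeline definitions, but the shape he describes (quality gates as pipeline segments, with rollback triggered when a gate fails) can be sketched in the same Jenkins pipeline syntax. The stage names and the deploy, health-check and rollback scripts below are hypothetical placeholders, not Accenture’s tooling.

    // Jenkinsfile sketch: quality gates as pipeline stages, with contained rollback.
    pipeline {
        agent any
        stages {
            stage('Build and Unit Test') {
                steps {
                    // First quality gate: the build and its unit tests must be green.
                    sh 'mvn -B clean verify'
                }
            }
            stage('Deploy to Canary') {
                steps {
                    // Hypothetical script that deploys the new build to a few servers only.
                    sh './deploy-canary.sh'
                }
            }
            stage('Quality Gate: Canary Feedback') {
                steps {
                    // Hypothetical check that fails if user feedback or error rates turn negative.
                    sh './check-canary-health.sh'
                }
            }
            stage('Promote to Production') {
                steps {
                    // Reached only if every earlier gate has passed.
                    sh './deploy-production.sh'
                }
            }
        }
        post {
            failure {
                // A failed gate triggers rollback, containing the failure to the few
                // canary servers that received the new build.
                sh './rollback-canary.sh'
            }
        }
    }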

Scaling these quality gates requires new builds to pass automated tests that involve legacy software — one of the biggest sources of risk in enterprises. And it’s here where Croker’s notion of CI/CD splits from how the ideal has traditionally been presented: The Accenture platform, he says, relies on a tiered development architecture where project teams are not brought together, but kept apart.

“We have a tiered application architecture for a couple of reasons,” Croker told attendees, “mainly because it lets me scale the development to more than just myself and my team. I want the Java group to own the blueprint for Java development; I want the Siebel group to own the blueprint for Siebel. It’s about getting the right ownership to the right team, and starting to scale to a delivery team wider than just mine.”

It’s a system where failure is both embraced and contained, and where the ownership of assets is not collective, as Apcera’s Derek Collison described, but distributed. It’s this notion of containment that may have been lacking, not only from Paul Miller’s perception of the CI/CD ideal, but from the original 2010 presentation of the ideal — perhaps because it had not yet been tested at scale in organizations whose business leaders are both risk-averse and change-sensitive.

Yet it could potentially be disseminated through the same kind of platform that Collison believes was born during his time spent with Google.

“I think that is part of a trusted platform that allows both camps to meet their goals,” Collison told The New Stack. “This is what happened at Google. I felt empowered to deploy and update at will. The platform system, the Borg, empowered me to do so, but also was the same platform that Google used to make sure Search and Ads were protected, and that resources were being used and managed effectively. One platform that served both camps, that is what we need to bridge the cultural divide.”

Apcera, Docker and HPE are sponsors of The New Stack.

Cover Photo of “The Bute Docks, with Shipping” [circa 1880] from the UK’s National Media Museum, licensed under Creative Commons.

The post The Year Behind: Changing Perceptions About the Differences Between Faster Delivery and CI/CD appeared first on The New Stack.
