2015-11-14

The cloud is well on its way to becoming the standard model for IT, just 16 years after it first formed. It couples flexibility, scale, and reliability with user-friendliness and ubiquity. It has created some of the world’s largest companies, as well as empowering some of the smallest. The cloud has changed the economics of providing and using services, bringing many new opportunities—and, of course, a few teething problems.
Here in the British Isles, clouds are notorious for obscuring the view, but you might be surprised to learn that the technological underpinnings of cloud computing are now transparent, mature, and mostly based on open-source tech. The new, cloudy landscape is growing increasingly clear. For the first time, it’s now possible to see the nature and usage of private, public, and hybrid clouds and explore their respective strengths, weaknesses, and applicability.
Condensation nuclei
The foundations of the cloud were laid half a century ago. Books like "The Challenge of the Computer Utility" by Douglas F. Parkhill, published in 1966, noted that computers were getting powerful enough to provide information and services at scale to ordinary people, but that the machinery was so big and expensive that it would have to be accessed remotely. Utility computing was so named because it saw computing becoming as universal as power and water, delivered on demand and charged for in much the same way. In particular, people would no more need to run their own computing systems than they would need to own their own power generators or drill their own wells.
At the same time, two other fundamental drivers for the cloud began to condense. Future Intel co-founder Gordon Moore coined his eponymous law, saying in effect that integrated circuit technology would double computing power every two years or so. Meanwhile, Paul Baran at the RAND Corporation in the US and Donald Davies at the UK’s National Physical Laboratory independently invented packet switched networking—a much more robust, efficient, and flexible way of moving data through a common infrastructure than permanent connections of telephone-style switched circuits could manage.
In the 1970s, Ken Thompson and Dennis Ritchie at Bell Labs led the creation of Unix and the C programming language—the first credible pieces of system software designed to be easily run on a variety of platforms. Combined with open networking standards developed for ARPANET by Vint Cerf, Bob Kahn, and friends, the lining of the cloud had begun to coalesce in earnest.
Over the next two decades, the invention and popularisation of DSL (another Bell Labs marvel) and the mass-market success of Windows 95 (which supported TCP/IP) spurred the arrival of commercial ISPs, while early deployments of grid computing and application service providers (ASPs) showcased the benefits that might be had from cloud-like thinking.
Then, quite suddenly at the end of the 20th century, everything clicked into place. The technology was just about there, and the economies of scale provided by data centres were once again returning the advantage to large, centralised computing. It was time to go… to the cloud!
Undulatus asperitas clouds, which are so unusual that they weren't recognised as an "official cloud type" until earlier in 2015. (Credit: Judy Leonard Cox)
First men in the clouds
The modern cloud appeared in the shape of Salesforce.com in 1999. It sold a pure business service: customer relationship management. CRM is exactly what it sounds like. A company uses a CRM to keep track of who it’s selling to, what it’s sold, and how to keep everyone happy. Before Salesforce.com, companies bought or wrote CRM software and ran it on their own computers. Salesforce wrote CRM software, but instead of delivering copies of it to customers, it ran it on its own data centre and sold per-user access to it. The software never left the building, and its customers didn’t have to do anything more than point their Web browsers at the Salesforce.com site, set up an account, log in, and start working.
This seems entirely normal now that using Gmail seems more natural than running Outlook on your PC. Yet Salesforce.com was revolutionary when it arrived, eight years before Gmail. In the years since, the cloud has engulfed the planet.
Cloud atlas
The "hot aisle" in a containerised HP EcoPod. The public cloud is mostly run out of data centres like these.

Clouds are simple things at heart, but they come in different shapes. Gmail, Salesforce.com, and anything you run via your Web browser is called SaaS—Software as a Service—and it’s what most users think of as “the cloud.” There are two other acronyms that often crop up to describe different cloud formations: Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). Of interest primarily to people who have to build IT for other people, these roughly equate to having a cloud that will run your own application software (PaaS) or having what looks like bare hardware you can do what you like with (IaaS). Amazon Web Services (AWS) and Rackspace are good examples of IaaS; Google App Engine of PaaS.
Some companies, such as HP, IBM, and Microsoft (Azure), provide a mix of both IaaS and PaaS. Religious wars rage over exactly where the boundaries lie, but neither acronym really describes the way that the cloud is evolving into a general set of services designed for other software to use. These developments can be clearly seen if you look at the recent rapid evolution of mobile technology, which is itself largely the product of the first cloud-hosted services.
"Cloud has been one of the most significant enablers for the advancement of the mobile platform," Brian Levine, senior director of security and compliance at cloud storage services company Syncplicity, told Ars Technica in an interview. "Without cloud as the first wave, we would not have seen the second wave in the explosion of mobile applications and services. Facebook, Instagram, Snapchat, WhatsApp; none of these applications would exist without the cloud layer. On mobile, you essentially have a window into the cloud, and very little storage and computing occurs on the device. Most of the mobile processing happens in the cloud."
SaaS and mobile data-sharing apps are how most of us experience the public cloud, which was the first and remains the largest aspect of cloud computing. Salesforce.com runs everything, stores everything, controls everything. Its users can be anyone and everyone, and they have absolutely nothing to maintain except Web browsers and an Internet connection. They pay for what they use of the service and don’t have to pay for specialist IT staff, infrastructure, software updates, and so on.
There are also private clouds, where companies use some of the techniques and technologies of the public cloud but run it all themselves behind closed doors. Cloud systems are designed to be very quick to grow and very good at distributing data to lots of people with little effort—areas where traditional corporate IT has rarely covered itself with glory. With the private cloud, everything runs centrally and is accessed through a Web browser rather than being copied onto desktops. As a result, employees get a familiar browser-based environment to work in, and, when you run everything yourself, it’s easier to connect up older, legacy software systems that simply don’t have equivalents in the public cloud. Whatever the reasons, the private cloud remains in many respects similar to the old-style company IT approach of locked-down, controlled fiefdoms.
Some see the private cloud as a benign Trojan horse, bringing some of the more revolutionary aspects of the public cloud into the rather conservative world of enterprise IT. Those old enough to remember when software came on CDs will also remember that bug fixes and new versions came fitfully at best, and if you didn’t like something you were stuck with it. That’s not the modern experience, especially on mobile platforms, where updates are seamless and usability constantly tuned. The ability of cloud-based systems to update very quickly and at scale has become a necessity.
"Today, what needs to be understood is that in traditional IT, when you had an application, you were developing a new release of this application once or twice a year—but not more," Xavier Poisson, vice president of cloud computing for EMEA at HP, told Ars. "With mobility and the need to be more agile, you’ve got to have a completely new development cycle, and it’s essential to develop more quickly."
Mammatus clouds. (Credit: Wikipedia)
A vigorous hybrid
While the private cloud may bring these ideas within the corporate comfort zone, in practice the most common model is a mix of public cloud and private infrastructure: the hybrid cloud. This is more a term of convenience than a single technology: a company does some of its IT in-house and some in the public cloud. It covers everything from running Gmail alongside a local copy of Microsoft Office to running development versions of your global share trading platform in your R&D bunker before pushing it out to a hundred-server cluster on Microsoft Azure.
The increasing use of hybrid cloud tech is a reflection of the economic drivers that pull more and more IT, corporate and consumer, towards the public cloud. The most fundamental driver is good old economy of scale. When a public cloud company buys hardware, it pays a lot less than you do, whether you are Josephine Bloggs at home or a large retailer running a respectable data centre.
Cloud providers find it very distasteful to talk about their money, but in 2009 researchers at the University of California, Berkeley estimated that economy of scale meant large cloud providers were paying between one-third and one-seventh as much for their networking, hardware, and power as companies did for their internal IT. With cloud providers growing at around 50 percent per year since then, that disparity is now much greater—and it explains why, as research group Baird says, companies save three to four dollars on internal IT for every dollar they spend on shifting infrastructure and services to the cloud.

Playing catch-up in the cloud
It’s no coincidence that the first companies to make public cloud services available were those that had already seen these economies of scale first-hand. Amazon had to build its own vast data centres to manage its inventory and e-commerce needs, creating all the tools to manage huge and ever-growing amounts of networking, storage, and computation, before realising that it had built a giant general-purpose system that could do any company’s IT. Google had to manage enormous amounts of search data and create a platform that let it deploy new software internally to manage billions of requests—and then, after a little introspection, it realised it had the ability to pull customers away from in-house IT infrastructure and products that, rather fortuitously, were sold by Google’s competitors.
Because companies such as Amazon and Google had such a head start, it can be very hard for new cloud providers to get in on the action. As John Engates, Rackspace’s chief technical officer, told Ars in an interview, "The biggest challenges have been access to scalable software to build public and private clouds and networking technologies to connect them." Rackspace started out as a hosting company, running traditional company IT in its data centres, before moving into cloud services; it found that creating software that anyone could use to build cloud-like services was a good way to get people on board. "To solve the software problem, we ended up building our own and eventually open sourcing it to create OpenStack. Today, we use that to run the largest OpenStack public cloud and numerous enterprise private clouds."
Because anyone can use OpenStack, a lot of software and hardware companies (Oracle, IBM, HP, Dell, et al.) combine it with their own products to create public cloud systems that are independent of the competition, or private cloud systems to sell to their enterprise customers. This attracts development effort and expertise within their customers and in third-party support companies, which also parlays into hybrid cloud implementations that work well with Rackspace’s own OpenStack-based public cloud. When lots of people do the same thing at a large scale, costs go down.
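To give a sense of what "using OpenStack" looks like in practice, here is a minimal, hedged sketch that boots a virtual server through the Python openstacksdk. The cloud name, image, flavour, and network names are assumptions that would come from your own deployment’s clouds.yaml; this isn’t specific to Rackspace or any other provider.

```python
# Minimal sketch of provisioning a server on an OpenStack cloud with the
# Python openstacksdk. The cloud name, image, flavour, and network below
# are hypothetical and depend entirely on your own deployment.
import openstack


def boot_server(name: str = "demo-server"):
    conn = openstack.connect(cloud="mycloud")         # credentials read from clouds.yaml

    image = conn.compute.find_image("ubuntu-22.04")    # hypothetical image name
    flavor = conn.compute.find_flavor("m1.small")      # hypothetical flavour
    network = conn.network.find_network("private")     # hypothetical network

    server = conn.compute.create_server(
        name=name,
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    # Block until the server reaches ACTIVE state, then return it.
    return conn.compute.wait_for_server(server)


if __name__ == "__main__":
    server = boot_server()
    print(server.name, server.status)
```

Because the same API works against any OpenStack deployment, the same script can target a public OpenStack cloud or a private one behind the firewall, which is part of what makes the hybrid arrangements described above practical.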
Kelvin-Helmholtz wave clouds. (Credit: Brad Lundgren)
Let a thousand servers bloom
It’s not just that things are cheaper in the cloud, though. The cloud also has this rather marvellous ability to inspire big, original, and novel thoughts—thoughts that aren’t penned in by the conventional limitations of infrastructure scaling or logistics. If you write a mobile application and put it into an app store, you can get a million users overnight—without having to pay to create and distribute a million copies.
If you’re a company with a computing task that’ll take a thousand hours to run on a single server, then IBM or HP’s IaaS products don’t care whether you use one cloud-based server for a thousand hours or a thousand cloud-based servers simultaneously for an hour. You pay the same, but in the latter case you get your results a thousand times sooner. Of course, not every computing task can be split into a thousand independent subtasks, but the impetus to start thinking about problems in ways that take advantage of this principle is huge.
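To illustrate the idea, here is a minimal, self-contained sketch of splitting an "embarrassingly parallel" workload across many workers. The work items and process_chunk function are hypothetical stand-ins; on a real IaaS cloud each worker would be a rented server rather than a local process, but the arithmetic is the same.

```python
# Sketch of an "embarrassingly parallel" split: the same total work spread
# across many workers so the wall-clock time shrinks roughly in proportion.
# WORK_ITEMS and process_chunk() are hypothetical stand-ins for real tasks.
from concurrent.futures import ProcessPoolExecutor

WORK_ITEMS = list(range(1_000))          # e.g. 1,000 independent simulation runs


def process_chunk(item: int) -> int:
    """Placeholder for one independent unit of work."""
    return item * item                   # the real computation would go here


def run_in_parallel(workers: int) -> list[int]:
    # With 1 worker this takes the full time; with 1,000 workers (or servers),
    # each handles one item and the whole job finishes roughly 1,000x sooner.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_chunk, WORK_ITEMS))


if __name__ == "__main__":
    results = run_in_parallel(workers=8)
    print(f"processed {len(results)} items")
```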
This ability to partition computing into individually manageable chunks is another driver towards hybrid cloud, where running things in a standard way on both sides of the corporate firewall opens up ways to cherry-pick the advantages of either. Take storage—in particular, storage needed for backup and disaster recovery. Although Internet connectivity is fast and getting faster, it’s not on a par with the speeds a company can get within its own data centre, so it’s often important to keep working data locally in a private cloud. But backups can take place overnight, and older data is accessed less frequently, so the bulk of these can be moved into cloud storage services, such as HP Helion’s block storage. This reduces the amount of permanent storage needed locally and replaces it with services that don’t have to be bought in advance but are paid for only as needed.
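As an illustration of that tiering pattern (not HP Helion’s API specifically), here is a hedged sketch that moves backups older than roughly 30 days from local disk into cloud object storage, using Amazon S3 via boto3 as the example service. The bucket name, local paths, and cut-off are assumptions, and a real deployment would use its own provider’s client and retention policy.

```python
# Illustrative sketch of tiering older backups out to cloud object storage.
# Amazon S3 via boto3 is used purely as an example of the pattern; the bucket
# name, local path, and 30-day cut-off are hypothetical.
import os
import time

import boto3

BUCKET = "example-corp-backups"          # hypothetical bucket
BACKUP_DIR = "/var/backups/nightly"      # hypothetical local backup directory
CUTOFF_SECONDS = 30 * 24 * 3600          # archive anything older than ~30 days

s3 = boto3.client("s3")


def archive_old_backups() -> None:
    now = time.time()
    for name in os.listdir(BACKUP_DIR):
        path = os.path.join(BACKUP_DIR, name)
        if now - os.path.getmtime(path) < CUTOFF_SECONDS:
            continue                     # recent backups stay on fast local storage
        # Push the cold backup to a cheaper object-storage tier...
        s3.upload_file(
            path, BUCKET, f"archive/{name}",
            ExtraArgs={"StorageClass": "STANDARD_IA"},
        )
        # ...then reclaim the local capacity.
        os.remove(path)


if __name__ == "__main__":
    archive_old_backups()
```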
Plus, the cloud is distributed around the planet, so with a bit of care you can make sure that even if a meteorite hits the city where your HQ lives, the company data and as many of your systems as you’ve got in the cloud remain available. Even if you suddenly need to move a great deal of data around, more than your Internet connectivity can support, the cloud has ways. Amazon, for example, has just introduced its Snowball service—basically a large box with 50TB of storage and a 10Gb network port that turns up at your data centre and plugs straight in. You then send the Snowball back to Amazon via your courier service of choice.
And then, of course, there’s the element of alacrity. With the hybrid cloud, you get the best of both worlds: the fast, local file transfers and processing from the private cloud, plus the vast flexibility and parallelism offered by the public cloud. “By utilising the cloud you can develop far more quickly. It can open to you new business opportunities, new geographical reaches,” HP’s Poisson told Ars.
A "fallstreak hole" cloud. (Credit: Wikipedia)
I can see clearly now the rain has gone
It all sounds ideal, but there have been, and still are, some big problems to solve before cloud computing becomes the default way of doing things. Although initial worries about reliability, both of the Internet links into the cloud and of the cloud providers themselves, have been allayed through experience, the safety of your data when it’s away from home remains a major worry. "Security is a big area of investment because it is on the mind of every one of our customers and is more critical than ever due to the challenges presented by hackers and malicious actors," said Engates. Levine agreed: "Encryption everywhere and a solution to the username-password problem are two issues critical to cloud usage that have not been widely solved yet."
It’s not just passwords. "Cloud adoption is highly susceptible to perceptions of trust," explained Levine. "For example, the disclosures by Edward Snowden in 2013 significantly tarnished that trust, and as a result savvy cloud providers have been working to provide architectural enhancements and technical controls to assure that trust." That’s something that can be addressed through hybrid cloud, he said, with particularly sensitive documents being kept within the enterprise and less dangerous fare out in the public cloud.
Still, computing is moving in just one direction: towards the cloud. As companies and individuals learn what it can do, many new ways of working are opening up. For example, if you’re selling a service or providing an app via the cloud, all your users are permanently or very frequently connected. You can watch how they use your product and feed that information straight back into your development cycle to alleviate points of pain or optimise and expand popular areas. That simply wasn’t possible before. You can analyse and act on real-time data to add far more intelligence to your product than your users’ devices can support, as Siri and Google Now already show. The constraints of pre-cloud computing are fading away, and the age of true utility is here.
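To make that feedback loop concrete, here is a minimal, hypothetical sketch of the kind of usage event a cloud-connected app might send home. The endpoint URL and field names are invented for illustration; real products typically use a vendor analytics SDK, batching, and explicit user consent rather than raw HTTP calls.

```python
# Hypothetical sketch of a cloud-connected app reporting a usage event.
# The endpoint and field names are invented; a real product would more
# likely use an analytics SDK and respect user consent settings.
import json
import time
import urllib.request

ANALYTICS_URL = "https://analytics.example.com/v1/events"   # hypothetical endpoint


def report_event(user_id: str, feature: str) -> None:
    event = {
        "user": user_id,
        "feature": feature,              # which part of the product was used
        "timestamp": int(time.time()),
    }
    req = urllib.request.Request(
        ANALYTICS_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Server-side, these events can be aggregated and fed straight back into
    # the development cycle: which features are popular, where users get stuck.
    with urllib.request.urlopen(req, timeout=5) as resp:
        resp.read()


if __name__ == "__main__":
    report_event(user_id="anon-1234", feature="export-to-pdf")
```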
Rupert Goodwins started out as an engineer working for Clive Sinclair, Alan Sugar, and some other 1980s startups. He is now a London-based technology journalist who's written and broadcast about the digital world for more than thirty years. You can follow him on Twitter at @rupertg.

This post originated on Ars Technica UK
