2015-08-05

Computing platforms and applications continue to change the way we work and live, increasing the need to store and manipulate data. Data centres, once a stack of equipment confined in corporate basements, now fill facilities spread among far-flung geographical locations. The corporate data centre model became too limiting several years ago and is gradually being replaced with virtualised cloud facilities. The cloud is nothing more than a data centre, or set of data centres, located somewhere other than in front of the user. Because it seems remote, it’s all too easy to think of the cloud as free and limitless. But every byte of data stored in the cloud consumes space in a data centre and capacity on a network to connect it back to a user. And at the same time, it consumes energy.

Cloud data centres, network connectivity and local computing resources all need energy to operate. As the thirst for computing applications grows exponentially, so does the power consumed. In 2013, The Digital Power Group reported that global Information and Communications Technology (ICT) accounted for almost 10% of world electricity consumption, more than the equivalent energy needs of global aviation. This consumption rate is unsustainable when matched to the expected growth of data storage and virtualised computing power over the next few years. By 2020, an estimated 35 zettabytes of data will be stored in the world’s data centres, requiring vast amounts of energy to store and to connect it to users. The energy cost and long-term environmental impact are simply unacceptable.

It’s not that computing power or network capacity has lost energy efficiency over the past few decades. Quite the contrary: advances in computing power and network capacity have yielded significant reductions in the resources required per bit processed or transmitted. As Moore’s law accurately predicted, transistor density in integrated circuits has doubled roughly every two years, driving a corresponding exponential growth in computing capacity. Advances in storage media, fibre optic transmission and software allowed computing and networking to deliver virtualised cloud resources by 2010. But many of these systems were built without a specific design focus on energy efficiency. Design requirements most often centred on speed, capacity and application flexibility. Power-thirsty silicon devices were mounted on heat sinks and then cooled with high-flow air or sealed water-cooled systems. While clever and effective, many designs were not energy efficient.

In 2010, an industry and research collaboration gathered to address this problem within ICT networks. The GreenTouch consortium commissioned a study aimed at demonstrating a factor-of-1000 improvement in network energy efficiency compared to a 2010 reference model. Over the course of the five-year study, the consortium examined core, fixed and mobile access network topologies, optical link design, data centre location, and multiplexing and transmission equipment design, among other factors. By taking a fresh look at every design aspect through an energy-efficiency lens, GreenTouch came up with novel ideas that could save vast amounts of energy. Systems that had previously ignored energy use were studied to find wasted power. For example, consider the study of fixed point-to-point optical transceivers.



Fig. 1: NFV platform.

Much like leaving a light on in an empty room, optical links are typically operated at a fixed optical transmit power level, regardless of the precise power needed to transmit error-free data over a given optical span. By designing a power control algorithm that matches transmit power to span loss, and implementing circuitry “sleep modes” in various functional blocks, efficiency improvements of up to a factor of 38 were demonstrated. Many other areas were investigated, including network topology and data centre location. By positioning data centres close to the consumers they serve, data centre electronics and networking hardware could be operated according to daily demand cycles. In its 2015 report, GreenTouch showed ways to achieve a nearly 98% overall reduction in energy compared to the 2010 model. This research will lead to implementations by network operators, equipment suppliers and cloud providers that reduce energy consumption and carbon footprint.
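To make the idea concrete, here is a minimal sketch of such a control loop, assuming a simple link-budget model: transmit power is set just high enough to cover the measured span loss plus a margin above an assumed receiver sensitivity, and processing blocks sleep when traffic is light. The constants and function names are illustrative placeholders, not figures or interfaces from the GreenTouch work.

```python
# Illustrative sketch only: a simplified transmit-power controller that sets
# laser output to just cover the measured span loss plus a safety margin,
# and puts idle functional blocks to sleep. All values are assumptions.

RX_SENSITIVITY_DBM = -28.0   # assumed minimum receive power for error-free operation
MARGIN_DB = 3.0              # assumed safety margin above receiver sensitivity
MAX_TX_POWER_DBM = 3.0       # assumed laser output ceiling
IDLE_THRESHOLD = 0.05        # assumed utilisation below which blocks may sleep

def required_tx_power_dbm(span_loss_db: float) -> float:
    """Lowest transmit power that still closes the link with margin."""
    return min(RX_SENSITIVITY_DBM + span_loss_db + MARGIN_DB, MAX_TX_POWER_DBM)

def control_step(span_loss_db: float, utilisation: float) -> dict:
    """One iteration of the power-control loop for a point-to-point link."""
    return {
        "tx_power_dbm": required_tx_power_dbm(span_loss_db),
        # Sleep mode for processing blocks when the link carries little traffic.
        "blocks_asleep": utilisation < IDLE_THRESHOLD,
    }

if __name__ == "__main__":
    # A short 10 dB span at low utilisation needs far less power than a fixed
    # worst-case setting sized for, say, a 28 dB span at full load.
    print(control_step(span_loss_db=10.0, utilisation=0.02))
    print(control_step(span_loss_db=28.0, utilisation=0.80))
```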



Fig. 2: Network models in 2014.

One output from the GreenTouch study is a network simulation tool called GWATT. This interactive tool takes findings from the study and lets the user visualise assumptions about data centres, network topology, and transport and routing technology, and apply them to a region and a data-growth projection. The user can change the technologies used in the home or enterprise, access aggregation, metro, edge, core and data centre network components, and alter assumptions about the deployment depth of new technology and the geographical network reach. For example, within the data centre, one can select either a baseline server design or a virtualised design that utilises rack-mounted server blades, virtual machines and software-defined networking (SDN) components. The user can then select the percentage to which these assumptions apply. Similarly, the user can select next-generation IP routing, optical transport and access network technologies, among other items. The tool then estimates energy savings compared to the baseline model. Fig. 3 compares the energy saved with familiar energy equivalents. In the example shown here for a North American network, where next-generation IP and optical transport is deployed in the access, core and data centre network segments and virtualised data centres are heavily used, GWATT predicts a saving of nearly 14 000 MW over a five-year period. That’s equivalent to the energy exerted by 183 billion participants in the Boston Marathon. It is a striking comparison that leaves any network or data centre operator with one important conclusion: 2010-era networking and data centre design practices have become cost prohibitive and environmentally objectionable.
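As a rough illustration of the kind of arithmetic such a tool performs, the sketch below scales per-segment baseline energy by an assumed efficiency factor and a user-chosen deployment percentage. All baseline figures, factors and segment names are invented for illustration and are not GWATT’s actual model data.

```python
# Illustrative sketch only: a toy version of a GWATT-style estimate.
from typing import Dict

# Assumed annual baseline energy per network segment (GWh) for a region.
BASELINE_GWH = {
    "access": 900.0,
    "metro_edge": 400.0,
    "core": 300.0,
    "data_centre": 1200.0,
}

# Assumed fraction of baseline energy still consumed once a next-generation
# technology is fully deployed in that segment (lower is better).
NEXT_GEN_FACTOR = {
    "access": 0.45,
    "metro_edge": 0.50,
    "core": 0.35,
    "data_centre": 0.30,   # e.g. virtualised, SDN-controlled data centres
}

def estimated_savings_gwh(deployment: Dict[str, float]) -> float:
    """Energy saved versus baseline, given per-segment deployment fractions (0..1)."""
    savings = 0.0
    for segment, baseline in BASELINE_GWH.items():
        share = deployment.get(segment, 0.0)
        # The deployed share runs at the next-gen factor; the rest stays at baseline.
        new_energy = baseline * (share * NEXT_GEN_FACTOR[segment] + (1.0 - share))
        savings += baseline - new_energy
    return savings

if __name__ == "__main__":
    # Heavy virtualisation in the data centre, partial roll-out elsewhere.
    scenario = {"access": 0.5, "metro_edge": 0.4, "core": 0.6, "data_centre": 0.9}
    print(f"Estimated savings: {estimated_savings_gwh(scenario):.0f} GWh/year")
```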



Fig. 3: Energy savings 2013 – 2018.

As mentioned earlier, data centres and the networks that touch them form the cloud that enterprises seek as a means to virtualise their storage and computing growth economically. Yet despite advances in computing power, electronics efficiency and networking capacity, the sheer rate of data growth makes it an enormous challenge for the data centre to keep pace with capacity demand while also containing energy costs. New data centres have been built in locations where energy is relatively abundant and available in raw form. One of the largest energy consumers in a data centre is temperature control. All of the inefficiency of data centre electronics is dissipated as heat, and that heat must be removed to maintain normal operation. Heat can be removed by air conditioning systems, which consume additional energy, or directly by naturally occurring cool air or water, where these are available. Iceland is a perfect example of such a place, having both a geothermal electricity source and a cool climate for natural heat removal. While Iceland’s remoteness might seem a hindrance, undersea fibre optic networks actually position it perfectly between the commercial centres of North America and Europe. Other localities have unique energy sources in abundant sunshine, windy weather, rapidly moving water or existing energy plants. The question then becomes: how do we string together these widely distributed “green cloud” options?
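A toy sketch of that cooling trade-off, assuming a single temperature threshold for free cooling and rough coefficient-of-performance figures (none of which come from the article), might look like this:

```python
# Illustrative sketch only: a simplistic cooling-mode selector. The threshold
# and coefficient-of-performance figures are assumptions chosen for illustration.

FREE_COOLING_MAX_C = 18.0    # assumed outside-air temperature limit for free cooling
COP_FREE = 20.0              # assumed heat removed per unit of fan/pump energy
COP_CHILLER = 3.0            # assumed heat removed per unit of chiller energy

def cooling_energy_kwh(it_heat_kwh: float, outside_temp_c: float) -> float:
    """Energy spent removing the heat dissipated by the IT equipment."""
    if outside_temp_c <= FREE_COOLING_MAX_C:
        # Cool climate (e.g. Iceland): outside air or water carries the heat away.
        return it_heat_kwh / COP_FREE
    # Warmer site: mechanical air conditioning has to do the work.
    return it_heat_kwh / COP_CHILLER

if __name__ == "__main__":
    heat = 1000.0  # kWh of heat to remove in a given hour of operation
    print("Cool site:", cooling_energy_kwh(heat, outside_temp_c=8.0), "kWh")
    print("Warm site:", cooling_energy_kwh(heat, outside_temp_c=30.0), "kWh")
```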

One promising approach is to use global networks and advanced software control to form a green cloud: data centres interconnected to take advantage of the energy source with the lowest environmental impact and the lowest cost at any given time. Workloads such as data replication or virtual machine operations can be moved to an optimal location over high-speed optical networks under automatic control. This concept has been demonstrated in the Canadian GreenStar network, where the National Research and Education Networks of several nations were used to provide connectivity among data centres located in widely dispersed places, each with a different form of clean energy. A wind-powered data centre in the Netherlands can hand off workloads to a hydro-powered centre in Quebec when the wind is calm. Quebec may at times hand off its workload to an Australian solar-powered data centre, which in turn may hand it back to the Netherlands during the Australian night. Through a combination of very high capacity optical transport, network automation and software control, a zero-emission cloud is formed.
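A minimal sketch of such a “follow the green energy” scheduler, with hypothetical site names, availability flags and a placeholder migration call (none of them GreenStar interfaces), might look like this:

```python
# Illustrative sketch only: a toy scheduler that moves work to whichever site
# currently has clean energy available and spare capacity.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Site:
    name: str
    energy_source: str          # e.g. "wind", "hydro", "solar"
    renewable_available: bool   # is the clean source producing right now?
    spare_capacity: float       # fraction of compute capacity currently free

SITES = [
    Site("netherlands", "wind", renewable_available=False, spare_capacity=0.6),
    Site("quebec", "hydro", renewable_available=True, spare_capacity=0.4),
    Site("australia", "solar", renewable_available=True, spare_capacity=0.7),
]

def pick_green_site(sites: List[Site]) -> Optional[Site]:
    """Choose a site whose clean energy source is producing and which has room."""
    candidates = [s for s in sites if s.renewable_available and s.spare_capacity > 0.1]
    return max(candidates, key=lambda s: s.spare_capacity, default=None)

def migrate(workload: str, destination: Site) -> None:
    # Placeholder: in practice this would trigger virtual machine migration or
    # data replication over high-capacity optical transport under software control.
    print(f"Moving {workload} to {destination.name} ({destination.energy_source})")

if __name__ == "__main__":
    target = pick_green_site(SITES)
    if target is not None:
        migrate("nightly-replication", target)
```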

It’s not hard to imagine how advances in networking, storage and computing equipment, combined with the accessibility of global networks, will alter the way data centres are constructed, where they are located and how people connect to them. The market will determine which of the GreenTouch study recommendations are developed into commercial products. But there is already evidence of the green touch on cloud data centres.

References
http://www.tech-pundit.com/wp-content/uploads/2013/07/Cloud_Begins_With_Coal.pdf?c761ac&c761ac
https://www.alcatel-lucent.com/blog/2015/excuse-me-zettabyte-your-data-center
www.greentouch.org
http://spectrum.ieee.org/energywise/telecom/internet/iceland-data-center-paradise

Contact Sherine Aziez, Alcatel-Lucent, sherine.aziez@alcatel-lucent.com

