Welcome to our fifth Data Center Executive Roundtable, a quarterly feature showcasing the insights of thought leaders on the state of the data center industry and where it is headed. In our Third Quarter 2016 roundtable, we will examine four topics: advances in cooling design for new data centers, how edge computing and the Internet of Things are shaping the network, the rapid growth of cloud computing, and the accompanying “speed to market” pressures and their impact on the supply chain.
Here’s a look at our distinguished panel:
Robert McClary, Senior Vice President and General Manager for FORTRUST, a premier data center services provider and colocation facility. Robert is responsible for the overall supervision of business operations, high-profile construction and strategic technical direction for FORTRUST.
James Leach, the Vice President of Marketing at RagingWire Data Centers. As a marketing executive, sales leader, and systems engineer, James Leach has enjoyed a 30-year career building technology and services businesses for commercial and government organizations. For the last 15 years, Mr. Leach has been at the forefront of developing innovative internet services for enterprises.
Jack Pouchet is Vice President of Market Development at Emerson Network Power (soon to be Vertiv). Over the past 20 years, Jack has worked closely with major server manufacturers, large data center users, and leading mission-critical engineering firms on advanced power and cooling technologies.
David Shepard is General Manager of the BASELAYER Anywhere hardware division, and is responsible for all facets of the hardware business unit, including sales, marketing, product development, manufacturing and customer service.
Ted Behrens is Executive Vice President of Global Engineering, Product Management & Marketing for Chatsworth Products Inc. Ted is responsible for strategy, product management, engineering development and marketing efforts for all CPI global product platforms.
The conversation is moderated by Rich Miller, the founder and editor of Data Center Frontier. Each day this week we will present a Q&A with these executives on one of our key topics. We begin our discussion by looking at trends in designing cooling systems for new data centers.
Data Center Frontier: Over the past year, some of the industry’s largest companies have developed new designs for their data centers. In a number of cases, these redesigns have featured changes in cooling systems. What do you see as the important trends driving how data center operators are approaching cooling?
James Leach, RagingWire Data Centers
James Leach: Over the last 15 years, data center cooling systems have evolved from a focus on density, to efficiency, and now to environmental impact.
First-generation data center cooling systems focused on density. The engineering challenge we faced was one of scale: could we remove the heat produced by rooms full of powerful blade servers and create the cooled environment specified by ASHRAE standards? Our solution was to work with manufacturers to develop data center cooling infrastructure that leveraged the mechanical systems of large commercial buildings but maintained a form factor suited to a data center. For example, in our Ashburn VA1 data center we worked with our supplier partners to design custom-built 100-ton CRAH (computer room air handler) units that each produce 44,000 CFM (cubic feet per minute) of airflow. We also built specialized mechanical chase areas to house these large CRAH units without taking up space on the computer room floor. These generation-one cooling systems were a key enabler for the massive wholesale data centers we have today.
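For readers who want to see the arithmetic, here is a rough sanity check on those figures. The 25°F return-to-supply temperature difference below is an assumption for illustration, not a RagingWire specification, but with it the standard sensible-heat formula lands very close to the quoted 44,000 CFM:

```python
# Rough sanity check: airflow needed from a 100-ton CRAH unit.
# Assumes a 25 degF return-to-supply delta-T (illustrative assumption only).
TONS_TO_BTU_HR = 12_000          # 1 ton of refrigeration = 12,000 BTU/hr
SENSIBLE_HEAT_FACTOR = 1.08      # BTU/hr per CFM per degF, for standard air

capacity_btu_hr = 100 * TONS_TO_BTU_HR   # 100-ton unit
delta_t_f = 25                           # assumed temperature rise across the IT load

cfm = capacity_btu_hr / (SENSIBLE_HEAT_FACTOR * delta_t_f)
print(f"Required airflow: {cfm:,.0f} CFM")   # ~44,400 CFM
```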
Generation two of cooling was about efficiency, and the key metric was PUE (power usage effectiveness). With generation one we succeeded in building massive cooling systems; in generation two the challenge was to run those systems at peak efficiency. We introduced variable-speed fans in the CRAH units, designed to run at full efficiency from low output to high, and we developed sophisticated control systems that allowed us to monitor environmental conditions and adjust the cooling output. The result was that PUEs at multi-tenant data centers improved by 50 to 75 percent. The savings were passed along to customers, helping to create a compelling business case for businesses to use data center colocation as part of their IT platform.
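PUE is total facility energy divided by IT equipment energy, so a lower PUE means less overhead energy (cooling, power conversion, lighting) per unit of IT load. A minimal sketch with illustrative numbers, not figures from any specific facility, shows how a drop in PUE translates into overhead savings:

```python
# PUE = total facility energy / IT equipment energy.
# The load and PUE values below are illustrative, not measured data.
it_load_kw = 1_000                      # hypothetical IT equipment draw

pue_before, pue_after = 2.0, 1.5        # hypothetical generation-1 vs. generation-2 PUE

overhead_before = it_load_kw * (pue_before - 1)   # non-IT power at the old PUE
overhead_after = it_load_kw * (pue_after - 1)     # non-IT power at the new PUE

reduction = (overhead_before - overhead_after) / overhead_before
print(f"Overhead power: {overhead_before:.0f} kW -> {overhead_after:.0f} kW "
      f"({reduction:.0%} reduction in non-IT load)")
```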
Today we are in the middle of the third generation of data center cooling – environmental. Our goal is to minimize the environmental impact of the high-capacity, high-efficiency cooling systems deployed in generations one and two. This is the generation of economization. The idea is to use outside air when possible to keep the data center floor cool.
How does this work? First we perform complex computational fluid dynamics (CFD) modeling to understand and optimize the flow of hot and cold air on the computer room floor. Then we use new green cooling technologies to maintain the appropriate temperature. For example, in our Ashburn VA2 data center we use a fan wall and custom ductwork to create the optimal airflow, drawing cool air from the outside. Our Dallas TX1 data center is one of the largest U.S. implementations of the KyotoCooling system, which uses a corrugated, spinning metal heat wheel to exchange heat between warm data center air and cool outside air, and it is water-free.
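To make the economization idea concrete, here is a minimal sketch of the decision a simple controller might make about when outside air can do the work. The temperature and humidity thresholds are invented for illustration; real controls also weigh enthalpy, dew point, filtration, and the ASHRAE operating envelope:

```python
# Minimal sketch of an air-side economizer mode decision.
# Thresholds are hypothetical; real controllers use site-specific setpoints.
def economizer_mode(outside_temp_c: float, outside_rh_pct: float,
                    supply_setpoint_c: float = 24.0) -> str:
    """Return the cooling mode a simple controller might select."""
    if outside_temp_c <= supply_setpoint_c - 2 and 20 <= outside_rh_pct <= 80:
        return "free-cooling"           # outside air alone can hold the setpoint
    if outside_temp_c <= supply_setpoint_c + 5:
        return "partial-economization"  # blend outside air with mechanical cooling
    return "mechanical"                 # too warm or humid: full mechanical cooling

print(economizer_mode(outside_temp_c=15.0, outside_rh_pct=55.0))  # free-cooling
```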
David Shepard, BASELAYER
David Shepard: One of the bigger trends we’re seeing is indirect outside-air cooling. This technology provides most of the efficiency benefits of outside-air economization without exposing critical IT equipment to the variable humidity or contaminants that may be present in direct outside-air applications. Economization, whether direct or indirect, allows operators to realize tremendous savings in operational cost with little or no extra risk to the system. That’s good for the environment and for the bottom line, which makes it possible to deploy more servers, or to be profitable in a competitive market. If you were driving a large truck down a hill, would you rather let it coast and save the fuel, or ride the brakes and the gas pedal at the same time?
We’ve also been seeing containment technologies requested almost universally. One benefit of a modular deployment is that containment is essentially built into the design. Any opportunity to reduce the bypass of treated air, or send air only where it needs to go, is an opportunity to gain efficiency and reduce operating costs.
Jack Pouchet, Emerson Network Power
Jack Pouchet: We see two divergent trends in data center cooling, depending on the need for scalability and the cost and availability of resources. In the first, hyperscale data center operators and a select group of others are moving to evaporative cooling, either direct or indirect, to use outside air to reduce cooling costs. This approach has the potential to consume a large amount of water, which is a growing problem in some areas. However, new economizer modes of operation, high-efficiency evaporative media, and sophisticated control systems are working together to optimize performance at the building and campus level and minimize water consumption.
The net effect is that evaporative cooling systems in certain microclimates are achieving annualized WUE (water usage effectiveness) ratios of 0.2 liters per kWh or better. For medium and small-scale deployments, very innovative multi-mode chiller plants can create chilled water with efficiency unheard of just a few years ago. Alternatively, where there is a desire to use renewable energy or where water is scarce, the thermal management solution of choice has become direct expansion (DX) cooling with integrated economization.
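For reference, WUE is annual site water consumption in liters divided by annual IT energy in kilowatt-hours. A quick sketch with illustrative numbers, not Emerson/Vertiv data, shows what a 0.2 L/kWh figure implies for a facility:

```python
# WUE = annual water consumed (liters) / annual IT energy (kWh).
# The water and load figures below are invented for illustration.
annual_water_liters = 1_750_000       # water used by evaporative cooling over a year
it_load_kw = 1_000                    # average IT load
it_energy_kwh = it_load_kw * 8_760    # kWh over one year (8,760 hours)

wue = annual_water_liters / it_energy_kwh
print(f"WUE = {wue:.2f} L/kWh")       # ~0.20 L/kWh, in line with the figure quoted above
```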
Robert McClary, FORTRUST
Rob McClary: I think data center operators are getting better all the time in their approaches to cooling and efficiency. The types of IT loads being placed in data centers continue to evolve, and they continue to become denser (i.e., more kW per rack). Recently, however, we are seeing a new trend of IT environments with dynamic electrical load profiles, as opposed to the fairly static load profiles that have traditionally been the norm. By dynamic loads I mean IT environment loads that drastically spin up and down over the course of a 24-hour period.
Big data analytics, cloud and other high-performance computing workloads, many of which have these fluctuating loads, are now prompting data centers and their supporting electrical and cooling infrastructure to be more responsive. This is pushing operators and designers to cool their data centers more efficiently and more effectively: to deliver cooling that precisely matches the IT hardware’s real-time requirements and to reduce it when it is not needed. Moving forward, data center infrastructure will need to respond to the need and timing of the IT environment’s load.
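A minimal sketch of that idea follows: a controller that scales fan output with a fluctuating IT load rather than running flat out. The load profile, design load, and minimum speed are invented for illustration, not FORTRUST parameters:

```python
# Minimal sketch of cooling that tracks a fluctuating IT load.
# Design load, floor speed, and the hourly profile are hypothetical.
def fan_speed_pct(it_load_kw: float, design_load_kw: float = 500.0,
                  min_speed: float = 20.0) -> float:
    """Scale fan speed with load, never dropping below a safe floor."""
    demand = 100.0 * it_load_kw / design_load_kw
    return max(min_speed, min(100.0, demand))

# A day with analytics jobs spinning up and down (kW, invented values):
hourly_load = [120, 110, 100, 95, 300, 480, 450, 200, 150, 420, 380, 130]
for hour, load in enumerate(hourly_load):
    print(f"hour {hour:02d}: load {load:>3} kW -> fans at {fan_speed_pct(load):5.1f}%")
```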
Looking back over the last 15 years, we started out with loads of 1-2 kW per rack or less. Five to six years later we saw increasing voltages and amperages in branch circuits to the IT environments, and densities increased to 4-5 kW per rack. A few years after that, three-phase circuits with higher voltages and amperages started driving densities even higher.
Now we are seeing 10 to 25 kW per rack or greater becoming more common. I think those are the trends that will continue, and they are driving how people approach data center design and cooling: higher volumes of cooling air delivered to the aisles, containment, and alternative cooling sources and methods. Additionally, self-modulating cooling infrastructure and modular approaches are starting to gain ground in the industry. This evolution needs to be accelerated.
Ted Behrens, Chatsworth Products
Ted Behrens: Air-side economization, or “free air” cooling, is certainly proving out as a key trend across many geographies. It has become pervasive only as the industry has challenged legacy standards such as server inlet temperatures and air particulate limits.
I believe that advancements in semiconductor technologies, coupled with the adoption of GPUs (graphics processing units), will allow the industry to continue creating more headroom for existing cooling methods, whereas maybe five years ago many predicted liquid cooling would become a pervasive technology.
NEXT: Our panel looks at trends in cloud computing, including the rise of specialty clouds and the super-sizing of capacity requirements.
Keep pace with the fast-moving world of data centers and cloud computing by following us on Twitter and Facebook, connecting with me on LinkedIn, and signing up for our weekly newsletter.