Addressing the massive inefficiencies of data centre cooling

An often overlooked aspect of the data centre, the cooling system could be devouring more energy than you suspect.

Johannesburg, 11 Jul 2017

Most of us know, reasonably enough, that cooling is a big consumer of energy. We usually reach that conclusion via the air conditioner, perhaps while watching a car's petrol gauge creep down on a very hot day.

But exactly how energy-hungry is the process of cooling? In the context of a data centre, the answer may surprise you. Based on the calculations presented in Schneider Electric's white paper, Optimize Data Center Cooling with Effective Control Systems, temperature management systems can consume more than a third of the energy used by the entire data centre, making them the biggest consumer after the IT equipment itself. Set the IT equipment aside, and cooling accounts for a whopping 75% of the remaining energy consumption.
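As a rough sanity check on how those two figures can both hold, here is a back-of-the-envelope reconciliation in Python. The wattages are assumed purely for illustration and do not come from the white paper; only the two percentages being checked are from the text above.

```python
# Back-of-the-envelope check (assumed figures, not from the white paper):
# if IT gear draws about 55% of total facility power and cooling about 34%,
# cooling is both "more than a third" of the total and roughly 75% of the
# non-IT remainder.
total_kw = 1000.0     # assumed total facility draw
it_kw = 550.0         # assumed IT equipment draw (~55% of total)
cooling_kw = 337.5    # assumed cooling draw (~34% of total)

share_of_total = cooling_kw / total_kw
share_of_non_it = cooling_kw / (total_kw - it_kw)

print(f"Cooling as share of total energy:  {share_of_total:.0%}")   # 34%
print(f"Cooling as share of non-IT energy: {share_of_non_it:.0%}")  # 75%
```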

Those are hefty figures, to say the least. But the paper reveals an even more astounding fact: very few data centres take the issue to heart. Perhaps the utilitarian nature of cooling has led operators to overlook the details; in many cases, cooling systems are simply given less thought than other aspects of data centre design.

Several factors contribute to uneconomical cooling. One is the habit of installing cooling capacity far beyond what the facility's footprint requires, even though facilities often run at half load or less. Another is a mix of devices from different vendors. A common theme soon emerges: the key challenge is a lack of effective control methods that can adjust to the dynamic data centre environment.

The nuances of control methods create many issues, including human error and demand fighting, where neighbouring cooling units work against each other, one humidifying the air while another dehumidifies it. Yet the right system could realise real savings. Getting there, though, requires a discussion around cooling, which in turn needs an understanding of the different levels of cooling control. The benefits are manifold: Google, for example, used artificial intelligence and automation to cut its data centre cooling costs by 40%. Yet such advanced technology cannot be introduced without several foundational principles in place.

The core principles of effective cooling systems are creating central control, promoting interoperability, embracing automation, and simplifying maintenance. Like all principles, these should be ingrained at all levels and continually revisited. In other words, effective cooling requires a top-down response driven through a tried-and-tested framework.

Schneider Electric structures that framework as four distinct levels in the cooling hierarchy: device level, group level, system level and, finally, facility level. Each tier builds on the one below, extending the environmental controls a data centre deploys from the individual device through to the entire facility.
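To make the idea of tiered control concrete, here is a minimal sketch of the two lowest tiers. The class names and behaviours are illustrative assumptions, not the white paper's definitions; the point is only how a higher tier coordinates the tier below it.

```python
# Toy sketch of a layered cooling-control hierarchy (all names and
# behaviours hypothetical; the white paper defines the actual levels).

class DeviceController:
    """Device level: one cooling unit holding its own setpoint."""
    def __init__(self, name, setpoint_c):
        self.name, self.setpoint_c = name, setpoint_c

    def adjust(self, measured_c):
        # Simple proportional nudge toward the local setpoint.
        return self.setpoint_c - measured_c

class GroupController:
    """Group level: coordinates neighbouring units so they do not
    fight each other over the same air volume."""
    def __init__(self, devices):
        self.devices = devices

    def coordinate(self, measured_c):
        # Share one common setpoint across the group instead of
        # letting each unit chase its own.
        common = sum(d.setpoint_c for d in self.devices) / len(self.devices)
        for d in self.devices:
            d.setpoint_c = common
        return [d.adjust(measured_c) for d in self.devices]

# System and facility levels would sit above this, steering groups
# against plant-wide targets such as chilled-water temperature and
# overall facility load.

crac_a = DeviceController("CRAC-A", 22.0)
crac_b = DeviceController("CRAC-B", 20.0)  # mismatched setpoints invite demand fighting
room = GroupController([crac_a, crac_b])
print(room.coordinate(measured_c=23.0))    # both units now pull toward 21 degrees C
```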

Where many data centres approach cooling as a box-ticking exercise, the white paper's recommendations lay out guidelines for a far more thorough and reliable cooling strategy. The paper also proposes a three-step approach to embedding this kind of thinking: select an appropriate cooling architecture, adopt effective cooling control systems, and then manage airflow in the IT space.

Explore the benefits and considerations of the four levels, as well as the steps that culminate in an effective cooling strategy for any type of data centre, by downloading the free Optimize Data Center Cooling with Effective Control Systems white paper today. Learn how one of the biggest cost centres in your facility can be reined in, cutting costs, operational burdens and future investment.
