
Managing trends that could harm data centre cooling


Johannesburg, 03 Jun 2014

Some data centre trends and best practices - most aimed at improving performance, efficiency and manageability under normal operating conditions - may adversely impact operating conditions following a power outage.

This is according to a published paper by Schneider Electric, titled "Data centre temperature rise during a cooling system outage", authored by Paul Lin, senior research analyst at Schneider Electric's Data Centre Science Centre; Simon Zhang, a Schneider Electric senior research engineer working on data centre design, operation and management software platforms; and James van Gilder, responsible for Schneider Electric's data centre cooling software encompassing both software development and related research.

Within the paper, the authors explore these trends and practices, focusing on:

* right-sizing cooling capacity,
* increasing power density and virtualisation,
* increasing IT inlet and chiller set-point temperatures, and
* air containment of racks and rows.

Lin, Zhang and Van Gilder say that right-sizing the capacity of the overall cooling system (that is, aligning it to the actual IT load) provides several benefits, including increased energy efficiency and lower capital costs. However, excess cooling capacity is desirable when faced with unacceptably high temperatures following a power outage. In fact, if the total cooling capacity perfectly matched the heat load, the facility theoretically could never be cooled back to its original state, because capacity that exactly matches the IT load leaves no net capacity to remove the heat that accumulated during the outage. Just as multiple window air conditioners cool a bedroom more quickly than a single unit, additional computer room air handler (CRAH) or computer room air conditioner (CRAC) capacity helps return the data centre to pre-power-failure conditions quickly.
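The underlying energy balance is simple enough to sketch. The Python snippet below (not from the paper; all figures are illustrative assumptions) shows why only the capacity above the IT load removes accumulated heat, and why capacity that merely matches the load never returns the room to its original state:

```python
# Lumped thermal model of a data centre room (illustrative assumptions,
# not figures from the paper). Only cooling capacity in excess of the
# IT load removes the heat that accumulated during an outage.

def recovery_time_s(it_load_kw, cooling_capacity_kw,
                    thermal_mass_kj_per_c, temp_excess_c):
    """Seconds needed to remove a `temp_excess_c` rise from the room."""
    net_cooling_kw = cooling_capacity_kw - it_load_kw
    if net_cooling_kw <= 0:
        # Capacity merely matches (or trails) the load: the excess
        # heat is never removed and the room never recovers.
        return float("inf")
    excess_heat_kj = thermal_mass_kj_per_c * temp_excess_c
    return excess_heat_kj / net_cooling_kw  # kJ / kW = seconds

# Assumed example: 500 kW IT load, 200 000 kJ/C of room thermal mass,
# and a 10 C temperature rise accumulated during the outage.
for capacity_kw in (500, 550, 650):
    t = recovery_time_s(500, capacity_kw, 200_000, 10)
    if t == float("inf"):
        print(f"{capacity_kw} kW capacity -> never recovers")
    else:
        print(f"{capacity_kw} kW capacity -> recovers in about {t / 60:.0f} min")
```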

When it comes to increasing power density and virtualisation, Lin, Zhang and Van Gilder maintain that compaction of IT equipment produces increased rack power densities in the data centre. The emergence of equipment like blade servers and certain communications equipment can result in rack power densities exceeding 40 kW per rack.

They add that another technology trend, virtualisation, has greatly increased the ability to utilise and scale compute power. For example, virtualisation can increase the CPU utilisation of a typical non-virtualised server from 5% to 10% up to 50% or higher.

Since both increased rack power density and virtualisation make it possible to dissipate more heat in a given space, they can also reduce the time available to data centre operators before IT inlet temperatures reach critical levels following a power outage.
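The same lumped thermal model suggests how sharply density cuts into that window: with cooling lost entirely, doubling the heat load roughly halves the ride-through time. A minimal sketch, with figures that are illustrative assumptions rather than values from the paper:

```python
# Ride-through estimate with the same lumped model (all figures are
# illustrative assumptions): with cooling lost entirely, the IT heat
# load warms the room's thermal mass until inlet air goes critical.

def ride_through_s(heat_load_kw, thermal_mass_kj_per_c,
                   start_temp_c, critical_temp_c):
    """Seconds until the room reaches `critical_temp_c` with no cooling."""
    headroom_c = critical_temp_c - start_temp_c
    return thermal_mass_kj_per_c * headroom_c / heat_load_kw  # kJ / kW = s

# Same assumed room (200 000 kJ/C), starting at 22 C with a 32 C limit.
# Compaction and virtualisation raise the heat load and shrink the window:
for load_kw in (250, 500, 1000):
    minutes = ride_through_s(load_kw, 200_000, 22, 32) / 60
    print(f"{load_kw} kW heat load -> about {minutes:.0f} min of ride-through")
```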

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) technical committee 9.9 (mission-critical facilities, technology spaces and electronic equipment) developed and expanded the recommended thermal operating envelope for data centres. Increasing the IT inlet and chilled water set-point temperatures increases the number of hours that cooling systems can operate in economiser mode, explain Lin, Zhang and Van Gilder.

It has been estimated that for every one degree Celsius increase in chiller set-point temperature, about 3.5% of the chiller power can be saved. In other words, it gets increasingly expensive to cool chilled water the further the set-point temperature is reduced below a fixed ambient temperature. While this applies directly to chilled-water systems, the same trend applies to air-cooled DX systems. Consequently, higher IT inlet temperatures leave less time for data centre operators to react in a power-failure scenario.
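A quick back-of-envelope check of that figure, applying the 3.5% saving compounded per degree (the paper's exact basis is not stated, so the compounding here is an assumption):

```python
# Back-of-envelope check of the roughly 3.5%-per-degree figure quoted
# above. The saving is applied here as compounding per degree; the
# paper's exact basis is not stated, so treat this as an assumption.

def chiller_power_fraction(set_point_increase_c, saving_per_c=0.035):
    """Fraction of the original chiller power after raising the set point."""
    return (1 - saving_per_c) ** set_point_increase_c

# Hypothetical example: raising the chilled water set point from 7 C to 12 C.
increase_c = 5
saved = 1 - chiller_power_fraction(increase_c)
print(f"+{increase_c} C set point -> about {saved * 100:.0f}% chiller power saved")
```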

Lastly, when it comes to air containment of racks and rows, the authors say that containment can improve the predictability and efficiency of traditional data centre cooling systems, such as perimeter cooling systems with a raised floor or hard floor (that is, flooded supply). However, containment systems prevent air streams from mixing with the rest of the data centre room, and this affects the temperature rise during cooling outages. The temperature rise will vary for different containment systems depending on how the cooling equipment is connected to backup power.

For hot-aisle containment with row-based chilled-water coolers, if the coolers are not on UPSes and the containment doors remain shut during a loss of cooling airflow, then a substantial amount of hot air can recirculate into the IT inlets through various leakage paths, and IT inlet temperatures will rise quickly. If the coolers are on UPSes but the chilled water pumps are not, then the coolers will pump hot air into the cold aisle without providing active cooling. In this case, only the thermal mass of the cooler (including the cooling coils and the water inside them) is utilised. If both the coolers and the chilled water pumps are on UPSes, then the temperature rise depends on the chilled water plant configuration (that is, storage tank configuration, chiller start time and more).

For cold-aisle containment with perforated tiles, the thermal mass in the raised-floor plenum associated with the concrete slab, chilled water pipes and so on can help moderate the temperature rise. For cold-aisle containment with row-based chilled-water coolers, if the coolers are not on UPSes, then the negative pressure in the containment system will draw hot exhaust in through the rack and containment structure leakage paths, thereby raising IT inlet temperatures. If the row-based coolers are on UPSes, then the temperature rise depends on the chilled water plant configuration (that is, storage tank configuration, chiller start time and more).

Rack-air containment systems behave similarly to cold-aisle and hot-aisle containment with row-based coolers.

Despite the challenges posed by recent data centre trends, Lin, Zhang and Van Gilder state that it is possible to design the cooling system of any facility to allow for long runtimes on emergency power.

Depending on the mission of the facility, it may be more practical to maximise runtimes within the limits of the current architecture and, at the same time, plan to ultimately power down IT equipment during an extended outage. Lin, Zhang and Van Gilder recommend four strategies to slow the rate of heating: maintain adequate reserve cooling capacity; connect cooling equipment to backup power; use equipment with shorter restart times; and use thermal storage to ride out chiller-restart time.
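The fourth strategy lends itself to a simple sizing sketch. Assuming (illustratively; these are not figures or recommendations from the paper) that a chilled-water storage tank must absorb the full IT heat load while the chillers restart, the required volume follows from a basic energy balance:

```python
# Sizing sketch for a chilled-water storage tank that rides out the
# chiller restart (illustrative assumptions, not recommendations from
# the paper). The tank must absorb the full heat load for the restart
# period while its water warms by an acceptable margin.

WATER_DENSITY_KG_M3 = 1000.0   # density of water
WATER_CP_KJ_KG_C = 4.186       # specific heat of water

def storage_tank_m3(heat_load_kw, ride_through_s, allowed_rise_c):
    """Chilled-water volume needed to absorb the load during restart."""
    energy_kj = heat_load_kw * ride_through_s  # kW * s = kJ
    return energy_kj / (WATER_DENSITY_KG_M3 * WATER_CP_KJ_KG_C * allowed_rise_c)

# Assumed example: 500 kW load, 15-minute chiller restart, water allowed
# to warm by 5 C before supply temperatures become unacceptable.
volume = storage_tank_m3(500, 15 * 60, 5)
print(f"Required storage: about {volume:.0f} m^3 of chilled water")
```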


Schneider Electric

As a global specialist in energy management with operations in more than 100 countries, Schneider Electric offers integrated solutions across multiple market segments, including leadership positions in utilities and infrastructure, industries and machine manufacturers, non-residential buildings, data centres and networks, and residential. Focused on making energy safe, reliable, efficient, productive and green, the Group's 150 000-plus employees achieved sales of 24 billion euros in 2013, through an active commitment to help individuals and organisations make the most of their energy.

www.schneider-electric.com

Editorial contacts

Debbie Sielemann
PR Connections
(+27) 082 414 4633
schneider@pr.co.za