
Downsize the data centre

By Lezette Engelbrecht, ITWeb online features editor
Johannesburg, 25 Mar 2011

It's one of the areas traditionally viewed as a major energy and cost drain - the data centre. However, thinking around data centre design and operation is shifting to meet the needs of a rapidly changing environment, according to global experts.

"In the world of IT, everything has cascade effects, and in data centres the traditional methods of design no longer work without understanding the outside forces that will have an impact on data centre costs, size and longevity," says Gartner VP and chief of research for infrastructure, David Cappuccio.

These forces include growing environmental pressures, the need for smarter designs, and the rise of cloud computing, according to a recent Gartner study. "However, these very forces can actually work in your favour, providing the means to apply innovative designs, reduce capital and operating costs, increase long-term scale, and keep up with the business," adds Cappuccio.

As squeezing maximum efficiency out of data centres gains importance, so the ways of doing it become more creative. In a recent online seminar hosted by research network Focus Interactive, global experts shared their views on how to tweak data centre design, operation and monitoring to transform it from energy gorilla to gazelle.

Barry Stevens, MD of business development firm TDB America, notes that energy conservation has become an area of extreme interest and intense effort. “Energy savings are no longer only a smart practice but a necessity for companies to improve earnings, become globally competitive, and reduce overhead expenses by lowering costs of utilities.”

As data centres continue to grow in number and size, says Stevens, energy demand dedicated to powering these facilities is also growing at an astronomical rate. “With such high power requirements, even small decreases in power consumption can have a positive effect on an already constrained grid.”

Matthew Koerner, principal at project management company Critical Project Services, says there are essentially two categories for green trends in data centres: short- and long-term. Short-term strategies generally involve green approaches to the physical construction of the data centre, such as using regional materials and separating and recycling waste.

Long-term methods tend to focus on increasing efficiency through operational practices, such as running chilled-air distribution at higher temperatures, or allowing a wider temperature difference between supply and return air, which reduces the need for humidification and lowers power usage effectiveness (PUE), notes Koerner.
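
The article does not define PUE; as a rough reference, it is conventionally the ratio of total facility power to the power delivered to IT equipment, so trimming cooling overhead pulls it towards the ideal of 1.0. A minimal sketch, with illustrative figures only:

```python
# Power usage effectiveness (PUE): total facility power divided by the power
# delivered to IT equipment. 1.0 is the ideal; lower is better.
# The figures below are illustrative, not from the article.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return PUE given average facility and IT loads in kW."""
    return total_facility_kw / it_equipment_kw

print(pue(1500, 1000))  # 1.5: 500 kW of cooling and other overhead
print(pue(1300, 1000))  # 1.3: the same IT load with leaner cooling overhead
```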

By looking at ways to increase efficiencies on both these fronts, businesses can begin making significant energy and cost savings.

Holistic by design

When it comes to a data centre project, the actual building is often a second priority, says Brian Richard, science and technology team leader at US architecture and planning firm, Kirksey Architects.

“It's important for us to consider a very holistic approach when we're putting together a building solution for a data centre,” he says, explaining that the project team is usually made up of experts ranging from the building engineering side to the data centre technology side.

“That holistic approach of knowing the data centre needs to be an operational piece of equipment, and not just a building solution, is incredibly important.”

He adds, however, that although businesses may be spending the bulk of their money on what goes on inside one of these facilities, they still have to maintain the facility over time. “So having a sustainable solution means you're as efficient as possible from day one, and you've got a facility that you can tweak and change over time as technology progresses and adapts.”

According to KC Mares, president of data centre efficiency firm MegaWatt Consulting, smart designs end up costing less, because they involve looking at the needs of the data centre in a holistic and pragmatic way.

“Look at the areas you can change in the way the data centre is designed and operated so you're not doing it in the same old way. Have a good design script from early on, with input from everyone - from those who manage the facilities, the network and IT operations, to hardware procurement and software tools, as well as the financing decision-makers - to find out what they need and how to design the data centre to meet those needs.”

Mercury rising

The conventional thinking around data centre temperature and humidity is also shifting, notes Mares. “In the past, we used to think that servers and processors needed to be kept very cool, but Intel themselves have specifications for temperature on their processors at 135°C. These things can run quite high and be perfectly fine.

“It doesn't mean we should be running servers at 135°C, but today Microsoft is running their data centres at around [32°C] and Yahoo is pushing those temperatures to well above [27°C]. So generally speaking, we can broaden these temperature limits.

“A roughly one degree Fahrenheit [0.56°C] increase in inlet temperature of a server decreases total load on the mechanical system by about 2%, so these things make a big difference.”
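
As a back-of-the-envelope illustration of that rule of thumb, the sketch below compounds the roughly 2% saving per degree Fahrenheit; the compounding model and the 400 kW starting load are assumptions for illustration only.

```python
# Illustration of the rule of thumb Mares cites: each 1°F (0.56°C) rise in
# server inlet temperature trims mechanical (cooling) load by roughly 2%.
# The compounding model and the 400 kW starting load are assumptions.

def estimated_mechanical_load(base_load_kw: float, inlet_increase_f: float,
                              saving_per_degree_f: float = 0.02) -> float:
    """Estimate cooling load after raising the inlet temperature set point."""
    return base_load_kw * (1 - saving_per_degree_f) ** inlet_increase_f

# Raising the inlet set point by 5°F on a 400 kW mechanical load:
print(round(estimated_mechanical_load(400, 5), 1))  # ~361.6 kW, about 10% lower
```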

Another method that's very popular, notes Koerner, is the economisation of air and water systems. “A great example is the new Yahoo data centre built in New York that uses an air-side economiser for passive cooling.” He says using cold air from the surrounding environment has greatly reduced the amount of energy and water used for the centre's air conditioning systems.

“They use around 40% less power overall in the data centre and have seen a roughly 95% reduction in power attributed to cooling, and also use about 99% less water than a typical data centre.”

While using the climate as a cooling mechanism may be impossible for some, Koerner says managing heat effectively can also help rack up efficiency gains. This can be done by capturing heat directly from servers, using what are called chimneys on the IT racks, he explains.

For example, one company found that by directly capturing the exhaust heat from servers and not allowing it to mix with the air being delivered to servers, it reduced the amount of power used for the air conditioning system by more than 25%.

“In other words, they were able to simply turn off 25% of the air conditioning serving that data floor and to them it was an obvious choice to retrofit other heat capture technology. It's something that can be done in a relatively short amount of time, is not intrusive to the data centre, and is pretty inexpensive - you're talking about paybacks that are calculated in months, not years.”
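
A minimal payback sketch along those lines, with a hypothetical retrofit cost, cooling load and tariff, shows why savings of this size translate into paybacks measured in months rather than years:

```python
# A minimal payback sketch for a retrofit such as the rack "chimney" heat
# capture Koerner describes. The retrofit cost, cooling load and tariff are
# hypothetical; they only illustrate why payback lands in months, not years.

def payback_months(retrofit_cost: float, cooling_load_kw: float,
                   saving_fraction: float, price_per_kwh: float) -> float:
    """Months needed to recover the retrofit cost from avoided cooling energy."""
    kwh_saved_per_month = cooling_load_kw * saving_fraction * 24 * 30
    return retrofit_cost / (kwh_saved_per_month * price_per_kwh)

# A 25% saving on a 300 kW cooling load at $0.10/kWh, for a $40 000 retrofit:
print(round(payback_months(40_000, 300, 0.25, 0.10), 1))  # ~7.4 months
```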

A little fine-tuning can also go a long way, says Koerner, in better controlling the centre's mechanical system. “By adjusting chilled water supply and return set points and air supply and return set points for your air handling system, you can finely tune the temperature of water and air that's delivered,” he explains.

“One of the unseen benefits is that by increasing your supply air temperature and thus increasing the water temperature, you actually remove a lot less water from the system, so it requires a lot less water for humidification.”

This creates more of a balance within the system, without pumping a lot of humidification water into the system, which saves both water and considerable energy, explains Koerner.

“So certain slight modifications don't add a lot of cost, or have a big physical impact on the data centre, but could have a pretty profound impact from an energy-saving standpoint, and ultimately, an operational cost-saving standpoint.”

Old vs new

On the hardware side, Koerner says while it makes sense to replace outdated machines, it should be done with a balance between cost, efficiency and the environment in mind.

“Take a UPS, for example. Even as recently as 10 to 15 years ago, when these systems were lightly loaded their efficiencies dropped dramatically, to the 50% to 70% range. Today, some of the newer technologies allow them to be 99% efficient, which obviously saves a considerable amount, energy and cost-wise.”

However, he points out one also needs to take into consideration the carbon footprint of the manufacturing of some of this equipment. “So if you've got a two-year-old system that's 96% efficient and you want to replace it with one that's 98% efficient, and you take into account the cost and the manufacturing footprint, it may not be worth it.”
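
One hedged way to frame that replace-or-keep question is to weigh the energy saved each year by the small efficiency gain against the cost and manufacturing footprint of the new unit; every figure in the sketch below is an assumption for illustration, not data from the article.

```python
# A hedged framing of Koerner's replace-or-keep question for a UPS: weigh the
# energy saved by a small efficiency gain against the cost and manufacturing
# footprint of the new unit. All figures (load, tariff, grid carbon intensity,
# embodied carbon) are assumptions for illustration, not data from the article.

HOURS_PER_YEAR = 24 * 365

def annual_ups_loss_kwh(load_kw: float, efficiency: float) -> float:
    """Energy dissipated in the UPS over a year at a steady output load."""
    return load_kw * (1 / efficiency - 1) * HOURS_PER_YEAR

saved_kwh = annual_ups_loss_kwh(500, 0.96) - annual_ups_loss_kwh(500, 0.98)
print(round(saved_kwh))                      # ~93 112 kWh/yr saved at 500 kW

# Whether that justifies a replacement depends entirely on the real inputs:
print(round(saved_kwh * 0.10))               # ~$9 311/yr at an assumed $0.10/kWh
print(round(60_000 / (saved_kwh * 0.5), 1))  # ~1.3 years to offset an assumed
                                             # 60 t CO2 manufacturing footprint
                                             # at 0.5 kg CO2/kWh grid intensity
```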

Mares adds that simply switching off unused hardware can bring major energy and cost savings. “There have been many industry studies that show about 30% of all servers in all data centres across the US are completely unused, which means less than 3% utilisation. That's at a cost of almost $25 billion a year in the US alone in electricity.”

He adds that the majority of savings from virtualisation come simply from consolidating and shutting off servers that weren't needed or aren't being used.

Finally, says Mares, it comes down to the old 'you can't fix what you don't measure' mantra. “Today, less than 20% of IT managers actually see or pay the utility bill.” He adds that often the simple task of measuring and watching energy use in the data centre, and sharing this information, makes the biggest difference in tracking it.

“The most important thing is to share this information among all the different users of the data centre. It's about combining every part of the centre into one main budget and one main management tool, so all these metrics are watched and decisions can be made together.”
