The new age data centre
Running a data centre, whether you're just providing the real estate or managing the servers, has become an increasingly complex and specialised business - not to mention a very big one. But at the same time, the emergence of independent data centre service providers means it's not only mega-enterprises that can afford to have one.
“If your business has a server under someone's desk with a sign on it saying 'do not switch off', it really shouldn't be there,” says Vishal Mothie, technical specialist at Novell SA.
A server that shouldn't be switched off is probably one that's doing something important for the business - and if it's important, it deserves better treatment. Moving it to a hosted data centre, and possibly virtualising it, not only provides a reliably cooled, powered and dust-free environment, it also creates new possibilities for backup and data recovery on the inevitable day when the server dies.
Multi-tenanted data centres “are able to leverage economies of scale,” says Richard Sutherland, the portfolio manager for dynamic infrastructures at Fujitsu. “It takes the complexity and the associated costs away from the organisation. Small to mid-tier organisations have been quickest to adopt because they can't justify the cost of that kind of technology otherwise.”
“Our fastest-growing customer segment is companies with around 1 000 users,” confirms Rob Gilmour, MD of data centre provider RSAWeb. “It makes no financial sense for them to try to run their own data centres because the requirement from an infrastructure point of view is huge. They just can't afford the capital cost. But virtualisation has changed everything - one physical server can support multiple customers.”
“A lot of people aren't actually buying servers anymore,” confirms Shane Chorley, executive head of Vox Core, the infrastructure division of Vox Telecom. “Data centre and service providers are offering incredibly powerful blade and VMware environments in which you can host virtual servers. But that takes a lot of power and data centre space to deploy, so it makes sense to share a single, well-managed location.
“You will use a lot less power in a single data centre, and they deal with cooling and power management issues much better. Most of them, for example, have hot and cold aisle technologies that push cold air into the right places, which means you need a lot less cooling overall. It's much more efficient.”
But, warns Gilmour, planning a move like this can be challenging.
“This is a fundamental architectural change and it's hard even to know what's possible without doing a lot of research. Offering consulting and advice has become a large part of our business. Some of the complexity comes from the fact that there's not one requirement per company - there are different requirements for each application. So e-mail servers and Web site front ends, for example, might work well in a public cloud environment; but companies might want to keep their databases and billing systems in a private cloud.”
For those who are still queasy at the thought of entrusting their IT infrastructure to an outside party, there's another option: maintaining ownership of your own physical hardware, but moving it to a properly serviced data centre environment.
“We sell space, power, cooling and security,” says Lex van Wyk, MD at Teraco Data Environments, SA's first provider of vendor-neutral data centre facilities. “We don't own any hardware.”
Equally importantly, Teraco doesn't own any networks either. Instead, customers have a choice of multiple carriers all sharing the same environment. “All of SA's top 10 telcos connect into our data centres,” says Van Wyk. “That gives customers tremendous flexibility to choose providers according to their needs, and change when necessary.”
“It's like a service provider hotel suite,” says Chorley. “As a customer you can connect to different network providers for different reasons and applications, and move your services at the drop of a hat. It gives customers a lot more choice. A number of other providers are now looking at doing the same thing as Teraco.”
Whichever option you choose, how your data centre is monitored and managed is critical. The first set of requirements relates to the physical environment: not just how it's set up, but how it's run day to day.
“Lots of outages are caused by human error,” says Teraco's Van Wyk. “It's down to silly things like running out of diesel for the generators. You need an SLA that offers at least 99.99% guaranteed uptime, and the provider needs to be able to show the design, processes and track record that prove they can do it.”
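Those uptime percentages translate into surprisingly small downtime budgets. A minimal sketch of the arithmetic (illustrative only; real SLAs define measurement windows and exclusions):

```python
# Convert an SLA uptime percentage into an allowed-downtime budget.

def downtime_budget_minutes(uptime_percent: float, days: int = 365) -> float:
    """Minutes of downtime permitted over the period at the given uptime %."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_percent / 100)

for sla in (99.9, 99.99, 99.999):
    print(f"{sla}% uptime -> {downtime_budget_minutes(sla):.1f} min/year")
```

At 99.99%, the provider has less than an hour of unplanned downtime to spend in an entire year, which is why design, process and track record all matter.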
Then there's the not-so-simple matter of managing server environments.
“A lot of people went into virtualisation thinking it was a magic bullet that would make everything simple,” says Rory Green, data centre and virtualisation product sales specialist for Cisco. “But if anything, they got an extra layer of management. Virtualisation tends to highlight any existing management pain points.”
Part of the problem is the ease with which virtual servers can be provisioned. “I've spoken to a number of customers who had great processes in place for procuring server hardware, where the business case had to be made before anything could happen. With virtualisation, it's pretty much possible to right-click and create a new server. People can very quickly get to a point where they don't actually know how many servers they have or where they are running. The fact that you don't need a purchase order doesn't mean you don't still need strict governance processes in place.”
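One practical form that governance can take is a periodic sprawl check that flags virtual machines lacking ownership metadata. A minimal sketch: the inventory list and tag names here are invented stand-ins for whatever your hypervisor API (vSphere, Hyper-V and so on) actually returns.

```python
# Sketch of a VM sprawl check: flag machines missing governance metadata.
# Field and tag names are illustrative assumptions, not a real API.

REQUIRED_TAGS = {"owner", "cost_centre", "purpose"}

def untagged_vms(inventory):
    """Return names of VMs missing any required governance tag."""
    return [vm["name"] for vm in inventory
            if not REQUIRED_TAGS <= set(vm.get("tags", {}))]

inventory = [
    {"name": "mail01",
     "tags": {"owner": "ops", "cost_centre": "IT", "purpose": "mail"}},
    {"name": "test-clone-7",
     "tags": {"owner": "dev"}},  # right-clicked into existence, never registered
]
print(untagged_vms(inventory))  # -> ['test-clone-7']
```

Run on a schedule, a check like this surfaces the servers nobody made a business case for before they multiply.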
The G factor
In fact, one of the key drivers of what is going to happen in data centres over the next few years is the King III report on corporate governance, which for the first time puts IT governance in the spotlight. The requirement to consider sustainability and implement green IT principles, for example, arguably supports a move to more centralised, hosted environments where it is easier to realise energy efficiencies (see sidebar).
But it also means that IT managers can no longer piggyback on financial compliance processes. They need to think more like accountants in their own right.
“Things have almost come to the point where you can assume the hardware will do whatever you need it to,” says Green. “Now it's more important to take care of the other stuff. The bigger organisations typically have some sort of change management process in place already so it's a smaller step for them. The smaller guys are in more danger of creating chaos in their environments. They might go to bed one night with 50 machines and wake up with 300.”
Creating (and then actually implementing) good processes is one way to keep things under control. Another is to automate as much as possible - things have moved way beyond the point where it is feasible to keep track of everything manually.
For example, says Novell's Mothie: “You want your security policies to follow workloads as they move around. There are some workloads, for example, that can burst out into the public cloud when they need extra resources, and some you need to keep private. You need to build a lot of intelligence into the environment so that resources are dynamically allocated according to a clear policy.”
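The kind of policy Mothie describes can be sketched as a simple placement rule: each workload carries a data classification, and the scheduler consults policy before bursting it to public capacity. The classifications and targets below are illustrative assumptions, not any particular product's model.

```python
# Sketch of policy-following placement: burst to public cloud only
# when the workload's classification permits it. Names are illustrative.

PLACEMENT_POLICY = {
    "public":       {"private_dc", "public_cloud"},  # may burst out
    "confidential": {"private_dc"},                  # must stay private
}

def place(workload, preferred="public_cloud"):
    """Return the preferred target only if policy allows it."""
    targets = PLACEMENT_POLICY[workload["classification"]]
    return preferred if preferred in targets else "private_dc"

print(place({"name": "web-frontend", "classification": "public"}))       # -> public_cloud
print(place({"name": "billing-db", "classification": "confidential"}))   # -> private_dc
```

The point is that the decision is made by policy attached to the workload, not by whoever happens to be provisioning it that day.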
A good service management tool or dashboard should offer a single view of the physical and virtual data centre environment from a business service perspective, says Mothie.
“It's important that the focus should be on business services rather than on the infrastructure itself,” he says. “In the case of an outage or any kind of problem event, you need to know exactly which services are affected and which are most important to your business, so you can deal with those first.
“It's only too common for people to discover which services depend on a particular server after they've rebooted it. You need to know beforehand. That also helps you define which workloads are critical for your business continuity.”
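Knowing beforehand amounts to maintaining a service-to-server dependency map that can be queried before any change. A minimal sketch, with an invented mapping standing in for what a real CMDB or service management tool would supply:

```python
# Sketch of a dependency map: before rebooting a host, look up which
# business services it underpins, most critical first. Data is illustrative.

DEPENDS_ON = {
    "billing":  {"db01", "app02"},
    "webshop":  {"web01", "db01"},
    "intranet": {"web02"},
}
PRIORITY = {"billing": 1, "webshop": 2, "intranet": 3}  # 1 = most critical

def impact_of(server):
    """Services affected by taking `server` down, ordered by priority."""
    hit = [svc for svc, hosts in DEPENDS_ON.items() if server in hosts]
    return sorted(hit, key=PRIORITY.get)

print(impact_of("db01"))  # -> ['billing', 'webshop']
```

The same map doubles as the input for business continuity planning: the services at priority 1 define which workloads are critical.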
Business continuity and disaster recovery are, in fact, themselves key drivers of the move towards independent data centres.
“The minimum safety net for disaster recovery should include at least one external facility in a separate location,” says Bradley Janse van Rensburg, solutions design manager for Continuity SA.
“We recommend that our clients have at least three copies of their data, one of which should be a systems snapshot. Mirroring data as it changes is great, but you can replicate errors and corruption that way as well so you need to be able to roll back.”
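The reason a mirror alone isn't enough is that replication faithfully copies corruption the moment it happens; only point-in-time snapshots let you roll back to a known-good state. A toy sketch of that rollback, with invented timestamps and data:

```python
# Sketch of snapshot rollback: a mirror replicates corruption instantly,
# but scheduled snapshots preserve earlier, known-good states.

from bisect import bisect_left

snapshots = [  # (timestamp, state), taken on a schedule, oldest first
    (100, "good-v1"),
    (200, "good-v2"),
    (300, "corrupt"),  # this snapshot was taken after corruption crept in
]

def rollback_before(t):
    """Latest snapshot taken strictly before time t, or None."""
    i = bisect_left([ts for ts, _ in snapshots], t)
    return snapshots[i - 1][1] if i > 0 else None

corruption_detected_at = 250
print(rollback_before(corruption_detected_at))  # -> 'good-v2'
```

This is why Janse van Rensburg's three copies include a systems snapshot rather than three live mirrors of the same changing data.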
He also insists that the responsibility for ensuring business continuity can't be outsourced.
“It's convenient to send your server room to another company, but the recoverability problem doesn't go away. You need to make sure your outsource provider not only has the correct processes on paper but also tests them at least twice a year and is independently audited. If your plans are well laid and well rehearsed, your business can be back up and running within an hour after almost any interruption.”
Nor are these interruptions once-in-a-lifetime events.
“We have 200 customers and about three or four disaster recovery events each month,” says Janse van Rensburg. “Most of them aren't life-threatening but they do lead to lost business: power or bandwidth outages, server failures, strikes and even silly things like sprinklers deploying accidentally or fumigation chemicals getting into the aircon system so the building is temporarily uninhabitable. Some companies live with those outages; our clients don't.”