
Under the radar

Hyper-convergence and Web-scale IT create an invisible infrastructure that is consolidated, aligned, scalable and manageable.

By Paul Ruinaard, regional sales manager, sub-Saharan Africa at Nutanix.
Johannesburg, 14 Aug 2015

I often get asked the question: "What can hyper-converged solutions really do for my business that my legacy systems can't?" It's a fair question, but in order to really unpack it, I need to take a step back and explain the components that make up legacy IT environments.

Data growth is on the rise, yet at the back end many legacy systems remain siloed, made up of separate environments for compute, storage, networking and virtualisation. Take this a step further and, in many enterprise IT scenarios, there are even data centres that serve a single purpose for a single aspect of the business - be it hosting business applications or simply storing data. It has turned into what the coders of old used to call spaghetti.

Uncontrollable sprawl

Everyone knows spaghetti is not easy to eat, nor is it easy to unravel when served in a single dish, so why is this misaligned approach to IT followed?

For some, it is easier to throw hardware at the problem: looking for more scalability, add processors; needing space, throw in disk; wanting more speed, increase the network pipe. Yes, all of these will work in the immediate term, but in the long term, the animal just grows and becomes financially and operationally unmanageable.

Traditional virtualisation of data centres, while it has played an important role, creates misaligned legacy infrastructures and workloads. The silos do not support optimisation, nor do they meet the needs of virtual servers - the result is overprovisioning and bloated, creaking infrastructures.

Companies are creating an environment where there is competition for resources between virtual machines and storage controllers. The result? Slower systems, the need to overprovision to facilitate speed, and ultimately, a much heftier cost - the real loser is performance.

Moreover, this environment creates unnecessary storage consumption and redundant data. Virtual machines take snapshots of data that may not be needed or may be duplicated, but if a company's systems lack the intelligence to flag this, those who need to know will never know - which is detrimental, as today all data is seen as important. Then there is mobility to consider. Traditional virtualisation ties the virtual machine to a physical data store, limiting its mobility in the environment and, in turn, its manageability.

So, why stick with it if it is clearly not working? Whether it is fear of the unknown or a reluctance to be an early adopter of new technologies, there is little to support the argument for staying with it.

Web-scale IT

This is where hyper-convergence and Web-scale IT step in. They create an "invisible infrastructure" that is consolidated, aligned, scalable and manageable.

The principles and architectures underlying Web-scale IT infrastructure are fairly well understood in some quarters. But to clarify: the hyper-converged hardware used in Web-scale systems is essentially ordinary and readily available. It doesn't need large multi-processor servers, complex and expensive SANs or anything deemed "proprietary". All it needs is basic x86 servers, put together in large numbers to create massively scalable computing arrays.


The concept is simple. The intelligence now sits in the software, and from there it pools and aggregates the compute power of the individual x86 servers to create the abstraction of a compute fabric. When more capacity or power is needed, IT no longer needs to re-architect the infrastructure, or replace or upgrade existing servers - it simply has to buy more of the same and add extra compute nodes to the mix. Moreover, if one node fails, another can simply take its place, while the faulty node is either fixed, ditched or replaced.
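
As a rough sketch only (the class and function names here are illustrative assumptions, not Nutanix code or any product's API), the scale-out principle can be modelled in a few lines of Python: capacity is added by appending identical nodes to a pool, and a failed node's workloads are simply re-placed on the survivors.

# Minimal sketch of the scale-out idea, not vendor-specific code:
# capacity grows by adding identical commodity nodes, and a failed
# node's workloads are redistributed across the remaining nodes.

class Node:
    def __init__(self, name, cores):
        self.name = name
        self.cores = cores        # identical commodity x86 node
        self.vms = []             # hypothetical workloads placed here

class ComputeFabric:
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        """Scale out: no re-architecting, just one more node in the pool."""
        self.nodes.append(node)

    def place_vm(self, vm):
        """Place a workload on the least-loaded node in the pool."""
        target = min(self.nodes, key=lambda n: len(n.vms))
        target.vms.append(vm)

    def fail_node(self, name):
        """Drop a faulty node and re-place its workloads on the survivors."""
        failed = next(n for n in self.nodes if n.name == name)
        self.nodes.remove(failed)
        for vm in failed.vms:
            self.place_vm(vm)

fabric = ComputeFabric()
for i in range(3):
    fabric.add_node(Node(f"node-{i}", cores=16))
for i in range(6):
    fabric.place_vm(f"vm-{i}")
fabric.fail_node("node-1")        # surviving nodes absorb the work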

Similarly, the intelligent software layer pools and aggregates the drives within the x86 servers into a single logical pool of shared, software-defined storage that can be scaled seamlessly in small increments when needed. Looking at where storage is today, it is safe to say the big cloud companies have largely discarded the traditional SAN (storage area network), deeming it overly complicated and a source of bottlenecks for massively scalable computing platforms. This is justified by the fact that SANs require specialist management and expertise to bridge the gap between the LUNs (logical unit numbers) used to provision storage and the virtual disks used by the applications that consume them.
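
In the same hedged spirit, a toy model of pooled, software-defined storage might look like the following: each node contributes its local drives to a single logical pool, so capacity grows in small increments as nodes are added. Again, the names and structure are illustrative assumptions rather than any product's implementation.

# Illustrative sketch only: local drives from every node are exposed
# as one logical pool, so capacity grows in small increments as
# nodes (and their drives) are added.

class StoragePool:
    def __init__(self):
        self.drives = []          # (node_name, capacity_gb) pairs

    def add_node_drives(self, node_name, capacities_gb):
        """Fold a new node's local drives into the shared pool."""
        self.drives.extend((node_name, c) for c in capacities_gb)

    @property
    def total_gb(self):
        return sum(c for _, c in self.drives)

pool = StoragePool()
pool.add_node_drives("node-0", [960, 960])
pool.add_node_drives("node-1", [960, 960])
print(pool.total_gb)              # 3840 GB presented as one logical pool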

So, when all of this is combined, the result is a massively parallel distributed system that is resilient enough to support always-on operations and can be scaled predictably without limits. Extensive automation and rich analytics eliminate the need for manual, error-prone management, lowering costs and enabling agility.

In short, this is an environment that makes the entire infrastructure life cycle invisible and reduces both the financial burden and the drag on innovation borne by users of existing data centre solutions. This is the true role and benefit of hyper-convergence and invisible infrastructure for IT.
