
Virtualisation heats up industry

Virtualisation separates the logical layer from the physical layer, simplifying the tasks associated with storage.
By Adam Day, product manager at SYSDBA.
Johannesburg, 11 Sept 2008

Virtualisation is one of the hottest topics across all sectors of the IT industry at the moment.

There are discussions on the subject in almost every publication, whether about the current hiccups faced by some of the leaders in the space, or about the new developments being touted almost daily to address the IT spend on these technologies.

Many industry players use the term virtualisation to describe products and technologies with very different purposes, making it difficult for the average person to understand exactly what the term represents.

The simplest description of virtualisation is separating the logical layer (the disk or partition seen by the host or user) from the physical layer (the underlying hardware, which could comprise cache, cables and disks), thereby simplifying the tasks associated with storage.
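
To make the idea concrete, the short Python sketch below shows the principle in miniature. It is purely illustrative: the PhysicalDisk and LogicalVolume names are invented for this example and do not correspond to any vendor's product. The host addresses one flat logical volume, while a private mapping function decides which physical disk actually holds each block.

# Toy illustration of separating the logical layer from the physical layer.
# The host only ever talks to LogicalVolume; the physical layout is hidden.

class PhysicalDisk:
    def __init__(self, name, num_blocks):
        self.name = name
        self.blocks = [b"\x00"] * num_blocks   # simplistic block store

class LogicalVolume:
    """Presents one contiguous block range, striped across several disks."""
    def __init__(self, disks):
        self.disks = disks
        self.capacity = sum(len(d.blocks) for d in disks)

    def _map(self, logical_block):
        # Round-robin (striped) mapping from logical to physical address.
        disk = self.disks[logical_block % len(self.disks)]
        return disk, logical_block // len(self.disks)

    def write(self, logical_block, data):
        disk, physical_block = self._map(logical_block)
        disk.blocks[physical_block] = data

    def read(self, logical_block):
        disk, physical_block = self._map(logical_block)
        return disk.blocks[physical_block]

# The host never needs to know which disk ends up holding block 7.
volume = LogicalVolume([PhysicalDisk("disk-a", 100), PhysicalDisk("disk-b", 100)])
volume.write(7, b"database page")
print(volume.read(7))   # b'database page'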

Why use it?

Storage management and provisioning have traditionally been considered highly complex, and best left in the hands of those skilled in the black arts of storage provisioning and performance management.

There has been good reason for this perception, as storage devices that fall under the monolithic category - and even some modular arrays - are extremely complicated to deploy and manage. The tasks relating to effective storage deployment can require weeks or months of planning, and if anything is overlooked it can take even longer to return the infrastructure to an acceptable level of availability and performance.

Virtualised storage has been introduced to take the complex tasks relating to storage out of the hands of the overstretched IT department and allow the array to do them internally, removing the guesswork and reducing related outages and performance issues.

This benefits the IT department in a number of ways, as a reduction in complexity - removing the human factor - means less downtime and its associated costs.

Virtualised storage also cuts operating costs, as the requirement for expensive storage resources and professional services is drastically reduced.

It can also have a positive impact on business agility: the IT department can get services online more quickly, and is freed to focus on value-adding projects relating to business expansion rather than on the day-to-day maintenance and performance management of the storage environment.

It also aligns with the benefits of a virtual server infrastructure by allowing the storage to be as flexible as the server. With the advent of server OS virtualisation and bladed technology, this flexibility is commonplace in the data centre. There is no point in being able to dynamically move or deploy servers if they have no storage available to them; the speed advantage is lost if the associated storage then takes an extended period to deploy.

The two environments need to work hand in hand, so virtualised storage really is a requirement in any virtualised and bladed server room, as well as for an IT department with limited resources.

Where to use it?

There are three distinct areas where storage virtualisation can be deployed, each of which has its pros and cons. There is no one-size-fits-all solution and specific requirements should be considered to determine which solution might best suit a company's needs.

* Server-based storage virtualisation
This requires the installation of a software package at the host layer to virtualise the underlying storage hardware. It can be deployed relatively easily and typically requires no additional hardware, but it places an overhead on host system resources and licensing costs can be high.

* Array-based virtualisation
This requires the introduction of a virtualised array that pools resources and then allocates them according to capacity and performance requirements (a simple pooling sketch follows this list). The benefit is simple management and, after the initial cost of the array, the virtualisation is available to all attached hosts, irrespective of operating system. The downside is that the technology is only available to the devices attached to the array and, for replication purposes, a similar array is commonly required.

* Network (Fabric)-based virtualisation
In this instance, the virtualisation is done in the storage network, removing the task from both the host and the array. While this allows for a truly heterogeneous environment and can be deployed across disparate hosts and arrays, placing a device in the data path often brings performance and availability issues, and all equipment needs to be certified for use with it.
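
As a rough illustration of the array-based pooling mentioned in the list, the hypothetical sketch below (StoragePool and provision() are invented names, not a vendor API) pools the capacity of several disk groups and carves logical volumes out of that shared pool on request, instead of tying each host to specific disks.

# Assumed, simplified model of capacity pooling inside a virtualised array.
class StoragePool:
    def __init__(self, disk_group_sizes_gb):
        self.total_gb = sum(disk_group_sizes_gb)   # capacity from all disk groups
        self.allocated_gb = 0
        self.volumes = {}                          # host -> list of volume sizes

    def provision(self, host, size_gb):
        """Allocate a logical volume for a host if the pool has capacity."""
        if self.allocated_gb + size_gb > self.total_gb:
            raise RuntimeError("pool exhausted: add disk groups to the array")
        self.allocated_gb += size_gb
        self.volumes.setdefault(host, []).append(size_gb)
        return f"{host}-vol{len(self.volumes[host])}"

pool = StoragePool([500, 500, 1000])          # three disk groups, 2 000GB pooled
print(pool.provision("erp-host", 300))        # erp-host-vol1
print(pool.provision("mail-host", 200))       # mail-host-vol1
print(pool.total_gb - pool.allocated_gb)      # 1500GB still free for any host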

This information should point companies in the right direction when deciding where to introduce storage virtualisation in the data centre. The most mature and cost-effective solutions are based on virtualisation in the array, as this offers the most robust and most commonly implemented approach.

Organisations should investigate these technologies and consider at which layer to introduce them. If in doubt, do research on the Internet and call in the experts to present overviews and demonstrations.

Be sure to read third-party reviews of the products available, as these will often give a more balanced view of the technology. Storage-focused sites such as www.searchstorage.com and www.byteandswitch.com offer additional reading for tracking trends and technologies.

Remember, there will definitely be a solution out there to benefit companies, as long as they define and understand what they need to achieve.

* Adam Day is product manager at SYSDBA.
