Why using virtualisation for testing is better than the real world - in virtually every way


Johannesburg, 12 Aug 2016

Anyone who has spent much time in an IT environment has doubtless heard of virtualisation or virtual machines. For the unfamiliar, it essentially means building a virtual computer inside another computer. The traditional setup most businesses employ is individual terminals: standalone PCs (or Macs, laptops, tablets, etc) that share files held on servers with other machines on the network.

Virtualisation has brought down costs for an immense number of businesses by divorcing the platforms staff need access to from the hardware they are using. A typical example is when the design team uses high-end Macs, but their output needs to work on multiple devices/configurations. Testing using real-world hardware would be unnecessarily time-consuming, restricted by physical space and cost, and prone to equipment failure. By testing in a virtual environment, staff can try limitless devices and configurations, all contained in the same hardware but operating independently.

Because virtualisation exists, by definition, only as software, any machine capable of running the right software can be used. This radically reduces overhead by creating a far more energy-efficient system, one that can be managed, upgraded and optimised far more easily than a suite of tens, hundreds or even thousands of terminals, and, of course, at a radically reduced cost.

Most IT-savvy business managers and directors will probably be aware of this concept already: enterprise-level virtualisation software appeared on the market as early as 2001 (although IBM actually created the first virtual machines back in the 1960s), and the modern virtual desktop system arrived in 2007 with VMware's VDI software. However, what many businesses have yet to embrace, or are only beginning to embrace, is virtualisation specifically for testing purposes.

According to research outlined in the SQS whitepaper "Virtualisation - the smarter and faster way to Perform Testing", most firms reported 75% or higher virtualisation of their data centres in 2015, up from 54% only the year before. These figures suggest that virtual servers address two key factors in IT strategy: reducing carbon footprint and providing high-powered solutions. Both factors are fuelling organisations' migration towards virtual services.

All technology projects are, by definition, temporary due to the inevitability of progress and obsolescence, which, combined with fierce competition in most business sectors, means rapid, cost-effective development will always be critical to success. And this is what virtualisation can deliver - a smarter, faster way to perform testing at all three phases of test organisation.

So, what are the typical challenges faced during the different testing phases, and how can virtualisation help overcome them better than traditional solutions?

The first phase is Component Functional and Integration Testing, or to put it another way: "Does it do what we want it to do, and does it still do that when put on our systems?" In standard IT infrastructures, it can be extremely difficult, if not impossible, to perform the required tests, for several reasons: the component may not yet have been developed internally; it may be controlled by a third party that hasn't released a version compatible with your infrastructure; or it may operate on a pay-per-use basis that would make testing prohibitively expensive.

Particularly where integration is concerned, we are primarily interested not in what the component is doing internally, but rather in how the systems the component is integrated with monitor, instruct and respond to it. For testing purposes, we don't need the processing; we just need the data to be delivered in a way that mimics the processing, which is exactly what virtualisation in a testing environment can do.

'Virtual assets' are created by recording live communication among components as the system is exercised from the application under test. These virtual assets can then be used to represent specific component data. For a database, this might involve listening for an SQL statement, then returning data-source rows. For a Web service, it might involve listening for an XML message over HTTP, then returning another XML message. This behaviour can be replicated ad infinitum across as many virtual environments as desired, without relying on any external availability. Virtualisation also means users can develop and test on the fly, so testing starts earlier, takes less time and costs less.
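
To make this concrete, here is a minimal sketch (not drawn from the SQS whitepaper) of what a 'virtual asset' for a Web service dependency might look like: a small HTTP stub that replays a canned XML response to the application under test, so the real, possibly pay-per-use, third party is never needed. The port, request handling and payload below are hypothetical.

# Minimal sketch of a 'virtual asset' for a Web service dependency:
# an HTTP stub that answers with a recorded XML message, so the
# application under test never needs the real third-party system.
# The port and payload are hypothetical examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = b"""<?xml version="1.0"?>
<quoteResponse>
    <symbol>EXAMPLE</symbol>
    <price>42.00</price>
</quoteResponse>"""

class VirtualAsset(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read (and ignore) the incoming XML request body,
        # then replay the recorded response.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.send_header("Content-Length", str(len(CANNED_RESPONSE)))
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080
    # instead of the live third-party endpoint.
    HTTPServer(("localhost", 8080), VirtualAsset).serve_forever()

Pointing the application under test at a stub like this, rather than the live endpoint, is all that is needed to decouple integration testing from third-party availability and cost.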

We then come to the system testing phase.

Once we know the component works within the system, we then need to ensure the product and/or system it is to be a part of will also continue to perform correctly in every likely hardware and software environment. So, as well as the issue of third-party components as in phase one, there is an additional problem in that setting up testing environments can be a slow, laborious and expensive task, often requiring significant downtime to allow for tests, not to mention the time and brainpower of your top IT staff. Who isn't familiar with the infamous 'please shut down your PCs by 6pm for server maintenance' e-mail?

Thanks to the wide-scale development of virtualisation platforms, it is entirely possible to simulate virtually any software, OS, hardware or server configuration imaginable, without making a single live change or taking any real-world technology offline. Better still, by partitioning one server in this way, every instance is sandboxed - shielded from everything else - resulting in streamlined disaster recovery and data continuity.
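
As a rough illustration of how such a test matrix might be driven in practice, the sketch below loops over a handful of configurations and runs each in its own disposable, sandboxed container. The image names and the test command are placeholders, and any container or VM platform could serve the same purpose.

# Illustrative sketch: run system tests once per configuration, each in
# an isolated, throwaway environment. Image names are placeholders.
import subprocess

TEST_MATRIX = [
    {"name": "python-3.9",  "image": "python:3.9-slim"},
    {"name": "python-3.12", "image": "python:3.12-slim"},
]

for config in TEST_MATRIX:
    print(f"Running system tests in isolated environment: {config['name']}")
    # --rm discards the container afterwards, so each run starts from a
    # clean, sandboxed state and leaves no trace on the host.
    subprocess.run(
        ["docker", "run", "--rm", config["image"],
         "python", "-c", "print('tests would run here')"],
        check=True,
    )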

Finally, we look at the system end-to-end testing phase.

Most large organisations operate from multiple sites across the globe and, in order to remain cross-functional, are constantly changing and developing their network and data storage infrastructure in response to security, availability, performance and even regulatory policy changes. The physical networking hardware is simply there to shuttle packets of data around. So, if the environments are all virtual and the data processing is all handled by software, it makes complete sense to take all of the network services, features and configurations needed to provision the application's virtual network (VLANs, VRFs, firewall rules, load-balancer pools and VIPs, IPAM, routing, isolation, multi-tenancy, etc), decouple them from the physical network and move them into a software layer as well.
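
As a purely illustrative sketch of that decoupling (the field names are hypothetical and not tied to any particular product), the application's network requirements can be captured as data that a software-defined networking layer provisions identically in every environment:

# Hypothetical, product-agnostic description of an application's
# virtual network, expressed as data rather than physical devices.
virtual_network = {
    "vlans": [{"id": 110, "name": "app-tier"},
              {"id": 120, "name": "db-tier"}],
    "firewall_rules": [
        {"from": "app-tier", "to": "db-tier", "port": 5432, "allow": True},
        {"from": "any",      "to": "db-tier", "port": "any", "allow": False},
    ],
    "load_balancer": {
        "vip": "10.0.0.10",
        "pool": ["10.0.1.11", "10.0.1.12"],
    },
    "ipam": {"app-tier": "10.0.1.0/24", "db-tier": "10.0.2.0/24"},
}

# A software layer would read a definition like this and provision the
# same virtual network anywhere, regardless of the hardware underneath.
print(virtual_network["load_balancer"]["vip"])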

Alongside networking, organisations are required to retain data for ever-longer periods, so storage can become an out-of-control issue very quickly. Beyond the footprint (carbon and physical) of traditional data storage, there are manifold issues to manage, including maintenance, disaster recovery and data mobility, all of which can be reduced through storage virtualisation.

Storage virtualisation abstracts the logical aspect of storage from the physical, allowing you to pool and share large quantities of storage among several applications and servers, regardless of the physical hardware that lies underneath. From the outside looking in, it simply operates as normal network storage, while under the hood, many strategies to distribute, store and protect data will be at play with no noticeable impact to the end-user.
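
A deliberately simplified, hypothetical sketch of that abstraction: callers interact with one logical volume, while the decision about which physical backend actually holds the bytes stays hidden behind it.

# Simplified, hypothetical sketch: one logical volume in front of a
# pool of backends. Real storage virtualisation adds replication,
# caching and failover; the principle of hiding the physical layout
# is the same.
import hashlib

class LogicalVolume:
    def __init__(self, backends):
        # 'backends' stand in for physical arrays, disks or nodes;
        # here they are just in-memory dictionaries.
        self.backends = backends

    def _pick_backend(self, key):
        # Spread objects across the pool deterministically.
        digest = hashlib.sha256(key.encode()).digest()
        return self.backends[digest[0] % len(self.backends)]

    def write(self, key, data):
        self._pick_backend(key)[key] = data

    def read(self, key):
        return self._pick_backend(key)[key]

# From the outside it behaves like ordinary storage.
volume = LogicalVolume(backends=[{}, {}, {}])
volume.write("reports/2016-q2.csv", b"date,value\n")
print(volume.read("reports/2016-q2.csv"))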

Even for companies where this wide-scale data distribution is not an option, using a virtualised server can dramatically shorten recovery time should there be any outages. Because a virtual machine is hardware-agnostic, as long as backups of the whole VM are made, the most recent backup can be taken and dropped into any hardware running the same virtualisation platform, radically reducing the steps required to get server operations running smoothly again. A similar process can also be used to test new software and hardware configurations with real, current data but without risking the live environment.
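
A hedged sketch of that recovery step, assuming whole-VM backups are exported as appliance files to a (hypothetical) backup directory; VirtualBox's VBoxManage import is used purely as an example of a platform-level import command, not as the only option.

# Hedged sketch: find the most recent whole-VM backup and import it
# onto whatever host is available. The backup path is hypothetical;
# VirtualBox is only an example platform.
import subprocess
from pathlib import Path

BACKUP_DIR = Path("/backups/vm-exports")   # hypothetical location

def restore_latest_backup():
    backups = sorted(BACKUP_DIR.glob("*.ova"),
                     key=lambda p: p.stat().st_mtime)
    if not backups:
        raise RuntimeError("No VM backups found")
    latest = backups[-1]
    print(f"Importing {latest.name} onto this host")
    # Because the VM is hardware-agnostic, the same appliance file can
    # be imported on any machine running the same virtualisation platform.
    subprocess.run(["VBoxManage", "import", str(latest)], check=True)

if __name__ == "__main__":
    restore_latest_backup()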

Virtualisation, like cloud technology, is not a trend. It is a fundamental shift in the way we think about running our digital infrastructure, and it applies to all of us. It is a foundation-level game-changer that reduces complexity and improves agility, efficiency and scalability while also reducing costs. Many of you will know the famous 'fast-cheap-good' triangle, which says you can only ever have two at once; however, virtualisation could just be the exception that proves the rule.


SQS Software Quality Systems

SQS is the world's leading specialist in software quality. This position stems from over 30 years of successful consultancy operations. SQS consultants provide solutions for all aspects of quality throughout the whole software product life cycle, driven by a standardised methodology, offshore automation processes and deep domain knowledge in various industries. Headquartered in Cologne, Germany, the company now employs approximately 4 100 staff. SQS has offices in Germany, UK, US, Australia, Austria, Egypt, Finland, France, India, Ireland, Malaysia, the Netherlands, Norway, Singapore, South Africa, Sweden, Switzerland and UAE. In addition, SQS maintains a minority stake in a company in Portugal. In 2014, SQS generated revenues of EUR268.5 million.

SQS is the first German company to have a primary listing on AIM, a market operated by the London Stock Exchange. In addition, SQS shares are also traded on the German Stock Exchange in Frankfurt am Main.

With over 8 000 completed projects under its belt, SQS has a strong client base, including half of the DAX 30, nearly a third of the STOXX 50 and 20% of the FTSE 100 companies. These include, among others, Allianz, Beazley, BP, Commerzbank, Daimler, Deutsche Post, Generali, JP Morgan, Meteor, Reuters, UBS and Volkswagen, as well as other companies from the six key industries on which SQS is focused.
