Testing one, two, three

What makes DevOps pivotal to achieving a competitive edge?

By Jaco Greyling, chief technical officer: DevOps at CA Southern Africa.
Johannesburg, 18 Oct 2013

All large companies with a reliance on IT must deal with the constraints of systems such as mainframes, components under development and sensitive data sources that delay projects. Inevitably, teams try to either create a complete test environment or 'mock up' responsive systems by coding their own versions for test purposes. This has become a costly and time-consuming process.

In essence, service virtualisation is the 'productionisation' of the practice of 'mocking and stubbing' development and test environments. It provides enough realism and context to push development forward faster and to start component testing sooner in the software development life cycle (SDLC), so that integration and release can happen faster, with higher quality and less risk.

Service virtualisation brings a number of capabilities to the table. These include:

* Providing development with a more lifelike environment;
* Enabling parallel development and testing;
* Virtualising test data for out-of-scope systems; and
* Enabling high-performance environments.

Over the years, developers have turned to techniques like 'stubbing' to move their component development forward. A stub is simply a piece of code that stands in for some other programming functionality. As application development moves towards more composite, service-oriented architecture approaches, teams must simulate a much wider variety of downstream systems in their development and test environments. This is where stubbing falls short.

To stub or not to stub

First, stubbing requires the developer to have an in-depth understanding of the downstream system to simulate the desired functionality properly. Second, most stubs are rudimentary and account for only the simplest tasks. Third, writing a custom stub takes valuable time, which is time lost on writing business logic. In short, it is very difficult to write a good stub.
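To make this concrete, here is a minimal, hypothetical hand-written stub (the CreditCheckService name and interface are invented for illustration). The canned answer shows both the appeal and the weakness of stubbing: it unblocks development, but captures none of the real system's data, behaviour or failure modes.

// Hypothetical example of a hand-written stub for a downstream system.
public class StubExample {

    // The interface the in-scope application codes against.
    interface CreditCheckService {
        boolean isCreditworthy(String customerId);
    }

    // Hand-coded stub: returns a canned answer regardless of input.
    // It reflects none of the real system's data, latency or edge cases,
    // which is why stubs fall short as architectures grow more composite.
    static class CreditCheckStub implements CreditCheckService {
        @Override
        public boolean isCreditworthy(String customerId) {
            return true; // every customer 'passes' in the test environment
        }
    }

    public static void main(String[] args) {
        CreditCheckService service = new CreditCheckStub();
        System.out.println("Approved? " + service.isCreditworthy("C-1001"));
    }
}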

By contrast, when teams work with real data scenarios and real behaviours captured as virtual services, their productivity is higher, because the resulting environment is far more realistic and current than stubs, which must be manually coded and maintained. The critical technique that enables these 'lifelike' environments is the automated creation of virtual services and maintenance of their data.
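As a rough sketch of the idea (not any vendor's actual implementation), the following example captures request/response pairs while the live system is reachable, then replays them on demand; the names and the in-memory map are purely illustrative.

import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Simplified sketch of record-and-replay virtual services.
public class RecordReplaySketch {

    private final Map<String, String> recordings = new HashMap<>();
    private final UnaryOperator<String> liveSystem; // stands in for the real downstream call

    RecordReplaySketch(UnaryOperator<String> liveSystem) {
        this.liveSystem = liveSystem;
    }

    // Recording mode: forward to the live system and capture the exchange.
    String record(String request) {
        String response = liveSystem.apply(request);
        recordings.put(request, response);
        return response;
    }

    // Replay mode: serve the captured response; the live system is no longer needed.
    String replay(String request) {
        return recordings.getOrDefault(request, "<no recording for this request>");
    }

    public static void main(String[] args) {
        RecordReplaySketch virtualService =
                new RecordReplaySketch(req -> "balance=1500 for " + req);

        virtualService.record("GET /account/42");                      // captured while live
        System.out.println(virtualService.replay("GET /account/42"));  // served offline later
    }
}

Because the recordings come from real traffic, the virtual service stays current with each new capture, rather than drifting the way hand-maintained stub code does.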

The second critical capability is parallel development and testing. When development and test teams work simultaneously, the overall software life cycle reaches a whole new level of efficiency, and new solutions deliver greater value to the organisation. In parallel development, virtual services act as the 'go-between' linking the system under development and the system under test. Each cycle continues to accelerate, as the virtual service model is updated with each new build and feedback arrives faster and faster. This makes for a robust parallel development capability.

Scoping the systems

Every team with a requirement for a test environment has some systems and associated data that are considered in scope, and others that are out of scope. An in-scope system is one on which a development change or test is being performed directly. An out-of-scope system is one that is required in support of that in-scope system, but is not itself the subject of the development or testing activity; it is a necessary dependency, nothing more.

The traditional approach is to import data directly from the in-scope systems, and to 'stub' or mock up the out-of-scope systems by writing code and importing a couple of lines of data to represent their expected responses. As previously discussed, these stubs are inherently brittle, and in today's complex software environments, manually coding useful stubs has become too costly.

What is needed is a way to simulate the behaviour of those out-of-scope systems with enough intelligence that the in-scope system believes it is talking to the live system when it is not. Service virtualisation makes the missing data behind the in-scope system a non-issue by automating the capture of the relevant downstream scenarios that are out of scope. Virtual models give teams on-demand access to relevant datasets for the systems under test.
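One simple way to picture this, with hypothetical endpoints: the in-scope system reads its dependency's address from configuration, so a test environment can point that address at the virtual service without touching application code.

// Hypothetical endpoint switch: the application code is identical in both
// environments; only configuration decides whether it talks to the live
// out-of-scope system or to its virtual stand-in.
public class EndpointSwitchSketch {

    static String callBilling(String endpoint) {
        // In a real system this would be an HTTP/SOAP call; here we just
        // show which system would receive it.
        return "calling billing system at " + endpoint;
    }

    public static void main(String[] args) {
        String production = "https://billing.example.com/api";       // live dependency
        String testEnv    = "http://localhost:8099/virtual/billing"; // virtual service

        // The in-scope system is unaware of the substitution.
        System.out.println(callBilling(production));
        System.out.println(callBilling(testEnv));
    }
}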

It's no surprise that development and test environments far outnumber production environments. Ironically, the greatest challenge with most existing pre-production environments is that they are never the complete system. A project team purchases some number of servers and uses them to replicate whatever hardware or components it has access to on VMs. So even though every team is allocated its own hardware budget, it still spends countless months of the development cycle waiting for access, and inefficiently sharing system resources that are not production scale.

Since every connection point in the software architecture represents a potential point of change, and a potential point of failure, it is critical that service virtualisation gives teams a better way to 'virtualise everything else' and thereby isolate themselves from these heterogeneous dependencies. Not only does this remove constraints in the SDLC, but it also yields substantial cost savings through reduced hardware and a smaller operating budget.
