
All software testing is not equal

There are few things more frustrating than software that doesn't work properly. Surely these things are tested before being unleashed on unsuspecting users?
By Warwick Ashford, ITWeb London correspondent
Johannesburg, 24 Mar 2006

Innovations in common software applications often prompt the question of why things were not designed that way in the first place. A good example is the coming release of Microsoft's new Office suite, whose applications have much simpler user interfaces than before.

When something makes such good sense, one can only wonder why no one thought to do it that way in the first place.

A similar thought crossed my mind last week during a presentation at Compuware by software testing expert Dorothy Graham. Graham was on the Gauteng leg of a visit to SA at the invitation of the Western Cape chapter of the special interest group in software testing and the Computer Society of SA.

Graham, who is the founder of UK-based Grove Consulting, demonstrated the value of testing software properly, claiming it can deliver significant efficiency gains.

Smart testing

So it seems that all the frustrating software in the world probably has been tested, but according to Graham, not all testing is equal. Testing alone is not enough. It has to be smart.

To illustrate this point, Graham discussed the merits of various software testing techniques, advocating a combination of them to deliver optimum efficiency. Just as not all testing is equal, it would appear that not all testing techniques are equal either, which is why they should be used in combination.


These techniques make such good sense, one can only wonder why so few software development and testing facilities use them.

From an end-user point of view, it would be best if all software was tested thoroughly before release, but Graham quickly put paid to this notion. Giving an example of a simple application, Graham demonstrated that testing everything would require 480 000 individual tests. At a conservative 10 minutes per test, such a testing process would take around 40 years, assuming an eight-hour day and a five-day working week.
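The arithmetic behind that estimate can be checked directly. A minimal sketch, using the figures quoted above (480 000 tests at 10 minutes each, an eight-hour day and a five-day week):

```python
# Reproduce the back-of-envelope estimate from Graham's example.
TESTS = 480_000         # individual tests for the simple application
MINUTES_PER_TEST = 10   # conservative estimate per test

hours = TESTS * MINUTES_PER_TEST / 60   # 80,000 hours of testing
days = hours / 8                        # 10,000 working days
weeks = days / 5                        # 2,000 working weeks
years = weeks / 52                      # roughly 38.5 years

print(f"{years:.1f} years")  # prints "38.5 years"
```

At roughly 38.5 working years, the figure rounds comfortably to the "around 40 years" quoted in the talk.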

In other words, testing absolutely everything - even in a simple application - is impossible. Getting real about software testing means accepting that using recognised testing techniques is a much smarter way of ensuring that nearly all the bugs are found before software is released.

Testing saves money

The equation is simple. Formal technique-based software testing training improves the efficiency of testing and consequently saves money. Therefore, software testing has real value. Graham illustrated this point with a case study in which it was established that, after training in technique-based testing, software testers were able to find four times as many defects as they had before.

At the end of the day, effective testing is about using a combination of proven techniques to provide information about the software application under test as well as information about the testing process itself.

Graham says an important part of testing is measuring the effectiveness of the testing process by expressing the number of defects found prior to release as a percentage of the total number of defects found, including those found after release.

This metric is known as the "defect detection percentage" or DDP. The higher the DDP, the greater the efficiency of the software testing being employed.

For example, if 150 defects were found prior to release and only 50 by users after release, the DDP would be 75%. However, if only 50 defects were found prior to release and 150 after, the DDP would drop to 25%.
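The calculation is a simple ratio, and can be sketched in a few lines (the function name is mine, not part of Graham's terminology):

```python
def defect_detection_percentage(found_before_release, found_after_release):
    """DDP: defects found before release as a percentage of all defects found,
    including those found by users after release."""
    total = found_before_release + found_after_release
    return 100 * found_before_release / total

print(defect_detection_percentage(150, 50))  # prints 75.0
print(defect_detection_percentage(50, 150))  # prints 25.0
```

Note that the denominator counts only defects actually found; defects nobody has discovered yet cannot enter the figure, which is why the DDP for a release keeps shifting as users report new bugs.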

Back to square one

Testing teams using the defect detection percentage are able to measure the effectiveness of their testing process, compare the effectiveness of testing across various applications, and register either an improvement or a decline in efficiency.

The DDP can also be used to predict the number of likely defects in current and future projects based on past projects. If the DDP is lower than predicted, developers will have an indication of how many defects are likely to have been missed.

If eliminating nearly all software bugs in an application is as simple as implementing a few tried and tested software testing techniques, it is even more difficult to understand why so few software producers appear to be using them. Back to square one.

Surely enough time and money has been wasted on inefficient testing. It's time for software development houses to get real about testing processes and learn that a little technique goes a long way.
