How not to automate

By Mark Myburgh, a Test Manager at EOH Application Management


Johannesburg, 29 Nov 2011

Software testers are required to test more and deliver better quality, in less time. Automation is generally the chosen approach to get more done in less time, but according to Mark Myburgh, a Test Manager at EOH Application Management, this is rarely the case.

In fact, recent research found that a staggering 50% to 60% of automation projects fail. “The question to ask is why automation seldom delivers the better, faster, cheaper and more effective testing that we expect,” says Myburgh.

Test automation often falls victim to unrealistic expectations, coupled with poor planning and poor design. The work that automation requires is grossly underestimated, as is the time and resources needed before it returns on the investment, not to mention the underlying resistance to change.

“When you look at how not to automate, it may provide insight and understanding of the issues and common pitfalls that test teams face, essentially highlighting how to avoid and overcome these pitfalls,” explains Myburgh.

Twelve ways that test automation can go wrong:

* Do not plan - not just the absence of a plan, but also planning without understanding the objective of automation in your project, or without understanding the client's immediate and future development plans. Jump right in and start scripting without assessing anything, such as the technical environment and how it will affect your solution. This typically happens when the tester is short of information or resources, and substitutes their own assumed understanding without bothering to verify it. The result is invariably a solution that only the tester understands, maintains and uses.

* Do not involve team members - what the tester is doing is different and special and none of the other team members know what the job is about, and they generally love it that way.

* Assume - the tester assumes that 'one size fits all' as far as solutions are concerned. The tester also has the typical attitude that they have seen this before and know exactly what the client needs.

* Solution - the tester has a solution before truly understanding what the client's challenges are. Instead of listening to what the client needs, they try to sell their own solution to the client.

* Focus on the tool - if the client already has a testing tool, focus on the tool and forget about the project's requirements. The tester explores all the cool things the tool can do, instead of focusing on the project requirements and the role the tool plays in achieving the project goals.

* Quick-fix solution - focus on an immediate quick fix and base all efforts on it. This usually rests on the assumption that there will be no further technological changes or upgrades to the current environment, now or in the future. The solution is therefore unsustainable, with the thinking that the team will just have to build another one from scratch. This is where automation becomes a project liability: even after implementation, more money and time has to be poured into it.

* Do not treat it like a system development exercise - work from on-the-fly requirements that match only the tester's assumed solution, without bothering to plan or design the client's solution, and without requirements or any other supporting documentation. The tester needs to remember that their own code must be proven solid enough to test someone else's code, and it is therefore necessary to treat automation like any other development exercise.

* Ignore new technologies - do not bother learning new technologies, or considering how they affect the tester's career.

* No reviews - the tester does not review or assess their own work. Now that the tests are automated, let them be, and when there are failures or defects, work around them. Who cares about improvements? When it works, it works, and if it isn't broken, don't fix it.

* No communication - do not communicate with the team; after all, test automation is not the same as manual testing. Why should the tester be aware of how the rest of the team is designing their manual test packs?

* Limit efforts to the tool - let functionality that is not built into the tool set the limit of the automation effort. Most tools have built-in data storage components, such as data tables, which often have a fixed capacity. But what if you are dealing with a large data file that won't fit? Or the tool is fully capable of testing the application at the service layer, but does not support the graphical user interface the application was built with?

* Ignore project history - it is the here and now that matters. This usually results in testers relearning lessons that were learned, and probably documented, in the past, costing the project more time and money in the process.
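Several of these pitfalls - fixed-capacity data tables, and automation code that is never itself tested - can be sidestepped by treating test data and test scripts as ordinary software artefacts. Below is a minimal, hypothetical sketch in Python (the names and data are illustrative, not from any particular tool): test cases live in an external CSV file, so the data set is not bound by a tool's built-in table limits, and the harness itself is small enough to review and verify.

```python
import csv
import io

# Hypothetical test data kept outside the tool, e.g. in deposits.csv;
# inlined here so the example is self-contained.
SAMPLE_CSV = """account,deposit,expected_balance
A001,100,100
A002,250,250
A003,0,0
"""

def load_test_cases(csv_text):
    """Parse CSV rows into dictionaries usable as test inputs."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def apply_deposit(balance, amount):
    """Stand-in for the system under test: deposit into an account."""
    return balance + amount

def run_data_driven_tests(csv_text):
    """Run every case from the data file; return the accounts that failed."""
    failures = []
    for case in load_test_cases(csv_text):
        result = apply_deposit(0, int(case["deposit"]))
        if result != int(case["expected_balance"]):
            failures.append(case["account"])
    return failures

print(run_data_driven_tests(SAMPLE_CSV))  # an empty list means all cases passed
```

Because the harness is plain code with its own requirements (the CSV format) and is reviewable like any other module, it also illustrates treating automation as a development exercise rather than tool-bound scripting.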

The purpose of testing is to save time and costs, and the question to ask is whether you can add value to your test efforts by automating. “It is crucial to define the objectives of why you want to automate, right from the start, and to focus on the most critical areas of the system first. It is also advisable to ensure that you automate end-to-end business processes, rather than individual pieces of functionality. The key is to focus on how your efforts are influencing the overall system development effort or project, and to be cognisant of the client's immediate and future plans in order to automate accordingly,” concludes Myburgh.

EOH

Listed company EOH is the largest enterprise applications provider in South Africa and one of the top three IT service providers. EOH follows the consulting, technology and outsourcing model to provide high value, end-to-end solutions to its clients in all industry verticals. For more information, visit: www.eoh.co.za.

Editorial contacts

Deidre Beylis
Watt Communications
(084) 426 0410
Deidre@wattcommunications.co.za