Let's face it, very few organisations have succeeded with test automation. One good sign is that the industry is beginning to notice this. Why else would I keep getting invitations to seminars entitled “Secrets for Successful Test Automation” and “Why Test Automation Fails in Many Organisations”? Let us analyse this last question.
Sales and marketing
Much of the blame for this state of affairs lies with the sales and marketing teams behind the various testing tools.
For 10 years and more they have been touting the messages: “your existing test team can do the automation”; “you don't have to do anything special to implement test automation”; “you'll get an immediate return on investment”. I'm not suggesting that they have been deliberately lying; they have supported these claims with easy-to-use, point-and-click, user-friendly GUIs, powerful macros, record/playback engines, wizards, slick demos and (short-term) success stories. After all, it's their job to sell the products; it's up to us to make the best use of them - and they CAN be used effectively.
Once you realise that a test automation script is a software program (software to test other software), it's a no-brainer to conclude that the best people to write software programs are programmers (not testers). So let's start with this premise.
Skill levels
Most organisations have tried to train their test analysts as automators, but even those who have selected appropriately trained programmers to do their automation lack the in-house supervision capabilities to ensure they do the job properly. Unfortunately, test automation is not just another programming language that a skilful programmer can adapt to. There are unique aspects to automation, mainly related to interaction with the Application Under Test's (AUT) objects, that, if not handled properly from the outset, lead to failure.
Just sending somebody on a three- or five-day training course is not enough. It's like giving someone 10 golf lessons and then putting them up against Lee Westwood - they have been set up for failure. The training must be augmented by at least three months' mentoring, probably longer. Very few organisations have anybody in a supervisory position with the experience and certifications to provide this.
So to overcome this, some organisations bring in a team of outside consultants to do the automation with the instruction: “You sit in that corner over there and do the automation, we'll carry on with our testing as usual.” That doesn't work either.
Process
Automation has to be an integral part of the test process. The test designs have to be different - having a 1:1 relationship between tests and automated scripts is never going to succeed. The test requirements often need to be more detailed. Test execution is definitely going to be different.
In fact, the whole process needs to be reviewed and revised. Another aspect that most people miss is that the way we recognise the individual objects on the AUT changes: manually we use their visual properties; in automation we are likely to use other, internal properties.
The developers need to be aware of this, and a closer relationship between the test team and the development team becomes essential rather than merely desirable, because when they make changes to the software we need to know about it as far in advance as possible.
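To illustrate the difference, here is a minimal sketch, assuming a web-based AUT driven with Selenium WebDriver (one common tool; the URL and element identifiers are invented for the example):

    # A manual tester recognises a field by its visible label; an automated
    # script recognises it by an internal property such as its id.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://aut.example.com/login")  # hypothetical AUT URL

    # "username" is an internal id - invisible to the user, but it is what
    # the automation depends on to find the field.
    driver.find_element(By.ID, "username").send_keys("test.analyst")
    driver.find_element(By.ID, "login-btn").click()

If the developers rename one of those ids, the script breaks even though the screen looks identical to a human tester - which is exactly why advance warning of changes matters.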
Maintenance
While all the above challenges are extremely important, probably the biggest reason for the failure of test automation, in the long run, is its inability to keep up with the changes to the software.
If you have addressed all the above issues and created a wonderful automated test suite that works perfectly on version 12.3.6 of the software, how long will you be able to keep up with each (monthly) release?
Normally, you are going to have to wait for the next version to be deployed into the test environment before you start (re)recording the scripts to handle the changes. This is likely to take a few days. How can you tell the testing team not to start their manual testing for a couple of days while you update the automated Sanity Check/Smoke Test? And then you still have to redo the automated regression test. Most organisations soon fall one release behind. Then there is a major release just before the summer holidays - and you're two releases behind. Soon after that you give up!
Solution
It is not very constructive to list all these increasingly obvious problems without offering a solution. Fortunately, we do have one. Rubric Consulting has just patented its Rubric Unified Method for Better Automation (RUMBA). This methodology addresses ALL the above issues and has already been applied at a number of (large) client sites.
The principle of RUMBA is that it breaks all tests into three components: navigate, populate and verify.
Another cornerstone of RUMBA is that it contains NO record-and-playback statements - it is entirely data-driven. Once you have got it working on your system (two to four weeks' effort) there is little or no maintenance, and your (trained) existing test analysts can maintain and enhance it!
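To make the idea concrete, here is a hypothetical sketch of a data-driven test broken into navigate, populate and verify components. It illustrates the general pattern only - RUMBA's actual implementation is proprietary, and every name below (the functions, the element ids, the test_cases.csv file) is invented for the example:

    import csv
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def navigate(driver, url):
        # Navigate: bring the AUT to the screen under test.
        driver.get(url)

    def populate(driver, fields):
        # Populate: fill the screen from one row of test data.
        for element_id, value in fields.items():
            driver.find_element(By.ID, element_id).send_keys(value)
        driver.find_element(By.ID, "submit").click()

    def verify(driver, expected):
        # Verify: compare what the AUT shows with the expected result.
        actual = driver.find_element(By.ID, "result").text
        assert actual == expected, f"expected {expected!r}, got {actual!r}"

    # The test data lives in a file the test analysts maintain; one script
    # serves any number of test cases, so there is no 1:1 script-per-test.
    driver = webdriver.Chrome()
    with open("test_cases.csv", newline="") as f:
        for row in csv.DictReader(f):
            navigate(driver, row.pop("url"))
            expected = row.pop("expected")
            populate(driver, row)
            verify(driver, expected)
    driver.quit()

Because each new test case is just another row of data, extending coverage needs no new code - which is what makes it realistic for trained test analysts, rather than programmers, to maintain the suite.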
Clients who had previously struggled to implement test automation and have since adopted this methodology report the following benefits:
* Better communication between developers and test analysts.
* The ability to prepare sanity/smoke tests before the software arrives and to execute them as soon as it is delivered into the QA environment.
* A natural improvement in the test process due to the nature of the RUMBA methodology.
* An increase in the amount of testing being done.
* Completion of each release's testing effort in less time.
* The ability to maintain the automated sanity, regression and functional tests with less-skilled test automation resources.
In the words of one of our clients: “Rubric has taken test automation to the next level.”