
Pity the testers

Software quality is a many-splendoured thing. Although the toughest job is often reserved for testers, and the biggest stick reserved for project managers, ultimately, as always, the buck stops with directors.
By Ivo Vegter, Contributor
Johannesburg, 14 Oct 2002

Depending on who you are in an organisation, the term "software quality" takes on shades of meaning that bring with them specific imperatives, goals and drivers.

End-users see only the end result of a highly complex software development process, and of the effort software architects, developers, testers and project managers - be they in-house or commercial - put into creating a high-quality product. Achieving acceptable quality, and even defining it, is a nightmare for project managers, who often find themselves unable to commit accurately to shipping deadlines; for product marketers, who can't understand why the testers are taking so long; for developers, who fail to see why standards, methodology and testability should take precedence over clever or quick ways of doing things; and for testers, who need to explain the myriad dependencies on which their work hangs.

Some aspects of software quality might surprise many directors and managers responsible for software projects. This feature looks at these, and discovers how South African software developers - both commercial and in-house - are responding to the issues that arise. These issues focus particularly on the assessment and mitigation demands that the King II report on corporate governance places on the shoulders of directors and managers of companies.

The impact of software quality

The average employee, not concerned with board-level debates about the competing claims of the IT budget and other expenditure aspirations, relies more than ever on the ubiquitous desktop and applications, and experiences growing pressure to deliver more in less time, according to Tracey Newman, MD of FrontRange Solutions SA, a customer relationship management vendor.

"Dependence on IT applications is increasing. Tolerance of system downtime or service failures is decreasing," she says.

Customers have over the years rolled out e-mail infrastructure, Web-based customer support applications, and technology-dependent processes that achieve ever-increasing functionality and performance. But as user expectations increase, so does the impact of the reliability of the technology on which they depend.

Dependence on IT applications is increasing. Tolerance of system downtime or service failures is decreasing.

Tracey Newman, MD, FrontRange Solutions SA

One only needs to hark back to stories of the 1969 moon landing, when the onboard computer - less powerful and sophisticated than even the cheaper cellphones of today - crashed several times only minutes before the Eagle lunar module touched down, to realise the often critical nature of software quality and how important this is to the people who depend on it. Jokes about blue screens of death on space shuttles are uncomfortably close to home at Kennedy Space Centre.

In SA, an example of the impact that a software system can have can be found at the Department of Land Affairs, which implemented an Oracle object relational database management system in its efforts to process land restitution claims.

The slow pace of progress of land reform has come under severe criticism in the media and among claimants. Only when one hears Pregan Pillay, deputy director of corporate information systems in the department, talk about the complexity of the information required does one start to appreciate the difficulty of the task, and the importance of a reliable information system.

Planners need to source and access geographic information, as well as data residing in legacy mainframe computing systems. This information includes cadastral data of individual land parcels from the Surveyor General in Pretoria, topographical information from the Survey and Mapping Branch in Cape Town, land ownership data, and infrastructure information relating to roads, water, power, education and healthcare.

"It's a process that can take several months, and often, key information, such as land ownership, is out of date by the time everything is collated," says Pillay. One of the benefits of the solution the department chose for storing spatial and other data accessed from multiple platforms is the expected improvement in data integrity.

"The information - essentially real-time data - will be delivered to the desktops of the key decision-makers and officials in the land reform process via the Web," he adds. The reliability of such a system, considering the complexity and importance of the data, is paramount.

Similar projects were undertaken by Oracle and its partners: one for the Department of Labour, to control the disbursement of funds for skills development grants managed by the 25 sector education and training authorities (SETAs), and another at the Gauteng Provincial Government's Department of Housing.

Nilesh Singh, an official with the housing department, says that with about 500 000 people on its waiting list, "it was not unusual" with its previous systems "for a local authority to lose 1 000 people off a waiting list if the system crashed".

Examples such as these illustrate not only the complexity of modern software applications, but also the extent of the impact that sub-standard solutions might have.

Quality imposed from outside

The question of how and by whom the quality of software should be addressed is surprisingly complex.

Colin McCall-Peat, head of project governance at the Liberty Group, says that although his organisation has a risk department, there are often no specific steps taken on the ground. "People focus not on quality, but on deadlines, with the result that quality is imposed from the outside."

Steven Lauter, GM of solutions at iLab Project Services, a software quality consultancy that hosted the round-table discussion on which parts of this feature are based, points out that while time and cost are measured, software defects aren't always included in risk assessment.

Says Leslie Barry, senior manager of professional services on the Sasol account at Comparex Africa: "The severity of defects in terms of financial risk isn't measured. Instead, it's measured in terms of lines of code."

He adds that the challenge for directors is to assimilate the risk on multiple projects. "Risk is to be assessed and managed from project conception, and everyone must be aware of the risks."

McCall-Peat agrees, noting that while risk is usually assessed at the start of a project, it is often shelved during the course of the project.

The problem of how the value of software is perceived leads to failures in risk management, according to professionals involved in the field.

Paul Meehen, executive responsible for software development at the IQ Business Group, says the value of software is measured by the time it takes to develop, rather than by the risk it mitigates.

"People know how to measure time and deadlines, they know about costs and budgets, but usually, they can't answer quality requirements," observes Andri Buys, senior project manager at Standard Bank.

Themi Themistocleous, director at Software Futures, an MGX subsidiary, agrees. "If you're not writing code, you're not succeeding, is the perception - and it's wrong," he says.

"If you can show correct process and due diligence, you can avoid liability. We`ve had to become more prescriptive in implementing formal processes that include risk mitigation. This is difficult if you also want to give people freedom."

He says that end-to-end proof of concept is an essential part of this process.

Explains Rudi van Rensburg, a business unit manager at the same company: "As a software development company, our bread and butter is how good the quality is. By the time you hit the legal stuff, you've already lost. Software quality should be a customer service issue."

The pitfalls of testing

The ultimate operational impact of software quality requirements lies with testing. This area is as unsexy as it is critical to the overall success of a project.

Themistocleous trots out what he calls the 1-10-100-1000 rule. These numbers represent the order of magnitude of the cost of fixing bugs at the start of development, at the mid-point, at the end, and when the system has gone into production, respectively.
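
To make the arithmetic concrete, here is a minimal sketch of what the rule implies; the R500 base cost is an invented figure for illustration, not one cited by Themistocleous.

```python
# The 1-10-100-1000 rule as relative cost multipliers per phase.
PHASE_MULTIPLIER = {
    "start of development": 1,
    "mid-point": 10,
    "end of development": 100,
    "production": 1_000,
}

base_cost = 500  # hypothetical rand cost of fixing a bug caught at the start

for phase, factor in PHASE_MULTIPLIER.items():
    print(f"Bug fixed at {phase}: R{base_cost * factor:,}")
```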

iLab`s Lauter alleges that testing is too often seen as a verification process at the end of a project, rather than as part of the process.

"The software engineering industry is not process-efficient, and this affects the international competitiveness of local software, ill-serving local business. How much is the customer losing on systems that, even if they aren't defective, roll out too slowly - situations where the business plan is ready, but the software is not?" he asks, rhetorically.

It was not unusual for a local authority to lose 1 000 people off a waiting list if the system crashed.

Nilesh Singh, housing department official, Gauteng Provincial Government

The life of a software tester is a hard one. This is partially the result of being even less understood than the life of a developer. At least the developer does something productive. Even if his code is measured in lines, and not by how superbly brilliant a particular code solution is, he's creating something. A tester just double-checks that it works, doesn't he?

James Bach is the founder of Satisfice Inc, a test training and consulting company. He specialises, he says, in expert testing under chaotic conditions. And he lucidly paints the life of a tester in an article titled "Explaining Testing to Them".

The words in the title are capitalised for a reason. "Testing" is a discipline of surprisingly wide scope and complexity. It involves writing good test cases, ensuring that developers deliver code that is, in principle, "testable", and working to create a software development environment that makes defects quick to find, easy to isolate, and hence possible to fix.

"Them" are the software developers, project managers and those idiots in sales who insist on shipping products yesterday.

Bach points out that setting deadlines for testers is a big problem. A tester can only estimate a likely test schedule, and many project managers who are adept at "critical paths" find "deadline estimate" a contradiction in terms.

A classic example in the testing fraternity is the challenge that was posed to a group of developers: "The developer responsible for the code in which the next testing cycle finds the fewest bugs gets next weekend off." One smart aleck placed what is known in the trade as a "blocking bug" in the logon screen. Testing duly found this bug - and was unable to proceed until it was fixed. He won his weekend off with his tally of one bug found. This illustrates why testing is not something that can be accurately planned for or scheduled. Some defects prevent further testing. Other defects take a long time to isolate.

On the other hand, projects do have timeframes, and testing needs to fit into this timeframe.

There are several partial solutions to the problems testers face. One is better communication with developers and project managers. When project managers understand how well-written specifications enable testers to plan test cases in advance, or when developers understand how to write code that is inherently easy to test, project flows are smoother, and testing is less likely to delay production.
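
A small sketch of what "inherently easy to test" can mean in practice; the example is ours, not from the article. The first function reads the real system clock internally, so its result cannot be checked deterministically; the second takes the time as a parameter, letting a tester supply any moment and assert the answer.

```python
import datetime

# Hard to test: reaches out to the real system clock internally,
# so the result depends on when the test happens to run.
# (Shown only for contrast; it is not called below.)
def is_business_hours_hard():
    now = datetime.datetime.now()
    return 8 <= now.hour < 17

# Easy to test: the clock is a parameter, so a test can supply
# any moment it likes and check the result deterministically.
def is_business_hours(now: datetime.datetime) -> bool:
    return 8 <= now.hour < 17

def test_is_business_hours():
    assert is_business_hours(datetime.datetime(2002, 10, 14, 9, 0))
    assert not is_business_hours(datetime.datetime(2002, 10, 14, 3, 0))

test_is_business_hours()
print("tests passed")
```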

There are many methodologies and tools that attempt to make this process easier to do, easier to predict, and easier to manage.

The dream of automation

Telkom, in its efforts to beef up customer service prior to facing competition - if and when it comes - has implemented a sophisticated system that allows its customer service representatives to activate or change the telephone service of customers in real-time.

The software handles up to three million transactions per month, and has become a mission-critical component of the company. Problems started arising, however, when it was found that a particular module suffered from a "memory leak" that required at least a daily restart, and started affecting the availability of the service activation system.

Checking all the usual memory management culprits, Telkom's developers were unable to find the problem over an 18-month period. Not until they invested in Compuware's DevPartner Studio, which includes the BoundsChecker tool for monitoring memory usage, were they able to pinpoint - within five days - which part of the program was causing the problem.
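
BoundsChecker is a commercial tool for C and C++ on Windows, but the underlying technique - snapshot memory allocations, run the suspect workload, and diff the snapshots to see where memory grows - can be sketched with Python's standard-library tracemalloc module. The leaky module below is a contrived stand-in, assuming nothing about Telkom's actual code.

```python
import tracemalloc

_cache = []

def leaky_module(transaction):
    # Contrived stand-in for the faulty module: it keeps a reference
    # to every transaction it handles, so memory grows without bound.
    _cache.append(transaction)
    return sum(transaction)

tracemalloc.start()
before = tracemalloc.take_snapshot()

for i in range(100_000):
    leaky_module([i] * 10)

after = tracemalloc.take_snapshot()

# Diff the snapshots: the biggest growers point straight at the
# allocating line of code.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```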

"The module has since been running without a glitch, and real-time service activation can now be provided at the acceptable norm of 99.5% availability," says Willie Marais, manager, engineering at Telkom.

Compuware's tools have been deployed in organisations as diverse as Mintek, which provides technology for specialised mineral processing and extractive metallurgy, and Intervate, a software development company focusing on intranet-based knowledge management and collaboration solutions.

In the latter case, the tools facilitate defect tracking.

Mike Cohn is an experienced software project manager who now writes on this subject for publications including STQE Magazine, which covers software testing and quality engineering. According to Cohn, tracking defect inflow and outflow is one of the better ways for testers to predict product release dates. Testing is slow going at first - partially because testers have to get to know the software and eliminate the kind of blocking bugs referred to earlier - but defects will come in at an increasing rate before starting to decline. At the same time, defect "outflow", or the rate at which defects are fixed, rises to a point close to the maximum capacity of the developers working on fixing them, and only once the inflow has slowed sufficiently will this line start dropping too.

By measuring the inflow and outflow, and weighting the defects according to their severity, a judgement call can be made as to when a product is sufficiently stable to release. By monitoring the pattern over several projects, this call can be made several periods (weeks or months, depending on the size and scope of the project) before the slowdown actually occurs.
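
A minimal sketch of the inflow/outflow bookkeeping Cohn describes; the severity weights, the weekly counts and the stability heuristic at the end are all invented assumptions for illustration.

```python
# Weekly defect counts by severity: (inflow, outflow) per week.
# All numbers below are invented for illustration.
SEVERITY_WEIGHT = {"critical": 10, "major": 3, "minor": 1}

weeks = [
    {"critical": (4, 1), "major": (10, 4), "minor": (20, 5)},
    {"critical": (2, 3), "major": (12, 9), "minor": (25, 15)},
    {"critical": (1, 2), "major": (5, 10), "minor": (10, 22)},
]

def weighted(counts, index):
    """Severity-weighted total of inflow (index 0) or outflow (index 1)."""
    return sum(SEVERITY_WEIGHT[s] * c[index] for s, c in counts.items())

open_load = 0
for n, week in enumerate(weeks, start=1):
    inflow = weighted(week, 0)
    outflow = weighted(week, 1)
    open_load += inflow - outflow
    trend = "declining" if inflow < outflow else "rising"
    print(f"week {n}: inflow={inflow} outflow={outflow} "
          f"open={open_load} ({trend})")

# A simple (assumed) release heuristic: weighted inflow has dropped
# below outflow and the open defect load is shrinking week on week.
```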

Yet, defining metrics is not easy. Consider the simple proposal of paying a bonus to the programmer whose bug fixes result in the fewest "comebacks" (where the problem thought to have been fixed recurs). You may end up paying the programmer who wrote the least code, tested it more exhaustively than was appropriate, and turned it over to quality assurance late.

There are many methodologies and standards for developing quality software.

The severity of defects in terms of financial risk isn't measured. Instead, it's measured in terms of lines of code.

Leslie Barry, senior manager: professional services (Sasol), Comparex Africa

Programming principles such as object-orientation, component software, and software patterns attempt to avoid quality problems by reusing code or software concepts that are known to work.
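
A small example of the reuse argument, in Python for illustration; the hand-rolled search below is the kind of code where off-by-one defects breed, while the standard library's bisect module is a component already exercised by countless users.

```python
import bisect

data = [3, 7, 11, 19, 42, 57]

# Hand-rolled binary search: easy to get subtly wrong (off-by-one
# errors in the midpoint or bounds are classic defect breeders).
def contains(sorted_list, value):
    lo, hi = 0, len(sorted_list)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == value:
            return True
        if sorted_list[mid] < value:
            lo = mid + 1
        else:
            hi = mid
    return False

# Reuse instead: bisect is a small, widely tested component.
def contains_reused(sorted_list, value):
    i = bisect.bisect_left(sorted_list, value)
    return i < len(sorted_list) and sorted_list[i] == value

assert contains(data, 19) and contains_reused(data, 19)
assert not contains(data, 5) and not contains_reused(data, 5)
print("both implementations agree")
```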

Alex Faul, account manager at Compuware SA, is a proponent of the Capability Maturity Model (CMM), a benchmark for measuring best practices in software development. Developed at Carnegie Mellon University, the CMM covers key progress areas in establishing project management controls, such as requirements management, project planning, project tracking and quality assurance.

"Although the CMM does not suggest a specific toolset with which to conduct a project," says Faul, "it does suggest where tools can be used to support the development process. Implemented correctly, these automated tools increase reliability, repeatability and consistency of the processes involved in software development."

Another popular quality management standard is dubbed SPICE, a contrived acronym for Software Process Improvement and Capability dEtermination. This is an international standard for software process assessment, carried out under the auspices of an impossibly numbered working group of the International Organisation for Standardisation (ISO/IEC JTC 1/SC 7/WG 10). Perhaps in protest against this bureaucratic nonsense, the SPICE user group is called SUGaR.

Jan Buys, independent testing services manager at Paracon, advocates a 10-step methodology that incorporates the best of several internationally recognised standards, including ISO 12207, CMM, ITIL (the IT infrastructure library, a consistent and comprehensive documentation of best practice for IT service management), and PMI (best practices as documented by the Project Management Institute).

"By combining all these areas into the test solution, the benefits are such that detecting scope changes and defects becomes so much easier, and covers the complete spectrum of system functionality and infrastructure," says Buys.

"As technology develops, so does the quality assurance environment, and the emergence of new technologies will hopefully bring us closer to producing a defect-free application," he adds.

It will ship, well, when it ships

It is a well-known truism that no software is bug-free. The remarkable discovery that this is often true even for the shortest, simplest programs is often a Road to Damascus experience for young programmers. Nasty bugs can appear in even the smallest bits of code, and be inordinately hard to track down. Worse, the last of the programmers who are able to sit with a printout and a pencil to fix these bugs were too old to be at Woodstock, and at best are now trying to flog their Cobol skills one last time before retirement.

Knowing that no software project will ever result in "perfect" software, a project manager is faced with the nettlesome question: when is a software project "done"?

One way of solving this problem is to develop, in conjunction with the developers, testers, and all the relevant decision-makers from the development company and the final customer, a set of release criteria.

People know how to measure time and deadlines, they know about costs and budgets, but usually, they can't answer quality requirements.

Andri Buys, senior project manager, Standard Bank

When buying commercial software, there are some rules of thumb. Customers often consider a new version not to be "done", for example. While beta testing may be formally over, customers consider the first release of a software product a kind of "commercial beta test". Better to wait and see whether others burn their fingers before committing.

An example: when Microsoft released the first version (numbered, for historical reasons, 3.1) of its Windows NT operating system, it was not done. Only once the minor version number had gone through one or two iterations did customers finally accept the software. Windows NT 3.51 (which, for those not familiar with Microsoft's numbering system, came two releases after 3.1) was widely considered to be the first version of NT that was sufficiently stable and reliable for large-scale roll-out.

By the same token, however, projects can drag on far longer than they might have, had the release criteria been specified properly. Johanna Rothman, another writer for STQE Magazine, relates a story of a software organisation that needed a Web-enabled version of a successful client-server product. The project over-ran the deadline by several weeks. At the retrospective, the responsible vice-president explained what he thought had been necessary for the project to be completed, upon which a senior developer said: "If I'd known that was all I had to do, I could have been done a month ago."

Rothman recommends creating objective release criteria to avoid scenarios where projects are deadline-driven, rather than quality-driven. Microsoft, after a series of debacles over missed shipping deadlines and premature releases of buggy software, ceased publishing expected shipping dates. "When it's ready" is the message one hears from Redmond nowadays. This is smart.

Smart is also the acronym for Rothman's "criteria for release criteria". They must be specific, measurable, attainable, relevant and trackable.

She explains: "Each criterion should be specific for this product at this point in its lifecycle. When you make a criterion measurable, you're ensuring that you can evaluate the software against the criteria. Release criteria are not the place for stretch goals, so make each criterion attainable. Make sure your criteria are relevant by evaluating this product against what the customer wants and what management wants. When you make criteria trackable, you can evaluate the state of the criteria during the project, not just during the last week."
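
One way to make criteria measurable and trackable in Rothman's sense is to encode them as explicit checks that can be evaluated on every build. The criteria and thresholds below are invented for illustration; a real list would be negotiated by developers, testers and the customer.

```python
# Hypothetical release criteria encoded as measurable checks.
# Names and thresholds are invented for the example.
metrics = {
    "open_critical_defects": 0,
    "weighted_open_defects": 42,
    "regression_pass_rate": 0.998,
    "mean_uptime_hours": 340,
}

release_criteria = [
    ("no open critical defects", lambda m: m["open_critical_defects"] == 0),
    ("weighted open defects under 50", lambda m: m["weighted_open_defects"] < 50),
    ("regression suite passes 99.5%+", lambda m: m["regression_pass_rate"] >= 0.995),
    ("336h soak test without restart", lambda m: m["mean_uptime_hours"] >= 336),
]

# Evaluating the same checks on every build makes the criteria
# trackable throughout the project, not just in the last week.
for name, check in release_criteria:
    status = "PASS" if check(metrics) else "FAIL"
    print(f"{status}: {name}")
```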

Requirement management headaches

"The end-user is partially responsible for defining quality criteria and requirements," claims Debbie Nelson, MD of iLab Project Services.

This statement, made at the round-table discussion on software quality hosted by her company, elicited a response that goes to the heart of the problem of quality assurance.

"Requirement management is the single biggest cause of problems in our organisation," says Software Futures' Themistocleous.

He adds, however, that process models, such as those developed by Gartner, a global consultancy and research house, have made great strides towards managing requirements.

"Much has changed. There is a real strong ability nowadays to bring projects in on time and on budget. We develop use-cases - mini-processes into which you break down a project. The customer then prioritises them. The problem is that the user might not know what he wants or what he can get, but by prioritising and having "go" or "no-go" points at each stage, or use case, this becomes manageable," says Themistocleous. "This also means new requirements can be accommodated, and use-cases can be re-ordered or substituted."

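A loose sketch of the mechanism Themistocleous describes - a customer-prioritised backlog of use-cases with a go/no-go decision at each stage; the use-case names and the always-approving review below are invented placeholders.

```python
# Hypothetical prioritised backlog of use-cases (lower = sooner).
use_cases = [
    ("capture claim", 1),
    ("validate land parcel", 2),
    ("generate grant letter", 3),
]

def gate_review(name: str) -> bool:
    """Stand-in for the customer's go/no-go decision at each stage."""
    print(f"reviewing use-case: {name}")
    return True  # a real review could stop, re-order or substitute

for name, _priority in sorted(use_cases, key=lambda uc: uc[1]):
    if not gate_review(name):
        break  # "no-go": stop and re-plan the remaining backlog
    print(f"  build and accept: {name}")
```
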
However, Riaan Postma, technical specialist at Standard Bank, says service level agreements and testing are often not part of the requirement specification. "We do have processes to ensure that the required specifications are complete, but it falls flat because it's not measured throughout the process."

Implemented correctly ... automated tools increase reliability, repeatability and consistency of the processes involved in software development.

Alex Faul, account manager, Compuware SA

Comparex's Barry agrees, saying the life of requirements seems to extend only to the start of a project, and is limited to the technology - not business - requirements.

"The responsibility for defining requirements, and prioritising them, lies jointly with the user and developer," says Standard Bank's Buys. "Then accountability is defined based on the type of error."

iLab's Lauter adds that the complexity of software projects makes it obvious that nobody can guarantee zero defects. "But we can define good processes to minimise the risk. If a process is provable, liability isn't a problem."

Who carries the can?

In the case of bespoke applications, the customer - be they in-house or commercial - can make a significant difference to the quality of the software by contributing clear requirement specifications and ensuring over the life of the project that these are adhered to. Similarly, project managers, developers and testers play a role in prioritising what is important to the customer.

In the commercial software case, this responsibility lies squarely with the managers that oversee a project. In essence, the marketers and salespeople are the customers that define requirements and quality criteria.

In both cases, however, the question of where the buck stops arises. Who carries the can if software does not perform as advertised? Who carries the can if software breaks and costs a user millions?

Requirement management is the single biggest cause of problems in our organisation.

Themi Themistocleous, director, Software Futures

Disclaimers in licence agreements mitigate the risk for commercial (packaged) software developers, and limit their exposure to financial loss (if customers stop buying the software product) or brand impairment ("Boy, these people's software is buggy!").

In bespoke or in-house software development, the risks are more profound, and less easily hedged. Since "people don't know what they don't know," as Themistocleous eloquently puts it, putting everything in contractual fine print is well-nigh impossible.

Traditionally, directors have been officers appointed by shareholders to manage and protect their investment. Recently, the King reports on corporate governance and similar trends worldwide have re-emphasised the risk management responsibility of directors.

When directors, therefore, oversee companies that are involved in software development, it would be politic for them to apprise themselves of the complexities of quality assurance in software, the risks associated with software defects, and processes and standards that can mitigate this risk.

Similarly, customers will become more stringent in their demands. As Themistocleous explains: "When you get a tender with four good respondents at the same price, how do you know which will have the lowest long-term maintenance cost? Nobody asks that question."

But as directors become more insistent on measuring and controlling risk associated with purchasing software of questionable quality, they will.

So next time the testers ask for another week, don't just write them off as lazy. They, more than anyone, are intimately familiar with the process. When evaluating their judgement calls, look at them as the risk mitigation opportunities they are. King II demands that you do.
