"The communications revolution almost certainly is raising productivity," Frances Cairncross writes in her latest book, The Death of Distance 2.0: How the Communications Revolution will Change our Lives.
Cairncross, perhaps unintentionally, puts her finger right on the nub of the problem. Note the careful use of the phrase "almost certainly". It's an old journalistic trick. Something "seems to" be the case, or "Bob claims...", or "Many believe that..."
The mantra that technology has improved our productivity seems common cause, and was never even questioned in the five-odd years during which even those with nothing but a catchy payoff line were getting filthy rich. Yet, on either side of the bubble - and even occasionally during the dot-com boom - doubts were expressed about the veracity of this claim.
Back in the early 1990s, corporate disillusionment with the millions spent on PCs created the right atmosphere for Robert Solow, Nobel Laureate, economist, and back in vogue, to pronounce famously that the computer was visible everywhere, except in the productivity statistics. This was indeed true at the time. Productivity growth had remained fairly level for 20 years.
Since then, during the time when IT investment peaked, productivity growth did increase - in the US, at least. By how much depends on your yardstick, but that there has been accelerated growth is undeniable. So, Solow was too much of a pessimist, wasn't he?
Well, that depends. The fact that there is a correlation between the dot-com boom and higher productivity growth does not imply that one caused the other.
The degree of interdependence between IT investment and productivity affects the answers to some questions that are probably foremost in the minds of managers today: whether IT investment is worth it; if so, how much should be spent; and what measures will ensure the returns that investors, employees and customers expect.
The productivity dilemma
Productivity, in the strict economic sense, is defined as output per worker-hour. This definition discounts the gains made by longer working hours, and the variation resulting from changing unemployment levels. An entire country's productivity statistics should, of course, take these into account, and the more a country's population produces in a year, the better the country's official productivity statistics will look. However, to analyse the impact of technology on productivity, these variations aren't relevant.
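The distinction matters in practice: output per worker-hour penalises output that is merely bought with longer weeks. A rough sketch, with invented figures (not drawn from any statistics cited here):

```python
# Labour productivity in the strict sense: output per worker-hour,
# not output per worker. All figures below are hypothetical.

def labour_productivity(output, workers, hours_per_worker):
    """Output produced per hour worked."""
    return output / (workers * hours_per_worker)

# Two firms with identical output and identical headcount...
firm_a = labour_productivity(output=100_000, workers=50, hours_per_worker=40)
firm_b = labour_productivity(output=100_000, workers=50, hours_per_worker=50)

# ...but the firm working longer weeks is less productive per hour:
print(firm_a)  # 50.0
print(firm_b)  # 40.0
```

Measured per worker rather than per worker-hour, the two firms would look identical - which is exactly the distortion the strict definition avoids.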
The long-tolerated use of the phrase "new economy" wasn't just conjured up by IPO-sponsors and Internet-geeks. There was something to it, even if few were quite sure exactly what it was.
According to The Economist, the most distinct definition of "new economy" is a sustainable increase in the rate of growth of labour productivity as a result of the production or deployment of information technology.
Sceptics of the new economy had the productivity growth figure thrown at them. There. Explain that.
In America, the change was dramatic. After two decades of a pedestrian 1.4% rate, productivity growth accelerated in the second half of the 1990s to 2.5% (per worker per hour), culminating in an astounding 5.2% by the middle of 2000 - the biggest annual productivity gain in 17 years.
While such stellar figures are clearly not sustainable, some economists claimed that 3% annual average growth rates were sustainable as late as September last year.
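Small differences in these annual rates compound dramatically, which is why the debate mattered so much. A quick sketch of the cumulative effect (the 20-year horizon is an arbitrary choice for illustration):

```python
# Cumulative effect of compounding annual productivity growth.
# 1.4% is the pre-1995 US trend cited above; 2.5% and 3% are the
# late-1990s figures under debate.

def cumulative_growth(annual_rate, years):
    """Total productivity gain after compounding for the given years."""
    return (1 + annual_rate) ** years - 1

for rate in (0.014, 0.025, 0.03):
    gain = cumulative_growth(rate, 20)
    print(f"{rate:.1%} a year for 20 years -> {gain:.0%} more output per hour")
```

At 1.4% a year, output per hour rises roughly a third over two decades; at 3%, it rises by about four-fifths - a gap large enough to reshape living standards, hence the heat around which figure was sustainable.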
Across the pond, however, the second half of the 1990s saw much lower productivity growth. The UK registered 1.5%, even though its long-term trend line runs almost a percentage point higher than in the US, and spending on IT didn't lag significantly behind that in the US.
Economists found themselves perplexed.
Similar worrying observations - though frequently denied on such solid grounds as "common wisdom" and "everybody knows" - appeared at organisational, and even individual level.
Some productivity gains were very obvious - particularly around the time when companies computerised their operations for the first time. More recently, however, many companies feel cheated by expensive consultants with their equally expensive implementations.
"We can`t kick IT consultants and vendors out the door fast enough," said one representative of a large corporate end-user in SA. (Not wishing to look like a fool to the vendors that profited handsomely from years of spending, he chose to remain anonymous.)
What happened to the mantra of "technology improves productivity"?
The US figures during the boom times have several explanations.
When an economy grows quickly, companies work their staff harder. "Exploit this while it lasts, and we`ll pay you a share of the profit." But the resultant gain in productivity is, because of the limits of human endurance, transient.
Higher than normal investment in fixed assets also contributed. This is one explanation for the difference between US and European productivity growth: in the US, software spend was considered investment; Europe considered it merely as intermediate consumption.
"The surge in spending on software in recent years inflates American growth, but not Europe`s," The Economist points out, noting, however, that the acceleration of American productivity growth is undeniable, from its "dismal rate in the two decades before 1995".
By and large, technology basked in the credit for much of the rest of the unusual acceleration in productivity growth.
Some warn that the measurements used on a national level (based on gross domestic product) are misleading. One factor that influences GDP, for example, is a sudden change in the rate of capital depreciation.
Because much of the investment of the late-1990s was funnelled into shorter-lived capital goods such as computers, which depreciated over much shorter terms, a significant amount of the increased output went into replacing worn-out capital. The effect of this increase in the rate of depreciation - lowering productivity growth by half a percentage point - should be temporary, however, and should return to "real" levels when the average rate of depreciation stabilises.
Conversely, optimists point out that at the macro-economic level, productivity statistics based on GDP do not take into account consequences of technology such as improvements in the quality of products and services. These optimists argue that productivity growth is in fact understated in the official statistics.
The different views had it that productivity growth was either overstated or understated - depending on your measures - partly because of the cyclical effect of the dot-com boom, and partly because of unsustainable growth in IT investment.
But even revised estimates of sustainable productivity growth tended to stay above 2%, and still showed a dramatic increase over the period to 1995, albeit nowhere near the miraculous "new economy" numbers touted in 1999.
It does matter...
Many casual observers would leave such a debate to the dense and inscrutable pages of highbrow publications like the aforementioned Economist. However, the reality - or otherwise - of productivity growth resulting from technology implementations is a big issue.
On the one hand, IT managers and assorted techies remain convinced (and not without reason) that technology can solve a host of problems, create a host of new revenue streams, and generally justify its existence.
On the other, investors and directors are disillusioned with the long-term promises seen in the late 1990s, and expect every project to show a guaranteed (financial) return to justify its funding.
In the fray are the customers of the retailers and banks - the customers of those who buy IT - who simply expect ever better and smarter service from the businesses they patronise.
It is, perhaps, no surprise then, that - though many customer relationship management (CRM) projects have cost companies millions, and failed to deliver the promised returns - CRM remains one of the major focus areas for financial services institutions. As much as 64% of their IT spend goes towards improving their understanding of - and ability to market to - individual customers (according to a recent estimate by Datamonitor, a market analysis company).
Despite many attempts to devise them, there is a deplorable absence of generally agreed-upon measures or accounting standards to justify investment in IT. The best measure of IT's worth - inadequate though it is - is perceptible productivity growth. This, in turn, is broadly defined as the increase in output given the same input costs; that is, its financial return on investment (ROI).
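For a single project, that financial ROI reduces to a simple ratio. A minimal sketch, with entirely invented figures:

```python
def roi(gain, cost):
    """Simple return on investment: net gain as a fraction of cost."""
    return (gain - cost) / cost

# A hypothetical project costing R2m that yields R2.6m in measurable
# savings and extra revenue over its evaluation period:
print(f"{roi(gain=2_600_000, cost=2_000_000):.0%}")  # 30%
```

The arithmetic is trivial; as the article goes on to argue, the hard part is agreeing on what counts as "gain" and over what period.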
Derek Wilcocks, executive for strategy and technology at Dimension Data South Africa, gives a clue to why companies continue to spend on IT despite the declining faith in technology. He believes IT has indeed delivered a measurable productivity increase at local companies, but says they were "often not [of] the magnitude of productivity gains initially expected".
He says, however, that evidence of productivity increases can be seen not only in macro-economic statistics, but also in structural changes in the workforce - such as higher numbers of call centre agents rather than field sales or service representatives - and in increased operating margins for service-based companies such as financial institutions.
Productivity and ROI
Russell Swanborough, who has worked for 25 years on a new way of understanding and managing information, disputes that real productivity gains have been made in the last few years. On the contrary, he claims. He quotes Paul A Strassmann, former chief information technology executive for General Foods, Kraft, Xerox and the US Department of Defense, as saying: "In 1987, $1 of staff salary in banks resulted in $7.50 worth of revenue. Now it buys $5.30."
The scope of ROI
Factors organisations should review when trying to determine ROI:
* Number of devices;
* Number of different types of device (eg servers, routers, different manufacturers);
* Number of locations;
* Number of end-users;
* Number of incidents;
* Number of events/alarms;
* Downtime/total outage time;
* Number of outages;
* Mean time to resolution;
* In-house-built management technology and associated support costs;
* Currently implemented commercial-off-the-shelf technology (numerous tools);
* Current staffing levels and additional staffing needs;
* Current support ratios;
* Administrators per infrastructure element (eg per server, per database, per storage device);
* Support engineers per end-user;
* Operations personnel per event/alarm;
* Timing (when are tangible effects costlier, and are these events accounted for?); and
* Impact of third-parties (eg business partners, outsourcers) should be identified and quantified.
ROI considerations
* Proposed time to ROI - should be six to twelve months;
* Impact of the pain the solution proposes to solve (eg major pain for a small audience versus moderate pain for a widespread audience); and
* Identification of metrics to measure ROI (eg costs of personnel, cost of downtime, impacted audiences, revenue impact of operational failure or slowdown, current customer satisfaction, current support ratios).
Tangible benefits
These are pure dollars that affect the bottom line:
* Hiring fewer administrators/staff;
* Fewer calls to support personnel;
* Better support ratios (fewer administrators per device or per user);
* Purchase avoidance; and
* Increased application availability - assumes cost of downtime is known, and is most typically associated with a revenue-generating application.
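These tangible categories lend themselves to a back-of-the-envelope payback calculation. The sketch below is illustrative only - every rand figure is invented, and a real case needs the cost baselines listed under "The scope of ROI":

```python
# Annualised "hard dollar" savings, grouped by the tangible categories
# above. All figures are hypothetical placeholders.
annual_savings = {
    "headcount avoidance (2 admins)":   2 * 400_000,
    "purchase avoidance":               250_000,
    "reduced downtime (10h @ R50k/h)":  10 * 50_000,
}

project_cost = 900_000
total_savings = sum(annual_savings.values())

# Payback period in months, to compare against a target window.
payback_months = 12 * project_cost / total_savings
print(f"total annual savings: R{total_savings:,}")
print(f"payback: {payback_months:.1f} months")
```

On these invented numbers the project pays for itself in about seven months - inside the six-to-twelve-month window suggested above; change any input and the picture shifts, which is why the cost baselines matter so much.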
Intangible benefits
True but "soft" savings that are not as easily attributed to the bottom line:
* Process improvements;
* Productivity improvements;
* Efficiency gains;
* Speed;
* Goodwill benefits (eg end-user satisfaction, better business alignment);
* Reduced complexity (eg fewer tools, consolidated servers, etc) - could be tangible, but the actual complexity is hard to quantify; and
* Availability - typically as related to making end-users more productive.
Swanborough suggests that a helpful start would be to separate the concepts of quality and productivity. There are some things, nowadays, that one simply can't do without IT, so productivity comparisons are moot. In many cases, however, IT has brought about an improvement in quality, not necessarily in productivity as it is generally measured, he says.
He insists that the only rational measure of productivity in an IT project is its ROI. And large software concerns themselves admit that the majority of implementations don`t result in measurable ROI, he says.
He believes that usually, the returns of a project are undone by the initial and ongoing costs of the technology itself.
The Meta Group, a research consultancy, distinguishes between tangible and intangible ROI. The former, often referred to as "hard dollars", includes savings resulting from reduction in headcount, hiring avoidance, technology purchase avoidance, revenue realised, or avoidance of lost revenue.
"It cannot be expected that any ROI will be realised without an understanding of internal costs," it states in a research note, adding that currently, most organisations are seeking tangible ROI.
It describes intangible ROI, or "soft dollars", as returns that cannot be tied directly to cash, but that achieve cost savings nonetheless, such as process improvement, efficiency gains, productivity gains and increased end-user satisfaction.
Richard Firth, chairman and CEO of MIP Holdings, lists several factors for the frequent failure of technology implementations to deliver a measurable return.
"The majority of IT projects land up over-running the deadlines, costing more, and thus blowing holes in any productivity gains expected," he says. "Many IT companies are made up of young individuals who have never been through the mills of a big corporate, and don`t understand how big companies actually run. Too many young, good technical people are let loose on large, complex projects that look easier than they are."
Worse, Firth says many projects are done for the wrong reasons. "Each CEO wants to boast to other CEOs on the golf course about what they're doing technologically, without understanding the true value added to the business. I had a discussion on the plane the other day with an IT manager at quite a large retail company. He explained why the company was embarking on a huge technology makeover: 'I recommended the company goes this way as it keeps my consulting and technical skills current.' Most consulting firms make recommendations to companies based on potential billable revenue, and not based on what is best for the companies."
This is evident in the experience of Eric Wilson, MD of Pick 'n Pay HealthPharm. As a former IT director, he understood what was and wasn't necessary, and what was and wasn't possible. So instead of blindly following the recommendations of consultants, he oversaw the data integration, business intelligence, point-of-sale and inventory systems of the pharmacies himself. The project came in ahead of deadline, and R26 million under budget.
Evan Summers, director at Obsidian Systems, claims that too many IT people favour cutting-edge technologies that are often immature and over-hyped, rather than looking for simple, cost-effective, proven - even boring legacy - technologies.
DiData`s Wilcocks is among the more optimistic of the interviewees, claiming that only five out of ten IT investments fail to deliver anticipated productivity gains. He cites unrealistic expectations, failure to understand physical processes before automating them, and failure to anticipate "soft" issues such as changes in organisational culture and behaviour among the primary reasons for this sorry state of affairs.
Stopping the rot
Kem Tissiman, MD of management consultancy Rethink, believes that after the big jump in corporate productivity that came with automating the back office, productivity growth in companies has levelled off. He says that although "power-users" can still gain from new technology, overall, new technology can even reduce productivity because of the need for retraining and the potential for faults in the new systems.
He strongly believes that the real scope for productivity gains lies not so much with technology as with management.
"Pure process redesign would do it without technology - although most such redesigns in my experience do involve technology," he says. "With the advances in technology, people assume it can do more than it actually does, but you first need good processes and good management."
Wilcocks agrees, saying that there are many contributors to productivity improvements, including training, process re-engineering and mechanisation. "IT is a very important contributor, though."
"The primary tool [to achieve productivity gains] should be analysis," explains Anton de Wet, technical director at Obsidian Systems. "Once that is used, IT could be one of the paths to a solution."
Swanborough is more adamant, however. What can companies do to ensure that IT implementations will result in the anticipated productivity gains? "They can't, using traditional methods," he asserts. "It's very difficult to calculate the cost and value of IT. Using our 'Absolute Informational Management Principles' it's simple, but the industry normally, at this point, throws its hands in the air.
"You have to teach people how to handle and manage information. What we teach is knowing what information to get to people, and what they should do with it. This goes well beyond just making a slew of information available through management information systems," he says.
"First you turn a bad organisation into a good one. Then you add IT."
A recent trend is to partner with IT companies (and in-house managers) to share the risk.
Asks Firth: "How many company executives measure or reward their IT managers based on company performance, increase in revenues or performance gains related to specific IT projects?"
Stephen Gardner, chairman and CEO of Peregrine Software, suggests that if companies want totally guaranteed returns, they should probably be looking to outsourcers who'll give them a predictable flow of cost, predictable service levels, and potentially shared benefit. "I think you've seen a worldwide trend back towards outsourcing for that very reason," he says. "People are looking for predictability, and perhaps sacrificing flexibility in order to get it."
Corey Ferengul, an analyst at The Meta Group, likewise predicts: "The renewed focus on ROI will give rise to other delivery models, eg management and application service providers, where vendors assume additional risk in their customers' success. We expect many companies to consider these options more carefully, because they hold the potential for faster ROI with less upfront investment. We also expect more 'creative' pricing models beyond standard perpetual licences. Although term agreements have become common, a rise in shorter-term agreements (eg one to two years versus the current three to five years) and an increase in subscription pricing will occur."
Computer Associates is one software vendor that has changed its own accounting to reflect annuity revenue from contracts, rather than accounting for once-off contracts to make quarterly targets. This results in better predictability of its own revenue, and it more closely matches what customers demand, explains Dan van der Westhuizen, CA`s South African MD and regional manager for Africa and the Middle East.
Gardner adds that in-house IT projects inherently involve some degree of risk. "The way you mitigate those risks first of all is to make sure that you're dealing with vendors that have good references and established credibility; where you can talk to their other customers who've had success with their products."
He explains that his company - while being delighted with big contracts when they come - has built its business model on providing value and proving its worth one step at a time. "I think companies that have that approach are less of a risk for customers than ones that are looking for a massive upfront engagement," he says. "It's very pragmatic - it's not particularly wild and innovative."
Firth succinctly summarises the attributes of a project that will likely result in measurable productivity gains. "The skill set around a good project is business process re-engineering, user requirement specification, system and database design to support the process, technical design and specifications, a good decision on whether to develop or buy, detailed project management and planning, and good code."
"A big productivity step up happens with major process change," adds Kissiman. "When you just make incremental improvements, it`s difficult to justify on a productivity level. In spite of all the technology, there`s still big scope for improvement in productivity by better management."