A bad idea whose time has come

Moshe Kam floats the really bad idea of regulating the profession of software development.

By Ivo Vegter, Contributor
Johannesburg, 18 May 2012

The intention is good and the question valid: how can we know that someone who designs the software on which businesses, and sometimes lives, depend really knows what they're doing? Moshe Kam, head of electrical and computer engineering at Drexel University in the US, argues that they don't, and that better accreditation or government regulation might do for software development what it did for the medical industry. If there isn't a licence that proves you know this, that, or the other, you can't operate on patients, or offer your software to the public.

Would a formal licensing requirement improve matters, or would this, in the words of Roger Hislop, who interviewed Kam on the subject, invite the “cold, dead hand of regulation”?

The answer seems simple. Hislop is right. The price of regulation is that it stifles innovation, raises prices, protects market incumbents, and prohibits rational risk-taking.

In the highly influential sociology text Risk Society: Towards a New Modernity, Ulrich Beck posits a society in which the strictures of state or church control over human activity have been rejected, and individual agency and risk have become the dominant principles of social organisation. It is in this world of risk and individualism that governments are often asked to re-enter our lives with coercive measures designed to mitigate risks that are seen as threatening public well-being.

Kam is right that one of the best examples of extensive regulation is the medical industry. There's a reason doctors are so expensive, and providing affordable healthcare is such a tough political conundrum. Take drug safety. Superficially, it seems a noble and necessary objective. However, the consequences of ever-growing government regulation, as epitomised by the US Food and Drug Administration, are severe.

Critics who believe society has become too risk-averse argue that the number of new drugs rejected or delayed by the regulatory approval process is rising fast. The FDA has been forced to concede that the rate of success in bringing new drugs to market has declined in recent years, and that the size and duration of clinical trials have grown substantially. It denies that political pressure, public alarm or growing conservatism are to blame, but justifies the extremely high cost of drug approval by citing honest scientific concerns about safety.

A spokesperson for the FDA told the Financial Times of London that regulators today have a better understanding of how best to identify risks. “We're smarter now [but still] humbled by what we don't know,” she told the paper, in what amounts to a candid admission that knowing everything in advance, in order to reduce private risk to zero, is the ultimate aim of the central planner.

By contrast, the competitive market economy is based on the principle that no single agency can ever know everything that is needed to meet the subjective needs of everyone in the market.

In an in-depth series of research reports, the Centre for the Study of Regulated Industries at Bath University addresses the oft-voiced concern that risk is not effectively addressed by policymakers and the state. “This was articulated notably in the report by the Better Regulation Commission in 2006,” it notes, “which argued that society and policymakers are excessively risk averse and this is leading to overregulation and the stifling of individual initiative and responsibility. This can perversely lead to greater risk as individuals become insufficiently resilient and thus more vulnerable, and new ideas and innovations are not developed.”

The centre admits that excessive regulatory risk aversion is not universal, but it does conclude that all is not well. How best to solve the problem differs from industry to industry, and from case to case.

The question, then, is whether we ought to welcome greater systemic risk-aversion enforced by regulation in high technology, a field whose hallmark has always been fast-paced innovation and risk-taking, or whether throttling progress will prove counter-productive.

Kam concedes that industry self-regulation is a better option than government regulation but, as a former president of a major standards body, he is firmly in favour of some sort of formal regulation. Moreover, he appears dissatisfied with the raft of certifications already on the market, covering a vast range of technology-related disciplines. Many of these accreditations are highly respected and thoroughly test the competence of their holders, but a great deal of innovation occurs outside the strictures of corporate, big-budget, formally accredited software development.

Do we really want to require software developers to be licensed like doctors and pharmacists, and software programs to be tested and approved the way drugs are brought to market? The pharmaceutical industry, where lives and human health are far more clearly at stake, is suffering under the crushing weight of extremely high regulatory costs that stifle innovation and new product development. Do we really want to model software quality assurance on the same principles?

For some projects, customers (or employers) will insist on formal accreditation and various other assurances of competence. For many others, buyers of software might prefer to dispense with such formalities in the interest of lower cost or faster time to market. Each case is different, and in each case the customer is the best judge of what risk to take. The notion that any set of regulations, no matter how comprehensive, can adequately address all the risks is vain, and it shifts responsibility away from the customers who take the risk in the first place. As the Bath study says, blinding customers to risk can perversely lead to greater risk.

“The cold, dead hand of regulation, or the caring, positive embrace of quality and high standards?” asks Hislop. The answer seems obvious, but that this question even needs to be asked should scare anyone who thinks technology and innovation are good for business.

Instagram was developed, grew successful, and was ultimately sold for a billion dollars. If we start pre-qualifying developers simply for the right to offer software to the public, expect a billion-dollar price tag for the next innovative application, before it even has its first customer. Instagram would probably never have got off the ground, nor would Facebook have existed to buy it.

Would that have made the world richer or poorer?
