
Why your measures of success may lead to your failure

By Terry White, Executive Consultant, Netsurit

Johannesburg, 05 Aug 2020

In World War II, only a quarter of British bomber crew members completed their tour of duty of 25 or 30 missions. Bomber Command wanted to improve the protection on its planes, but because armour plating was too heavy to fit over the entire aircraft, it came up with a plan to fortify only the worst-hit parts.

They mapped and measured the areas on returned planes that had the most shrapnel and bullet holes (the fuselage, wingtips and tail) and resolved to strengthen these. But Abraham Wald, a statistician, pointed out that they only had measurements from planes that had made it back to base.

Those measurements indicated that planes could sustain damage to the mapped areas, keep flying and return to base; the planes hit elsewhere never came home. So the bombers should be reinforced where the survivors showed no evident damage.

As a result of Wald’s observation of what was not measured, they reinforced the engines, fuel tanks and cockpit, and the survival rate increased dramatically. The message here is that, often, it is what we don’t measure that provides the answers.

Robert McNamara, the US Secretary of Defense during the Vietnam War, claimed that the war would be won by maximising enemy deaths and minimising American ones. This approach to measurement became known as the McNamara Fallacy, and it has four steps:

  • Step 1 – Measure whatever can easily be measured.
  • Step 2 – Disregard that which can’t be easily measured or give it an arbitrary quantitative value.
  • Step 3 – Presume that what can’t be measured easily is unimportant.
  • Step 4 – Believe that what can’t be measured easily doesn’t exist.

Just because we measure something doesn’t make it important or useful.

Another piece of measurement silliness runs on this logic: we evaluate the success of a strategy by deriving and measuring a set of strategic metrics. All good so far. But then we conclude that if we track these measures and show that we have achieved them, the strategy is a success.

However, this argument is prone to a phenomenon called surrogation, in which the measure becomes a stand-in for the strategy itself. Managers who surrogate believe that if they hit the measurement targets, they have achieved the strategic objectives. But CIOs know better than anyone that this is a fallacy: an SLA scorecard can be green across the board while users remain unhappy with the services the IT department delivers.

Yet in our rush to measure customer satisfaction scores, mean time to repair (MTTR), open tickets, projects completed on time and on budget, and a long list of other quantitative measures, how sure are we that customers actually care how fast we fixed something? Do these measures tell us, for instance, whether projects delivered the benefits they promised, or whether those benefits are even still relevant? Do we have measures for how much a line-of-business executive trusts and relies on the CIO's opinion? Is the IT department delivering what CEOs really want, or what it thinks CEOs need?

So here's the thing. Three phenomena bias what and how we measure IT success: Wald's observation that what we don't measure is sometimes more meaningful than what we do; the McNamara Fallacy, which assumes that whatever is easy to measure must be useful and important; and the surrogation shift, which leads managers to mistake the measure for the objective.

CIOs have the odds stacked against them when it comes to measurement, but there is a way forward. We need to re-examine all our IT measures of success and reduce their number, measuring only those metrics that are pervasive and predictive. Pervasive metrics have significance elsewhere in the organisation (usually they are business measures) and endure for a meaningful period (allowing trend analysis). Predictive metrics directly lead to a business outcome.
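
To make the filter concrete, here is a minimal sketch in Python of what publishing only pervasive, predictive metrics might look like. The metric names and their classifications are invented for illustration; they are not taken from any real scorecard.

# A minimal sketch, assuming a hypothetical metric catalogue; the names
# and classifications below are illustrative only.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    pervasive: bool   # significant elsewhere in the business, endures over time
    predictive: bool  # directly leads to a business outcome

catalogue = [
    Metric("MTTR", pervasive=False, predictive=False),
    Metric("Open tickets", pervasive=False, predictive=False),
    Metric("Order-to-cash cycle time", pervasive=True, predictive=True),
    Metric("Digital channel revenue", pervasive=True, predictive=True),
]

# Publish only the metrics that pass both tests; everything else stays inside IT.
published = [m.name for m in catalogue if m.pervasive and m.predictive]
print(published)  # ['Order-to-cash cycle time', 'Digital channel revenue']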

For example, MTTR might indirectly lead to some business outcome (stability, perhaps), but what is its predictive value? User satisfaction may indicate happy users, but what is the business impact? Such measures may serve as sub-metrics feeding more pervasive and predictive ones, but they should be kept inside the IT department.
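
One rough way to interrogate a metric's predictive value, as the questions above ask of MTTR, is to correlate its history against a business outcome series. The sketch below uses the standard library's Pearson correlation (available from Python 3.10) and invented monthly figures purely for illustration.

from statistics import correlation  # Pearson's r, available from Python 3.10

# Invented monthly figures, purely for illustration.
mttr_hours = [4.1, 3.8, 5.0, 2.9, 3.2, 4.6]              # mean time to repair
order_completion = [0.93, 0.95, 0.90, 0.97, 0.96, 0.91]  # a business outcome

r = correlation(mttr_hours, order_completion)
print(f"Pearson r = {r:.2f}")
# An r near zero would suggest MTTR has little predictive value for this
# outcome and should stay an internal IT sub-metric.

Correlation alone does not establish the causal link a truly predictive metric implies, but a metric that fails even this weak test is a poor candidate for publication.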

The message is clear: Publish metrics that mean something and contribute to business results. Or don’t publish at all.
