AI: The growing case for transparency

As artificial intelligence (AI) becomes part of our everyday lives, the need to move beyond the black-box approach is becoming increasingly urgent.

Johannesburg, 10 Feb 2020
Lee Jenkins, Chief Technology Officer, ETS

We already have a legal framework relating to data protection – GDPR in the European Union and POPI here in South Africa are examples – but, as yet, we lack any regulatory framework for what companies do with their data. Companies and governments are now using AI not only to analyse their data, but also to support decision-making and, increasingly, to automate the whole decision-making process. Because these decisions have implications for people, the need for AI to be regulated is becoming acute.

Because the financial services industry has been among the strongest adopters of AI, financial regulators may well pioneer this area.

To understand why a new set of regulations is necessary for AI, we need to look at a number of issues.

To begin with, let’s remind ourselves of how pervasive AI is becoming. AI is forecast to grow at a compound annual growth rate of 55.6%, becoming a $118.6 billion industry by 2025. Andrew Ng, a tech entrepreneur, Stanford faculty member and founder of the Google Brain deep learning project, put it in a nutshell: “AI is the new electricity”, with the power to transform every major sector.(1)

At a somewhat trivial level, every time a Web site recommends content for you, or you get an e-mail about last-minute deals to Thailand, you can bet that some algorithm has been hard at work on data about you. But as AI, along with supporting technologies like machine learning and neural networks, grows in sophistication, more and more of the work done by humans will be done by machines.

Ever wondered how companies can approve a credit application, a loan or a mortgage within a few hours? Or how your medical aid company seamlessly adjusts your contribution based on your lifestyle? None of this is done by humans – the volume of data and the scale of the customer base are simply too big, especially within the short timeframes the market demands. What makes it possible is the availability of data, the tools to analyse it and, now, the ability to make decisions based on the analysis and on certain assumptions – all in a rapid, automated process.
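To make the mechanics concrete, here is a deliberately simplified sketch in Python of what such an automated decision can look like. It is an illustration only: the field names, weights and approval threshold are invented for the example and do not come from any real lender’s model.

# Hypothetical sketch of an automated credit decision: applicant data in,
# score and decision out, with no human in the loop.
# All field names, weights and thresholds are invented for illustration.

def credit_score(applicant: dict) -> float:
    """Combine a few applicant attributes into a single score."""
    score = 0.0
    score += 2.0 * min(applicant["monthly_income"] / 10_000, 5)    # income band
    score -= 1.5 * applicant["existing_debt_ratio"] * 10           # indebtedness
    score += 0.5 * applicant["years_at_employer"]                  # stability
    score -= 3.0 if applicant["missed_payments_last_year"] else 0.0
    return score

def decide(applicant: dict, threshold: float = 5.0) -> str:
    """Return an automated approve/decline decision."""
    return "approved" if credit_score(applicant) >= threshold else "declined"

applicant = {
    "monthly_income": 32_000,
    "existing_debt_ratio": 0.35,
    "years_at_employer": 4,
    "missed_payments_last_year": False,
}
print(decide(applicant))  # the whole "decision" takes milliseconds

The point is the shape of the process: structured applicant data goes in, a score comes out, and the decision follows mechanically, with no human review in the loop.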

It all looks like magic when “the system” pops out a decision – until it’s one that prejudices you: the loan or mortgage is refused, or your medical aid contribution is raised. These AI-backed decisions are typically all but impossible to query, because in all likelihood the institution itself has no real insight into the model on which the decision-making process is based.

In many cases, too, the institution cannot even trace the decision pathways through which a particular decision was made.

All of this stems from the fact that AI skills are in drastically short supply. Writing a successful algorithm requires not only rare data science skills, but also deep industry knowledge – and people who combine the two are scarce. Only a few corporates have the budget and the strategic need to build up their own AI capability: most organisations rely on external AI providers.

The point here is that the individual writing the algorithm may, wittingly or unwittingly, incorporate bias into it. Here’s a simple example: a mortgage application for a property in a certain area might be rejected because the algorithm equates that area with an adjoining neighbourhood that is considered high-risk. Because the decision took place inside a black box, the lending institution cannot explain why the decision was made, or assess whether some unconscious bias played a role.
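To show how such a proxy can be surfaced once a model is opened up, the sketch below fits a simple logistic regression on fabricated application data in which an “area B” flag correlates with historical rejections, and then prints the learned weights. The data, the feature names and the choice of library (scikit-learn) are assumptions made purely for the illustration.

# Sketch: exposing a location proxy in a toy mortgage model.
# The data is fabricated: in this fictional history, area B borders a
# "high-risk" neighbourhood and its applications were routinely declined,
# so the model learns to penalise the area itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per application: [income_band, debt_ratio, is_area_B]
X = np.array([
    [4, 0.2, 0], [5, 0.3, 0], [3, 0.4, 0], [5, 0.1, 0],
    [4, 0.2, 1], [5, 0.3, 1], [3, 0.4, 1], [5, 0.1, 1],
])
# Outcomes: 1 = approved, 0 = declined. The financial profiles are identical;
# only the area flag differs.
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

model = LogisticRegression().fit(X, y)

for name, weight in zip(["income_band", "debt_ratio", "is_area_B"], model.coef_[0]):
    print(f"{name:12s} {weight:+.2f}")
# A markedly negative weight on is_area_B shows the model rejecting on
# location alone, which is exactly the kind of bias a black box hides.

Evidence of that kind cannot be produced from a pure black box, and it is exactly what a rejected applicant, or a regulator, would want to see.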

A similar case would be when an insurer rejects a claim using AI. The insured person has a right to know what factors influenced that decision.

Unsurprisingly, given the strong adoption of AI in the financial services sector, regulators in the United States and elsewhere are starting to look at regulating AI in the sector. Key regulatory themes have emerged, among them accountability, governance, bias, data protection and explainability.(2) Explainability, which relates to how AI systems make decisions and what their underlying rationales are, is at the heart of this emerging regulatory initiative – hence the growing call for explainable AI (XAI).
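In practice, explainability often comes down to being able to hand back, with each automated decision, a ranked list of the factors that drove it. The sketch below shows one hypothetical way of doing this for a simple linear scoring model; the feature names and weights are invented, and the scheme is not drawn from any regulator’s prescribed method.

# Sketch of per-decision "reason codes" for a simple linear model.
# Feature names and weights are invented; the point is that each automated
# decision can ship with a ranked list of the factors that drove it.
WEIGHTS = {"income_band": +0.3, "debt_ratio": -2.5, "is_area_B": -1.9}

def explain(applicant: dict) -> list[tuple[str, float]]:
    """Return each feature's contribution to the score, biggest impact first."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income_band": 4, "debt_ratio": 0.4, "is_area_B": 1}
for feature, contribution in explain(applicant):
    print(f"{feature:12s} {contribution:+.2f}")
# Here the area flag comes out as the single largest influence on the
# decision, followed by income and indebtedness.

For more complex models the attribution techniques are more elaborate, but the output a customer or a regulator sees is the same in spirit: the reasons, not just the verdict.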

It’s in all our best interests for AI to emerge from the black box. Making it explainable will build trust with customers and, as governments increasingly use AI to enforce policy decisions (as China is already doing), citizens will be better able to safeguard their essential liberties.

(1) “Why AI is the ‘new electricity’” (7 November 2017), available at https://knowledge.wharton.upenn.edu/article/ai-new-electricity/

(2) PwC: “AI in financial services. Are you meeting the regulators’ expectations?” (December 2019), available at https://www.pwc.co.uk/financial-services/assets/pdf/ai-in-financial-services.pdf.
