Can we get rid of bias in artificial intelligence?

Artificial intelligence systems are the product of constructed algorithms that inherit many of the biases that help to perpetuate the global challenges we hope to solve.

To understand the problem of bias in artificial intelligence (AI), we first have to understand what bias means. In practice, algorithms learn whatever patterns exist in the data their creators present to them. AI bias therefore mirrors the prejudices of a system's creators or its data, meaning that human cognitive biases are essentially the root of modern AI and data biases.

Cognitive biases are commonly grouped into four categories: too much information, not enough meaning, the need to act fast, and the question of what we should remember. It is out of these categories that modern AI biases arise.

AI algorithms may depend on one or several data sources, and it is usually these underlying data sources, rather than the algorithm itself, that are the main source of the problem. A real-world example shows how statistical bias in data produces the everyday notion of bias.

In November 2019, a scandal broke over gender discrimination in the issuing of credit cards. It started when a software developer wrote on Twitter that he had received a credit line 20 times higher than his wife's, despite the fact that they filed joint tax returns and she had the higher credit score.

This led to an abundance of other married couples coming forward with similar stories, followed by a media frenzy. How could this happen? It is less likely that the company is sexist and more likely that the data fed to the organisation's credit-card algorithm to determine creditworthiness contained hidden biases.
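To see how a "hidden bias" can survive even when gender is excluded from a model's inputs, consider a deliberately simplified, entirely synthetic sketch: a proxy feature (say, a spending-category flag) correlates with gender, and the historical credit lines the model learns from were themselves biased. The data, model and numbers below are hypothetical illustrations, not a description of any real issuer's system.

```python
# Synthetic history: (income, proxy feature, gender, past credit line).
# The proxy correlates with gender, and the historical credit lines were
# biased against women. Gender itself is never shown to the model.
history = [
    {"income": 90, "proxy": 0, "gender": "m", "line": 20},
    {"income": 90, "proxy": 1, "gender": "f", "line": 1},
    {"income": 80, "proxy": 0, "gender": "m", "line": 18},
    {"income": 80, "proxy": 1, "gender": "f", "line": 1},
]

def fit(records):
    """A 'gender-blind' model: average historical credit line per proxy value."""
    lines = {}
    for r in records:
        lines.setdefault(r["proxy"], []).append(r["line"])
    return {k: sum(v) / len(v) for k, v in lines.items()}

model = fit(history)

# Two new applicants with identical incomes. Gender is not an input,
# yet the proxy reproduces the historical disparity.
husband_line = model[0]   # learned from male-dominated proxy group
wife_line = model[1]      # learned from female-dominated proxy group
print(husband_line, wife_line)
```

Dropping the protected attribute from the inputs does not remove the bias; it merely hides it behind whatever correlated features remain, which is why auditing the data matters more than auditing the feature list.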

Bias can be managed through a fairness pipeline spanning four stages: design, data, model and application. But algorithms are simply a reflection of the data they are trained on, which means biases propagate through that pipeline.

There is no technical silver bullet to prevent bias in AI. There are, however, many actions you and your organisation can take to minimise the risks of bias.

First and foremost, it is essential to spend the necessary time and resources to audit data properly, especially when protected attributes such as age, gender, or race are part of a data set. But auditing the data alone is not enough.
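One concrete audit check such a review might include is the "four-fifths" disparate-impact ratio: compare the rate of favourable outcomes for a protected group against a reference group. The tiny dataset, field names and threshold below are illustrative assumptions, not a complete audit.

```python
# A minimal sketch of one data-audit check: the disparate-impact ratio
# across a protected attribute. All records here are synthetic.

def approval_rate(records, group):
    """Share of favourable outcomes within one gender group."""
    members = [r for r in records if r["gender"] == group]
    return sum(r["approved"] for r in members) / len(members)

def disparate_impact(records, protected="f", reference="m"):
    # Ratio of the protected group's approval rate to the reference
    # group's. Values below ~0.8 (the 'four-fifths' rule of thumb) are
    # a common warning sign, though not proof of discrimination.
    return approval_rate(records, protected) / approval_rate(records, reference)

data = [
    {"gender": "m", "approved": 1},
    {"gender": "m", "approved": 1},
    {"gender": "m", "approved": 1},
    {"gender": "m", "approved": 0},
    {"gender": "f", "approved": 1},
    {"gender": "f", "approved": 0},
    {"gender": "f", "approved": 0},
    {"gender": "f", "approved": 0},
]

ratio = disparate_impact(data)
print(round(ratio, 2))  # 0.33 -- well below the 0.8 rule of thumb
```

A check like this flags a skewed dataset before a model is ever trained on it, but as the article notes, it cannot catch bias introduced later in the pipeline.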

Bias can sneak in at multiple stages within the fairness pipeline and significantly affect the decisions made by AI. To implement unbiased models in practice, a process supported by a platform for development, deployment and monitoring is needed.

By now, you have probably come to terms with the fact that no organisation is free from the risks and consequences of bias in AI. We know that biased data collection leads to biased datasets, which in turn produce biased algorithms that generate biased results and solutions. We also know that bias can be introduced at any stage of the fairness pipeline and is not confined to the data collection process alone.

To tackle bias in AI, it is in the best interests of any company to implement a robust AI governance framework anchored in an AI platform. With the right support in place, there is less to worry about with respect to the phenomenon of bias in AI.

We have become increasingly reliant on AI to help solve some of the world’s most pressing problems. From healthcare to economic modelling, to crisis prediction and response, AI is becoming quite common and, in some cases, inherent in how we operate.

While the insights offered by AI are invaluable, we must also recognise that it is not a flawless system that will provide us with perfect answers, as many practitioners would have you believe.

Our AI systems are the product of constructed algorithms that have, however inadvertently, inherited many of the biases that help to perpetuate the global challenges we hope to solve. The result is that AI and machine learning are not purely agnostic processes of objective data analysis.

In order for these technologies to make progress, confront bias and help tackle significant global problems, we need to rethink, reimagine and re-purpose our approaches, rather than impose our understanding on data.

Instead, we need an iterative, diverse approach that can incorporate more perspectives and diversity of thought. Models developed from globally distributed intelligence networks may offer a way forward: fresher, less biased approaches to tackling serious world issues, deconstructing bias in AI and rethinking AI itself.

The global problems we face today are unprecedented, and that means we have an opportunity to redefine, rethink and reimagine solutions.

Lavina Ramkissoon

Conscious Creator | Trailblazer | Thought Leader

Ramkissoon, who writes in her personal capacity, is an AI mentor, strategist and trailblazer standing for the unification of the African tech space. She is a conscious tech proponent with expertise in strategy, technology and psychology, and their integration into AI, blockchain and ethics. This tech catalyst has roles as chairperson, director and founder across multiple industries, sectors and technologies − aiming to unlock the potential of the African AI arena. 
