
AI bias ups need for behaviour experts

By Staff Writer, ITWeb
Johannesburg, 07 Jun 2019
Addressing cognitive bias should be a top priority if AI is going to reach its full potential, says Gartner.

Users' trust in artificial intelligence (AI) and machine learning (ML) solutions is plummeting due to increasing incidents of irresponsible privacy breaches and data misuse.

This is according to a new report by research firm Gartner, titled 'Predicts 2019: Digital Ethics, Policy and Governance Are Key to Success With Artificial Intelligence'.

The report predicts that by 2023, almost 75% of large organisations will hire AI behaviour forensic, privacy and customer trust specialists to reduce brand and reputation risk caused by AI solutions.

Bias based on race, gender, age or location, as well as on a specific structure of data, has been a long-standing risk in training AI models, Gartner notes. "In addition, opaque algorithms such as deep learning can incorporate many implicit, highly variable interactions into their predictions that can be difficult to interpret."
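To make "undesired bias" measurable, one widely used check is the demographic parity gap: the difference in positive-prediction rates between groups. A minimal sketch in Python follows; the function and toy data are illustrative assumptions, not a method described in the Gartner report.

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates across groups.

    predictions: array of 0/1 model outputs
    groups: array of group labels (e.g. gender), same length
    A gap near 0 suggests similar treatment on this one metric;
    a large gap is a signal worth investigating, not proof of bias.
    """
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical toy data: group "a" is shortlisted 75% of the time,
# group "b" only 25% of the time, giving a gap of 0.5.
preds = np.array([1, 0, 1, 1, 0, 0, 0, 1])
grps = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_gap(preds, grps))  # 0.5
```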

According to Jim Hare, research VP at Gartner, new tools and skills are needed to help organisations identify these and other potential sources of bias, build more trust in using AI models, and reduce corporate brand and reputation risk. “More and more data and analytics leaders and chief data officers are hiring machine learning forensic and ethics investigators,” he says.

Uncovering bias

Gartner says sectors like finance and technology are increasingly deploying combinations of AI governance and risk management tools and techniques to manage reputation and security risks.

In addition, organisations such as Facebook, Google, Bank of America, and NASA are hiring or have already appointed AI behaviour forensic specialists, who primarily focus on uncovering undesired bias in AI models before they are deployed. These specialists validate models during the development phase and continue to monitor them once they are released into production, as unexpected bias can be introduced because of the divergence between training and real-world data.
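The report does not prescribe tooling, but a minimal sketch of one common production check for that divergence is shown below, using a two-sample Kolmogorov-Smirnov test on a single numeric feature. The feature_drift helper and the scipy-based approach are illustrative assumptions, not Gartner's or these companies' actual methods.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drift(train_values, prod_values, alpha=0.05):
    """Flag a feature whose production distribution has drifted
    from its training distribution, via a two-sample KS test."""
    stat, p_value = ks_2samp(train_values, prod_values)
    return {"statistic": stat, "p_value": p_value, "drifted": p_value < alpha}

# Toy data: training values centred at 0, production values shifted
# to 0.5 - the kind of divergence a forensic specialist would want
# surfaced for review before it introduces unexpected bias.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 1000)
prod = rng.normal(0.5, 1.0, 1000)
print(feature_drift(train, prod))
```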

“While the number of organisations hiring ML forensic and ethics investigators remains small today, that number will accelerate in the next five years,” says Hare.

Concerns regarding racial or gender bias in AI have arisen in applications as varied as hiring, policing, judicial sentencing and financial services. Addressing bias will need to be a top priority if the technology is going to reach its full potential, notes Gartner.

Bad example

Last year, Amazon.com’s ML learning specialists discovered that their new online recruitment platform became biased against women.

The team had been building computer programmes to review job applicants’ resumes, with the aim of mechanising the search for top talent, reports Reuters.

The company later realised that its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.
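One hedged way to test the gender-neutral property described here is a counterfactual perturbation test: score a resume, swap gender-coded terms, and re-score it. The sketch below is not Amazon's actual method; score_resume and the toy model are hypothetical stand-ins.

```python
def counterfactual_gap(score_resume, resume_text, swaps):
    """Score a resume, re-score it with gender-coded terms swapped,
    and return the change. A gender-neutral model should return
    (near-)identical scores for both versions."""
    original = score_resume(resume_text)
    perturbed_text = resume_text
    for a, b in swaps:
        perturbed_text = perturbed_text.replace(a, b)
    return score_resume(perturbed_text) - original

# Toy stand-in model that (wrongly) penalises the word "women's",
# mimicking the behaviour Reuters reported.
def toy_model(text):
    return 0.9 - (0.3 if "women's" in text else 0.0)

gap = counterfactual_gap(
    toy_model,
    "Captain, women's chess club; 5 years of Python.",
    swaps=[("women's", "men's")],
)
print(gap)  # +0.3: the swap raised the score, flagging gender bias
```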
