Overcoming AI, ML limitations with XAI

Explainable AI (XAI) is a key innovation that is now seen as a viable technology and is expected to gain ground in 2023.
By Paul Stuttard, Director, Duxbury Networking.
Johannesburg, 09 Dec 2022

According to Forbes, the global media company, innovations and developments in transformative technologies such as artificial intelligence (AI) and machine learning (ML) will continue apace in 2023. 

ML is an application of AI and a key component in the field of data science. It is the process of using data and algorithms to help a computer learn without direct human instruction. More specifically, ML is designed to emulate the way humans learn and make critical decisions.

Through the use of statistical methods, ML algorithms are trained to make classifications and predictions, which can subsequently drive decision-making processes aimed at a positive outcome.
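By way of illustration, the short sketch below shows what this training-and-prediction process can look like in practice. It uses Python's scikit-learn library and its bundled Iris data set purely as an example; the library, data set and model choice are illustrative assumptions rather than anything prescribed here.

```python
# A minimal sketch of training an ML classifier to make predictions,
# using scikit-learn and its bundled Iris data set (illustrative only).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out a test set so accuracy is measured on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fit the model: the algorithm learns statistical patterns from the data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Use the trained model to make classifications on new data.
predictions = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, predictions):.2f}")
```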

ML is seen as an essential tool for using and optimising the increasing amount of data available for analysis and decision support in the corporate and private sectors, in the military, advanced medicine and many other areas.

With ML algorithms, as is so often the case with humans, repetition and experience in a particular field or endeavour result in systematic improvements in accuracy, proficiency and predictability over time.

However, it is now accepted that one of the negative aspects of many ML algorithms is what has been described as their “black box” nature. It is not always clear how ML algorithms treat real-world data, or how (or even why) certain decisions are made.

As a result, ML algorithms are extremely hard to explain, even in scientific terms, and ML models often cannot be completely understood by AI experts. Consequently, many users find they are unable to put their complete trust in ML-derived predictions and decisions.

Against this backdrop, a key innovation expected to gain ground in 2023 is a set of tools and frameworks designed to assist researchers, developers and users to better understand the complex inner workings of ML models. This innovation has been dubbed Explainable AI (XAI).

The term was coined by the US Defense Advanced Research Projects Agency (DARPA) for a research initiative designed to overcome what it perceived to be the shortcomings of conventional AI.

Now seen as a viable technology, XAI has been likened to the co-pilot of an aircraft whose task it is to assist the pilot to complete critical operational checklists and, acting as an extra pair of eyes and ears, alert the pilot to any operational anomalies.

As the pilot and co-pilot analogy suggests, it is the increasingly close interplay between AI technology and humans that is driving the need for XAI, and for “explainability” more broadly.

For example, in the business world, when a vital decision affecting the direction of a new venture, a new product design, or a change of location has to be made, it is generally clear who has the final word and signs off on the project.

Should issues arise down the line, the human responsible can be questioned and an explanation obtained of the reasoning and motivation behind the decision.

On the other hand, when corporate decisions are made by conventional AI systems based on ML algorithms, the responsibility for a poor decision is often less clear.

According to Jonathan Johnson, prolific author and public speaker, explainability is significant in the field of ML precisely because it is so rarely apparent.

“Explainability is an extra step in the [ML model] building process,” he says. “Like wearing a seatbelt while driving a car; it is unnecessary for the car to perform but offers insurance when things crash.”

In this light, one of the benefits of XAI in its co-pilot role is its ability to look over the horizon and predict the accuracy and transparency of outcomes based on decisions taken by conventional AI systems applying ML algorithms.

Another benefit is XAI’s ability to provide in-depth views and analyses of issues that may impact the operational efficiency, security or proficiency of a sizable corporate network, thus minimising the time taken to achieve similar results via manual methods.

For many organisations, XAI is crucial in building trust and confidence when putting ML models into production because it is able to help humans better inspect, understand and verify ML algorithms’ behaviour. This is achieved by “drawing back the curtain” to gain a worthwhile appreciation of the complex data behind every decision taken.

As a bonus, XAI proponents claim the technology is adept at debugging ML models, with the goal of significantly improving, and even rationalising, their performance in the short term.

This is necessary as ML systems have been known to learn unwanted tricks that do an optimal job of satisfying explicit pre-programmed goals, but do not reflect the implicit demands of their human system designers.

For example, it is reported that an AI/ML system tasked with image recognition learned to "cheat" by looking for illegitimate shortcuts to speed up the process.

Thankfully, XAI is today able to identify such pitfalls using methods such as feature importance scores, counterfactual explanations and influential training data, techniques that are still poorly understood by most organisations and users.
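To give a concrete sense of one of these methods, the brief sketch below computes feature importance scores via permutation importance: each input feature is shuffled in turn, and the resulting drop in accuracy indicates how heavily the model relies on that feature. It is purely illustrative and reuses the hypothetical scikit-learn model and data from the earlier sketch; none of these choices is specific to any particular XAI product.

```python
# Sketch: feature importance scores via permutation importance, one of the
# XAI methods mentioned above (illustrative only; the model and data are
# assumptions, repeating the earlier scikit-learn example).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, score in sorted(
    zip(data.feature_names, result.importances_mean), key=lambda t: -t[1]
):
    print(f"{name}: {score:.3f}")
```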