Investments in machine learning in 2017/18 to support FNB’s risk function are now coming to fruition, having saved the bank and its customers nearly R2 billion in fraud prevention in the last financial year alone.
This is according to Dr Mark Nasila, chief data and analytics officer, FNB Risk.
While the bank has a history of implementing artificial intelligence (AI) across various business processes, he warns that the current buzz means many enterprises are focusing largely on the technology itself, driven by a fear of missing out, rather than considering how to use it to enable strategic objectives.
“A lot of organisations have approached AI adoption from a technology perspective. There's been a lot of promise around what the technology, and its prowess, can do,” he says.
“But, when you introspect, organisations create value for their customers in different ways, and those objectives define what the role of AI should be; that should be the basis of what an AI strategy is.”
Speaking to ITWeb TV, Nasila warns there are potential pitfalls for organisations introducing AI, if the process is rushed and not properly thought through. Venturing into a new technology means introducing new risks, he says.
Even with good planning, the unexpected can still happen, such as the recent case of Amazon Web Services’ Kiro agent overwriting code and causing service downtime.
“Something will always go wrong that we're not prepared for, that we don't know,” says Nasila.
Data provenance is one area of risk around large language models that Nasila highlights. “Generative AI is introducing a lot of synthetic data into the environment. The large language models of today, by the end of this year, will likely be trained on data that has been synthetically created, and that doesn’t represent a reality of how things are happening.”
Among the more immediate risks enterprises may face are data leakage, hallucinations and inaccurate data, decision fatigue, bias in and lack of transparency around decision-making, reputational and business damage, and even job displacement.
The human aspect is one of the most overlooked and underinvested areas in AI, says Nasila. “We've seen billions of dollars and grants go into AI, but only a small percentage on empowering people and upskilling them…on how they will apply their skills within their capacity now that there is a technology like AI in place.”
An AI strategy should equally be seen as a human strategy, he says. “To realise any value or return on investment, you must make sure you're realising more value from people.”
Nasila highlights that success in AI involves integrating the technology into business processes and driving a human-machine partnership.
With the current wave of interest in generative AI, many organisations are experimenting with the technology. However, for Nasila, success from such experiments comes when they are scaled to an enterprise level, and employees have accepted and moved to the new way of working.
“You can’t have one experiment that demonstrates value, yet you still have people operating the old way.”
An example of how AI has been introduced successfully in FNB is around risk assessment. “[It] involves gathering a lot of evidence, putting together insights and writing reports. These tasks used to take so long. Now we have agents that perform risk assessment and they basically help our investigators or due diligence analysts focus on decision-making as opposed to gathering evidence.”
AI is also being used by the bank to identify deviations, whether in contracts or at a transactional level, to spot when money is being moved around, and to pick up behavioural differences, so that potentially fraudulent activities are flagged for investigation.
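To illustrate the general idea of flagging behavioural deviations, here is a minimal, hypothetical sketch in Python. This is not FNB's actual system; it simply shows the simplest form of the technique, flagging transactions whose amounts deviate sharply from an account's own history:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of transactions whose amount lies more than
    `threshold` standard deviations from the account's mean.
    A toy stand-in for the behavioural-deviation checks described."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

# A typical spending pattern with one outsized transfer at index 7.
history = [120.0, 95.5, 110.0, 130.0, 105.0, 98.0, 125.0, 50000.0]
print(flag_anomalies(history, threshold=2.0))  # → [7]
```

Real systems would of course look at far richer features (timing, counterparties, device fingerprints) and use learned models rather than a fixed z-score, but the principle is the same: model normal behaviour, then surface deviations for a human investigator rather than deciding automatically.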
Nasila’s closing piece of advice for enterprises that are looking to introduce AI is to look beyond the technology, evaluate the risks and put all the building blocks in place, including strategic objectives.
“AI provides an opportunity to reimagine organisations, to prepare themselves for the future. But becoming AI-enabled, or even data-enabled, is not easy. It needs the right skills. It needs the building blocks. It needs to be set up to succeed. It needs a look at the role of AI in the form of decisions. And they must treat it like any other strategic objective or strategic initiative that requires people, resources, funding and every other resource.
“The last three years we've seen this technology change how the world looks at things. Today, we’re seeing a lot of organisations pay a price for just going along with the hype. But we're seeing a handful of organisations get it right because they’re making themselves ready.”