
The threat landscape has shifted. Has your defence?

AI is now both the weapon and the shield, forcing banks and fintechs to strengthen detection, response and resilience before attacks escalate.
Johannesburg, 25 Mar 2026
Barrie Venter, Product Manager: Digital Onboarding and Fraud Risk Management, Sybrin.

Identity fraud once relied on presentation attacks such as printed photos and replayed videos. That period has ended, and a more complex threat has taken its place.

Deepfake technology that once required significant resources and skills is now cheap, accessible and convincing enough to fool verification systems at scale. “A single fraudster can now scale their operations exponentially,” explains Barrie Venter, Product Manager for Digital Onboarding and Fraud Risk Management at Sybrin. The shift to automated, AI‑driven fraud is well under way, and agentic AI is accelerating it: autonomous bots can probe onboarding flows, generate synthetic identities and execute multi-stage fraud at machine speed, without any human involvement. Banks and fintechs built for a different threat environment are realising that what worked before is no longer enough.

AI versus AI

AI is reshaping both fraud and defence across onboarding and payments. According to Venter, it has become "a little bit of a democratiser of fraud", and institutions are already feeling its impact as AI‑driven scams grow in volume and sophistication. “We are also using AI to combat AI,” he continues, pointing to anomaly detection and dynamic risk scoring as key examples. Across payments and onboarding, AI‑powered detection is now simply part of what a functional defence looks like. Yet many institutions still treat KYC as a single event: capture the biometric, validate the document and move on.

For Venter, onboarding should be the start of an ongoing trust relationship, where every subsequent interaction either reinforces or updates the picture of who that customer is. He says device fingerprinting is becoming an important part of this picture, tracking the signals from devices making requests and confirming that the same trusted device is consistently acting on behalf of the same trusted identity, adding another layer of intelligence to how continuous authentication works in practice.
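The device-trust idea Venter describes can be sketched in a few lines. This is a minimal illustration, not Sybrin's implementation: the attribute names, the hashing scheme and the in-memory registry are all assumptions chosen for clarity. Real fingerprinting SDKs collect far more signals and handle drift in device attributes.

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Derive a stable fingerprint by hashing device attributes in a canonical order."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Registry mapping each verified customer to fingerprints of devices
# seen and trusted at onboarding (illustrative customer ID).
trusted_devices = {
    "cust-001": {device_fingerprint(
        {"os": "Android 14", "model": "Pixel 8", "screen": "1080x2400"})},
}

def is_trusted_device(customer_id: str, attributes: dict) -> bool:
    """Continuous-authentication check: is this request coming from a device
    already associated with this identity?"""
    return device_fingerprint(attributes) in trusted_devices.get(customer_id, set())
```

A request from an unrecognised device would not necessarily be blocked; it would simply feed another signal into the risk picture of that interaction.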

Agentic AI presents a different kind of challenge. “AI agents can probe systems, generate synthetic media and execute attacks at machine speed,” Venter says. That capability puts pressure on the operational layers that sit behind onboarding and payments. “Banks need to start thinking seriously about how agentic AI will impact their processes, especially exception handling,” he says.

The volume of automated activity will push pressure into operational layers that were never designed for this scale. “Human reviewers will be overwhelmed by bot‑generated exceptions unless you’re also using AI agents on your side to handle that volume in the case management layer,” Venter says. Most institutions are investing in security; fewer are investing in resilience, and Venter says that gap matters. “If you cannot intervene in real-time, the money is gone forever,” he says. At the volumes instant payment rails operate – sometimes 3 000 to 5 000 transactions per second – the monitoring capability required is significant.

Beyond the checkbox

Regulation is pushing institutions to act, but Venter is candid about what he sees on the ground. “I do see it often as a bit of a checkbox exercise that creates more friction but does not guarantee any safety,” he says. Traditional AML controls were built around static thresholds and batch processing, and they were not designed for the behavioural, real‑time reporting that regulators are increasingly moving towards. Institutions that simply tick the boxes tend to frustrate legitimate customers with false positives while fraudsters slip through.

“Safety really requires proactive risk management that goes beyond the minimums,” he says, explaining that compliance should be the baseline, not the ceiling. One of the most practical steps institutions can take right now is connecting onboarding data to transaction monitoring. KYC and KYB data captured at onboarding contains rich identity signals, but in most institutions it sits separately from the engine that monitors transactions. “Use the data that you get from KYC and KYB inside your transaction monitoring engine,” advises Venter. That data should be driving rules, informing risk topologies and shaping how alerts are scored and prioritised.
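To make the idea concrete, here is a hedged sketch of onboarding context feeding a transaction score. The field names, weights and thresholds are illustrative assumptions, not a Sybrin schema; a production engine would use calibrated models and many more signals.

```python
from dataclasses import dataclass

@dataclass
class OnboardingProfile:
    """Illustrative KYC signals captured at onboarding (assumed fields)."""
    account_age_days: int
    document_verified: bool
    biometric_match_score: float   # 0.0-1.0 confidence from the liveness check
    declared_monthly_income: float

def score_transaction(profile: OnboardingProfile, amount: float) -> float:
    """Blend onboarding risk context into a transaction alert score (0 low, 1 high)."""
    score = 0.0
    if profile.account_age_days < 30:
        score += 0.3   # newly onboarded accounts carry elevated risk
    if not profile.document_verified:
        score += 0.3   # identity document never passed verification
    if profile.biometric_match_score < 0.8:
        score += 0.2   # weak biometric confidence at onboarding
    if amount > 3 * profile.declared_monthly_income:
        score += 0.2   # transaction out of line with declared means
    return min(score, 1.0)
```

The point is the plumbing, not the weights: the same identity signals captured at onboarding are driving how each transaction alert is scored and prioritised.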

At the case management layer, AI models can reduce the noise around false positives significantly, allowing investigators to focus on real threats rather than a backlog of exceptions. And as agentic AI generates more automated exception traffic, the same thinking needs to apply to how exceptions are handled at scale. Fraudsters are already combining deepfake onboarding with instant payment rails to move illicit funds in seconds, and Venter says the industry cannot afford to treat these as separate problems. “You cannot decouple the identity from the transaction,” says Venter. “If you aren’t carrying the risk context of how an account was onboarded directly into the engine that monitors its transactions, you have a blind spot that criminals will exploit every single time.”
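The triage step described above can be sketched simply: once alerts carry a risk score, likely false positives are closed automatically and investigators see the highest-risk cases first. The threshold and alert shape here are assumptions for illustration.

```python
def triage_alerts(alerts: list[dict], auto_close_below: float = 0.2) -> tuple[list[dict], list[dict]]:
    """Split scored alerts into an investigator queue (highest risk first)
    and an auto-closed pile of likely false positives."""
    queue = sorted((a for a in alerts if a["score"] >= auto_close_below),
                   key=lambda a: a["score"], reverse=True)
    auto_closed = [a for a in alerts if a["score"] < auto_close_below]
    return queue, auto_closed
```

In practice the auto-close decision would itself be model-driven and audited, but the shape is the same: machine-scale noise handled by machines, human attention reserved for genuine risk.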
