How AI is fighting fraud and data breaches

Cape Town, 26 Mar 2019
Read time 5min 20sec

The cyber environment is becoming increasingly complex as data volumes continue to surge. By 2020, US technology conglomerate Cisco estimates, approximately 99% of devices (some 50 billion) will be connected to the Internet, up from only 1% of devices today. This makes the security and privacy of user information more important than ever.

One of the main contributors to cyber security threats is a lack of understanding of the value of personal information. Many people do not think twice before allowing Facebook apps such as "How many babies will you have?" to access their profile information. There is little awareness of, or consideration for, the consequences of allowing certain apps or Web sites access to one's personal information.

As artificial intelligence (AI) grows more sophisticated, it can detect cyber security breaches and help users protect their personal information. In most instances, this happens seamlessly, without the user's active involvement or even awareness.

Here is an exploration of how AI is being used to fight threats to personal information in different industries.

Banking

Anomaly detection is a technology that uses AI to detect unusual behaviour in a complex environment. An example of this is when a customer suddenly makes a large withdrawal from their bank account. This activity would fall outside of the parameters of "normal behaviour" for this particular customer, so the customer and the bank would be notified of this unusual activity.
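A minimal sketch of this idea: flag a withdrawal as unusual when it deviates sharply from the customer's historical withdrawal pattern. The data and threshold below are hypothetical, and a real system would weigh many more factors than amount alone.

```python
# Illustrative anomaly detection: flag a withdrawal that lies far
# outside a customer's "normal behaviour" (here, past withdrawals).
from statistics import mean, stdev

def is_unusual(history, amount, threshold=3.0):
    """Return True if `amount` is more than `threshold` standard
    deviations above the customer's historical mean withdrawal."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return amount != mu
    return (amount - mu) / sigma > threshold

history = [500, 450, 600, 550, 480, 520]   # typical withdrawals (hypothetical)
print(is_unusual(history, 15000))  # sudden large withdrawal -> True
print(is_unusual(history, 530))    # within the normal range  -> False
```

In production, banks model far richer behaviour (merchant, location, time of day), but the principle is the same: learn a baseline, then alert on deviations from it.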

Credit card fraud and misuse is only one of many challenges faced by the banking sector. AI is helping to mitigate these risks using a technique called "misuse detection", where machines detect credit card intrusions based on rules previously programmed into the system. Every known intrusion has a unique signature: a set of characteristics that defines that intrusion, along with an associated error rate. When the system detects one of these signatures, a warning is raised to the bank.
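Misuse detection can be sketched as a set of named rules, each encoding one known fraud signature. The field names and rules below are hypothetical examples, not an actual bank's rule set.

```python
# Sketch of rule-based misuse detection: each known fraud pattern is
# a "signature" (a named predicate over a transaction record).
SIGNATURES = {
    "rapid_fire": lambda t: t["tx_per_minute"] > 10,           # card testing
    "foreign_spike": lambda t: t["country"] != t["home_country"]
                               and t["amount"] > 5000,          # large foreign spend
    "midnight_max": lambda t: t["hour"] in (0, 1, 2, 3)
                              and t["amount"] > 3000,           # odd-hours spend
}

def detect_misuse(transaction):
    """Return the names of all signatures the transaction matches."""
    return [name for name, rule in SIGNATURES.items() if rule(transaction)]

tx = {"tx_per_minute": 12, "country": "FR", "home_country": "ZA",
      "amount": 6000, "hour": 2}
print(detect_misuse(tx))  # -> ['rapid_fire', 'foreign_spike', 'midnight_max']
```

Because each signature describes a *known* pattern, misuse detection complements anomaly detection, which catches behaviour no rule anticipated.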

Another challenging area for banks is loan application fraud. AI is used to quickly analyse information relating to an applicant's authenticity and to detect unusual behaviour or anomalies in the data provided, such as a suspicious residential or business address.

Time spent filling in an application is another useful signal for detecting potential application fraud. By eliminating fraudulent loan applications early in the process, fraud can be reduced and more time can be spent thoroughly assessing genuine applications.
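The two signals above, implausible application data and suspiciously fast form completion, can be combined into a simple pre-screening check. Everything here (field names, suspicious terms, the 60-second cut-off) is a hypothetical illustration.

```python
# Hypothetical pre-screening of a loan application: collect red flags
# before the application reaches a human assessor.
SUSPICIOUS_TERMS = {"po box", "unknown", "n/a"}

def application_flags(app):
    """Return a list of red flags for a loan application dict."""
    flags = []
    address = app.get("address", "").lower()
    if any(term in address for term in SUSPICIOUS_TERMS):
        flags.append("suspicious_address")
    # A genuine applicant rarely completes a long form in under a minute.
    if app.get("completion_seconds", 0) < 60:
        flags.append("form_completed_too_quickly")
    return flags

print(application_flags({"address": "PO Box 99, Unknown",
                         "completion_seconds": 25}))
```

Applications with no flags pass straight through, freeing assessors to spend their time on the genuine ones.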

Insurance

Insurance companies have become a prime target for hackers because of the huge amount of data insurers gather and store about individuals and businesses. Reports of Liberty Life's 2018 e-mail breach caused a 4.7% drop in its share price, wiping R1.68 billion off its R34 billion market value.

Understandably, the need to stay competitive and mitigate security threats has led companies to digitise their services and invest in new digital systems. However, this investment also opens up many potential cyber security threats.

When a client submits an insurance application, the potential policyholder is expected to provide correct information. However, a significant number of applicants fabricate data to influence the quote they receive from the insurance company. To tackle this issue, insurers use AI to evaluate an applicant's social media profiles for confirmation that the information given is genuine. For instance, AI can analyse the potential policyholder's pictures, posts and profile information to verify application details: whether the applicant is a smoker, whether they have provided correct employment details, and so on. This technique is effective in detecting fraudulent applications.

AI can be used to automate insurance claims assessment and routing based on existing fraud patterns. This process not only flags potentially fraudulent claims for further review; it also automatically identifies good transactions and streamlines their approval and payment. With AI-based fraud detection, fraudulent claims can be evaluated and flagged before they are paid out, reducing costs for insurance providers and, in turn, for consumers.
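A triage of this kind can be sketched as scoring each claim against known fraud indicators, then routing it accordingly. The indicators, weights and threshold below are hypothetical, not an insurer's actual model.

```python
# Sketch of automated claims triage: score against fraud indicators,
# then auto-approve low-risk claims and queue high-risk ones for review.
def score_claim(claim):
    score = 0
    if claim["amount"] > 100_000:
        score += 2   # unusually large payout
    if claim["days_since_policy_start"] < 30:
        score += 2   # claim lodged very soon after taking cover
    if claim["prior_claims_this_year"] >= 3:
        score += 1   # frequent claimant
    return score

def route_claim(claim, threshold=2):
    return "investigate" if score_claim(claim) >= threshold else "auto_approve"

print(route_claim({"amount": 250_000, "days_since_policy_start": 10,
                   "prior_claims_this_year": 0}))  # -> investigate
print(route_claim({"amount": 8_000, "days_since_policy_start": 400,
                   "prior_claims_this_year": 0}))  # -> auto_approve
```

The benefit is two-sided: suspect claims are held back before payout, while clean claims skip the queue entirely.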

Healthcare

Privacy and security in healthcare is complex because thousands of people are able to view patient data, and it would be impossible to manually analyse the volume of accesses to patient data each day. Moreover, when patient data is connected to the Internet, the risk of privacy and security breaches grows.

AI has the power to sift through thousands of accesses to patient data per second and review factors relating to each one, such as the location of access, the number of login attempts, and the duration between attempts. If a staff member's account suddenly accesses 10 000 patients' files at the same time, AI would detect this unusual behaviour and issue an alert.
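Monitoring of this kind can be sketched over an access log: count how many distinct patient files each account touches in a time window, and alert on any account far above the norm. The log format and threshold are hypothetical.

```python
# Illustrative monitor over patient-record access logs: alert when one
# account opens an abnormal number of distinct files in one window.
def access_alerts(log, max_files_per_window=100):
    """log: list of (account, patient_id) accesses in one time window.
    Returns accounts that opened an abnormal number of distinct files."""
    per_account = {}
    for account, patient_id in log:
        per_account.setdefault(account, set()).add(patient_id)
    return [acct for acct, files in per_account.items()
            if len(files) > max_files_per_window]

# One account suddenly reads 10 000 patient files in a single window.
log = ([("dr_smith", i) for i in range(5)]
       + [("acct_x", i) for i in range(10_000)])
print(access_alerts(log))  # -> ['acct_x']
```

A production system would learn a per-role baseline rather than use a fixed cut-off, but the alerting logic follows the same shape.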

Medical devices such as pacemakers and insulin pumps are widely used around the world and offer substantial benefits to patients. However, these devices are vulnerable to attack, as many do not run an operating system version that fully supports the device's security and privacy features.

Security researchers have tested the vulnerability of medical devices by, for example, delivering malware to a patient's pacemaker system and commanding the pacemaker to issue a shock to the patient. In these circumstances, AI uses anomaly detection (mentioned above) to spot unusual commands being sent to the device. AI can monitor the device continuously, without depending on manufacturers to inform the hospital or patient of vulnerabilities.
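Applied to device traffic, anomaly detection might keep an envelope of expected commands and rates, and alert on anything outside it. The command names and rate limit below are purely illustrative, not a real device protocol.

```python
# Sketch of anomaly detection on commands sent to an implanted device:
# anything outside the expected command set or rate raises an alert.
EXPECTED_COMMANDS = {"read_telemetry", "adjust_pacing_rate", "battery_status"}
MAX_COMMANDS_PER_MINUTE = 5

def check_command_stream(commands):
    """commands: list of command names seen in one minute.
    Returns a list of alerts; empty means the traffic looks normal."""
    alerts = []
    for cmd in commands:
        if cmd not in EXPECTED_COMMANDS:
            alerts.append(f"unexpected command: {cmd}")
    if len(commands) > MAX_COMMANDS_PER_MINUTE:
        alerts.append("command rate exceeds normal envelope")
    return alerts

print(check_command_stream(["read_telemetry", "deliver_shock"]))
```

Because the monitor sits alongside the device, it keeps working even when the manufacturer has not yet disclosed a vulnerability.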

Do you need help with assessing the need for, or implementing, an artificial intelligence solution? Feel free to reach out to us at (021) 447 5696, or e-mail

Editorial contacts
Analyze Consulting Ndilisa Majola (+27) 21 447 5696