Truth or dare: Should we trust artificial intelligence?

One of the biggest threats AI poses to businesses is that too many people treat its output as truthful, when there is no way to validate its accuracy.
By Michael Brink, chief technology officer of CA Southern Africa
Johannesburg, 17 Oct 2023
Michael Brink, CTO of CA Southern Africa.

Artificial intelligence (AI) can directly or indirectly affect cyber security in a multitude of ways, and there are already several examples of ChatGPT exposing corporate data.

One such case is that of Samsung, which allowed engineers at its semiconductor division to use ChatGPT to help fix problems with source code. The employees mistakenly entered top-secret data for a new programme and internal meeting notes relating to Samsung hardware.

Before we go further, it’s important to understand that ChatGPT retains all data fed to it in order to train itself further. Hence, these trade secrets from Samsung ended up in the hands of the AI chatbot maker, OpenAI. Samsung Semiconductor is reportedly developing its own AI for employee use.

There is no current way for enterprises to trust AI-supplied data.

In another instance, an executive cut and pasted a firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In yet another, a doctor input his patient's name and medical condition and asked ChatGPT to craft a letter to the patient's insurance company. The concerning thing is that these examples are possibly just the tip of the iceberg.

However, these blunders appear to be no barrier: generative AI programs are being adopted at a furious rate.

Fly in the generative AI ointment

One of the biggest threats AI poses to businesses today is that all too many users regard its output as truthful. The problem is that there is no validation of its accuracy.

When someone uses a search engine, the results it returns provide attribution to their sources. Attribution allows the user to validate the truth or falsehood of the data. With AI, there is no attribution, because it correlates information from thousands of different places.
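
To make that contrast concrete, here is a minimal Python sketch (the answer strings are hypothetical) that treats an AI answer as unverifiable unless it carries at least one source URL the reader can actually follow. A search result gives you that property by default; a bare chatbot answer does not.

    # Hedged sketch: treat an AI answer as unverifiable unless it cites sources.
    import re

    URL_PATTERN = re.compile(r"https?://\S+")

    def extract_sources(answer_text: str) -> list[str]:
        # Collect any URLs the answer offers as attribution.
        return URL_PATTERN.findall(answer_text)

    def is_verifiable(answer_text: str) -> bool:
        # An answer with no sources gives the reader nothing to validate.
        return len(extract_sources(answer_text)) > 0

    # A search result carries attribution; a bare chatbot answer often does not.
    print(is_verifiable("Revenue grew 12% (source: https://example.com/report)"))  # True
    print(is_verifiable("Revenue grew 12%."))  # False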

There is no current way for enterprises to trust AI-supplied data. The fundamental problem is that AI could be giving users incorrect information because of its model and how it was trained or fine-tuned.

Given the speed of its adoption and spread across enterprises, this is regarded as the biggest threat that AI poses to businesses today. Of course, this may not be the case in years to come, but for now, incorrect information can create an existential crisis for organisations.

In the future, companies may have some form of AI trained on their own datasets and for their own specific use – but this is not currently the case. Today, there is one common dataset for everyone, which is turning into a big headache for enterprises worldwide.

Cyber risks attached to generative AI

Attackers, privacy, intellectual property (IP) and copyright are the areas of primary risk and concern.

For cyber criminals, ChatGPT, GitHub Copilot and other generative AI solutions don't really provide fundamentally new attack capabilities.

There are, however, some areas where generative AI tools can help launch an attack; for example, by constructing better or dynamic phishing e-mails, or by writing code. ChatGPT, like the other generative AI tools, is an information content development tool, not a self-aware entity.

It can be asked to “tell me all the common ways to infect a machine”, but it cannot be asked to “infect these machines in a way never thought of before”.

Malware code or the text in a phishing e-mail's message body represents only 1% of the entire effort required to execute a successful attack. While AI helps multiply the number of attacks and automate common implementation steps in certain areas of an attack chain, it doesn't automate a cyber attacker's end-to-end needs.

However, accidental attacks are an additional consideration. Code received from a generative AI tool and put into production unvalidated can inadvertently introduce a new attack surface or cause a business disruption.
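
As a rough illustration of the kind of gate that can catch this, the Python sketch below refuses to promote generated code that fails the existing test suite or a static security scan. It assumes pytest and the Bandit scanner are installed, and the "tests/" and "generated/" paths are hypothetical stand-ins for a real project layout.

    # Hedged sketch: a pre-production gate for AI-generated code.
    import subprocess
    import sys

    def validate_generated_code(path: str) -> bool:
        # 1. The generated code must pass the existing unit test suite.
        tests = subprocess.run(["pytest", "tests/"], capture_output=True)
        if tests.returncode != 0:
            print("Rejected: test suite failed.")
            return False
        # 2. A static security scan flags common vulnerability patterns.
        scan = subprocess.run(["bandit", "-q", "-r", path], capture_output=True)
        if scan.returncode != 0:
            print("Rejected: security scan reported findings.")
            return False
        return True

    if __name__ == "__main__":
        sys.exit(0 if validate_generated_code("generated/") else 1)

A gate like this does not prove the generated code is safe; it only guarantees the code was not shipped entirely unvalidated.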

Privacy, data loss and risk

When it comes to privacy, the principal consideration is the way staff use ChatGPT. Not only could they be uploading sensitive documents or asking queries that leak sensitive corporate information, but the information and queries they submit can also be integrated back into the ChatGPT app.

Employees using AI must be counselled to guard against inadvertently giving away sensitive corporate information. Providing sensitive information to generative AI programs has the same effect as giving that information away to a third party.

Information fed into AI programs such as ChatGPT becomes part of their pool of knowledge. Any subscriber to ChatGPT has access to that common dataset. This means any data uploaded or asked about can then be replayed (within certain app guardrails) to other third parties that ask similar questions.
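
One defensive pattern is to redact obviously sensitive strings before a prompt ever leaves the organisation. The Python sketch below uses illustrative placeholder patterns only; a real deployment would rely on a proper data loss prevention engine rather than a handful of regular expressions.

    # Hedged sketch: redact sensitive strings before a prompt is sent to an
    # external generative AI service. The patterns are illustrative
    # placeholders, not a complete data loss prevention policy.
    import re

    SENSITIVE_PATTERNS = [
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
        (re.compile(r"\b\d{13,19}\b"), "[LONG-NUMBER]"),
        (re.compile(r"(?i)\b(confidential|internal only|trade secret)\b"), "[CLASSIFICATION]"),
    ]

    def redact(prompt: str) -> str:
        # Replace every match of each sensitive pattern before the prompt leaves.
        for pattern, placeholder in SENSITIVE_PATTERNS:
            prompt = pattern.sub(placeholder, prompt)
        return prompt

    print(redact("Letter for jane.doe@example.com regarding a confidential claim."))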

The information the AI system provides must be verified. Moreover, it is crucial to learn how to measure an AI system to protect it against bias and the inadvertent mixing of critical enterprise data.

Misinformation and bias exist across the internet. When it comes to AI, those factors must be tested and corrected before any of the data received from the AI is used.
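
A simple starting point for that measurement, assuming the enterprise maintains a small set of questions with human-verified answers, is to score the model against that reference set before trusting it more broadly. In the Python sketch below, ask_model is a placeholder for whatever model API the enterprise actually uses; the stub exists only so the example runs.

    # Hedged sketch: score an AI system against a human-verified reference set.
    from typing import Callable

    def accuracy_against_reference(ask_model: Callable[[str], str],
                                   reference: dict[str, str]) -> float:
        # Fraction of reference questions the model answers correctly.
        correct = 0
        for question, verified_answer in reference.items():
            if verified_answer.strip().lower() in ask_model(question).strip().lower():
                correct += 1
        return correct / len(reference)

    reference_set = {"In what year was South Africa's POPI Act signed into law?": "2013"}
    stub_model = lambda question: "The POPI Act was signed into law in 2013."
    print(f"Accuracy: {accuracy_against_reference(stub_model, reference_set):.0%}")

A real evaluation would also slice the reference set by topic or demographic group to surface bias, not just the raw error rate.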

In my next article, I will briefly touch on copyright and IP issues and expand on the benefits of AI.
