GenAI did not arrive gradually. Within a year of ChatGPT’s launch in 2022, it had found its way into workflows, browsers, tools and everyday use. Everyone was told that if they weren’t using GenAI in business, they’d be left behind. Early adopters moved fast and the market followed. GenAI is trusted because the barrier to entry is so low. Chatting with a large language model (LLM) is easy, and staff may assume the output is correct. SAS research has found that GenAI is rated as more trustworthy than any other form of AI, including traditional machine learning systems.
“Over-trust weakens judgement before it breaks technology,” says David Pretorius, pre-sales manager, SAS South Africa. When staff assume outputs are reliable, they share more data than intended, move faster than review processes and accept recommendations unquestioningly.
“The earliest security failures tend to be simple, but are damaging,” he says. For example, when employees paste sensitive information into GenAI prompts, inconsistent access controls can allow AI assistants to expose internal content to unauthorised users.
Pretorius says models aren’t malicious; risks emerge when organisations are too lenient, too quickly. Samsung learnt this the hard way when staff pasted source code into ChatGPT, unintentionally exposing internal data to a public service. It became one of the first high-profile GenAI data leaks, pushing the Korean tech company to restrict public AI tools internally, a policy still enforced two years later.
This raises the question: who is responsible for GenAI? According to BCG’s latest AI Radar, CEOs are taking the lead with AI adoption and implementation. In the report, 72% of respondents said they are personally accountable for AI. More than half of the CEOs surveyed believe their job stability depends on getting AI right. “The CEO is becoming the chief AI officer,” says Christoph Schweizer, BCG CEO. “AI is no longer a technology question. It’s a question of how you steer the overall strategy, operations, organisation and ways of working of your company.”
Visibility is important, adds SAS’ Pretorius. Business leaders may struggle to answer which data informed the model, and which model version generated the output. “Decisions that cannot be traced cannot be safely scaled,” he says.

Saša Slankamenac, architect in the office of the CTO at Dariel, says GenAI is not a new security category, but, rather, software that needs to be integrated, monitored and maintained. The model processes text, but the real security story lives in the services and agents built around it. These components accept inputs, call external APIs, store histories and govern where data moves inside and outside the organisation. “In the same way that we integrate with SAP, a legacy system or a database, we integrate with GenAI services,” he says. “The same software lifecycle and security practices should apply.”

That means treating GenAI endpoints the same way you would any other external service. Traffic should move over encrypted channels such as HTTPS with TLS and through controlled network paths that are visible to your security and operations teams. Logs need to capture what happened, and when, without leaking sensitive payloads.
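As a minimal sketch of logging that captures what happened without leaking the payload: the helper below (the function and field names are invented for illustration) records who called which model and when, but stores only a hash and character count of the prompt rather than the text itself.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, model_version: str, prompt: str) -> dict:
    """Build an audit entry for a GenAI call: who, when and which model
    version, plus a SHA-256 digest of the prompt so the event stays
    traceable without the sensitive payload ever landing in the logs."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }

record = audit_record("jsmith", "gpt-4o-2024-08-06", "Summarise Q3 revenue figures")
print(json.dumps(record, indent=2))
```

The hash lets an investigator later confirm whether a specific prompt was sent, without the log itself becoming a second copy of the sensitive data.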
There’s also the fact that popular GenAI tools store conversations within user profiles. “That data behaves like any other record and must be treated as such,” says Slankamenac. For secure GenAI adoption, organisations need to stop thinking of prompts as throwaway text and start treating them as part of the application’s data layer. Prompts, chat histories, file uploads and retrieval-augmented snippets all become assets that can be audited. Whether this information sits in your own systems or with a provider, it should be treated the same as any other sensitive record.

Enterprise versions of these tools (like Microsoft Copilot for Microsoft 365, Google Gemini for Workspace and OpenAI’s Enterprise and Teams offerings) provide audit logs that security teams can pull into their SIEM. These logs show who used GenAI, when and which files were uploaded, giving the same operational visibility expected from any other business system. “It’s just software,” says Slankamenac. “The model does the processing, but the service around the model needs to be secured.”
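As a rough illustration of consuming such audit logs, the snippet below uses invented log lines and field names (real Copilot, Gemini and ChatGPT Enterprise exports each have their own schemas) to answer the question of who uploaded files to a GenAI tool:

```python
import json
from collections import Counter

# Hypothetical JSON-lines audit export; real enterprise GenAI audit logs
# differ in schema but carry the same shape: actor, timestamp, action, resource.
raw_log = """\
{"user": "jsmith", "ts": "2025-03-01T09:14:00Z", "action": "file_upload", "file": "q3_forecast.xlsx"}
{"user": "mnaidoo", "ts": "2025-03-01T09:20:00Z", "action": "prompt"}
{"user": "jsmith", "ts": "2025-03-01T10:02:00Z", "action": "file_upload", "file": "clients.csv"}"""

# Count file uploads per user -- the kind of question a SIEM rule
# or incident responder would ask of this log source.
uploads = Counter(
    entry["user"]
    for line in raw_log.splitlines()
    if (entry := json.loads(line))["action"] == "file_upload"
)
print(uploads)
```

In practice these records would flow into the SIEM continuously rather than be parsed ad hoc, but the operational question stays the same.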
If an organisation is going to use GenAI, it should decide which data it’s prepared to expose. “It’s more about your internal process than the fact that it’s GenAI,” says Slankamenac. There are businesses that draw a hard line and prohibit client or confidential information from being entered into public GenAI tools. Instead, they use internal agents, self-hosted or private enterprise models to keep sensitive workflows within their own environment.
For businesses without in-house development skills, the convenience of hosted providers means the focus should be on contractual safeguards, data handling commitments and encryption standards. “You’re trading convenience for taking on that operational complexity yourself,” he says.
GenAI accelerates work by removing friction between question and response. It’s not inherently unsafe, but it exposes how decisions are made under pressure. Andriy Burkov, a Canadian author with a PhD in AI, has called LLMs “useful liars”, a reminder that fluency should never be mistaken for understanding. The responsibility for interpretation, verification and consequence remains entirely human, or, at least, with humans in the loop.
THE HUMAN SIDE OF SECURITY
In many organisations, GenAI is still treated like a productivity hack instead of a governed business system. According to Heino Gevers, senior director of technical support at Mimecast SA, this is how businesses end up repeating all the security mistakes of early cloud adoption. “We’ve reached an inflection point where we need to make a transition from trying to protect the business with best-of-breed technology,” he says, adding that focusing on productivity gains without taking human risk into account is a critical mistake. Businesses will always throw technology at a problem, but with GenAI, risk is more than an IT function.

For Gevers, there are three non-negotiables when it comes to taking GenAI security seriously. First, data loss prevention (DLP) needs to happen at the prompt gate. There have to be controls within GenAI tools that stop sensitive or confidential information from being typed or pasted into prompts. DLP tools like Microsoft Purview, Symantec, Netskope and Forcepoint check what a user is about to send out of the organisation. The tools look for sensitive fields and stop the data from leaving, or strip out the risky parts, before it reaches a GenAI program.

Next, every AI interaction needs to be logged. Organisations should record who used GenAI, what was submitted and when, so they have a clear audit trail for governance, compliance and incident response.

And content must be classified and cleaned in real-time. The system should identify the type of data being used and redact customer or company confidential information before anything is sent to an external GenAI model.
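Gevers’ first non-negotiable, DLP at the prompt gate, can be sketched in a few lines. The two patterns below are toy examples; commercial DLP products such as those named above ship far richer classifiers and policy engines:

```python
import re

# Toy detection patterns for illustration only; real DLP tools maintain
# large libraries of sensitive-information types and validation logic.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def gate_prompt(prompt: str) -> tuple[str, list[str]]:
    """Scan a prompt before it leaves the organisation: redact any
    sensitive fields found and report which categories were detected."""
    hits = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, hits

clean, found = gate_prompt("Refund jane@acme.example, card 4111 1111 1111 1111")
print(clean, found)
```

The same hook point can either block the submission outright or pass the redacted version through, depending on policy.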
“But technology without the human risk management component will fail,” says Gevers. “Users don’t really recognise they’re the custodians of the customer information.”

In Gevers’ experience, a number of recurring behaviour patterns create the most GenAI risk in practice. The first he calls convenience-driven risk: employees paste an entire email into a GenAI prompt because it lets them quickly generate a summary, with no consideration of what data they’ve just handed over. There is also authority bias, where staff trust the output of GenAI without verification, despite issues such as hallucinations. And some risks sit outside corporate control entirely, such as a staff member using their cellphone to photograph a document and uploading it to an AI tool. Off-network behaviour can’t be blocked, only guided.
Gevers says staff tend to resist blanket restrictions, but are more open to contextual guidance. Rather than simply blocking an action, Mimecast’s approach is to intervene in real-time. If someone tries to upload a file that contains confidential company data, the system should stop the action, explain why and prompt the user to reconsider before proceeding. “It’s continuous security awareness that builds muscle memory over time around what good data handling looks like,” says Gevers. The goal isn’t to shut GenAI down, but to let people use it freely within visible, evolving guardrails so innovation doesn’t come at the cost of losing control of company data.
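The stop-explain-reconsider flow described above can be sketched as a simple decision function (a hypothetical shape for illustration; real tools surface this as an in-app dialog at the moment of upload):

```python
def review_upload(filename: str, contains_confidential: bool) -> dict:
    """Coach rather than block: if a file looks confidential, pause the
    upload, explain the risk and ask the user to confirm or cancel."""
    if contains_confidential:
        return {
            "allowed": False,
            "requires_confirmation": True,
            "message": (
                f"'{filename}' appears to contain confidential company data. "
                "Uploading it to an external GenAI tool may expose it outside "
                "the organisation. Do you want to proceed?"
            ),
        }
    return {"allowed": True, "requires_confirmation": False, "message": ""}

decision = review_upload("q3_clients.xlsx", contains_confidential=True)
print(decision["message"])
```

The point of the explanatory message, rather than a silent block, is exactly the muscle memory Gevers describes: the user learns why the action was risky at the moment they attempt it.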
* Article first published on www.itweb.co.za