If you are a C-level executive with some sort of technology responsibility, you are either extremely and publicly hyped about the many benefits of generative artificial intelligence (GenAI) or you are out of a job the next time the board meets.
Actually, we don't have evidence of executives being thrown out of their offices – yet. But shareholders and analysts are demanding news of AI boosts to every outlook, every process, and just about every product, and the herd is with them.
Consider the 2025 ITWeb Brainstorm CISO Survey, where 92% of respondents either had sanctioned GenAI running or had blessed staff experiments. Or the South African Generative AI Roadmap 2025, which found that two-thirds of companies had some sort of GenAI in play.
Scepticism had little place, and the 6% in the ITWeb survey who had banned GenAI seemed a bit crazy – until OpenAI yanked the ability to let search engines index publicly shared chats.
There was no change in the actual risks involved; they simply went from theoretical to concrete. Reporting showed that users were actively ticking the “let Google keep and show this to anyone forever” box on clearly sensitive conversations, and that OpenAI didn't think they could be stopped with a UX improvement. At just about the same time, Sam Altman reiterated that his organisation believes it will have to turn over any chats if subpoenaed, even those where you complain about your life partner or spill your darkest secrets to your ChatGPT therapist.
Lawyers noticed. Audit-and-risk committee members noticed. Suddenly, the idea of information leakage due to an employee frolic with an LLM seemed not at all absurd, and the CTO who had flagged the dangers didn't seem like a complete killjoy.
We don't have the survey data yet to know how many enterprises have added “chat.openai.com” to the blocked list since then, but there's been a notable buzz from security vendors as well as consultants on the need to monitor user inputs.
There definitely is a problem with users including sensitive information in prompts. Everything from passwords to payroll data has been merrily pasted into a public LLM, and user training has not kept up with user adoption.
And that's before the phishing networks start putting fake LLMs in front of staffers desperate to increase their productivity.
There's also growing concern about retention by way of training data. Last year, vendors just had to promise miracles if you opened the data storehouses to their models, or assure you that you could keep it all safe and in-house if you feared the cloud. This year, they also have to explain whether a regulator will be able to pull, say, historic pricing data from a machine trained on that data, even if the underlying data has been scrubbed in accordance with retention policies specifically created to reduce the legal attack surface.
Meanwhile, a spike in ransomware attacks is putting the squeeze on cyber insurance, and it is getting hard to outsource ill-defined risks to an outside party in return for money.
So, you're a CISO. You can't claim you didn't know the risks, you can't be sure you can limit the risks by domain or in time, and you can't outsource the risks.
Do you maybe put the brakes on the whole GenAI thing for a while?
It wouldn't seem entirely crazy anymore.