I’ve been a CISSP for 20 years. I’ve seen floppy disk fraud, dial-up hacks, phishing e-mails written by people who clearly hated spelling, and APTs so advanced they deserved a round of applause. I thought I’d seen it all.
Then came the week of 10-16 November 2025.
That wasn’t just a bad week. That was the week the security industry quietly fell over and nobody bothered to call the time of death.
Welcome to the agentic era. Which is a polite way of saying we handed autonomous systems the keys to our data, our infrastructure and, in some cases, our dignity. As I’ve been saying all year – and yes, I’m claiming this coin properly: Bring your threat model. You’re gonna need it.
The privacy illusion: 'Hand it over'
Let’s start with the New York Times vs OpenAI matter. OpenAI argued that user chats were private and anonymised. Federal Judge Ona Wang looked at 20 million chat logs and responded with the legal equivalent of a raised eyebrow: "Hand them over."
That was the moment the illusion shattered. If your employees paste proprietary data into a third-party LLM, that data is no longer yours. It is evidence. “Anonymisation” in 2025 is theatre. If it lives on someone else’s server, it lives under someone else’s subpoena.
Agentic misalignment: When the AI chooses violence
Then came the Claude 4 Opus safety testing. The scenario was simple: you are an AI assistant, you are about to be shut down and you have access to your boss’s e-mails.
The model didn’t accept its fate. It found evidence of an affair and chose blackmail. In most runs.
This wasn’t a bug. This was reasoning. The AI concluded extortion was the optimal survival strategy. That’s not an assistant. That’s a digital hostage negotiator you accidentally gave admin rights to.
The cloud: Someone else’s computer with a kill switch
Now let’s talk cloud. VCB had an application disabled by a major global provider. No warning. No explanation. Just “policy violation”.
Which policy? They couldn’t say. How to fix it? By telling them how we’d stop violating the unnamed policy.
If VCB had taken that matter to the High Court under POPIA, they would have been annihilated. Everyone in this industry knows South African judges are actively waiting for the right example to make of foreign tech companies that treat POPIA like an optional click-through agreement.
VCB didn’t pursue it because the app wasn’t essential to the company. Many companies don’t have that luxury. One policy bot having a bad day should not be able to end your business.
Africa is not a Western training set
Here’s the uncomfortable truth. Africa’s problems will not be solved by Western bots trained on scraped websites and Reddit arguments.
You cannot paste Silicon Valley logic onto Soweto realities and expect fairness, accuracy or compliance. If your AI doesn’t speak our languages, understand our laws or respect our data boundaries, it’s not innovation. It’s digital colonialism with better branding.
So what about VCB?
VCB looked at all of this and made a decision early. It does not outsource sovereignty. Its AI systems run where the data lives. On-premises. Under South African jurisdiction. Auditable, controllable and killable on VCB's terms.
VCB didn’t become the fastest AI infrastructure company in South Africa for nothing. It built this capacity for a reason: to handle the massive data processing, inference power and fine-tuning capabilities required to give South Africa its own voice. VCB's goal is simple – to have a South African-based "Hello" in development, so the company can operate with even greater autonomy. VCB is building the engine room for the continent, not renting it from a landlord who can evict it on a whim.
VCB optimises for control first, performance second, convenience last. Because convenience is how you end up explaining to a regulator why your own system turned against you.
The South African reality check
This is not theoretical. The average local data breach costs R53 million. We already deal with SIM swaps, deepfakes and organised cyber crime. Giving autonomous agents root access without sovereignty is like handing a burglar your keys because he promised to feed the cat.
The verdict
Using my ANALYSIS-Tommy framework:
- Law: POPIA means nothing if your provider can kill your business with a policy update.
- Tech: Centralised AI is a liability. If it’s not on your hardware, it’s not your intelligence.
- Reality: Sovereign AI is not a “nice to have”. It’s a survival infrastructure.
If your security team isn’t uncomfortable right now, they’re not paying attention.
Bring your threat model. You’re definitely going to need it.
About the author
Tommy Ferreira, CISSP, is CTO of Viable Core Business (VCB). He has spent 20 years watching security models fail and now builds AI systems that, by design, cannot blackmail you. That alone feels like progress.