About
Subscribe

AI levels up cyber risk and security

Johannesburg, 10 Oct 2025
AI is a double-edged sword.
AI is a double-edged sword.

AI – particularly generative AI – has become potentially the biggest risk in cyber security.

This is the warning from Henry Denner, Managing Director at SecureConekt, a partner of CipherWave, who says AI is enabling cyber attackers to evolve and move at unprecedented speed. “Where automation and machine learning have enabled more efficient cyber attacks, generative AI is accelerating the evolution of attacks in an unprecedented way.” 

Denner notes that while the terms automation, machine learning (ML) and AI are often used interchangeably, each represents a distinct rung on the ladder of digital capability.

He says: “Automation is the simplest tier. It follows explicit, human-written instructions: if X happens, perform Y. Whether it’s a script that renames thousands of files in seconds or a robotic arm that solders identical joints all day, automation brings speed, accuracy and tirelessness – but never deviates from its playbook. It is deterministic: the output is predictable because every path is hard-coded.”

Machine learning: Data-driven intuition

Machine learning sits one level up from automation, Denner explains. “Instead of living on fixed rules, ML learns patterns from data. Feed a model thousands of labelled photos of cats and dogs, and it discovers the statistical fingerprints that separate whiskers from floppy ears. After training, it can classify new images it has never seen. Crucially, ML adapts when conditions change – re-training on fresh data lets it keep pace with shifting trends, from new slang in social media to evolving credit card fraud tactics. The trade-off is opacity: the reasoning behind a prediction may be hard to trace, requiring careful evaluation and monitoring.”

AI: the thinking toolkit

AI is the overarching discipline that aims to replicate – or at least simulate – human cognitive abilities such as perception, reasoning, language and planning, Denner says. “ML is one of AI’s strongest tools, but AI also includes symbolic logic, knowledge graphs, natural-language processing and generative models. A chess engine that searches millions of moves per second, a voice assistant that understands spoken commands and a chatbot that drafts marketing content all fall under the AI umbrella. What makes AI powerful is its ability to blend multiple techniques: rule-based reasoning for clear constraints, ML for pattern discovery and language models for human-like communication.”

He says: “Automation does exactly what you tell it; machine learning figures out what to do by studying examples; artificial intelligence aspires to do all that while appearing to think. Understanding these distinctions helps organisations deploy the right tool at the right time – maximising efficiency today while paving the way for more adaptive, intelligent systems tomorrow.”

Trevane Paul, CEO of Conekt Holdings, says: "The speed at which attackers are adopting AI far outpaces most organisations' ability to respond. What we're seeing is not just a shift in tools, but in tempo. It’s no longer a matter of if you adopt AI for defence, but how fast you can operationalise it with the right partners in play."

AI: The double-edged sword

“AI is a game-changer and a double-edged sword in cyber security,” Denner says. “In the past year or two, threat actors around the world are leveraging AI offensively with growing success. AI has enabled them to craft phishing content and deepfakes that are indiscernible from the real thing. It has also put malware development capabilities into the hands of the criminally inclined – even if they themselves have no coding skills.”

Recently, researchers at AI security firm EdisonWatch showed how ChatGPT’s new Model Context Protocol (MCP) calendar integration can be weaponised to exfiltrate a user’s e-mail. This attack involves sending a specially crafted calendar invite containing a “jailbreak” prompt to a victim’s e-mail address. When the victim subsequently requests ChatGPT to check their calendar, the malicious call is executed, allowing the agent to search the inbox and forward e-mail content to an e-mail address predefined in the “jailbreak” prompt.

These types of AI attacks are not specific to ChatGPT. Security firm SafeBreach have demonstrated a similar attack targeting Gemini and Google Workspace.

“Although these types of attacks are still evolving, it clearly shows how day-to-day tools and applications used to manage, organise and simplify our lives can be used against us. AI tools such as ChatGPT, Gemini and Copilot have become such an integral part of our work and home ecosystem, and considering the speed at which AI operates, it gives attackers far more reach to our data without using outright malicious tools,” Denner says.

To keep up with this rapidly changing risk, organisations have to upgrade to intelligent, AI-driven security tools and work with cyber security partners like SecureConekt to operationalise security at scale, Denner says. “It’s also important for them to streamline and integrate their technology stacks to close gaps and treat security as a continuous capability rather than a once-off investment. The cyber security industry is continually working to enhance defence capabilities with the use of AI, and it’s crucial that organisations harness these tools to stay ahead of attackers.”

Paul adds: "We’ve spent years helping clients simplify and secure their environments – but AI has changed the game. SecureConekt ensures clients aren't just reacting to AI-driven threats – they’re getting ahead of them with intelligent, scalable security architecture."

This is the second in a series of press releases by CipherWave and SecureConekt on AI and cyber security. The next press release will cover how AI is being used by cyber threat actors. To see the other press releases in the series, click here.

Share