Artificial intelligence is a potent weapon for cyber criminals as well as for cyber security teams. Now, a third vector is emerging. The internal use of AI poses new risks for organisations: the Global CyberArk 2025 Identity Security Landscape report reveals that AI models and the rise of "shadow AI" are creating security challenges that demand attention.
Shadow AI and jailbroken models
The allure of AI tools has attracted workforces. Seventy-two percent of employees now use them regularly. South African companies are ahead of the curve: only 5% of local organisations surveyed by CyberArk do not use some type of AI. Those that do embrace a wide range of use cases, with virtual assistants (67%), fraud detection (62%) and data processing (62%) leading the pack.
This widespread adoption, however, is leading to a surge in shadow AI.
Shadow AI, similar to shadow IT, occurs when employees use AI services outside of established policies and security measures. According to CyberArk, 36% of employees use AI tools not fully approved or managed by IT.
"Data leakage has become very easy with AI," says Craig Harwood, AVP, South Africa and Middle East, CyberArk. "Employees can feed information into an AI service, exposing that information to external databases and people. These mistakes can include sensitive business data and critical information such as passwords and API keys. The data can resurface outside the organisation by accident or through malicious individuals using specific prompts."
Yet, organisations struggle to identify which AI tools are processing sensitive company data, hindering the enforcement of security controls. Almost half of organisations (47%) say they are unable to effectively secure and manage all the shadow AI tools in use.
South African companies are more confident, with 69% believing they can secure and manage all shadow AI tools. But this might be overconfidence, considering that 63% of South African organisations agree that AI bots have access to their sensitive data.
Even official AI projects can introduce security vulnerabilities: according to the CyberArk report, 68% of organisations globally lack adequate identity security controls for these technologies.
In the wrong hands, or through malicious prompts, LLMs can be manipulated – a process known as "jailbreaking", explains Harwood: "Criminals are working out ways to extract information from AI models. They can execute unauthorised database queries, external API calls and even access to networked machines. Jailbreaking isn't just a theoretical concern. It's a growing reality as organisations rapidly deploy AI without a comprehensive understanding of the security implications."
Not surprisingly, 66% of surveyed local companies consider AI agent behaviours being misconfigured or manipulated as the leading security threat when implementing AI.
Safeguarding AI in your business
Creating a robust and safe AI environment in your organisation is an ongoing process that includes important steps and considerations.
A dedicated workgroup should guide the process, composed of board members, executives and key stakeholders, including legal and security professionals.
Perceptions around AI can generate myths, misconceptions and marketing hype that inform incorrect strategic decisions. Strengthen the knowledge of leaders, from the board to managers, on AI's fundamentals, particularly how large language models work.
Prioritise data in terms of usage and compliance, establishing clear guidelines for how sensitive information can be used with AI tools.
Align AI models with security objectives, which include rigorously testing AI models for their susceptibility to jailbreaking and other forms of manipulation. Tools such as CyberArk's free and open source FuzzyAI are excellent for vetting AI models.
Establish a clear AI usage policy based on specific use cases in the business – surveys and employee feedback are invaluable.
Banning AI will likely increase shadow AI adoption. It's foremost a matter of culture, where organisations must embrace the technology and provide guidance for their employees, says Harwood.
"AI tools are intuitive and can provide real productivity advantages, and it takes little effort for people to start using them. The emergence of AI agents makes it even more inevitable that every organisation will be exposed to AI risks, even if it's something simple like an agent embedded into an operating system. They gain nothing from ignoring these risks, but they will gain tremendously if they put the foundation in place to use AI. Then they can reap the rewards of engaged employees, productivity and fewer compliance and security risks."
Share