
AI agents moving faster than security can handle

By Admire Moyo, ITWeb news editor
Johannesburg, 25 Mar 2026
Existing IAM approaches were not designed for autonomous agents and are showing strain as deployments scale, says the CSA. (Image source: 123RF)

While AI agents are rapidly embedding themselves across core enterprise systems, identity and access management (IAM) controls have not kept pace.

This is according to new research by the Cloud Security Alliance (CSA), which notes the gaps are exposing organisations to growing security risks.

The report, commissioned by identity platform Aembit and based on a survey of 228 IT and security professionals, finds that most organisations are already deploying AI agents in production environments, yet lack clear frameworks to manage their identities, permissions and accountability.

“AI agents are already embedded within enterprise environments, and as these systems take on more autonomous roles, organisations must address new challenges around identity and access,” says Hillary Baron, assistant vice-president of research at CSA.

“The survey data indicates that existing IAM approaches were not designed for autonomous agents and are showing strain as deployments scale.”

The report comes as South African companies are among the first in the region to embed autonomous AI agents into core business operations.

Examples include WeBuyCars, the local vehicle trading platform that has leveraged agentic AI to transform pricing and vehicle acquisition. The company recently said its AI agents have purchased more than 2 800 vehicles autonomously, using internal machine‑learning and large‑language model systems that make pricing and buying decisions with minimal human intervention.

In a letter to shareholders last month, Prosus CEO Fabricio Bloisi highlighted that agentic AI is becoming a core part of the business, not just an add-on automation tool. The firm has added 37 000 AI agents across its operations.

Human vs machine activity

CSA data shows that 67% of organisations use AI agents for task automation, while around half deploy them for research, software development and security monitoring.

Only 15% report no production use, and more than 70% expect AI agents to become critical within the next year.

Despite this rapid uptake, the study shows that most AI agents do not operate as distinct digital identities.


Instead, they often rely on shared service accounts or even human user credentials, making it difficult to distinguish between human and machine activity. As a result, 68% of organisations say they cannot clearly attribute actions to AI agents.

According to the report, AI agents often exist in an identity grey area – 52% of organisations use workload identities, 43% rely on shared service accounts, and 31% allow agents to operate under human user identities.

The CSA says without a defined taxonomy, this identity patchwork can lead to unintended consequences, where AI agents inherit permissions beyond their intended role.
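The distinction the report draws between distinct agent identities and shared accounts can be pictured with a minimal audit-logging sketch. The class and field names below are illustrative, not taken from the report or any specific IAM product; the point is that attribution only works when every actor, human or machine, carries its own identifier.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ActorType(Enum):
    HUMAN = "human"
    AI_AGENT = "ai_agent"
    SERVICE = "service"

@dataclass
class AuditEvent:
    actor_id: str          # distinct per agent; never a shared account
    actor_type: ActorType
    action: str
    resource: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def attribute(events: list[AuditEvent], actor_id: str) -> list[AuditEvent]:
    """Return every action taken by one specific identity."""
    return [e for e in events if e.actor_id == actor_id]
```

With shared service accounts, several agents would log under the same `actor_id` and the filter above could no longer separate their actions, which is the attribution gap the 68% figure describes.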

The report highlights a lack of clear accountability, with responsibility for AI agent identity and access split across security, IT and engineering teams. Only 9% of organisations assign primary ownership to IAM teams.

At the same time, confidence appears to outstrip capability, the research shows. While a majority of respondents say they are moderately confident in managing AI agent access, one-third are unsure how often credentials are rotated, and nearly one in 10 report that credentials are rarely or never refreshed.

AI agents are also increasing organisational risk by inheriting excessive permissions, says the CSA. Nearly three-quarters of respondents say agents often receive more access than necessary, while 79% believe they create new, hard-to-monitor access pathways.

According to the CSA, this risk is compounded by the way access is granted. Many organisations base AI permissions on human roles or pre-existing automation rules rather than defining agent-specific privileges, leading to “privilege creep” and broader exposure to threats such as prompt manipulation, it explains.
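Privilege creep of this kind can be surfaced by comparing what an agent has been granted against what its defined tasks actually require. A minimal sketch, with hypothetical permission names:

```python
def excess_permissions(granted: set[str], required: set[str]) -> set[str]:
    """Permissions an agent holds but its defined tasks never use."""
    return granted - required

# Hypothetical example: an agent that inherited a human role's grants
# needs only read access, yet also holds write and finance permissions.
granted = {"inventory:read", "inventory:write", "finance:read"}
required = {"inventory:read"}
```

Reviewing this difference regularly, rather than granting by inherited role, is one way to define agent-specific privileges instead of human-derived ones.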

Insufficient reactive approaches

In the absence of mature identity controls, the CSA notes that companies are relying heavily on governance measures such as policy restrictions, human approvals and post-activity monitoring.

It points out that common responses to incidents include disabling accounts or terminating systems, rather than dynamically adjusting permissions.

The CSA warns that such approaches are reactive and insufficient for scaling AI safely. Instead, it states that organisations are prioritising real-time visibility into agent activity, clearer separation between human and AI identities, and short-lived, task-based access controls.

“AI agents are inheriting human permissions, operating under shared accounts, and expanding the attack surface in ways that existing IAM tools weren’t designed to handle,” says David Goldschlag, co-founder and CEO of Aembit.

“The survey makes the stakes clear – agentic autonomy without identity-level access controls is a risk organisations can’t afford to ignore.”

The report concludes that AI agents are no longer experimental tools but operational actors within enterprise environments. However, existing security models – largely designed for human users – are struggling to adapt.

To mitigate risk, the CSA says organisations must extend core IAM principles to AI systems, including enforcing least-privilege access, establishing distinct identities for agents, and improving continuous monitoring and attribution.
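The least-privilege and short-lived-access principles the CSA recommends can be sketched in a few lines. The following is a toy HMAC-signed token with a hypothetical signing key and scope names, not a production design; a real deployment would use an established standard such as OAuth 2.0 client credentials with a proper secrets manager.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustration only; never hard-code real keys

def mint_agent_token(agent_id: str, scopes: list[str],
                     ttl_seconds: int = 300) -> str:
    """Mint a short-lived, scope-limited token for a distinct agent identity."""
    payload = {"sub": agent_id, "scopes": scopes,
               "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def check_access(token: str, required_scope: str) -> bool:
    """Reject tampered or expired tokens, or requests outside granted scopes."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < payload["exp"] and required_scope in payload["scopes"]
```

Because each token names one agent, expires quickly and carries only task-level scopes, a compromised credential exposes far less than a long-lived shared account would.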

As AI adoption accelerates, it says the ability to control how these systems access data and infrastructure will be critical to ensuring secure and accountable deployment.
