
Five security challenges of agentic AI

Agentic AI will revolutionise work. But it will also create new risks.
Agentic AI poses great risks, yet offers fantastic opportunities. (Image: Supplied)

Imagine asking an AI to organise your next trip, or to plan your grocery shopping for the week. Now bring that future into the office: an AI that can handle more complex tasks such as reconciling accounts, working out stock levels and the required inbound logistics, or managing meetings by scanning e-mails, replying to correspondence and finding suitable slots in your calendar.

These examples are no longer science fiction, thanks to AI agents, also called agentic AI. AI agents are software systems that execute layered tasks, either directly or by delegating to scripts or other AIs. Yet they are not only task orchestrators – they learn as well, adjusting their behaviours and solutions as they progress.
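
To make the pattern concrete, here is a minimal sketch, in Python, of an agent that either handles a step itself or delegates it to a registered tool or sub-agent. All class, task and handler names are hypothetical illustrations, not any specific product's API.

```python
from typing import Callable

class Agent:
    def __init__(self, name: str):
        self.name = name
        # Map of task types to handlers: a handler may be a script,
        # an API call, or another agent's entry point.
        self.tools: dict[str, Callable[[str], str]] = {}

    def register_tool(self, task_type: str, handler: Callable[[str], str]) -> None:
        """Register a script or sub-agent to delegate one task type to."""
        self.tools[task_type] = handler

    def run(self, plan: list[tuple[str, str]]) -> list[str]:
        """Execute a layered plan: delegate known task types, handle the rest directly."""
        results = []
        for task_type, payload in plan:
            if task_type in self.tools:
                results.append(self.tools[task_type](payload))  # delegate
            else:
                results.append(f"{self.name} handled '{payload}' directly")
        return results

# Usage: a meeting-management agent delegating calendar lookups.
assistant = Agent("meeting-agent")
assistant.register_tool("calendar", lambda query: f"free slot found for: {query}")
print(assistant.run([("calendar", "Tuesday 10:00"), ("email", "reply to Sam")]))
```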

It's an exciting prospect for many – who wouldn't want to avoid the drudgery of sorting out e-mails or account claims? It is no surprise that Deloitte predicts AI agent adoption will double over the next two years. However, agents also create a new landscape of cyber threats and risks, says Yuval Moss, CyberArk’s VP of Solutions for Global Strategic Partners.

"In order for AI agents to do their job, they need access to applications and data. They are not as contained as other types of software and AI. They are, in effect, power users that have a lot of access. Without the right compliance, governance, policies and security oversight, they could cause a lot of damage if the wrong people access them."

Five agentic AI risks to security

Agentic AI is a fast-moving field and new risks are becoming apparent as this technology evolves. Still, security and risk leaders are becoming aware of the challenges. According to surveys conducted by CyberArk, 56% of companies are concerned that agents can access critical resources, and 60% worry that they can be manipulated.

Looking more closely at the problem, CyberArk highlighted five risks that decision-makers should pay attention to:

  • Humans become superusers: Agents will allow users to become managers of their own virtual teams that can operate interactively and autonomously. "Regular" users will have a lot more power and access under their control, which attracts cyber criminals and complicates security management.
  • Shadow AI agents: Thanks to SaaS and online platforms, it's never been easier to bring unauthorised and unprotected software into an organisation. AI agents are no exception, creating situations where IT has no visibility of agents operating unchecked and without proper oversight.
  • Developers go end-to-end: Agentic AI will expand the role of developers from individual contributors into one-person R&D and operations departments, independently managing the entire application development and maintenance process. This widens the set of risks each developer must manage.
  • Human-in-the-loop abuse: The human-in-the-loop process validates and maintains agent performance. If malicious insiders abuse these approval privileges or outside attackers gain access to them, they can cause serious damage through the agent itself and by escalating access to other resources (a minimal approval gate is sketched after this list).
  • AI agent proliferation: Machine identities already outnumber human identities by as much as 45 to one, and that ratio will only grow. Managing hundreds, thousands, even millions of agents is a real prospect, posing a serious resource challenge to IT and security teams.
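
To illustrate the human-in-the-loop risk, here is a hedged Python sketch of an approval gate placed in front of privileged agent actions. The action names and audit mechanism are hypothetical; the point is that the approval itself is a high-value privilege, and whoever controls it controls everything behind the gate.

```python
# Privileged actions require explicit human approval before the agent
# may execute them; everything else runs autonomously.
PRIVILEGED_ACTIONS = {"transfer_funds", "delete_records", "grant_access"}

def execute_action(action: str, payload: dict, approver: str | None = None) -> str:
    if action in PRIVILEGED_ACTIONS:
        if approver is None:
            raise PermissionError(f"'{action}' requires human approval")
        # The approval is itself a privilege: record who authorised what,
        # so abuse by an insider or a stolen account can be traced.
        print(f"AUDIT: {approver} approved {action} with {payload}")
    return f"executed {action}"

print(execute_action("send_summary", {"to": "team"}))                    # runs freely
print(execute_action("grant_access", {"user": "eve"}, approver="alex"))  # gated and audited
```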

Derisking AI agents

Agentic AI poses great risks, yet offers fantastic opportunities as well. Organisations cannot avoid adopting it, so they should put measures in place to reduce security, compliance and governance risks, including the following (a sketch of how several of these measures combine follows the list):

  • Full visibility into activities.
  • Strong authentication mechanisms.
  • Least privilege access.
  • Just-in-time (JIT) access controls.
  • Comprehensive session audits.
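
As a rough illustration of the last four measures working together, the following Python sketch mints a short-lived, narrowly scoped credential for one agent task and records every decision in an audit trail. All identifiers, scopes and the token format are hypothetical assumptions, not any vendor's actual API.

```python
import secrets
import time

AUDIT_LOG: list[dict] = []

def issue_jit_token(agent_id: str, scopes: set[str], ttl_seconds: int = 300) -> dict:
    """Mint a short-lived credential limited to the scopes one task needs."""
    token = {
        "agent_id": agent_id,
        "scopes": scopes,
        "expires_at": time.time() + ttl_seconds,
        "secret": secrets.token_hex(16),
    }
    AUDIT_LOG.append({"event": "issued", "agent_id": agent_id, "scopes": sorted(scopes)})
    return token

def authorise(token: dict, scope: str) -> bool:
    """Allow an action only if the token is still live and the scope was granted."""
    allowed = time.time() < token["expires_at"] and scope in token["scopes"]
    AUDIT_LOG.append({"event": "access", "agent_id": token["agent_id"],
                      "scope": scope, "allowed": allowed})
    return allowed

# Usage: an invoice-reconciliation agent gets read-only ledger access for five minutes.
token = issue_jit_token("reconcile-bot", {"ledger:read"})
print(authorise(token, "ledger:read"))   # True: granted scope, token still live
print(authorise(token, "ledger:write"))  # False: scope was never granted
print(AUDIT_LOG)                         # the session audit trail
```

Under this pattern, a compromised or misbehaving agent holds only the access one task needs, only for minutes, and every decision leaves a reviewable trail.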

A major step forward is to work with an identity and access management security partner that has a robust platform, one that applies its research and innovation to deliver the newest and most proven defence strategies.

"The crucial challenge with AI agents is that we don't know yet how the market will evolve," says Moss. "But workplaces cannot wait and see – the advantages are too many to ignore or delay. They should adapt as the risks change, which is why a trusted security partner with a robust platform is a great investment. The partner is committed to continually researching, testing and expanding its capabilities, providing layers of oversight and control that would be expensive and hard to do internally."

AI agents will change how we work. But with the right measures and foresight, they shouldn't change how we keep our employees and organisations safe.
