This year, organisations will face a decisive turning point in the governance of artificial intelligence (AI), driven by the proliferation of advanced AI agents and the growing business use of consumer AI tools.
What began as “shadow AI”, where employees informally use consumer-grade AI tools, has now evolved into something far more complex and far more dangerous – the shadow agent. These autonomous AI agents, deployed independently by well-meaning staff to speed up workflows, could create invisible, unmonitored pipelines for sensitive corporate data.
As the Google Cloud Cybersecurity Forecast 2026 warns, employees are already independently deploying “these powerful, autonomous agents for work tasks, regardless of corporate approval. This has the potential to create invisible, uncontrolled pipelines for sensitive data, potentially leading to data leaks, compliance violations and IP theft.”
“Shadow agents represent one of the most urgent governance challenges of the AI era. Organisations have a shrinking window to put the right controls in place or risk losing visibility, accountability and, ultimately, trust,” comments Accelera Digital Group (ADG) Chief Innovation Officer Cliff de Wit.
Why shadow agents are different and more dangerous
Unlike traditional shadow IT, shadow agents are not passive tools, but rather autonomous action-takers capable of sending e-mails, modifying data or triggering workflows, possibly without human review.
According to the Google Cloud blog, “Blocking shadow agents won’t work. Here’s a more secure way forward”, the core value of AI agents lies in their autonomy, which heightens “their susceptibility to manipulation, increasing adversarial attacks and potential systemic failures”.
This autonomy means that even a single unsanctioned AI agent can:
- Exfiltrate sensitive data through misinterpreted tasks.
- Breach privacy laws by accessing prohibited personal information.
- Trigger operational or financial harm through incorrect automated actions.
- Introduce compliance violations that leadership may not detect quickly.
Why blocking shadow agents won’t work
Many organisations instinctively respond by trying to block agents outright. But as the Google Cloud Cybersecurity Forecast 2026 notes, banning agents is not an option, “as it only drives usage off the corporate network, eliminating visibility”.
“You cannot firewall your way out of shadow agents. The only sustainable strategy is governance, which will give employees safe, sanctioned ways to innovate while ensuring the organisation maintains oversight and control,” says De Wit.
The rapid adoption of AI agents across industries means the governance gap is widening. The aforementioned Google Cloud blog states that 82% of large enterprises plan to integrate AI agents within three years, yet most lack the frameworks to manage them safely.
This means that 2026 is the final moment for organisations to establish:
- Secure-by-design AI governance: Embedding security from the outset, not as an afterthought, including identity and access controls for AI agents, safeguards such as adversarial training and model-level protections, as well as continuous monitoring and auditability of agent actions.
- Agentic identity management: The Google Cloud Cybersecurity Forecast 2026 predicts a shift where AI agents become “distinct digital actors, each with its own managed identity”. Organisations should prepare for this future by designing identity frameworks that treat agents as first-class entities with least-privilege access and just-in-time permissions.
- Employee education and cultural alignment: The Google Cloud blog stresses that education, not prohibition, is the key to reducing shadow agent risk. Thus, organisations must develop training programmes that build awareness, accountability and responsible AI usage.
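To make the identity controls above concrete, the sketch below shows what treating an agent as a first-class identity could look like in practice: a distinct identity per agent, a fixed least-privilege ceiling of scopes, short-lived just-in-time grants, and an audit trail of every attempted action. All names here (`AgentIdentity`, `grant_jit` and so on) are illustrative assumptions, not a real product API.

```python
import time
import uuid

class AgentIdentity:
    """Illustrative sketch: an AI agent as a distinct, governed digital actor."""

    def __init__(self, name: str, allowed_scopes: set[str]):
        self.agent_id = str(uuid.uuid4())      # each agent is a distinct identity
        self.name = name
        self.allowed_scopes = allowed_scopes   # least-privilege ceiling, set up front
        self.grants: dict[str, float] = {}     # scope -> expiry timestamp (JIT grants)
        self.audit_log: list[dict] = []        # every action attempt is recorded

    def grant_jit(self, scope: str, ttl_seconds: int = 300) -> None:
        """Grant a scope just-in-time, for a short window only."""
        if scope not in self.allowed_scopes:
            raise PermissionError(f"{scope} exceeds least-privilege ceiling")
        self.grants[scope] = time.time() + ttl_seconds

    def act(self, scope: str, action: str) -> bool:
        """Attempt an action; the attempt is audited whether or not it is allowed."""
        allowed = self.grants.get(scope, 0) > time.time()
        self.audit_log.append({
            "agent_id": self.agent_id,
            "scope": scope,
            "action": action,
            "allowed": allowed,
            "at": time.time(),
        })
        return allowed

# Usage: an agent may touch CRM data only after an explicit, expiring grant.
agent = AgentIdentity("invoice-bot", allowed_scopes={"crm:read"})
first_try = agent.act("crm:read", "fetch customer list")   # no grant yet -> denied
agent.grant_jit("crm:read", ttl_seconds=60)
second_try = agent.act("crm:read", "fetch customer list")  # within JIT window -> allowed
```

The point of the design is that denial is the default: the agent holds no standing permissions, every capability expires, and the audit log gives leadership the visibility that shadow agents otherwise remove.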
Unlock the value of AI responsibly
Shadow agents are already reshaping enterprise risk. Organisations that act now to establish strong governance, secure platforms and a culture of responsible adoption will unlock the full value of AI. Those that delay will face risks they can no longer see or control.
“Technology alone will not solve this challenge, so organisations must pair technical safeguards with clear policies. Governance is not about slowing innovation, but about enabling it safely,” concludes De Wit.