Businesses ‘sleepwalking’ into AI governance crisis as confidence outpaces preparedness, finds global study

Johannesburg, 04 Nov 2025
Mind the AI governance gap.

An AI “governance gap” is emerging as businesses pour money into AI tools and products without oversight or protective processes in place. While business leaders are chasing productivity boosts and cost reductions by investing large sums in AI, new research from BSI suggests many are sleepwalking towards significant governance failures.

The global study, which combines an AI-assisted analysis of over 100 annual reports from multinationals with two global polls of over 850 senior business leaders conducted six months apart, offers a comprehensive view of how AI is publicly framed in corporate communications, alongside executive-level insights into its implementation.

Governance gaps emerge

Sixty-two percent of business leaders expect to increase investment in AI in the next year; when asked why, a majority cited boosting productivity and efficiency (61%), with half (49%) focused on reducing costs. A majority (59%) now consider AI to be crucial to their organisation’s growth, highlighting the integral role executives see AI playing in the future success of their businesses.

In a striking absence of safeguards, fewer than a quarter (24%) reported that their organisation has an AI governance programme, although this rose modestly to just over a third (34%) in large enterprises,[1] a pattern repeated across the research. While nearly half (47%) say AI use is controlled by formal processes (up from 15% in February 2025), only a third (34%) report using voluntary codes of practice (up from 19%). Only a quarter (24%) say employee use of AI tools is monitored, and just 30% have processes to assess the risks introduced by AI and the required mitigations. Just one in five businesses (22%) restrict employees from using unauthorised AI. The AI-assisted analysis reinforced this emerging governance gap and also identified a second, geographical one.

A key component of AI governance lies in how data is collected, stored and used to train large language models (LLMs). Yet only 28% of business leaders know what sources of data their business uses to train or deploy its AI tools, down from 35% in February. Just two-fifths (40%) said their business has clear processes in place around the use of confidential data for AI training.

Susan Taylor Martin, CEO of BSI, said: “The business community is steadily building up its understanding of the enormous potential of AI, but the governance gap is concerning and must be addressed. While it can be a force for good, AI will not be a panacea for sluggish growth, low productivity and high costs without strategic oversight and clear guardrails – and indeed without this being in place, new risks to businesses could emerge. Divergence in approaches between organisations and markets creates real risks of harmful applications. Overconfidence, coupled with fragmented and inconsistent governance approaches, risks leaving many organisations vulnerable to avoidable failures and reputational damage. It’s imperative that businesses move beyond reactive compliance to proactive, comprehensive AI governance.”

Risk and security concerns remain under-addressed

Nearly a third of executives (32%) felt AI has been a source of risk or weakness for their business, and just one in three (33%) have a standardised process for employees to follow when introducing new AI tools. Capability in managing these risks appears to be declining: only 49% say their organisation includes AI-related risks within broader compliance obligations, down from 60% six months earlier. Just 30% reported having a formal risk assessment process to evaluate where AI may be introducing new vulnerabilities.

In their annual reports, financial services (FS) organisations placed the highest emphasis on AI-related risk and security (25% more focus than the next highest, the built environment). FS firms particularly highlighted the cyber security risks associated with implementing AI, likely reflecting traditional consumer protection responsibilities and the reputational consequences of security breaches. In contrast, technology and transport companies placed significantly less emphasis on this theme, raising questions about sectoral divergence in governance approaches.

Little focus on errors and value

There is also limited focus on what happens if AI goes wrong. Just a third (32%) say their organisation has a process for logging issues or flagging concerns or inaccuracies with AI tools so they can be addressed, while just three in 10 (29%) cite having a process for managing AI incidents and ensuring a timely response. Around a fifth (18%) felt that if generative AI tools were unavailable for a period of time, their business could not continue operating.

More than two-fifths (43%) of business leaders say AI investment has taken resources that could have been used on other projects. Yet only 29% have a process for avoiding duplication of AI services across the organisation in various departments.

Human oversight and training fall to the bottom of the list

Across the annual reports, the term “automation” is nearly seven times more prominent than upskilling, training or education. Overall, the relatively lower prominence of workforce-related topics suggests businesses may be under-emphasising the need to invest in human capital alongside technological advancement.

There is some complacency among business leaders about how well equipped the workforce is to navigate the disruption of AI and to build the new skills required to get the best out of it. Over half of leaders globally (56%) say they are confident their entry-level workforce has the skills needed to use AI, and 57% say their entire organisation currently possesses the skills necessary to use AI tools effectively in their daily tasks. Fifty-five percent say they are confident their organisation can train staff to use generative AI critically, strategically and analytically.

A third (34%) have a dedicated learning and development programme to support AI training. A higher proportion (64%) say they’ve received training to use or manage AI safely and securely, suggesting that fear of AI may be driving reactive training rather than proactive capability-building. This report follows earlier BSI research, published in October 2025, into the impact of the roll-out of generative AI on roles and work patterns.

Download the report here.

Learn more about how BSI can support you on your journey with AI: https://page.bsigroup.com/AIgovernance

Or get in touch with BSI: bsi.za@bsigroup.com

BSI published the first AI management standard (BS ISO/IEC 42001:2023) in late 2023 and has since certified businesses to the standard, including KPMG Australia.

The AI centrality model reflects how closely themes are connected to AI across the 123 annual reports analysed.

[1] Large organisations are those with 250+ staff.
