
The cognitive debt of digital leadership

Why the pursuit of AI could quietly lead executives into advanced ignorance.
Johannesburg, 07 May 2026
Antonios (Tony) Christodoulou, CEO and Founder, Cyber Dexterity.

There has rarely been a technology in history that entered the boardroom as quickly as artificial intelligence (AI) has. Within 18 months of its public release, ChatGPT moved from being a curiosity to becoming an agenda item, and then a board-level expectation. CEOs are now under sustained pressure to demonstrate AI adoption to investors, staff and peers. The pressure is real, the timelines are short, and the question most often asked in executive committees is some version of: "Where are we in deploying this?"

The question that has received far less airtime is: "To what end and at what cost?"

McKinsey's State of AI 2025 puts numbers on the gap: 88% of organisations now report regular AI use in at least one business function, yet only 39% attribute any EBITDA impact to it, and most of those say less than 5% of earnings are attributable to AI use.(1) The temptation is to read this as a technical maturity problem: build better integration, better data management, better governance and better execution capabilities, and the value will follow. There may be truth in that, but there is a less obvious reason, one that is invisible and harder to fix: organisations have rushed to deploy AI without doing the slower, harder thinking about what they actually want it to do, what they are prepared to give up to get it and what they need to protect along the way.

The blind spots are starting to surface in ways that should prompt executives to ask the right questions and really think through their AI strategy. Over-reliance on AI can set an organisation back on everything from its customer value proposition to its risk profile. It creates a new form of denial of service for cybercriminals, for example: an attacker who can quietly corrupt, poison or manipulate the data an AI model relies on can degrade the quality of every decision the business makes downstream, without the system ever appearing to fail. Data illusion, the comforting sense that a well-formatted dashboard equals a well-understood reality, dresses up partial pictures as complete ones and can lead to erroneous decision-making. And the erosion of empathy is showing up wherever AI now sits between a leader and the people they are meant to lead, summarising sentiment that used to be felt in a room and flattening the nuance of a customer relationship into a satisfaction score.(2)

None of these consequences are being seriously considered in most deployment plans, because the plans are built around a single, narrow promise: efficiency, which in most cases translates into headcount reduction. The trouble is that the people being optimised out are often the ones holding the tacit knowledge, the customer relationships and the institutional memory that the AI cannot replicate and was never asked to. Organisations chasing the efficiency dividend can find themselves a year later with leaner cost lines and meaningfully less of the human capability that made them worth investing in to begin with.

These are the surface symptoms. Beneath them, a deeper liability is accumulating. Software engineers have a name for the corners they cut to ship faster: technical debt. The code works today, but interest accrues quietly in the form of poor scalability, slower releases and an eventual architecture that is not fit for growth. Leaders are now building up a parallel form of liability, and almost no one is auditing it. Let's refer to this as cognitive debt.(3)

Every time a dashboard surfaces a pre-formed insight, an AI tool drafts a board paper or a model recommends which deal to prioritise, the executive saves time and offloads a small amount of critical thinking. The saving is perceived as value because it is a visible outcome. The offload is not. Over months and years, the interest compounds. Judgment softens. Pattern recognition that used to live in the leader migrates into systems the leader doesn't fully understand. And the workforce, watching this from below, learns to do the same.

The cyber psychology of the offload

Three well-documented mechanisms explain how this happens, and why it happens to capable people who would never describe themselves as dependent on technology.

The first is automation bias,(4) the tendency to over-trust the output of automated systems even when contradictory information is in plain sight. Pilots have crashed aircraft following autopilot instructions over their own instruments. Executives, working in environments with far less feedback discipline than aviation, are exposed to the same drift without the safety culture that catches it.

The second is cognitive offloading.(5) The brain treats external systems as part of its own memory, a phenomenon researchers have studied since the early days of search engines. The "Google effect" showed that people remember where information lives more than the information itself. Generative AI extends this from facts to reasoning. Leaders increasingly remember which prompt produced a useful answer rather than the line of thought that would have produced it unaided.

The third is deskilling through disuse.(6) The cognitive muscles behind judgment, the slow synthesis of weak signals across decades of experience, atrophy when they are not exercised. A senior leader who used to walk into a room and feel something was off about a deal now reaches for the data room and the AI summary first. The intuition is still there in principle. In practice, it is being trained out.

What makes this particularly difficult to spot is that the heaviest offloading is happening in the place organisations are watching least. Enterprise-wide copilots and chatbots have scaled rapidly, but, as McKinsey notes, deliver diffuse, hard-to-measure gains spread thinly across employees. Nobody is tracking the individual productivity uplift, which means nobody is tracking the individual judgment decline either. The deskilling is invisible by design.

The cognitive debt assessor

A useful way to assess whether you are on the path to cognitive debt is to ask three questions.

What is being offloaded?

Look for the decisions where you can no longer articulate why you reached the conclusion you did, only that the system suggested it. Strategy reviews where the deck was generated from a prompt and the meeting now revolves around editing the slides rather than interrogating the assumptions. Customer insight that arrives pre-summarised, where no one on the team has spoken to a customer this quarter. Talent calls where ranking algorithms or sentiment dashboards have replaced the slow read of a person across multiple contexts. Risk judgments that have moved from experienced unease to a green tile on a screen.

What is it costing?

The cost ledger has four lines. Decision quality declines first, often invisibly, because the leader is now optimising within the frame the tool offered rather than questioning whether the frame is correct or too narrow. Intuition erodes next, as the muscle of weak-signal synthesis goes unused. Organisational learning slows, because the tacit knowledge that used to be passed down through the apprenticeship model is now compressed into prompt libraries that capture the answer without the reasoning. And the workforce becomes prompt-dependent, a generation of analysts and managers who can produce polished output but struggle to think when the tool is unavailable or wrong.

What needs to stay human?

This is the discipline that separates organisations that will use AI well from those that will be hollowed out by it. The “no touch” zone is the part of the work where your competitive advantage actually lives. For most businesses, that is the judgment layer above the data, the customer relationship that no model can simulate, the cultural read of an organisation and the ethical calls that cannot be delegated to a system because accountability cannot be delegated.

What deliberate practice looks like

Three practices, none of them complicated, hold the line.

The first is the unaided first draft. Before any AI assistance is invited into a strategic question, the leader writes their own answer. A page, longhand if it helps. Only then does the tool come in, as a challenger rather than the originator. This single habit preserves the muscle that is otherwise quietly retired.

The second is the explicit human-in-the-loop on customer-facing judgment. The closer a decision sits to the customer value proposition, the higher the bar for human authorship. If your differentiator is trust, advice or relationship depth, automating the judgment layer is not efficiency. It is a slow erosion of the thing customers were paying for in the first place.

The third is what I call provenance discipline. For any significant decision, the leader should be able to answer two questions. Where did this conclusion come from? And what would have to be true for it to be wrong? Tools that cannot show their reasoning are not banned, but their outputs are treated as hypotheses rather than findings. This restores the questioning posture that automation bias quietly removes.

Notably, the organisations McKinsey identifies as AI high performers, the small share actually getting material value from their investments, are nearly three times more likely than their peers to have fundamentally redesigned their workflows around AI, and significantly more likely to have defined explicit processes for when model outputs require human validation.(7) The leaders extracting the most value from AI are not the ones offloading the most. They are the ones who have thought hardest about what to keep human.

The compounding choice

Cognitive debt, like technical debt, does not announce itself. It shows up in the quality of the next strategic decision, the depth of the next customer conversation, the calibre of the next generation of leaders coming through. By the time it is visible in performance, the interest has been compounding for years, and the organisation is already paying the price.

This is the road to what we might call advanced ignorance: organisations equipped with the most sophisticated decision-support tools in history, run by leaders who can no longer reconstruct how their decisions were made, in cultures where no one is expected to. The tools themselves are not the problem. The absence of deliberate thought about what they are for, and what they are quietly displacing, is.

The pressure to deploy AI is not going away, and nor should it. But the leaders who will navigate the next decade well are not the ones who adopt AI fastest, nor the ones who resist it. They are the ones who pause long enough to ask what they are deploying it for, audit what they are quietly offloading in the process and protect the human judgment that makes their organisation worth choosing in the first place.

The tools are getting better at producing answers. The advantage will belong to leaders who stay better at asking the right questions.

Author: Antonios (Tony) Christodoulou

CEO and Founder, Cyber Dexterity | Adjunct Faculty, GIBS Business School (Gordon Institute of Business Science) | PhD candidate in Cyberpsychology at Capitol Technology University, US | Former CIO of a Global Fortune 500 company, American Tower Corporation.

References and links used:

El Tarhouny, Shereen, and Amira Farghaly. ‘Deskilling Dilemma: Brain over Automation’. Frontiers in Medicine 13 (February 2026). https://doi.org/10.3389/fmed.2026.1765692.

Gerlich, Michael. ‘AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking’. Societies 15, no. 1 (2025). https://doi.org/10.3390/soc15010006.

Goddard, Kate, Abdul Roudsari, and Jeremy C. Wyatt. ‘Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators’. Journal of the American Medical Informatics Association 19, no. 1 (2012): 121–27. https://doi.org/10.1136/amiajnl-2011-000089.

Kosmyna, Nataliya, Eugene Hauptmann, Ye Tong Yuan, et al. ‘Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task’. Version 2. Preprint, arXiv, 2025. https://doi.org/10.48550/ARXIV.2506.08872.

Lester, Toby. ‘AI Is Making the Workplace Empathy Crisis Worse’. Psychology and Neuroscience. Harvard Business Review, 20 August 2025. https://hbr.org/2025/08/ai-is-making-the-workplace-empathy-crisis-worse.

Singla, Alex, Alexander Sukharevsky, Bryce Hall, Lareina Yee, Michael Chui, and Tara Balakrishnan. The State of AI: Global Survey 2025. McKinsey, 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.

Sukharevsky, Alexander, Dave Kerr, Klemens Hjartar, et al. Seizing the Agentic AI Advantage. McKinsey, 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage.

(1) Singla et al., The State of AI: Global Survey 2025.

(2) Lester, ‘AI Is Making the Workplace Empathy Crisis Worse’.

(3) Kosmyna et al., ‘Your Brain on ChatGPT’.

(4) Goddard et al., ‘Automation Bias’.

(5) Gerlich, ‘AI Tools in Society’.

(6) El Tarhouny and Farghaly, ‘Deskilling Dilemma’.

(7) Sukharevsky et al., Seizing the Agentic AI Advantage.
