
The legal AI race to the bottom

Why the real competition is beneath the surface, by Aalia Manie, Partner: Webber Wentzel and Director: Webber Wentzel Fusion
Johannesburg, 11 Mar 2026
Aalia Manie, Partner: Webber Wentzel and Director: Webber Wentzel Fusion.

TL;DR: The race to the bottom in legal AI is real and it runs deeper than headlines suggest. Rather than a competitive sprint across the market, it demands a deliberate descent into foundational questions about what makes legal work valuable and trustworthy. Many organisations are adopting AI faster than they are building the governance, validation capability and human judgment needed to run it responsibly. The value equation extends far beyond cost minus time saved: it must account for strategic benefits that never appear on a spreadsheet and hidden costs that accumulate beneath the surface. The most significant tax is the "validation burden": a risk compounded by poorly designed systems, agentic AI, "vibe coding" and the hollowing out of human judgment. What's required is not hesitation but rigorous operational discipline. The floor only rises if foundations go deep enough to hold it.

Consider a corporate legal team that deployed AI to review a batch of customer contracts. Output was fluent and thorough. Deals closed faster, renewal rates climbed and the legal function was applauded for its speed. Months later, a material contract surfaced with a condition carrying acceptable legal risk but unacceptable commercial risk. Confidence in the system had grown to the point where outputs stopped being meaningfully checked and the critical commercial context was overlooked. The fix required three senior lawyers, days of renegotiation and an uncomfortable CFO conversation that eroded trust in the system.

This is not a technology problem or an argument against legal AI. It is a cautionary tale. Legal AI is seductive. It is easy to become so enamoured with the speedometer that we stop looking at the map.

As business expectations, regulations and AI capabilities evolve in parallel, we need the agility to operate at multiple speeds. The legal AI race runs deeper than price and velocity. It demands that we examine the mechanics of the technology, the economics of deployment and the ethics of reliance with equal rigour.

The value equation

The dominant narrative runs like this: AI compresses time, so costs must fall and legal services budgets face an inevitable race to the bottom. The logic appears self-evident. It is radically incomplete.

For corporate legal teams, this narrative creates immediate pressure. CFOs and boards expect reduced legal spend and overhead; business units demand faster turnaround. Yet the legal function must still deliver quality, manage risk and justify increasingly scrutinised budgets.

Meeting that pressure demands a sharper question than: "How much does this cost?" It requires asking: "Can we prove the value this delivers?" Speed and direct cost savings tell only part of the story. AI surfaces commercial insights buried in contract data, captures institutional knowledge before it walks out the door and enables strategic repositioning that no time-savings spreadsheet will ever reflect. These indirect returns are real, but they remain invisible without deliberate measurement. At the same time, AI introduces hidden taxes – governance overhead, training burdens, validation workflows, operational dependencies that quietly erode projected gains. The challenge, then, is building a defensible value equation that captures both sides honestly.

Organisations that do not examine where legal value really sits will mistake output acceleration for genuine improvement. The promise of AI is not merely to do existing work faster but to deliver what was previously impossible. Those who optimise only for visible throughput risk eroding the standards that make legal work valuable in the first place.

Validation risk and the speed mirage

A January 2026 Workday report found that 37% of AI-driven time savings are eroded by correcting, clarifying or rewriting AI output. Anecdotally, practitioners report similar patterns: Scottish solicitor Brian Inkster has described a roughly 2:1 ratio of time needed to verify AI-generated legal material relative to time saved in initial generation.

The assumption that faster output means better outcomes ignores a fundamental reality: in accuracy-dependent work, the bottleneck has shifted from creation to validation. The question is no longer: "Was this produced quickly?" but "Is this insightful, helpful and defensible?" Tools designed to compress timelines could end up expanding them if that question goes unasked.

Poorly deployed AI generates more material to review, more arguments to address and more strategic paths to evaluate – without making legal work more effective. Where AI merely shifts effort from drafting to verification without net overall efficiency or value-add, the economics must be scrutinised.

The validation burden is not uniform. Generative AI carries the heaviest tax because its outputs are fluent – they seem right even when wrong, commercially unsuitable or contextually irrelevant. As models grow more sophisticated and outputs become more comprehensive, errors paradoxically become harder to detect: the language is more convincing and there is more material to verify.

Not all platforms impose equivalent validation burdens. Their design and architecture can ease some of the pain. Competent systems minimise error rates. Superior systems do more than minimise: they streamline the validation process itself. A platform that produces fewer mistakes, but obscures traceability, still imposes disproportionate review costs. The procurement question is not only: "What is the accuracy benchmark?" but "Does this reduce total effort and lead to a better outcome?"

The dangerous middle ground: Where stakes and over-reliance collide

Decision-makers who treat all legal AI as one monolithic category will systematically misjudge both cost and value. The economics vary with the stakes.

For high-volume, lower-stakes work (for example, standard NDAs, compliance checklists), AI delivers unambiguous returns with appropriate spot-checking. For high-stakes work, the economics invert. Bet-the-company transactions rarely face pricing pressure; clients pay for judgment, not speed. Here, the risk is cognitive anchoring: a comprehensive AI-generated draft may subtly constrain the reviewer's analysis.

The danger lives in the middle ground: moderately complex contracts, research synthesis, due diligence. The work matters enough that spot-checking is insufficient, yet thorough verification is unsustainable. Organisations default to over-reliance, not because they've evaluated the risk, but because the alternative feels too expensive. Here, the "speed mirage" does its most consequential damage. If the market adopts AI in this shallow way, the floor does not rise. Once clients, firms and teams grow accustomed to faster, cheaper, more polished but less carefully interrogated work, the whole system starts competing on the wrong axis.

Agents and vibe coding: Higher reaches and deeper risks

Agentic AI (systems that chain multiple steps such as researching, reasoning and drafting) amplifies this risk exponentially. Unlike single-output tools, agentic workflows propagate errors across sequential steps: a flawed intermediate conclusion cascades through every downstream output. The validation burden is no longer confined to reviewing a final product. Bottoming out the risk in an agentic system means exhausting every possible failure mode at every layer.

"Vibe coding" – using conversational AI to generate functional code without deep technical expertise – extends this validation problem to its logical extreme. Lawyers can now prototype tools and automate workflows without coding skills or access to engineering resources, but in doing so they construct systems in a language they cannot read, then rely on the same probabilistic engine to confirm its own work. The same professional who would never advise a client to sign an unreadable contract may be deploying unverifiable code that carries real business impact. Where the task does not require deep comprehension of the output, vibe coding delivers. Beyond that threshold, it may present the purest form of the validation gap: output that is functional but fundamentally uncheckable by the person who commissioned it.

The hollowing out of know-how: When nobody is left to drill

Another hidden cost is the depletion of know-how. Senior lawyers who intuitively catch subtle errors will not be around forever. If juniors learn only AI-mediated drafting, can they meaningfully verify machine output? As human audit capacity diminishes, the validation tax compounds rather than declines.

Agentic AI intensifies this challenge. Agentic systems compress or eliminate the intermediate reasoning stages through which practitioners develop expertise. Junior lawyers who interact only with an agent's final output never see the research paths explored, authorities weighed or analytical choices made. The capacity to spot a subtly wrong conclusion atrophies when individuals outsource to a black-box system with no ability to audit, understand and amend stages or actions before they are taken.

In architecture, adding weight to a structure requires reinforcing what lies beneath it. The same principle applies here. To raise the standard of legal work sustainably, organisations must invest in what sits underneath: structured training pathways, active supervision and deliberate practice that rebuild human judgment alongside AI capability.

Beneath the bottom line: True cost and returns

The ACC and Everlaw reported in October 2025 that among 657 in-house professionals surveyed, generative AI adoption more than doubled in a year (23% to 52%), yet only 12% of teams track technology ROI in a structured way. Organisations are making consequential decisions inside a metric vacuum. They cannot say, with evidence, what AI changed in cycle time, error rates or business outcomes. GCs who cannot demonstrate ROI with real data risk losing credibility and control over their own innovation roadmaps.

This is not a new pattern. Legal departments have repeatedly built technology business cases on productivity and cost reduction, only to be blindsided by adoption friction and verification overhead that eroded anticipated returns – while indirect value went untracked and uncredited. The difference now is pace and stakes: AI adoption is accelerating faster than teams can build the measurement infrastructure to capture what it truly costs and what it truly delivers.

The cost of legal AI extends well beyond licence or development fees. Effective deployment demands integrating AI into human-centric processes in ways that require significant internal labour, change management and opportunity cost. Training is a recurring line item: teams must be upskilled continuously as models and interfaces evolve. Regulatory and governance obligations compound the burden, particularly where AI touches sensitive data, triggering rising cyber insurance premiums and expanding compliance requirements. Vendor dependency introduces its own risks: switching costs, platform lock-in and the quiet erosion of negotiating leverage as workflows become entangled with a single provider's architecture. And none of this is static; ongoing maintenance, recalibration and performance monitoring demand sustained investment long after go-live. Each of these costs is individually manageable, but collectively they represent a material and compounding investment that must be weighed against demonstrable, not assumed, value.

The dynamic has precedent. Decades ago, economists observed that despite massive investment in computing, productivity remained stubbornly flat – a phenomenon known as the Solow paradox. Gains eventually materialised, but only after organisations made complementary investments in workflow redesign, training and process change.

The lesson is not that AI cannot deliver value, but that value does not arrive on installation. AI investments are not set-and-forget decisions.

Winning the race: Depth first, then speed

Beneath the immense promise of legal AI lurks an understated threat – one that has nothing to do with whether to invest in AI (we must) and everything to do with how and when we do so.

Legal AI delivers real, measurable returns in the right contexts. That much is settled. The harder truth is that realising them demands what most organisations haven't yet built: clear baselines, governance that functions under pressure, and the institutional patience to measure before scaling and verify before trusting. The technology may be ready, but the organisational scaffolding around it is still being improvised.

And this is where the strategic tension lives. The pressure to innovate at pace is legitimate, but pace without operational discipline produces adoption without accountability. The answer is to implement selectively, track outcomes rigorously and adapt in response to data rather than marketing cycles or the latest model announcements.

The race to the bottom in legal AI is real. But it was never simply about the numbers. Price may find a floor, but our standards should be actively raised and not left to drift. Technology will evolve, regulations will tighten, vendor economics will restructure, and without deliberate intervention, the human capacity to validate what AI produces will quietly diminish. The organisations that thrive through this transition won't simply be the fastest to adopt. They'll be the ones who understood the value equation deeply enough to lay the groundwork before competitive pressure made it feel optional. That discipline is difficult. It is also the only foundation strong enough to support what the future demands.
