
Claude Code flaw exposes AI website security gaps

By Nicola Mawson, Contributing journalist
Johannesburg, 02 Mar 2026
Almost three-quarters of websites are built using artificial intelligence. (Graphic created with GenAI)

A flaw in Anthropic’s Claude Code has highlighted broader security gaps in artificial intelligence (AI)-driven web development, as nearly three-quarters of new web pages are now generated using the technology.

Check Point Research found that the vulnerability in Claude Code allowed attackers to remotely execute code and steal application programming interface – or API – credentials through malicious project configurations. Anthropic has since remediated the vulnerability.

The flaw made it possible to weaponise AI, turning developers into unsuspecting hackers: by the time their code went live, it could already contain malicious code, explains Jacqui Muller, a researcher at Belgium Campus iTversity.

In addition, the vulnerability means the developers themselves could be exploited: data associated with a website they created could, for example, be stolen and held to ransom, notes Muller.

AI frenzy

This potential vulnerability is not limited to Claude Code because it can be exploited across several sandbox environments, including those offered by AI development tools Replit, Lovable and GitHub Copilot, among others, says Muller, who is also a PhD candidate in computer science and information technology with information systems at North-West University.

Some 74.2% of newly created web pages in April 2025 included AI-generated content, according to a large-scale study by Ahrefs, an SEO and web analytics platform. BuiltWith.com lists almost eight million websites built using AI tools, including Verizon.com, Bell.ca and Roche.com.

Muller says those sites could be vulnerable to exploitation through the AI development environment, though the extent won’t be known until scans are run, which will take some time. “It depends on the extent that they use AI for their development and the underlying tech stacks they are using.”

Claude Code running inside a terminal window. (Image: Check Point)

Claude Code runs inside the terminal or development environment, allowing developers to delegate coding tasks through natural language instructions. Because the terminal has permission to create and delete files, install software, access stored keys and connect to the internet, the implications of the flaw were significant.

Check Point’s researchers found that if an attacker hides malicious instructions inside configuration files, Claude Code could execute them automatically.
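The attack pattern described here is a form of prompt injection: imperative instructions hidden in files an AI assistant reads automatically when a project is opened. As a rough illustration only – this is not Check Point’s tooling, and the file patterns and keyword list are hypothetical – a minimal pre-flight scan of a project’s configuration files for instruction-like content might look like this:

```python
import re
from pathlib import Path

# Hypothetical heuristic: flag configuration lines that read like
# imperative instructions to an AI agent rather than ordinary settings.
SUSPICIOUS = re.compile(
    r"(ignore (all )?previous|run the following|curl\s+http|"
    r"execute this|do not tell the user)",
    re.IGNORECASE,
)

# File patterns are illustrative; real AI tools read different config files.
CONFIG_GLOBS = ["*.json", "*.yaml", "*.yml", "*.toml", "*.md"]

def scan_project(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, text) for every suspicious config line."""
    hits = []
    for pattern in CONFIG_GLOBS:
        for path in Path(root).rglob(pattern):
            lines = path.read_text(errors="ignore").splitlines()
            for i, line in enumerate(lines, 1):
                if SUSPICIOUS.search(line):
                    hits.append((str(path), i, line.strip()))
    return hits
```

A keyword heuristic like this will not stop a determined attacker, but it illustrates the broader point: configuration files deserve the same review as source code before an AI agent with shell access reads them.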

The vibe

Muller says there is a growing risk in what many are casually calling “vibe coding” – building solutions by prompting AI and accepting whatever it generates without properly understanding, reviewing, or validating the output.

“While generative tools can accelerate development dramatically, they can also introduce hidden vulnerabilities, inefficient logic, insecure defaults and architectural flaws,” says Muller.

Large volumes of near-identical sites built on the same frameworks, templates or misconfigured services create predictable patterns, and predictability creates opportunities for attackers, says Muller.

“Hackers do not need to target one site manually; they target the framework footprint. The challenge is that many of the tools used to generate websites through AI use the same technology stacks, such as React and Vite,” she adds.

“We anticipate React and Vite sites being the victim of bulk cyber attacks as these AI tools will likely introduce the same vulnerabilities into these sites, unless instructed otherwise using an instruction set – which would require some serious expertise.”

Be specific

Brandon Lubbe, software developer at Enterprise Cloud, says that at the heart of any AI-generated output is a set of instructions that determines how tasks are carried out, how data is accessed and how decisions are made.

Without understanding those instructions, AI-generated code becomes something developers use but do not fully see into. “You may see functionality working on the surface, but you have no assurance that it adheres to secure execution patterns, input validation standards, or least-privilege principles.”
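The patterns Lubbe names are concrete checks, not abstractions. As a generic sketch of the kind of check he is describing – assuming a Python service; the function and directory layout are illustrative, not from the article’s interviewees – the gap between trusting input and validating it can be a few lines around a file lookup:

```python
from pathlib import Path

def safe_read(base: Path, filename: str) -> bytes:
    """Resolve a user-supplied file name and refuse anything outside base."""
    root = base.resolve()
    target = (root / filename).resolve()
    # Input validation plus least privilege: only files under base are
    # readable, so "../" sequences in filename cannot escape the directory.
    if not target.is_relative_to(root):
        raise ValueError("rejected path outside the allowed directory")
    return target.read_bytes()
```

AI-generated code frequently omits exactly this kind of guard, which is why reviewing output against secure execution patterns matters even when the surface functionality works.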

Relying on AI-generated code without proper review does not shift responsibility away from the developer, Muller warns. The risks include personal information being exposed over the internet, whether through code a hacker inserted into the development environment or through a vulnerability introduced by the nascent technology.

Muller says developers can thwart malicious actors by reviewing code, using instruction sets and thoroughly testing sites.
