
Who is to blame when AI defames?

By Dario Milo, partner, and Lia Wheeler, Candidate Attorney, Webber Wentzel.
Johannesburg, 23 Feb 2026
Lia Wheeler and Dario Milo.

The rise of generative AI has been accompanied by an increase in litigation over false outputs that harm a person's reputation. Sometimes the defamatory output is generated by the AI itself; at other times, AI is deliberately used to create false impressions about a person, as in the case of deepfakes. In either scenario, if a person's reputation or dignity is harmed, a cause of action arises in South African law.

AI-related defamation lawsuits are being brought with increasing regularity. One of the first was brought in Australia in 2023, when Brian Hood, the mayor of Hepburn Shire Council, launched a defamation claim against OpenAI, the owner of ChatGPT. The claim concerned a false output generated by ChatGPT asserting that the mayor had served time in prison on a bribery charge, in a matter where he had, in fact, been the whistleblower. The dispute was resolved in early 2024 after corrections were made to the ChatGPT outputs.

Another interesting case – this time in the United States of America (USA) – involved Robert Starbuck, an American filmmaker, journalist and activist. His complaint, filed on 29 April 2025, set the scene: "Imagine waking up one day and learning that a multibillion-dollar corporation was telling whoever asked that you had been an active participant in one of the most stigmatised events in American history – the Capitol riot on January 6th, 2021 – and that you were arrested for and charged with a misdemeanour in connection with your involvement in that event.

"Further imagine that these accusations were completely false…

"…Finally, imagine that the technology company continued to publish these and other lies about you for nine months after you first asked them to stop."

This was the basis on which Starbuck brought a defamation lawsuit against Meta Platforms, the owner of the Meta AI chatbot. In August 2024, Starbuck discovered that the chatbot was including these false and damaging statements about him in its outputs. According to his complaint, Starbuck "did everything within his power to alert Meta about the error and enlist its help to address the problem". Despite his attempts to bring the matter to the company's attention, however, the defamatory outputs reportedly continued. While all information relating to Starbuck was eventually erased from the text outputs, further misinformation then appeared via the Meta AI voice feature, including claims that Starbuck had "pled guilty over disorderly conduct" in relation to the Capitol riot and that he had "advanced Holocaust denialism".

The question "who is to blame when AI defames?" might have been answered by the Delaware Superior Court in this case, but it never got that far: a public apology by Meta's Joel Kaplan indicated that the "parties [had] resolved this matter" and that they were collaborating to mitigate the risks posed by AI hallucinations.

Another case, also in the USA, involved Mark Walters, a media personality, radio talk show host and Second Amendment (right to bear arms) advocate, who launched a defamation lawsuit against OpenAI in 2023. He claimed that Frederick Riehl, a journalist and editor of a news site focusing on Second Amendment rights, had used ChatGPT, which produced statements alleging that Walters was involved in embezzlement. Walters sued OpenAI as the owner of ChatGPT.

The Superior Court of Gwinnett County in the State of Georgia, however, ruled in favour of OpenAI in May 2025, on various grounds. One was that, as a public figure, Walters had to demonstrate actual malice (knowledge of falsity, or reckless disregard for the truth) on the part of OpenAI. The court held that OpenAI could not be held liable, and the key basis for the decision appears to be that the disclaimer ChatGPT displays below the prompt bar meant that reasonable readers would know that ChatGPT makes mistakes.

In considering whether the disputed output communicated a defamatory meaning as a matter of law, the court applied a "hypothetical reasonable reader" test, noting that "[d]isclaimer or cautionary language weighs in the determination of whether this objective, 'reasonable reader' standard is met". Given the recurrent disclaimers, users of ChatGPT in Riehl's position could not have believed that the output consisted of "actual facts" without taking steps to verify the information. The order referred to Riehl's testimony that he was "sceptical" of the output, knew that it "was not true" and consisted of "the wrong information", and was aware of ChatGPT's capacity to produce hallucinations. Because Riehl did not believe the output, the court concluded that it could not have communicated a defamatory meaning as a matter of law, and confirmed that this alone would have been sufficient to find in favour of OpenAI and grant summary judgment.

In South Africa, while no such cases have yet been decided, AI platforms may not be as fortunate as OpenAI was in the Walters case. In South African law, the publication would likely be regarded as defamatory despite the disclaimer: disclaimers are not "magic wands" that cure defamatory speech. And if, as we believe likely, AI platforms are required to show that they acted without negligence, then a court will need to take a very close look at the systems and processes each platform has adopted. At the very least, such platforms will probably have a duty to act reasonably once notified of defamatory or otherwise unlawful content. As AI platforms operating in South Africa will soon see, there is nothing artificial about a defamation lawsuit.

* Dario Milo is a partner at Webber Wentzel and a member of the firm’s AI specialist team in dispute resolution, advising clients on emerging AI-related disputes, legal issues and potential risks.
