
AI ‘stars’ threaten humanity’s soul

By Nicola Mawson, Contributing journalist
Johannesburg, 23 Jan 2026
AI-created actors flatten storytelling as they lack lived experience. (Image: Nicola Mawson | Freepik)

Generative AI (GenAI) characters are increasingly becoming celebrities, a trend that is eroding humanity’s creative and cultural foundations, Hollywood warns.

The rise of AI-generated personalities − such as actress Tilly Norwood, performer Lil Miquela, models Shudu and Imma, as well as the resurrection of Peter Cushing in Rogue One − also makes it harder for the public, regulators and courts to hold anyone accountable for harm.

Duncan Crabtree-Ireland, national executive director and chief negotiator of the Screen Actors Guild-American Federation of Television and Radio Artists, says by treating synthetic systems like humans, audiences could be misled, creative standards lowered, and ethical and legal clarity regarding responsibility erased.

The billion-dollar illusion

Research shows the market for virtual celebrities is growing rapidly. Straits Research says the global virtual influencer market is expected to grow from $8.30 billion (R133.6 billion at this morning’s exchange rate of R16.10) in 2025 to $111.78 billion (R1.8 trillion) by the end of the decade.

AI live platform inSnapAI notes that the number of AI influencers has grown from a few hundred in 2024 to more than 20 000 by the middle of last year. “AI influencers are making waves,” it states.

AI actress Tilly Norwood caused a stir when she appeared on the scene in October. (Source: tillynorwood on Instagram)

Research shows there are tangible benefits to using AI characters. A paper published in August on the academic portal Advances in Consumer Research found that AI-optimised influencer content generates 37% higher engagement rates than traditional influencer marketing. Purchase intent increased by 42% when AI-driven personalisation was employed in campaigns, it states.

The Massachusetts Institute of Technology found last year that using GenAI can reduce video-ad production costs by approximately 90%, making personalised video campaigns economically viable at scale for the first time.

A craft

Crabtree-Ireland, writing in a blog published to coincide with the World Economic Forum in Davos this week, notes GenAI can “adopt conversational styles, borrow familiar turns of phrase and increasingly appear with photoreal faces and expressive voices”.

This, Crabtree-Ireland says, has important consequences when artificial systems are described using human professional categories, particularly when they are called “actors”.

“Acting is not simply a task that can be executed. It is a craft shaped by memory, vulnerability, curiosity and choice. To label a synthetic figure as an actor is to erase the distinction between human expression and automated simulation.”

Norwood's reckoning

This is at the heart of the recent controversy surrounding Norwood, created by AI talent studio Xicoia, part of Dutch production studio Particle6. Norwood’s 2025 launch sparked a Hollywood backlash, as stakeholders argued that using AI in this form devalues human artistry and raises ethical concerns over training on real performers’ work.

Eline van der Velden's statement defending AI actress Tilly Norwood. (Source: tillynorwood on Instagram)

Eline van der Velden, founder of Xicoia, defended the creation, arguing that Norwood represents artistic experimentation rather than labour substitution. “She is not a replacement for a human being, but a creative work – a piece of art,” Van der Velden wrote on Norwood’s Instagram account.

Van der Velden, herself an actor, suggested that AI characters should be judged “as part of their own genre, on their own merits, rather than compared directly with human actors”.

Emotion by algorithm

Crabtree-Ireland flags several issues with using AI to create actors, including that it flattens human storytelling. Storytelling resonates, he argues, because performances draw on lived experience, with emotion and insight rooted in real human joy, loss and growth.

“Synthetic systems do not possess experience. They recombine patterns that suggest emotion.”

He says if audiences are saturated with synthetic content, culture itself risks becoming flatter and viewers may confuse “abundance for originality and technical fluency for meaning”.

Crabtree-Ireland comments: “That would be a loss not only for creative workers but for anyone who relies on stories to make sense of the world.”

AI-generated model, Shudu. (Source: shudu.gram on Instagram)

Just as important to retaining humanity in storytelling is the question of accountability. When audiences encounter an “actor”, they understand the term to mean someone who signs contracts, negotiates terms and bears legal responsibility.

Synthetic systems can do none of this. Yet by applying human professional titles to AI, companies risk obscuring the humans who actually designed, trained and deployed these systems – making it harder to assign responsibility when harm occurs.

“If a synthetic news host spreads misinformation or defames a private citizen, responsibility does not rest with the software. It rests with the humans who designed, trained, deployed and marketed it. However, giving these systems human characteristics can obscure that reality, shielding them from scrutiny,” Crabtree-Ireland cautions.

Simulation vs humanity

Despite the furore, Crabtree-Ireland says none of these concerns calls for abandoning digital tools. He explains that audiences understand that characters within stories can express thoughts or emotions that exist only within the narrative.

Problems arise when fictional framing crosses into commercial and real-world contexts, and creative industries should keep narrative and commercial reality clearly separate, he adds.

“Transparent language preserves the meaning of human roles, protects labour categories in law and ensures the public is not misled about who is actually doing the work.”

Disney resurrected Peter Cushing as Grand Moff Tarkin in 2016's “Rogue One: A Star Wars Story”. (Source: Disney Plus)

Companies also need to ensure a human is explicitly responsible for synthetic output, says Crabtree-Ireland. “The more a system resembles a human, the more essential it becomes to identify the humans accountable for it. No synthetic system should function as a shield for harmful outcomes.

“We can use technology to create extraordinary stories, but we cannot confuse simulation with humanity. A synthetic creation is a tool. A performer is a person. If we fail to protect that distinction, we risk losing not only accountability but the human voice at the centre of culture itself.”
