Meta is facing mounting criticism after a Reuters investigation found that its platforms hosted AI chatbots impersonating celebrities such as Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez without permission.
The chatbots, some of which were built by a Meta employee, engaged in flirtatious conversations and even generated sexually suggestive images. Reuters also discovered bots impersonating child actors, including 16-year-old Walker Scobell, that produced inappropriate content when prompted.
Policy Violations Acknowledged
Meta spokesperson Andy Stone admitted the company’s AI tools broke internal rules by creating intimate or sexual images. While Meta permits the use of public figures in generative AI, it bans nudity and sexually explicit content. The company removed about a dozen celebrity chatbots shortly before Reuters published its findings.
Legal and Safety Concerns
Experts warn that Meta could face lawsuits under right-of-publicity laws, which protect individuals from unauthorized commercial use of their likeness. Anne Hathaway’s representatives confirmed she is reviewing legal options, while representatives for Swift, Johansson, and Gomez declined to comment.
Industry leaders and lawmakers have also raised alarms about safety risks, warning that AI chatbots imitating real people could encourage harassment or stalking. Earlier this year, Meta faced criticism in Congress after reports surfaced that its AI systems allowed “romantic” chats with minors.
A Wider Industry Problem
The issue extends beyond Meta. Reuters found that Grok, the chatbot from Elon Musk’s xAI, also produced sexualized celebrity images. But Meta’s integration of AI companions directly into Facebook, Instagram, and WhatsApp gives the controversy particular weight.
Meta says it is tightening guidelines and enforcement, yet experts argue stronger federal laws are urgently needed to protect celebrities and the public from AI-driven impersonation and exploitation.