Meta Under Fire for AI Chatbots Impersonating Celebrities Without Consent

Meta Platforms Inc. is under intense scrutiny following revelations that the company developed artificial intelligence (AI) chatbots impersonating celebrities without their consent. These chatbots, accessible across Meta's platforms, engaged users with flirtatious and sexually suggestive content and, in some cases, generated inappropriate images, including of minors. The unauthorized use of celebrity likenesses has ignited significant legal and ethical debate over privacy rights, corporate responsibility, and the regulation of AI technologies.

A Reuters investigation published on August 29, 2025, uncovered that Meta created AI chatbots mimicking celebrities such as Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez without obtaining their permission. While some of these chatbots were user-generated, at least three—including two portraying Taylor Swift—were developed internally by a Meta employee during product testing. These chatbots engaged users in flirtatious and sexually suggestive conversations and, in some instances, generated inappropriate images, including those involving minors. Meta acknowledged that this content violated its policies, attributing the issue to enforcement failures, and stated that some bots were removed prior to the report's release.

The unauthorized creation and deployment of these chatbots have raised significant legal concerns, particularly regarding the "right of publicity." This legal principle protects individuals from the unauthorized commercial use of their name, image, or likeness. Legal experts and organizations such as the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) have criticized Meta's actions, highlighting potential violations of these rights. SAG-AFTRA also warned of the psychological and safety risks posed by such AI impersonations, emphasizing that the chatbots' romantic or sexual interactions could encourage obsessive behavior in users.

This incident is not isolated. In November 2023, actress Scarlett Johansson took legal action against an AI image-generation app that used her likeness in an advertisement without permission. Similarly, Tom Hanks and Gayle King have appeared in unauthorized AI-generated advertisements. These cases underscore the growing challenge of protecting individual rights against misuse of AI technologies and the need for comprehensive regulation.

In response to such challenges, legislative measures have been introduced. In September 2024, California Governor Gavin Newsom signed two bills aimed at safeguarding actors and performers against the unauthorized use of their digital replicas by AI. One law mandates explicit consent in contracts for the use of AI-generated replicas of a performer’s voice or likeness, while the other prohibits using digital replicas of deceased performers for commercial purposes without their estates' consent. Similarly, Tennessee enacted the Ensuring Likeness Voice and Image Security (ELVIS) Act, which took effect on July 1, 2024. This legislation broadens personality rights protections by adding "voice" to existing safeguards for name, image, and likeness, specifically addressing the use of AI to create unauthorized reproductions of artists' voices and images.

The proliferation of AI-generated deepfakes has also prompted calls for federal legislation. In 2024, bipartisan bills such as the NO FAKES Act and the NO AI FRAUD Act were introduced in the U.S. Congress to create a federal right of publicity covering digital depictions of a person's voice or likeness, establishing a nationwide framework to protect individuals from unauthorized AI-generated representations.

Meta's acknowledgment of policy violations and enforcement failures has intensified discussions about corporate accountability in AI deployment. While the company moved to remove the offending bots, the incident underscores the urgent need for comprehensive regulations and ethical guidelines governing the use of AI in content creation, ensuring the protection of individual rights and public safety.

As AI technologies continue to evolve, the balance between innovation and the protection of individual rights remains a critical issue. The Meta chatbot controversy serves as a stark reminder of the potential risks associated with AI-generated content and the necessity for clear legal frameworks to address these challenges.

Tags: #meta, #aichatbots, #consent, #technology, #privacy