Judge Rules for Meta in Authors' Copyright Lawsuit Over AI Training
On June 25, 2025, U.S. District Judge Vince Chhabria granted summary judgment to Meta Platforms Inc. in a copyright infringement lawsuit brought by 13 authors, including Sarah Silverman and Ta-Nehisi Coates. The authors alleged that Meta used their copyrighted works without permission to train its artificial intelligence system, Llama.
Judge Chhabria ruled that the plaintiffs failed to present sufficient evidence of market harm caused by Meta's conduct, a key factor in the fair use analysis under U.S. copyright law. He emphasized that while the ruling favors Meta, it does not establish that using copyrighted materials for AI training without consent is lawful in all cases, and he noted that other authors might press similar claims with stronger arguments. He stated, "This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful."
The lawsuit, filed in 2023, accused Meta of using pirated versions of the authors' books to train its AI system without obtaining permission or providing compensation. The authors contended that this unauthorized use constituted copyright infringement and sought damages for the alleged violations.
In his decision, Judge Chhabria found that the plaintiffs had not adequately demonstrated that Meta's use of their works caused significant market harm, which he treated as a decisive factor in the fair use analysis. Even so, he signaled that AI companies should not assume the question is settled, remarking, "These products are expected to generate billions, even trillions of dollars for the companies that are developing them. If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it."
This ruling underscores the ongoing tension between technological innovation in AI and the rights of content creators. The decision suggests that while AI companies may argue fair use in training their models, they must be prepared to address potential market harm to original content creators.
This case is one of a series of legal challenges over the use of copyrighted materials in AI training. In a related case, U.S. District Judge William Alsup ruled that AI company Anthropic's use of books to train its chatbot, Claude, qualified as fair use, although the company must still face trial over its acquisition of books from pirated sources. Together, these decisions indicate that while AI training may be considered transformative and potentially fall under fair use, how training data is acquired and whether plaintiffs can demonstrate market harm are critical factors in determining legality.
The decision is part of a broader legal discourse on the intersection of AI development and copyright protections. Future cases may further clarify the boundaries of fair use in AI training and the obligations of tech companies toward content creators.