June 26


Latest Judge Rulings Muddy The Waters On AI Training Legality

AI, Authors, Finance

A U.S. judge has ruled that Anthropic's use of copyrighted materials to train its AI models falls under the doctrine of "fair use." The judge did not, however, absolve Anthropic of liability for illegally pirating those materials before later deciding to buy copies of them.

The ruling directly contradicts another judge's finding earlier in 2025, which determined that training AI models on copyrighted materials doesn't constitute fair use. That ruling, however, was not specific to generative AI models. 

One day later, another judge tossed a lawsuit against Meta — but only because he determined the authors made "the wrong arguments" in the case. He then seemed to invite another lawsuit against Meta.

So what does it mean for creators? 

Fair Use Defense Gets A Big Boost In AI Training

U.S. District Judge William Alsup ruled that the products created by chatbots are "quintessentially transformative" of the source materials they train on. He likened machine learning to a human being inspired by written works. "Like any reader aspiring to be a writer, Anthropic’s (AI large language models) trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different," Alsup wrote in his ruling.

This ruling will almost certainly be cited by other companies facing lawsuits over their use of copyrighted materials, including OpenAI and Meta. However, Alsup said Anthropic must still stand trial for allegedly stealing the materials used to train its models from piracy websites. Other companies face similar allegations, including Meta, which allegedly drew on millions upon millions of pirated works.

The core of the fair use defense is that it's permissible to use copyrighted materials in creating a largely new piece of content. It's the doctrine that allows YouTubers to make movie-review videos that include clips of those movies, among many other uses. Critically, however, fair use doesn't mean you don't need to pay for that material; it simply means you don't need to ask for permission.

Fair use is a key protection in copyright infringement cases. A company found to have infringed copyrights not covered by fair use may be required to pay up to $30,000 per work in statutory damages.

Judge Dismisses Lawsuit Against Meta — But On A Technicality

A San Francisco judge also dismissed a massive lawsuit against Meta. He did not rule on whether Meta broke the law by pirating materials to train its AI, but instead granted the dismissal on technical grounds.

U.S. District Judge Vince Chhabria ruled only that the authors in the lawsuit "made the wrong arguments," not that companies are off the hook for the practice. "This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful," Chhabria wrote in his ruling. "It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."

That language strongly suggests Meta could face serious trouble in a different lawsuit.

"In his 40-page ruling, Chhabria repeatedly indicated reasons to believe that Meta and other AI companies have turned into serial copyright infringers as they train their technology on books and other works created by humans, and seemed to be inviting other authors to bring cases to his court presented in a manner that would allow them to proceed to trial," the Associated Press reports. 

Companies Still On The Hook For Stealing

As mentioned above, however, Alsup made a careful point to differentiate fair use from fair compensation. Court documents revealed that Anthropic was originally training its models on the back of pirated materials. Employees within the company expressed doubts that what they were doing was legal. 

The company eventually shifted strategy, choosing to buy physical copies of books, rip off their spines, and scan them into its models. "That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages," Alsup said in his ruling.

That is the key issue here: ruling that companies can use copyrighted materials to create generative AI doesn't mean they don't have to pay for those materials. So while some companies have lauded the decision, it doesn't protect them from allegations of scraping the internet and using content without paying for it.

What It Means For Creators

So, here we are — two rulings in one week that would seem to support companies over creators. But upon closer inspection, what we have is actually a much more nuanced, and at points contradictory, outcome. 

After all, one key issue here is that the authors may have failed to prove that generative AI substantially harms them commercially. That could be easier to prove in the case of music created and uploaded to streaming platforms, where streams of artificially generated tracks directly reduce the royalties paid to other artists.

And what about news organizations? They could argue that generative services trained on their materials fundamentally hurt their business by regurgitating the news in a way that subverts paid subscriptions. Video creators could argue that generative video trained on their work is being used to supplant the actors, directors, cinematographers, editors, and other people who made the original works.

And then there's the human element overall. Alsup compared large language models to humans, but the two are fundamentally different, and we already know that material generated by machines in this way isn't protected by copyright. Another judge may have a differing opinion on just how similar an LLM is to a human brain.

But one thing is certain: these rulings are not the strong rebuke creators and copyright holders hoped for, and they give fuel to companies that may now feel even more bullish about their use of copyrighted materials.

