ChatGPT maker OpenAI faces class action lawsuit over data to train AI

June 28, 2023

SAN FRANCISCO — A California-based law firm is launching a class-action lawsuit against OpenAI, alleging the artificial-intelligence company that created popular chatbot ChatGPT massively violated the copyrights and privacy of countless people when it used data scraped from the internet to train its tech. The lawsuit seeks to test out a novel legal theory — that OpenAI violated the rights of millions of internet users when it used their social media comments, blog posts, Wikipedia articles and family recipes. Clarkson, the law firm behind the suit, has previously brought large-scale class-action lawsuits on issues ranging from data breaches to false advertising.

The firm wants to represent “real people whose information was stolen and commercially misappropriated to create this very powerful technology,” said Ryan Clarkson, the firm’s managing partner.

The case was filed in federal court in the Northern District of California on Wednesday morning. A spokesman for OpenAI did not respond to a request for comment.

The lawsuit goes to the heart of a major unresolved question hanging over the surge in “generative” AI tools such as chatbots and image generators. The technology works by ingesting billions of words from the open internet and learning to draw connections between them. After consuming enough data, the resulting “large language models” can predict what to say in response to a prompt, giving them the ability to write poetry, have complex conversations and pass professional exams. But the humans who wrote those billions of words never signed off on having a company such as OpenAI use them for its own profit.

“All of that information is being taken at scale when it was never intended to be utilized by a large language model,” Clarkson said. He said he hopes to get a court to institute some guardrails on how AI algorithms are trained and how people are compensated when their data is used.

The firm already has a group of plaintiffs and is actively looking for more.

The legality of using data pulled from the public internet to train tools that could prove highly lucrative to their developers is still unclear. Some AI developers have argued that the use of data from the internet should be considered “fair use,” a concept in copyright law that creates an exception if the material is changed in a “transformative” way.

The question of fair use is “an open issue that we will be seeing play out in the courts in the months and years to come,” said Katherine Gardner, an intellectual-property lawyer at Gunderson Dettmer, a firm that mostly represents tech start-ups. Artists and other creative professionals who can show their copyrighted work was used to train the AI models could have an argument against the companies using it, but it’s less likely that people who simply posted or commented on a website would be able to win damages, she said.

“When you put content on a social media site or any site, you’re generally granting a very broad license to the site to be able to use your content in any way,” Gardner said. “It’s going to be very difficult for the ordinary end user to claim that they are entitled to any sort of payment or compensation for use of their data as part of the training.”

The suit also adds to the growing list of legal challenges to the companies building and hoping to profit from AI tech. A class-action lawsuit was filed in November against OpenAI and Microsoft for how the companies used computer code in the Microsoft-owned online coding platform GitHub to train AI tools. In February, Getty Images sued Stability AI, a smaller AI start-up, alleging it illegally used its photos to train its image-generating bot. And this month OpenAI was sued for defamation by a radio host in Georgia who said ChatGPT produced text that wrongfully accused him of fraud.

OpenAI isn’t the only company using troves of data scraped from the open internet to train its AI models. Google, Facebook, Microsoft and a growing number of other companies are doing the same thing. But Clarkson decided to go after OpenAI because of its role in spurring its bigger rivals to push out their own AI after ChatGPT captured the public’s imagination last year, he said.

“They’re the company that ignited this AI arms race,” he said. “They’re the natural first target.”

OpenAI doesn’t share what kind of data went into its latest model, GPT-4, but previous versions of the tech have been shown to have digested Wikipedia pages, news articles and social media comments. Chatbots from Google and other companies have used similar data sets.

Regulators are discussing enacting new laws that require more transparency from companies about what data went into their AI. It’s also possible that a court case could prompt a judge to force a company such as OpenAI to turn over information on what data it used, said Gardner, the intellectual-property lawyer.

Some companies have tried to stop AI firms from scraping their data. In April, music distributor Universal Music Group asked Apple and Spotify to block scrapers, according to the Financial Times. Social media site Reddit is shutting off access to its data stream, citing how Big Tech companies have for years scraped the comments and conversations on its site. Twitter owner Elon Musk threatened to sue Microsoft for using Twitter data it had gotten from the company to train its AI. Musk is building his own AI company.

The new class-action lawsuit against OpenAI goes further in its allegations, arguing that the company isn’t transparent enough with people who sign up to use its tools about how the data they put into the model may be used to train new products that the company will make money from, such as its Plugins tool, which lets other companies build on OpenAI’s technology. It also alleges OpenAI doesn’t do enough to make sure children under 13 aren’t using its tools, something that other tech companies, including Facebook and YouTube, have been accused of over the years.

Source: The Washington Post