OpenAI CEO Sam Altman’s words haunt Claude AI: “Anthropic’s model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works”

What you need to know

  • Anthropic has been slapped with a lawsuit by a group of authors for copyright infringement.
  • The company is allegedly training its Claude AI model using the authors’ content without consent or compensation.
  • OpenAI CEO Sam Altman had previously admitted it’s impossible to create ChatGPT-like tools without copyrighted content.

The lawsuit against Anthropic is the latest chapter in the ongoing clash between AI companies and copyright holders, following similar suits against giants like OpenAI and Microsoft.


It appears that OpenAI CEO Sam Altman may have been right. Building AI models like ChatGPT evidently involves using copyrighted content: both Microsoft and OpenAI have faced a string of copyright infringement lawsuits over the past few years, and Anthropic is now embroiled in a similar dispute, with several authors suing the company on the same grounds.

According to the complaint, Anthropic used the works of authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson to train its Claude AI chatbot to respond to human prompts. The company is now facing a proposed class action lawsuit from the authors, who accuse it of violating copyright law by using their material without consent or compensation.

According to the lawsuit: 

Anthropic’s model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works. Humans who learn from books buy lawful copies of them, or borrow them from libraries that buy them, providing at least some measure of compensation to authors and creators.

Founded in 2021, Anthropic set out to push generative AI forward while delivering safe and trustworthy models. Interestingly, OpenAI co-founder John Schulman recently announced his departure from ChatGPT’s maker to focus on AI alignment work at Anthropic. The company is widely viewed as an OpenAI rival, and its leading models share many of the same capabilities.

For instance, its recently unveiled Claude 3.5 Sonnet model ships with vision capabilities and a great sense of humor, putting it on an even footing with OpenAI’s GPT-4o.

Meanwhile, Anthropic is fighting a separate court battle over allegations that it used lyrics from copyrighted songs without permission or payment. AI companies such as Microsoft and OpenAI argue that training their models on copyrighted material falls under “fair use,” and that the law does not explicitly prohibit training AI models on copyrighted content.

What would happen if AI models were barred from using copyrighted content?

Although tech companies currently face few restrictions on using copyrighted material to build their AI models, multiple reports suggest these chatbots are getting dumber, frequently serving up incorrect answers or veering off course.

Some AI chatbots have suffered wild hallucinations, inaccurately recommending a food bank as a tourist attraction and even inviting readers to take a survey on the cause of a woman’s tragic death. Google’s AI Overviews feature infamously advised eating glue and rocks. The problem could get worse if AI chatbots are barred from training on copyrighted material.
