Ex-OpenAI researchers claim Sam Altman’s public support for AI regulation is a facade: “When actual regulation is on the table, he opposes it”

What you need to know

  • OpenAI has opposed a proposed AI bill that would mandate safety measures and practices in the industry.
  • While the ChatGPT maker supports some of the bill’s provisions, it claims regulation should be “shaped and implemented at the federal level.”
  • Former OpenAI researchers say the continued development of advanced and sophisticated AI models without regulation could potentially cause catastrophic harm to humans.

OpenAI's opposition to SB 1047, a proposed California bill designed to promote safety measures in AI development, has sparked heated debate about the current state of AI regulation.


Amid ongoing speculation about OpenAI's finances, including estimates of roughly $5 billion in losses, the ChatGPT maker has come out against SB 1047, a proposed AI bill aimed at implementing safety measures to keep the technology from straying beyond its intended boundaries, as reported by Business Insider.

Privacy and security are major user concerns, underscoring the need for regulation and policy. OpenAI's opposition to the proposed bill has drawn backlash, including from former OpenAI researchers William Saunders and Daniel Kokotajlo.

In a letter addressing OpenAI's opposition to the proposed AI bill, the researchers write:

We initially joined OpenAI to help ensure the safety of its powerful AI systems. However, we ultimately parted ways with OpenAI after losing faith in its ability to develop these AI systems safely, truthfully, and responsibly.

The letter claims that the ChatGPT maker develops sophisticated, advanced AI models without adequate safety measures to prevent them from spiraling out of control.

OpenAI reportedly rushed the GPT-4o launch, sending out invitations before necessary safety tests had been conducted. The company acknowledged that its safety and alignment team was under significant pressure, leaving it limited time for thorough testing.

Despite the company's assertion that no corners were cut, the allegations suggest it prioritized polished product launches over safety procedures. The researchers warn that building AI models without safety measures in place could lead to severe public harm down the line.

AI regulation is crucial, but opposing forces are stronger

Sam Altman, CEO of OpenAI, has openly advocated for AI regulation. He has suggested the technology should be governed much like the aviation industry, with an international body overseeing safety testing of new advancements. Altman also said, "The reason I've advocated for an agency-based approach rather than legislation is that within a year, any laws written would likely contain mistakes."

According to the former OpenAI researchers, Altman's advocacy for AI regulation may appear genuine but is misleading: when concrete regulatory proposals are on the table, he opposes them. However, an OpenAI representative told Business Insider:

“We strongly disagree with the mischaracterization of our position on SB 1047.” 

In a separate letter from OpenAI Chief Strategy Officer Jason Kwon to California State Senator Scott Wiener, the bill's sponsor, the company outlined several grounds for its opposition to the proposed legislation. Chief among them: regulation should be developed and enforced at the federal level.

As Kwon put it:

A unified federal policy on AI, instead of various state regulations, would promote innovation and enable the U.S. to take the lead in establishing worldwide AI standards.

It remains uncertain whether the bill will become law or whether the ChatGPT maker's suggested changes will be adopted. The researchers argue that California cannot wait for Congress to act, since lawmakers there have signaled they are unwilling to pass meaningful AI regulation; and if Congress does act in the future, federal law could preempt California's legislation.

2024-08-26 17:14