A former OpenAI researcher claims the ChatGPT maker could be on the precipice of achieving AGI, but it's not prepared “to handle all that entails” as shiny products take precedence over safety

What you need to know

  • OpenAI has lost almost half of its superalignment team.
  • A former OpenAI researcher attributes the mass exodus of employees from the firm to its focus on shiny products as safety processes take a backseat.
  • The researcher claims OpenAI could be on the brink of achieving AGI (artificial general intelligence), but it isn't prepared to handle all that it entails.

As a seasoned researcher with over two decades of experience in the AI field, I find myself increasingly concerned about the trajectory of OpenAI. Having witnessed the rapid advancements and transformative potential of artificial intelligence firsthand, I have always been an advocate for its ethical and responsible development.


Over the last few months, I’ve noticed quite a few team members from OpenAI stepping away for different reasons. Most recently, co-founder Greg Brockman announced he’d be taking a sabbatical until the end of the year, while researcher John Schulman shared his decision to leave and join Anthropic, where he’ll focus on ensuring AI aligns with human values.

Following Sam Altman’s abrupt dismissal and subsequent reinstatement as CEO by the board of directors, several key executives began departing from the AI company, including Jan Leike, OpenAI’s former superalignment lead. Leike left after repeated disagreements with senior leadership over matters such as safety and adversarial robustness, and he pointed out that safety had been overshadowed by a focus on shipping eye-catching products.

Speaking with Fortune, Daniel Kokotajlo, who worked on OpenAI’s superalignment effort before leaving the company earlier this year, said that over half of the team’s members have already left. “It’s not been an organized event,” Kokotajlo explained. “I believe it’s just people choosing to leave on their own.”

It’s worth noting that when OpenAI was first established, its primary objective was to develop artificial general intelligence in a manner that would benefit all of humankind. Recently, however, the company seems to have shifted away from this altruistic mission and operates more like any other profit-driven enterprise.

Elon Musk has publicly condemned OpenAI for deviating from its founding mission, calling the shift a serious breach of trust. Musk initially sued the company and Sam Altman over the matter but withdrew that suit earlier this year. He has since filed a new lawsuit accusing OpenAI of racketeering, among other claims, with his legal team arguing that the previous suit did not go far enough.

Is OpenAI AGI-ready?

It’s no secret that OpenAI is working toward the AGI benchmark, but there’s growing concern among users about what that would mean for humanity. One AI researcher has put the probability of AI ending humanity at 99.9%, arguing the only way to avoid that outcome is not to build AI in the first place.

Despite establishing a dedicated safety team led by CEO Sam Altman to ensure the technology OpenAI develops adheres to stringent safety and security guidelines, the company appears to be concentrating primarily on product development and the commercial side of its operations.

It’s worth noting that OpenAI apparently rushed the launch of GPT-4o, reportedly sending out invitations to the launch event before testing had been completed, shortly before its safety and alignment team was disbanded. The company acknowledged that the testing team was stretched thin and had minimal time to complete its work because of the pressure it faced.

Ultimately, it’s hard to pinpoint exactly why so many executives and employees have left the company over the past few years. Some have gone on to found rival firms focused on the safe development of superintelligence. Kokotajlo speculates that the exodus may stem from OpenAI being close to reaching the AGI benchmark while lacking the knowledge, rules, and resources needed to handle everything that comes with it.

Elsewhere, OpenAI has opposed a senator’s proposed AI bill aimed at putting safety guardrails on the technology, arguing instead that regulation should be handled at the federal level.
