AI safety researchers leave OpenAI over prioritization concerns

As an analyst with a background in AI ethics and safety, I am deeply concerned by the recent developments at OpenAI. The resignation of key members of its team focused on AI safety, including Ilya Sutskever and Jan Leike, is a troubling sign.


All members of OpenAI’s team dedicated to addressing potential risks posed by artificial intelligence have either resigned or reportedly been absorbed into other research groups within the company.

After Ilya Sutskever, OpenAI’s chief scientist and a co-founder of the company, announced his departure, Jan Leike, the other co-lead of OpenAI’s superalignment team, shared on X that he had tendered his resignation as well.

According to Leike’s statement, he left the company because he felt it was prioritizing product development at the expense of ensuring AI safety.

In a series of posts, Leike argued that OpenAI’s leadership had chosen the wrong priorities, and that the company should instead put safety and preparedness first as it works toward developing Artificial General Intelligence (AGI).

AGI refers to a theoretical form of artificial intelligence capable of matching or surpassing human abilities across a wide range of tasks.

Leike, who had worked at OpenAI for three years, expressed concern that the company was focused on creating eye-catching products at the expense of building a strong AI safety culture and processes. He stressed that his team had struggled to obtain the resources, particularly computational power, needed for its essential safety research.

“…I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time until we finally reached a breaking point. Over the past few months, my team has been sailing against the wind…”

Last summer, OpenAI established a new research unit to prepare for the eventual emergence of AI systems that could surpass human intelligence and potentially pose threats to their creators. Ilya Sutskever, OpenAI’s chief scientist and co-founder, was named one of the team’s co-leads, and the project was promised access to 20% of OpenAI’s computational capacity.

After several team members stepped down, OpenAI decided to disband the “Superalignment” unit and fold its responsibilities into other ongoing research efforts within the company. This move is believed to stem from the organization-wide restructuring set in motion after the governance crisis of November 2023.

In November of last year, Sutskever and other members of the OpenAI board led an attempt to remove Altman as CEO. The decision met strong opposition from OpenAI’s staff, and Altman was soon reinstated to his position.

According to The Information, Sutskever told employees that the board’s decision to dismiss Sam Altman reflected its duty to ensure that OpenAI creates artificial general intelligence (AGI) that serves the best interests of humanity as a whole. As one of the board’s six members, Sutskever underscored its commitment to aligning OpenAI’s objectives with the greater good.

2024-05-18 14:45