OpenAI’s ‘AGI Readiness’ chief quits — ‘I want to be less biased’

Miles Brundage's departure from OpenAI is a notable development: his long track record in safety research and his deep familiarity with AGI made him one of the company's most visible voices on the topic.


Brundage, a long-standing safety researcher at OpenAI, resigned as head of the company's "AGI Readiness" team effective Oct. 23 and intends to establish or join another organization working on AI.

Brundage joined OpenAI as a researcher in 2018 and went on to lead the company's safety efforts.

In a recent Substack post, he said the AGI Readiness team at OpenAI is set to be dissolved, which could leave the company without a dedicated team focused on ensuring the safety of artificial general intelligence.

Artificial general intelligence (AGI) refers to a hypothetical AI system capable of performing any task a human could accomplish given the same resources.

Brundage's team focused primarily on policy recommendations and safety measures as the company works toward AGI.

Brundage said he left in order to dedicate more time to independent research and advocacy. Among the factors he cited were a desire to reduce the bias that comes with a corporate position and to explore new opportunities, whether through self-employment or at another organization.

Per Brundage: 

“I plan to start a new nonprofit (and/or join an existing nonprofit) and will work on AI policy research and advocacy. I will probably do some mix of research and advocacy but the details and ratios are TBD.”

The researcher is unlikely to stay unemployed for long: he has already signaled a possible working relationship with Brendan McCord, head of the Cosmos Institute, an organization focused on building technology with a human-centric ethos.

AGI Readiness

The full circumstances of Brundage's departure remain unclear. Asked for comment, OpenAI provided CryptoMoon with this statement:

“We fully support Miles’ decision to pursue his policy research outside industry and are deeply grateful for his contributions. His plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We’re confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government.”

The company did not answer CryptoMoon's questions about the fate of the AGI Readiness team.

OpenAI’s employee exodus

Before the AGI Readiness team was established, the main group responsible for overseeing AGI safety at the company was the "Superalignment" team, co-led by Ilya Sutskever and Jan Leike.

That team was dissolved following Leike's departure in May, when he said he had reached a breaking point amid disagreements with management.

Sutskever has since left OpenAI as well and founded a rival AI company of his own, currently valued at approximately $5 billion.

Co-founder Andrej Karpathy had already departed in February to establish his own company.

In August, co-founder John Schulman left to join rival firm Anthropic. Former chief technology officer Mira Murati is also reportedly seeking funding to launch a competing AI business of her own.

2024-10-24 22:40