Nobel Prize winner claims former OpenAI Chief Scientist fired Sam Altman because he “is much less concerned with AI safety than profits” — and suggests superintelligence might be on the horizon: “We have maybe 4 years left” before human extinction

What you need to know

  • According to a Nobel Prize winner, his former student Ilya Sutskever, former OpenAI Chief Scientist, fired Sam Altman because of his focus on generating profits rather than developing safe AI. 
  • The ChatGPT maker has been in the spotlight for prioritizing the development of shiny products while putting testing and safety on the back burner.
  • The AI firm reportedly rushed the launch of its GPT-4o model and even sent out RSVPs for the launch party before testing had begun.

As a lifelong technology enthusiast and AI research aficionado, I find myself deeply troubled by the latest developments at OpenAI. Having closely followed the trajectory of AI research since its inception, I have always admired the noble intentions behind this pioneering organization. However, recent reports suggesting that profit motives are overshadowing safety concerns are alarming and unsettling.


Over the last few months, OpenAI has been attracting attention for several reasons, including financial concerns: predictions of potential bankruptcy put its estimated losses at approximately $5 billion over the next year. Key executives, including Chief Technology Officer Mira Murati and Chief Scientist Ilya Sutskever, have also left the company, and there is speculation that OpenAI might transform into a for-profit entity to prevent hostile takeovers and external meddling. If that restructuring does not go through, the company could reportedly have to return the $6.6 billion raised during its recent funding round to investors.

If you haven’t heard, scientists John J. Hopfield and Geoffrey E. Hinton were awarded the Nobel Prize in Physics for groundbreaking discoveries that contributed significantly to the creation of artificial neural networks. This technology is widely employed by tech giants such as Google and OpenAI to power their search engines and chatbots.

Notably, Ilya Sutskever, who once served as OpenAI’s Chief Scientist, was formerly a student of Geoffrey E. Hinton. Hinton has described Sutskever as an ‘intelligent’ student, admitting that he outsmarted him in some respects and played a significant role in his work.

Hinton went on to recognize Sutskever’s accomplishments during his tenure as OpenAI’s Chief Scientist, while also taking the moment to voice concerns about the direction of ChatGPT’s development under Sam Altman’s guidance, even suggesting that Altman prioritizes profits over AI safety.

"Nobel Winner Geoffrey Hinton says he is particularly proud that one of his students (Ilya Sutskever) fired Sam Altman, because Sam is much less concerned with AI safety than with profits" (via r/OpenAI)

For context, Sam Altman, the CEO of OpenAI, was briefly removed from his position by the board of directors. The move caused an uproar among employees and investors and was reversed a few days later. At the time, Ilya Sutskever, who sat on the board, is believed to have backed Altman’s dismissal. Notably, after Altman was reinstated, Sutskever did not return to work, which might suggest tension in their working relationship.

The physicist also discussed the fast advancement of AI and the potential risk it poses to humanity. Hinton suggests we are nearing the point where superintelligence can be achieved, a development that could lead to the demise of humankind. However, making this a reality could reportedly require approximately $7 trillion and several years to construct 36 semiconductor plants and additional data centers.

Meanwhile, Hinton says he is getting his affairs in order, as he anticipates we may have only around four more years left. Previously, a former OpenAI researcher suggested that the company could be approaching a significant milestone, yet might not be ready to handle everything that such a step involves.

Is safety still paramount for OpenAI?

Amid the wave of high-ranking executives leaving OpenAI, Jan Leike, who formerly led alignment, stated that he had frequent disagreements with management over key concerns in developing next-generation AI models, such as security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and more. This suggested to Leike that the company was primarily interested in creating attractive products, with safety considerations taking a back seat.

A separate report seems to support Leike’s views: the safety team at OpenAI was reportedly urged to expedite testing of the GPT-4o model, and invitations for the GPT-4o launch party were allegedly sent out by ChatGPT’s maker before testing had actually taken place. An OpenAI representative acknowledged that the launch was challenging for the safety team but asserted that the company did not compromise on safety protocols during the product launch.

Ilya Sutskever, previously Chief Scientist at OpenAI, chose to depart earlier this year to devote his attention to a project that was personally meaningful to him. He subsequently established Safe Superintelligence Inc., an organization dedicated to developing safe superintelligence, a crucial concern in the rapidly advancing field of AI.

Initially, Sutskever’s departure appeared inconspicuous from the outside; however, top executives privately expressed concerns about a potential collapse of the firm. They reportedly made attempts to persuade the Chief Scientist to return, but finding a suitable role for the co-founder within the company’s ongoing restructuring proved challenging.
