Sam Altman claims superintelligence might only be “a few thousand days” away from OpenAI’s doorstep, but there are a lot of details to figure out

What you need to know

  • OpenAI CEO Sam Altman says superintelligence might only be “a few thousand days” away.
  • Altman claims that AI will get better with scale and will help make meaningful improvements to the lives of people around the world, including fixing the environment, but a lot still needs to be done.
  • A former OpenAI researcher warned that the firm wouldn’t know how to handle AGI and all it entails.

As a seasoned analyst with decades of experience in the tech industry, I have witnessed the rapid evolution of technology and its profound impact on society. The recent developments in AI, particularly the claims made by OpenAI CEO Sam Altman about superintelligence, have piqued my interest and raised concerns.


As artificial intelligence (AI) becomes more widespread, there has been growing concern about job security, potential human obsolescence, and other pressing matters. Those closely following the technology, such as Elon Musk, predict a major technological leap with AI on the horizon. However, there may not be enough electricity to power its development.

It has also been reported that AI systems such as Copilot and ChatGPT require substantial amounts of water for cooling, roughly three bottles of water for every 100 words generated. Despite these concerns, OpenAI CEO Sam Altman remains optimistic about achieving artificial general intelligence (AGI). However, a former employee of the ChatGPT maker has cautioned that the company might struggle to manage all the consequences that come with AGI.

In a new blog post titled The Intelligence Age, Sam Altman writes:

In the not-too-distant future, perhaps within a few thousand days, we might achieve superintelligence. It could take longer, but I’m confident that we will eventually reach this point.

In short, the CEO based his argument on the rapid progress and broad capability of deep learning. Altman stated that we’ve found an algorithm that can learn from any type of data (or, essentially, understand the underlying patterns that generate that data). Remarkably, the more computing power and data it has access to, the better it becomes at helping us solve complex problems.

Over the last couple of months, key personnel from the company have been vocal about the advancement of their artificial intelligence models and products. At the 27th Milken Institute Global Conference, OpenAI COO Brad Lightcap hinted at this progress.

Over the coming year or so, I believe our current systems will seem outdated and even comical. We anticipate transitioning into a future where they’ll be significantly more advanced.

Separately, Sam Altman said:

Of all the models you may encounter, GPT-4 will stand out as significantly less sophisticated than the ones that follow it. We emphasize frequent releases and continuous improvement, favoring an iterative approach to our deployments.

He even promised, with a high degree of scientific certainty, that OpenAI’s GPT-5 model would be smarter than GPT-4, which he admitted “kind of sucks.”

Superintelligence is no walk in the park

Although Sam Altman aims to attain superintelligence, he openly acknowledges that the endeavor will present significant challenges.

Although there are numerous details we still need to work through, it’s important not to be sidetracked by any specific hurdle. The fact remains that deep learning works, and we will tackle and overcome the remaining issues.

Even in the face of these difficulties, Altman unequivocally asserts that AI will keep getting better with scale. That improvement, in turn, is expected to bring meaningful enhancements to people’s lives around the world, from fixing the climate to unraveling the mysteries of physics and much more.

This news follows the departure of several members of OpenAI’s superalignment team, including co-founder and Chief Scientist Ilya Sutskever. Sutskever announced he was leaving to focus on a project he found personally significant: Safe Superintelligence Inc.

OpenAI has faced criticism for prioritizing shiny new products over robust safety measures, a point underscored by some ex-employees who have likened the company to the Titanic of AI. Reports suggest that invitations for the GPT-4o launch were sent out before safety testing had begun, forcing the safety team to rush through the entire process in less than a week to meet the deadline.

More recently, OpenAI found itself back in court over allegations by Elon Musk that it has significantly departed from its founding mission, along with racketeering claims. It remains to be seen how OpenAI intends to address these complications as it approaches the threshold of superintelligence.
