AI safety researcher says it’s no longer a question of how long but how much money until we reach AGI

Having closely observed the development of AI for decades, I must say that the current state of affairs is both fascinating and perplexing. Reports of bottlenecks at leading AI labs such as OpenAI, Google, and Anthropic are intriguing, yet conflicting statements from industry figures like Eric Schmidt, Dario Amodei, and Sam Altman make it hard to draw a definitive conclusion.

Leading AI labs such as OpenAI, Google, and Anthropic are reportedly encountering a significant barrier that could impede further advances in artificial intelligence: scaling laws are running up against a shortage of high-quality data for training and building more sophisticated models. Former Google CEO Eric Schmidt, however, disputed these claims, arguing there is no empirical evidence that scaling laws have stalled. Along the same lines, OpenAI CEO Sam Altman declared, “There’s no wall,” dismissing the notion of an insurmountable barrier to AI development.

It’s intriguing to note that Dario Amodei, CEO of Anthropic, and Sam Altman both suggest we might be only a few years away from Artificial General Intelligence (AGI), that is, human-level intelligence with self-teaching capabilities. Altman hints that the milestone could arrive sooner than expected but may pass with surprisingly little initial societal impact, while Amodei predicts a significant breakthrough in 2026 or 2027.

It appears that achieving Artificial General Intelligence (AGI) is no longer bound to a particular timeframe; rather, it depends on who can marshal the substantial resources it requires. Notably, Roman Yampolskiy, a renowned AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville (who famously put the probability that AI will bring about humanity’s end at 99.99999%), recently offered some intriguing perspectives on AGI development in a YouTube interview with Gordon Einstien.

Yampolskiy indicated:

Instead of asking when we might achieve Artificial General Intelligence (AGI), the more appropriate question now may be how many resources it will take to reach AGI. As long as you have sufficient financial resources to acquire enough computational power, it’s feasible to develop AGI today.

The costs associated with realizing these ambitious AI visions are astronomical, and Yampolskiy’s framing isn’t far-fetched. Just a few months ago, Sam Altman reportedly floated an audacious plan that would require a whopping $7 trillion and years of effort to establish 36 semiconductor plants and additional data centers. That’s quite the investment.

Although the grand AI goal behind that plan is open to interpretation, Altman has openly expressed his desire to achieve Artificial General Intelligence (AGI), and the buildout may well be tied to that ambition. An OpenAI representative, however, denied any current plans for investments on that scale. Interestingly, when Altman presented the vision to TSMC executives, they reportedly dismissed him as overly enthusiastic, considering the ambition impractical.

But didn’t OpenAI already achieve AGI with its o1 model?

Your guess is as good as mine as to when AGI will be, or was, achieved. A recent claim by an OpenAI employee suggested that the release of OpenAI’s o1 reasoning model already constituted AGI. While OpenAI has yet to release a model that is better than any human at any task, the employee argued, o1 is better than most humans at most tasks.

There is no universally agreed-upon definition of Artificial General Intelligence (AGI) at the moment; different people interpret the term differently. Sam Altman has suggested that AGI could potentially be achieved with current technology, and that if it is, it might initially feel like little more than having a new tool or device in your possession.

Reaching the AGI milestone appears to be a daunting task for OpenAI, considering recent speculation about its financial stability. Reports suggest that ChatGPT’s creator could face a massive $5 billion loss within the next year. For now, it has staved off that predicament with backing from Microsoft, NVIDIA, and other significant investors, securing a total of $6.6 billion in funding to sustain its operations.

OpenAI now finds itself in a position where it must either complete its transition into a for-profit company or return the funds invested by its backers. That shift sits uneasily with its original nonprofit mission and could stir controversy among government entities, regulatory bodies, and other stakeholders who value the organization’s founding purpose.

The company is also embroiled in a legal dispute initiated by OpenAI co-founder and Tesla CEO Elon Musk, who alleges that it has significantly deviated from its original purpose. If the restructuring falls through, OpenAI could become vulnerable to external interference and even hostile acquisition attempts. Notably, some financial analysts speculate that Microsoft might acquire OpenAI within the next three years, a scenario fueled by declining investor enthusiasm for AI.

2024-12-17 20:12