“We have already achieved AGI. It’s even more clear with O1”: OpenAI employee claims ‘o1’ constitutes AGI after Sam Altman indicated it would whoosh by with “surprisingly little” societal impact

"We have already achieved AGI. It’s even more clear with O1": OpenAI employee claims 'o1' constitutes AGI after Sam Altman indicated it would whoosh by with "surprisingly little" societal impact

What you need to know

  • A technical employee at OpenAI indicates that the firm has already achieved the AGI benchmark following the release of OpenAI o1 to general availability last week.
  • OpenAI CEO Sam Altman had previously indicated that AGI could be achieved sooner than anticipated and would simply whoosh by with “surprisingly little” societal impact.
  • The OpenAI employee acknowledged that the company’s AI models haven’t categorically surpassed human cognitive capabilities, but argued they are “better at most tasks than humans.”

OpenAI’s recent claims of having achieved artificial general intelligence (AGI) have drawn both intrigue and skepticism. The pace at which the company is advancing its AI models is remarkable, but it also raises concerns about the technology’s potential impact on society.

OpenAI has made substantial progress in AI over the past several months. Notably, Sam Altman suggested in a lengthy blog post that superintelligence could be only a few thousand days away.

More recently, the executive suggested that the company might hit the AGI benchmark as soon as 2025. Contrary to common perception, he added, the advent of AGI may have a surprisingly minimal impact on society.

Now, a member of OpenAI’s technical staff has suggested that the company may have already reached that milestone with the launch of its Strawberry-codenamed model, OpenAI o1, which shipped to general availability last week after an extended preview period and showcases general reasoning capabilities.

OpenAI’s Vahid Kazemi wrote in a post on X (formerly Twitter) that, in his opinion, the company has already achieved AGI, and that this has become even clearer with o1. While the models haven’t reached the bar of being better than any human at any task, he argued, they are already better than most humans at most tasks.

Kazemi openly concedes that OpenAI’s models haven’t surpassed every human at every task. What’s striking, however, is his assertion that they already outperform most humans at the majority of tasks.

Addressing the common criticism that large language models (LLMs) merely follow a recipe, Kazemi countered that no one can precisely define what a trillion-parameter deep neural network learns. And even if one accepts the criticism, he argued, the scientific method itself can be summarized as a recipe: observe, hypothesize, and verify. Good scientists may produce better hypotheses from intuition, but that intuition was itself built through many trials and errors. Ultimately, in his view, there is nothing that can’t be learned given sufficient examples.

For context, AGI refers to a form of AI that outperforms human intellect across a broad range of domains. Kazemi’s post on X stops short of claiming that OpenAI’s models have surpassed human cognitive abilities altogether; instead, he emphasizes their ability to perform most tasks more effectively than most humans.

According to Sam Altman’s recent remarks, it seems we may reach the standard for Artificial General Intelligence (AGI) earlier than initially thought.

Elon Musk, an OpenAI co-founder and now CEO of Tesla, has taken legal action against OpenAI and Sam Altman, alleging a breach of the company’s founding mission and even racketeering. Musk has also urged authorities to investigate OpenAI’s advanced AI models, arguing that they might constitute AGI and could bring about catastrophic outcomes for humanity.

Has OpenAI hit the coveted AGI benchmark?

"We have already achieved AGI. It’s even more clear with O1": OpenAI employee claims 'o1' constitutes AGI after Sam Altman indicated it would whoosh by with "surprisingly little" societal impact

Over the weekend, news surfaced that OpenAI may be reconsidering a strict provision in its contract with Microsoft that would terminate the partnership once AGI is achieved. Some online speculation suggested the ChatGPT maker’s move could be aimed at securing additional financial backing from Microsoft for its ambitious AI projects.

Meanwhile, market analysts suggest the buzz around AI is starting to dwindle as investors shift their money elsewhere. That trend could make it progressively harder for OpenAI to fund its AI development, especially amid reports of financial strain, potentially leaving the company vulnerable to outside influence, with some speculating that Microsoft could acquire it within the next three years.

Even though OpenAI secured $6.6 billion in its recent funding round involving Microsoft, NVIDIA, and other major investors, boosting its valuation to $157 billion, analysts project the company could lose as much as $44 billion before achieving profitability in 2029, a projection tied in part to the terms of its partnership with Microsoft.

2024-12-09 17:09