Leading AI companies, including Google, Anthropic, and OpenAI, are racing toward Artificial General Intelligence (AGI) and investing heavily in the technology needed to get there, including cloud computing and powerful GPUs.
Recently, there has been a lot of talk about several companies potentially reaching this milestone. But "AGI" has become a buzzword, and different leaders define it in different ways, leaving its meaning unclear.
Many people describe AGI as an AI that is smarter than humans. Microsoft's agreement with OpenAI, however, reportedly defines it in financial terms: an AI capable of generating $100 billion in profits.
In May, Demis Hassabis, the CEO of Google DeepMind, said that artificial general intelligence (AGI) is likely coming soon. However, he also voiced worries that we aren’t ready for the changes it will bring, admitting the potential consequences are a significant concern for him.
Hassabis now appears to have tempered his stance on claims about AI capabilities. Speaking at the All-In Summit last week, the Google DeepMind CEO dismissed the idea that current AI systems possess PhD-level intelligence, according to a post on X by vitrupo.
The executive pointed to a telling weakness: AI chatbots can give incorrect or misleading answers simply because of how a question is phrased.
It drives home how much prompt-writing skill matters when using these AI tools. Microsoft reportedly fielded complaints last year from users who felt Copilot wasn't measuring up to ChatGPT, and a big part of that likely came down to how people were phrasing their questions. It's not just about the AI itself, it's about *how* you talk to it.
Microsoft initially claimed users weren't getting the best results from Copilot because they didn't know how to write effective prompts; it has since launched Copilot Academy to teach people how to use the tool more effectively.
"Some of our competitors like to call today's advanced AI systems 'PhD-level intelligence,' but I disagree. While they can perform certain tasks at a very high level, like a PhD student, they aren't generally intelligent. True general intelligence means being able to handle *any* task at that same advanced level."

Demis Hassabis, Google DeepMind CEO
Demis Hassabis believes a genuinely intelligent AI shouldn’t make obvious errors like these. He estimates we could see true AGI within the next 5 to 10 years.
He also pointed out that essential capabilities are still missing, such as continual learning: the ability for AI systems to keep absorbing new information online and improve over time. Without it, chatbots can't pick up new knowledge or adjust their responses to the latest data.
Hassabis believes current AI still lacks some essential abilities, though scaling these systems up could overcome some of those gaps. His comments come as reports suggest leading AI companies, including OpenAI and Anthropic, are facing challenges due to a shortage of high-quality data for training more powerful models.
2025-09-16 14:41