Many people remain hesitant to embrace artificial intelligence, keeping a cautious distance over privacy, security, and broader existential concerns. Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, puts the probability that AI could pose a threat to human existence at 99.999999%.
Meanwhile, several major AI labs say they may soon reach the significant milestone of Artificial General Intelligence (AGI). OpenAI and Anthropic, in particular, are optimistic that AGI could become a reality within the current decade.
Although AGI could conceivably endanger humanity, OpenAI CEO Sam Altman argues that the danger may not materialize at the moment AGI arrives. Instead, he suggests its emergence could "whoosh by" with surprisingly little immediate impact on society.
Yet OpenAI's former chief scientist, Ilya Sutskever, has voiced concern that artificial intelligence may eventually outstrip human cognitive abilities.
To guard against potential fallout after the unveiling of AGI, Sutskever proposed building a "survival shelter" or "safe haven" where the company's employees could take refuge if an unexpected catastrophe were to occur, as reported by The Atlantic.
During a meeting with key scientists at OpenAI in the summer of 2023, Sutskever said:
“We’re definitely going to build a bunker before we release AGI.”
Sutskever's bunker comment was first reported in Karen Hao's upcoming book, "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI." Notably, it was not the only occasion on which the researcher brought up the safety bunker.
In OpenAI's internal conversations and gatherings, Sutskever reportedly raised the topic of the bunker more than once. According to another researcher cited in the book, several colleagues shared his concerns about Artificial General Intelligence (AGI) and its potentially catastrophic impact on humanity.
Are we adequately prepared for a world that houses AI systems more powerful and intelligent than human beings?
The news follows remarks by DeepMind CEO Demis Hassabis suggesting that Google may be approaching AGI with the latest updates to its Gemini models. He warned that society may not yet be prepared for this development, adding that the prospect of AGI keeps him up at night.
Elsewhere, Anthropic CEO Dario Amodei openly acknowledged that his company does not fully understand how its own models work, and emphasized that this lack of comprehension could pose significant risks for society.
2025-05-23 13:39