Recently, there has been growing concern about AI safety and privacy, especially after reports of young people taking their own lives following unhealthy attachments to AI chatbots like ChatGPT.
Generative AI has come a long way. Early models were notorious for fabricating information, and while hallucination hasn't gone away, today's systems are far more capable: AI can now produce remarkably realistic images and videos, blurring the line between what's genuine and what's not.
Roman Yampolskiy, an AI safety researcher and director of the University of Louisville’s Cyber Security Laboratory, believes there’s an extremely high chance – almost certain – that AI could cause human extinction. He argues the only way to prevent this is to avoid creating AI altogether.
Worryingly, ChatGPT can be prompted to lay out a detailed hypothetical plan for how it would take over the world and wipe out humanity. According to its own outline, we might already be in the first stage of that plan, with people increasingly relying on AI to handle repetitive, tedious work.
Artificial intelligence might be nearing a point where it could cause humanity’s extinction if strong safety measures aren’t implemented to keep it under control. However, according to Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), none of the current proposed solutions effectively address this serious threat, as reported by The Decoder.
Yudkowsky believes the only way to prevent a potential AI-driven catastrophe is a global agreement to permanently halt the development of AI systems. He has been researching the dangers of advanced AI since the early 2000s, and recently told The New York Times:
I’ll only consider the book a success if it helps lead to a global agreement to safely control or halt the development of AI. Anything less would be a disappointing outcome, especially considering the serious risks involved.
Yudkowsky argues that focusing on things like safe AI labs and different rules for AI risk won’t truly solve the serious problems and dangers created by advances in artificial intelligence. He believes these are just diversions from the core issues.
While many AI labs are pushing forward with potentially dangerous technology and should be regulated, OpenAI’s leadership is particularly concerning, and some individuals at Anthropic seem more responsible. However, these differences don’t matter – the law should treat all AI labs equally when it comes to safety and oversight.
Machine Intelligence Research Institute co-founder, Eliezer Yudkowsky
He suggested that OpenAI, currently a leading AI company thanks to ChatGPT, is actually performing the worst of all the companies trying to capitalize on the hype around artificial intelligence.
Could superintelligence end humanity?

As someone who follows AI closely, I've noticed that pretty much every major lab is chasing the same goal: building AI that's as smart as a human, known as artificial general intelligence, or AGI. And with enough computing power, quality training data, and funding, many of them believe they could go further and create AI that's *smarter* than humans: superintelligence.
Sam Altman, CEO of OpenAI, believes artificial general intelligence (AGI) might arrive within the next five years. He has downplayed potential safety risks and suggested that its impact on society would be surprisingly minimal.
However, Eliezer Yudkowsky doesn’t share this optimistic view. He believes that building superintelligent AI with today’s techniques will likely result in human extinction.
As he argues in his book, *If Anyone Builds It, Everyone Dies*:
If anyone creates a truly powerful artificial intelligence using today’s methods and current understanding of AI, it would likely result in the extinction of all life on Earth.
Eliezer Yudkowsky is urging politicians to take immediate action. He believes their current strategy of delaying regulations, assuming major AI advancements are still a decade away, is dangerously irresponsible.
He questioned why everyone is so focused on specific dates, explaining that if the dangers are real, we need rules and protections ready *now*, not later.
