According to Steven Adler, a safety researcher who recently left OpenAI after four years at the company, developing Artificial General Intelligence (AGI) is a very risky gamble with a huge downside. He stressed that no lab today has a solution to AI alignment, that is, to ensuring AI systems reliably behave in accordance with human values, and warned that the faster the industry races ahead, the less likely it is that anyone finds a solution in time to prevent disastrous consequences.
Adler's farewell statement echoes the fears of Roman Yampolskiy, AI safety researcher and head of the Cyber Security Laboratory at the University of Louisville, who puts the probability that AI leads to humanity's downfall at close to certainty.
Some personal news: After four years working on safety at @openai, I left in mid-November. It was a wild ride with many chapters: dangerous capability evaluations, agent safety and control, AGI and online identity, among others. There are many parts of it I will miss.
– Steven Adler, January 27, 2025
In the same thread, Adler continued:
Frankly, the pace of AI development these days leaves me quite apprehensive. When I think about where I'll raise a future family or how much to save for retirement, I can't help but wonder: will humanity even make it to that point? The industry seems stuck in a bad equilibrium. Even if some labs genuinely try to develop AGI responsibly, others can cut corners to catch up, potentially with catastrophic results, and that pressure pushes everyone to speed up. I hope labs can be candid about the real safety regulations needed to stop this race.
By Yampolskiy's estimate, the only way to avoid catastrophe would be not to build advanced AI in the first place. But with OpenAI and SoftBank committing $500 billion to the Stargate project to build AI data centers across the United States, that option no longer appears to be on the table.
This isn't the first time an employee of ChatGPT maker OpenAI has left over safety concerns. Jan Leike, formerly the company's head of alignment and superalignment lead, announced his departure after revealing that he had disagreed with OpenAI's leadership about core priorities for next-generation models, security, monitoring, and other areas.
Leike indicated that safety had taken a backseat as attention shifted to shiny new milestones such as AGI. It has also been reported that OpenAI rushed the release of GPT-4o, leaving its safety team little time to complete its checks; insiders said invitations to the launch party went out before the team had finished testing. An OpenAI spokesperson acknowledged that the GPT-4o release was rushed but maintained that the company did not cut corners on safety to meet the deadline.
Anthropic CEO says AI could double the human lifespan to 150 years by 2037

At the World Economic Forum in Davos, Switzerland, Anthropic CEO Dario Amodei said advanced generative AI could double the human lifespan within the next five to ten years.
I predict that by 2026 or 2027, advanced AI systems will be better than most humans at nearly everything. I'm optimistic about the many benefits this could bring.
The Anthropic chief pointed to major advances ahead in technology, defense, and healthcare, and he is optimistic that AI could extend human lifespans, even amid concerns that the technology could pose an existential threat.
According to Amodei:
In essence, if AI advances as hoped, it could compress 100 years of progress in fields like biology into the next five to ten years. To put that in perspective: given how far biology advanced over the past century, doubling the average human lifespan is not far-fetched, and if AI can truly accelerate the field at that rate, we might see equivalent progress in as little as half a decade. That is our ambitious goal at Anthropic. The question we're asking is: what's the first step toward realizing this vision? Are we perhaps two to three years away from the foundational technologies for this leap?
Amodei concedes that AI development is not an exact science and may follow unpredictable paths, not least because high-quality data for training AI systems is limited. Notably, OpenAI CEO Sam Altman has suggested that AI could become intelligent enough to solve the problems created by its own rapid advancement, including the risk of human extinction.
OpenAI, meanwhile, is racing toward AGI: Altman has confirmed that his team knows how to build it and that it may arrive sooner than expected on existing hardware. He has also suggested that the company may be turning its attention to superintelligence, which could multiply the rate of breakthrough AI advances tenfold, packing a decade's worth of scientific progress into each year.