Anthropic’s CEO says “we do not understand how our own AI creations work” — and yes, we should all be “alarmed” by that

AI safety researcher Roman Yampolskiy has put forward a startling estimate: he puts the probability that the development of AI ends in humanity's extinction at 99.999999%. To steer clear of that catastrophe, he suggests we should perhaps reconsider the pursuit and advancement of AI altogether.

Notably, even as Anthropic's AI models grow more capable, CEO Dario Amodei has openly acknowledged that the company does not fully understand how its own artificial intelligence works.

In an essay recently published on his personal website, Amodei wrote:

People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology.

He outlines the serious risks this blind spot could create. To avert them, Amodei urges AI labs to prioritize interpretability research into how their models actually work, warning that AI may eventually advance beyond human control if the problem is not addressed early.

As Amodei puts it, these systems "will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy" that he considers it "basically unacceptable for humanity to be totally ignorant of how they work."

The Anthropic CEO also conceded that considerable work remains before the industry reaches the level of interpretability needed to reliably monitor and steer ever-more-capable AI systems.

Meanwhile, a recent report indicates that OpenAI may be accelerating its AI development by cutting the time spent on safety testing, a practice that could introduce real risks. The report suggests the company does so to stay ahead of its competitors.

OpenAI, the maker of ChatGPT, has faced safety questions before. A number of early team members have departed, citing safety concerns; some suggested that the push toward flashy goals such as artificial general intelligence (AGI) had eclipsed a strong safety culture and processes.

Nearly every major tech company is racing to claim a share of the booming generative AI market, including Microsoft, which has pledged $80 billion toward AI infrastructure and investments. Yet for all the enthusiasm, investors remain cautious, given the technology's steep costs and uncertain path to profitability.

For months, the conversation around generative AI has centered on the privacy and security concerns that come with its rapid spread. But perhaps more consequential is the lasting effect the technology could have on humanity itself.

Leading tech figures, including Microsoft co-founder Bill Gates, foresee AI taking over many tasks now done by humans. What strikes me most, though, is the potential danger this technology poses to our species.


2025-05-05 13:41