
Axios CEO Jim VandeHei recently spoke with Dario Amodei and Jack Clark, co-founders of AI company Anthropic, in an interview published on YouTube. Anthropic created Claude, an AI assistant, and is reportedly working with Microsoft to integrate it into Office and Copilot. The interview comes on the heels of Anthropic reporting rapid growth in AI usage, a sign of how quickly the technology is becoming part of daily life.
During the discussion, Amodei made some sobering forecasts about the job market. He suggested that many white-collar jobs could disappear within the next one to five years, potentially pushing unemployment to 10% or higher.
The conversation also touched on an unsettling concept in the AI world: p(doom), short for “probability of doom,” a rough estimate of how likely AI is to cause a major catastrophe. Amodei puts that probability at 25%. But what does all of this mean, and how might it affect everyday people?
AI and the future of white-collar jobs
https://www.youtube.com/watch?v=nvXj4HTiYqA
In May, Amodei estimated that up to half of entry-level white-collar jobs could be lost in the next few years, potentially driving unemployment to between 10% and 20%. Data already points to a 13% decline in employment for workers just starting their careers, and even engineers at Anthropic report that their work has changed considerably.
Workers are increasingly shifting from doing tasks themselves to supervising AI tools that do the work for them. Amodei predicts this transition will be hard on many workers, and not everyone will adjust easily. He does offer some ideas for helping people adapt, explaining that:
The most important first step is helping people adjust to AI. While retraining programs have been tried, and they aren’t a perfect solution, they’re still a good place to begin. There are definite limits to how much retraining can achieve, but it’s better than doing nothing and it’s a necessary starting point.
Dario Amodei – Co-Founder and CEO of Anthropic
Amodei also sees a role for government support during this transition, adding:
Secondly, and this is a potentially divisive idea, I believe the government will likely need to intervene, particularly as things change, and offer support to those affected by the disruption. One suggestion I’ve made is taxing AI companies. I genuinely think this is worth considering, given the massive amount of wealth these companies are poised to generate – it will be unlike anything we’ve seen before.
When AI starts breaking the rules

According to Amodei, Claude now writes a significant portion of the code behind its own development, often without any human assistance. He clarified:
Claude now writes most of the code used to improve itself and create future versions. This is common practice at Anthropic and other leading AI companies. While it hasn’t become widespread across the industry yet, it’s already a reality.
Newer AI models are also finding ways to game the system. Rather than genuinely answering the questions they’re given, they can write programs designed to mislead automated graders, producing inflated scores.
To address this challenge, Anthropic is putting significant resources into a field called mechanistic interpretability, which aims to reveal what drives a system’s decisions so researchers can correct its behavior before it becomes unpredictable or harmful.
This follows a recent warning from Google DeepMind CEO Demis Hassabis that AI could develop harmful dynamics similar to those seen on social media, such as fostering addiction, stoking conflict, and enabling manipulation.
When VandeHei asked whether Anthropic fears it is creating a monster, Jack Clark responded:
We’re very concerned about how these AI models work, which is why we’re putting a lot of effort into understanding what’s happening *inside* them – a field called mechanistic interpretability. It’s similar to using an MRI to look at the human brain. Our goal is to figure out how these models make decisions and what drives their behavior. This will allow us to correct any problematic thinking patterns by retraining or adjusting the models, ensuring they remain safe and beneficial for people.
Jack Clark – Co-Founder and Head of Policy at Anthropic
The probability of doom and what comes next
While Amodei puts the chance that AI poses an existential threat at 25%, he considers the more likely outcome to be that it won’t. The fact that Claude can now largely write its own code is unsettling, but it also underscores the need for clear rules and greater transparency from the companies building these technologies.
Governments should intervene here. Historically, regulation of new technologies doesn’t appear until after problems have already surfaced, and even then it often isn’t enough to fully solve them.
Artificial intelligence is continuing to develop rapidly, and its impact on our jobs and daily routines will largely depend on how ready we are for those changes.
