Altman predicts AGI will reshape society before we’re ready — and that’s okay? Scary moments, sudden shifts, and late-stage adaptation await.

Generative AI is rapidly improving and impacting many areas, from healthcare to entertainment. Because of this progress, it’s getting harder to predict when leading AI companies like OpenAI, Anthropic, and Google might achieve artificial general intelligence (AGI).

Recent reports suggest that some AI labs may struggle to build more powerful models because they are running out of the high-quality data needed for training. This has also drawn scrutiny to Microsoft’s large investment in OpenAI, particularly as OpenAI shifts toward a for-profit structure.

Microsoft reportedly canceled two large data center contracts after deciding not to supply additional computing capacity for training ChatGPT. OpenAI CEO Sam Altman, however, stated that his company no longer has issues with computing capacity.

The multi-billion-dollar Microsoft-OpenAI agreement also includes a strict clause: if OpenAI reaches a key milestone in artificial intelligence, the partnership must end. That milestone is defined as creating an AI system capable of generating $100 billion in profits.

In recent years, Sam Altman, CEO of OpenAI, has offered some thought-provoking predictions about a future shaped by advanced AI, particularly about what happens after the company reaches artificial general intelligence (AGI). Despite widespread concerns from users and regulators about AI privacy and security, Altman believes these issues won’t be relevant by the time AGI actually arrives.

OpenAI CEO says he expects AGI to cause scary stuff

Sam Altman predicts that artificial general intelligence (AGI) will arrive within the next five years, but its impact on society will be surprisingly minimal. In a recent interview with a16z, he stated that AGI will develop rapidly and almost unnoticed. He also emphasized that it won’t lead to a dramatic, world-altering event, often referred to as ‘the singularity.’

According to the executive:

Even if it involves pushing the boundaries with AI research, society will ultimately adjust and learn quickly. Looking back, we often realize people and communities are far more adaptable than we initially believe, and the changes will likely happen gradually rather than all at once.


Just because AI hasn’t created a major catastrophe yet doesn’t mean it won’t in the future. It’s unusual to have so many people interacting with a single, powerful system. We might already be seeing subtle, large-scale changes in society as a result: things that aren’t necessarily frightening, but are simply different. As with any new technology and investment, I anticipate some negative consequences will emerge.

OpenAI CEO, Sam Altman.

The executive believes that the company and society will find ways to manage the technology safely and prevent it from becoming dangerous. This follows OpenAI’s recent addition of parental controls to ChatGPT, introduced in response to a rise in reported suicides among young users.

FAQ

What is AGI?

Artificial general intelligence (AGI) describes AI that can handle any mental task a human can, unlike current AI which is designed for specific jobs.

Why are Altman’s comments controversial?

Some experts worry that reacting to new technologies *after* they’ve already had an impact is dangerous, especially when those technologies have the potential to drastically change our economies, governments, and basic rights. Delaying a response could result in problems that can’t be fixed.

Is Altman optimistic or cautious about AGI?

He understands that these changes could cause worry and upheaval, but he believes society will eventually adapt and thrive, even if the process isn’t smooth.

What does this mean for policymakers and the public?

The article argues that we need to start planning now with rules, ethical guidelines, and public education, even though we don’t fully understand the potential of advanced AI. Preparing early could help avoid the problems that figures like Altman foresee.


2025-10-24 00:10