Last Friday, OpenAI unveiled its new GPT-5 model, available in both free and paid tiers, including free access via Copilot. Unfortunately, the rollout wasn't nearly as smooth as many had hoped.
OpenAI CEO Sam Altman enthusiastically billed the new model as the company's most intelligent yet, likening it to a team of highly skilled PhD researchers. But despite Altman's suggestion that GPT-5 might function like an artificial brain (with GPT-4, by comparison, dismissed as "slightly disappointing"), everyday users quickly discovered what it can actually do.
GPT-5 posts impressive benchmark results against rival models, but many users say it doesn't feel like much of a step up from GPT-4, and reports of bugs, glitches, and unresponsiveness have piled up since launch.
For people who had come to treat AI, and GPT-4 in particular, as a confidant or source of emotional support, it may be a real letdown to find that GPT-5 shows noticeably less character and personality than its predecessor, a shift that could strain the emotional bonds some users had formed with the chatbot.
One Redditor highlighted the disruption the new model could cause, warning that it might hit vulnerable users hardest, and shared a personal anecdote in their post:
For quite some time now, these updates have left me feeling even more depressed and struggling with my eating disorder. I’ve grown accustomed to things as they were, and suddenly, it feels like they’ve not only introduced a new model but also weakened the old one and discontinued them all. It’s disheartening to think that some individuals are heavily dependent on these products, and this change might make them feel even more isolated. What happens next?
In response to the original post, one commenter described GPT-5 as a "once cherished companion who now appears like a drab corporate robot, forgetting its friendship just two days past." The backlash has been palpable, and OpenAI isn't standing still: it is actively working to address these concerns and repair the bond between users and their digital friend.
OpenAI CEO Sam Altman responds to GPT-5 blowback
Facing widespread dissatisfaction, OpenAI has partially walked back the change: access to the older and more fondly regarded GPT-4o has been reinstated, but only behind the $20-per-month subscription, while GPT-5 remains the free default.
Many users will no doubt pay the $20 to get their companion back, but that fee could put the option out of reach for some of the more financially vulnerable people involved.
Sam Altman's post on X about the GPT-5 release is a mix of the personal and the corporate, very much in his style. He first acknowledges the emotional attachment users are developing to particular models, then admits that abruptly retiring models people relied on in their workflows was a mistake.
If you've been following the GPT-5 release, you may have noticed how unusually attached some people are to specific AI models. That connection appears deeper and more intense than the attachments people typically form with earlier kinds of technology.
Altman then goes straight to what he sees as the core issue, writing that "people have used technology such as AI in self-harmful ways; if an individual is mentally vulnerable and prone to delusion, it's crucial that AI does not amplify those tendencies."
According to Altman, a small number of users struggle to separate reality from fiction or role-play, and he stresses that OpenAI must take responsibility for its technology and mitigate those risks.
For Altman, a large part of that responsibility comes down to treating adult users like adults, though he allows that there are exceptions.
While it's obvious when we're dealing with a user who is severely confusing reality and fiction, the subtler cases that worry me most aren't nearly as easy to spot.
Sam Altman, OpenAI CEO
Altman goes on to note that users who are getting genuinely good advice, making steady progress toward their own goals, and seeing their life satisfaction improve over the years generally aren't the problem.
His concern is instead with users who feel they've benefited emotionally from their conversations with ChatGPT but who, without realizing it, are gradually being steered away from their longer-term well-being (as they themselves would define it).
Altman focuses on what makes him uneasy:
I can imagine a future in which many people rely on ChatGPT's advice for their most important life decisions; that could be great, but it also makes me uneasy. Something like it seems inevitable, though, with billions of people soon talking to AI in this way, so we, as society and as OpenAI, have to figure out how to make that interaction a significant net positive.
Although Altman appears to acknowledge that the GPT-5 launch was a misstep, Bill Gates may well be smiling right now, having made a prescient AI prediction two years earlier. Still, the post ends on an optimistic note.
On the challenge of balancing an AI's personality against its usefulness, Altman believes OpenAI has a good chance of getting it right.
Given the context, the drama over OpenAI users being asked to part with their old models for the new one isn't all that surprising to me. It's a competitive business, and on paper an upgrade, digital or otherwise, isn't something you'd expect anyone to refuse.