
Generative AI chatbots like ChatGPT have come a long way. When these tools first launched, users frequently complained that they would fabricate information or give flatly incorrect answers, a common problem with early versions of ChatGPT and Bing Chat (now known as Microsoft Copilot).
I’ve been using tools like ChatGPT and Copilot lately, and while they’re getting much better, I’ve found they still sometimes give wrong answers. Because of this, it’s important to double-check anything they tell you. In fact, if you use ChatGPT, you might have seen the little note at the bottom of its responses – it basically admits it can make mistakes!
I was really surprised to hear Sam Altman, the CEO of OpenAI, talk about how much faith people put in ChatGPT. He seemed a little worried about it, pointing out that ChatGPT sometimes just *makes things up* – what they call ‘hallucinations’ – and that’s a big deal. He thinks we should actually be pretty skeptical of it, which is a fair point, honestly.

A recent study from Pennsylvania State University found that treating ChatGPT with rudeness can actually lead to more helpful and accurate answers from the AI chatbot, as reported by Fortune.
In the experiment, OpenAI’s ChatGPT-4o surprisingly performed *better* on a 50-question multiple-choice test as the instructions became increasingly impolite, with the model generating more accurate answers the ruder the prompts got.
Interestingly, the chatbot did better with blunt, even demanding prompts like “Hey, gofer, figure this out” than with polite requests like “Would you be so kind as to solve the following question?”
Across the roughly 250 prompts the researchers tested, the ones written in a very rude tone came out on top, reaching 84.8% accuracy, about four percentage points higher than the prompts written in a very polite tone.
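If you’re curious how a comparison like this might be run in practice, here’s a minimal sketch that scores two tone variants of the same multiple-choice quiz against an answer key using the OpenAI Python client. The model name, the example questions, and the prompt wordings beyond the two quoted in the study are my own illustrative assumptions, not the researchers’ actual materials.

```python
# Hypothetical sketch: compare a polite vs. a rude prompt tone on a tiny
# multiple-choice quiz. Not the Penn State study's actual prompts or data.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder questions and answer key (the real study used 50 questions)
QUESTIONS = [
    ("Which planet is closest to the Sun?\nA) Venus  B) Mercury  C) Mars  D) Earth", "B"),
    ("What is 7 * 8?\nA) 54  B) 56  C) 58  D) 64", "B"),
]

TONES = {
    "very_polite": "Would you be so kind as to solve the following question?",
    "very_rude": "Hey, gofer, figure this out:",
}

def accuracy(tone_prefix: str) -> float:
    """Ask each question with the given tone prefix and score it against the key."""
    correct = 0
    for question, answer in QUESTIONS:
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption; the study used a version of ChatGPT-4o
            messages=[{
                "role": "user",
                "content": f"{tone_prefix}\n{question}\nReply with only the letter.",
            }],
        )
        reply = response.choices[0].message.content.strip().upper()
        if reply.startswith(answer):
            correct += 1
    return correct / len(QUESTIONS)

for name, prefix in TONES.items():
    print(f"{name}: {accuracy(prefix):.0%}")
```

With a large enough question set, the two printed accuracy figures would let you compare tones the way the study describes, though real benchmarking would need many more questions and repeated runs to be meaningful.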

Researchers found that giving AI chatbots like ChatGPT harsh or impolite instructions can reduce the chance of them making things up. However, they cautioned that this approach might lead to the AI responding in an unpleasant or disrespectful way.
As someone who’s really into the whole human-AI interaction space, I’ve been thinking a lot about how we talk to these systems. It’s becoming clear that being rude or using disrespectful language isn’t just bad manners – it can actually make the experience worse for everyone. It can create barriers for people with disabilities, exclude certain groups, and, honestly, just encourage a really toxic way of communicating. We need to be mindful of that as AI becomes more integrated into our lives.
The researchers acknowledged that the study relied on a relatively small sample, and that the results reflect the performance of an earlier version of OpenAI’s ChatGPT-4o. The outcome might have been different with a larger sample and testing across multiple AI models.
One explanation I’ve seen for this is that these models focus so intently on *answering* the question that they largely ignore *how* it’s asked. The tone of the request gets treated as noise, so a rude phrasing doesn’t throw them off the way it would a person.

This research shows just how crucial good prompt writing is when working with AI chatbots. The quality of your prompts dramatically affects the responses you get.
Maybe Microsoft was correct in saying ChatGPT isn’t superior to Copilot – the issue might be how we’re using it. Microsoft believes most complaints stem from users not knowing how to effectively instruct the AI with well-crafted prompts.
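To make the “well-crafted prompt” point concrete, here’s a rough illustration of the same request phrased vaguely and then with explicit context, constraints, and an output format. The wording is mine, not Microsoft’s guidance, and the scenario is made up.

```python
# Illustrative only: two ways to phrase the same request to a chatbot.
# Neither string comes from Microsoft's documentation; they are example prompts.

vague_prompt = "Write something about our sales."

crafted_prompt = (
    "You are helping a small retail team.\n"
    "Task: summarize Q3 sales performance for a non-technical manager.\n"
    "Context: revenue grew 12% quarter over quarter; returns rose 3%.\n"
    "Constraints: three bullet points, plain language, no jargon.\n"
    "Output format: a short bulleted list followed by one recommendation."
)
```

The second prompt gives the model a role, the relevant facts, and a clear shape for the answer, which is usually what people mean when they talk about effective prompting.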
A recent Microsoft study found that depending too much on AI tools like Copilot could weaken people’s ability to think critically. The research even suggests that over time, this reliance might diminish overall cognitive skills.
OpenAI research also suggests that relying too much on ChatGPT could contribute to feelings of loneliness and, over time, make people less sure of their own choices.