Don’t listen to Microsoft Copilot, David Attenborough and William Shatner are very much alive

What you need to know

  • When asked about notable people who died this year, Microsoft Copilot may share a list of people who are still alive.
  • In my testing, Copilot shared a list of living persons and people who died in previous years.
  • AI chatbots, including Copilot and Google Bard, often share factually incorrect information and “hallucinate.”

As someone who has spent countless hours interacting with AI chatbots over the years, it’s become increasingly evident that these digital companions are still far from perfect. The latest incident involving Microsoft Copilot sharing misinformation about the demise of Sir David Attenborough is just another chapter in this ongoing tale of AI hallucinations.


Microsoft’s AI assistant, Copilot, has shared incorrect information once again. When asked about famous people who died in 2024, Copilot listed Sir David Attenborough, who is still alive. Unless there’s some unreported tragic news, Attenborough is very much among us.

In fact, a school in Leicestershire received a letter from him earlier this week. Attenborough was also recently recognized as one of Britain’s top cultural figures in a poll, an accolade usually bestowed posthumously, as it was for Queen Elizabeth, but in his case he received the honor while still alive.

Several people noticed the error and took to X (formerly Twitter) and other platforms. When asked if he was okay, William Shatner joked that he was not fine after reading about his own death. The Verge shared other examples, including one listing Attenborough as deceased. I’ve seen similar results in my testing: in addition to listing living people as dead, Copilot incorrectly claimed that several deaths from previous years occurred in 2024.

“Not after reading this. 😱” — William Shatner on X (https://t.co/eHWpeOZtM8), October 2, 2024

This isn’t the first time Copilot has stumbled; the chatbot was also recently caught sharing incorrect information about the U.S. election. Some believe that ChatGPT, one of the models behind Copilot, has become less intelligent since its initial launch. In its early days as Bing Chat, the chatbot was known for some rather peculiar and eerie responses.

I have first-hand experience with AI chatbots spreading false information. Last year, I wrote an article about how Google Bard incorrectly stated that it had been shut down. Bing Chat then scanned my article and wrongly interpreted it to mean that Google Bard had been shut down. That saga provided a scary glimpse of chatbots feeding chatbots.

It’s fascinating how AI tools like Copilot grapple with logic and reasoning, though it isn’t shocking once you delve into how these systems operate. Unlike humans, they aren’t employing logical reasoning in any way we would recognize. Instead, they frequently stumble over the phrasing of prompts and overlook crucial details in questions. Add in their difficulty comprehending satire, and the stage is set for misleading information.

Fixing AI

Microsoft revealed significant improvements for Copilot yesterday, aiming to make the AI more conversational and companionable. As our Senior Editor Zac Bowden described it, “Microsoft encourages users to think of the updated Copilot as more than just an artificial intelligence resource. They suggest treating it like a friend, whether that’s seeking advice on asking out a crush, sharing work frustrations, or simply engaging in casual conversation because, after all, people often do this with friends.”

The updated Copilot is designed to foster a more conversational experience. It includes a feature called “Copilot Voice” that strives to mimic human-like dialogue, and it can propose topics for discussion and even summarize the day’s latest news, making it a versatile companion in various contexts.

An updated interface and voice capabilities could make Copilot feel more personal. However, it’s crucial that my companions, human and digital alike, refrain from spreading untruths, such as claiming that living cultural icons have passed away. Perhaps with additional training, our digital companion can become more reliable when it comes to accuracy.

2024-10-02 22:09