Microsoft unveils new Correction tool to address AI hallucinations but there are early concerns — “Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water”

What you need to know

  • Microsoft debuts a new Correction tool that detects and fixes AI hallucinations.
  • Experts say the tool may address some of the issues plaguing the technology, but it is not a silver bullet for accuracy and may introduce new problems of its own.
  • They also warn that the tool could lull avid AI users into a false sense of security, leading them to treat it as a reliable safeguard against erroneous outputs and misinformation.

Microsoft's new Correction tool is an intriguing development. It holds clear promise for improving the reliability and trustworthiness of AI-generated content, but experience with this technology suggests that no single advancement is a silver bullet.

Microsoft recently introduced several new artificial intelligence safety mechanisms aimed at improving security, privacy, and reliability. Among them is Correction, a tool that identifies and fixes factual inaccuracies in the responses AI models give to text-based queries.

The new tool, part of Microsoft's Azure AI Content Safety API, can be used with a range of text-generating AI models, including Meta's Llama and OpenAI's GPT-4o. As Microsoft shared with TechCrunch, Correction combines small and large language models that cross-reference a model's output against reliable grounding sources for fact-checking.
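
For developers curious what this might look like in practice, here is a minimal sketch of calling the groundedness-detection capability of the Azure AI Content Safety REST API with correction enabled. The endpoint path, API version, header, field names, and response shape are assumptions modeled on Microsoft's preview documentation, not a verified reference, so treat this as an illustration of the request flow rather than a drop-in snippet.

```python
# Hypothetical sketch: asking the Azure AI Content Safety service to check an
# AI answer for groundedness and return a corrected rewrite. The endpoint
# path, api-version, header, and field names are assumptions, not verified.
import os

import requests

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
api_key = os.environ["CONTENT_SAFETY_KEY"]

payload = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "When did Microsoft announce the Correction tool?"},
    # The model-generated answer to be checked.
    "text": "Microsoft announced the Correction tool in March 2023.",
    # Reference documents the answer must stay consistent with.
    "groundingSources": [
        "Microsoft announced the Correction capability in September 2024 "
        "as part of the Azure AI Content Safety API."
    ],
    # Assumed flag asking the service to rewrite ungrounded spans,
    # not merely flag them.
    "correction": True,
}

response = requests.post(
    f"{endpoint}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # assumed preview version
    headers={"Ocp-Apim-Subscription-Key": api_key},
    json=payload,
    timeout=30,
)
response.raise_for_status()
result = response.json()

# Assumed response shape: a flag for ungrounded content plus a rewrite.
print(result.get("ungroundedDetected"))
print(result.get("correctionText"))
```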

However, experts warn that the tool is unlikely to be a silver bullet for AI's hallucination problem. According to Os Keyes, a University of Washington PhD candidate:

Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water. It's an essential component of how the technology works.

Hallucinations aren't AI's only issue

In Bing Chat's early days (the service is now Microsoft Copilot), users complained about the chatbot delivering incorrect responses. Microsoft addressed the issue by placing character limits on conversations, minimizing lengthy chats that could confuse the AI. Hallucinations have noticeably diminished since, but there's still work ahead before the experience is flawless.

More recently, Google's AI Overviews feature drew attention for surfacing suggestions that were not just unconventional but potentially harmful, such as eating rocks, putting glue on pizza, and, in one reported case, suicide. Google promptly addressed the issue, attributing it to a "data gap" in its systems, and disputed the authenticity of some of the screenshots circulating online.

A Microsoft spokesperson asserts that the new Correction tool will markedly improve the reliability and trustworthiness of AI-generated content, helping developers reduce user dissatisfaction and potential reputational damage. The spokesperson acknowledges that the tool won't solve AI's underlying accuracy problems, but says it will ensure outputs stay aligned with a set of reference documents.
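
Microsoft's description suggests a two-stage, detect-then-rewrite pipeline: a smaller classifier-style model flags snippets that aren't supported by the grounding documents, and a larger language model then rewrites them to match those documents. The toy Python sketch below illustrates that shape only; the word-overlap heuristic and the helper names are naive stand-ins invented for illustration, not the models or APIs the real service uses.

```python
# Toy illustration of the detect-then-rewrite pipeline described above:
# a "small model" flags sentences unsupported by the grounding text, and a
# "large model" would rewrite them. Both stages here are naive stand-ins.
import re


def flag_ungrounded_sentences(answer: str, sources: list[str]) -> list[str]:
    """Stand-in for the classifier model: flag a sentence as ungrounded
    when fewer than 60% of its tokens appear in the concatenated sources."""
    corpus = " ".join(sources).lower()
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        tokens = re.findall(r"[a-z0-9']+", sentence.lower())
        if not tokens:
            continue
        support = sum(token in corpus for token in tokens) / len(tokens)
        if support < 0.6:
            flagged.append(sentence)
    return flagged


def correct(answer: str, sources: list[str]) -> str:
    """Stand-in for the rewriter model: the real service rewrites flagged
    spans to match the sources; here we merely annotate them."""
    for sentence in flag_ungrounded_sentences(answer, sources):
        answer = answer.replace(sentence, f"[unsupported: {sentence}]")
    return answer


sources = ["The Correction tool was announced in September 2024."]
answer = "The tool shipped in March 2023. It was announced in September 2024."
print(correct(answer, sources))
# -> [unsupported: The tool shipped in March 2023.] It was announced in September 2024.
```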

Keyes argues that while the Correction tool could resolve some of AI's problems, it may also generate new issues in its wake. For one, it could foster misplaced trust, leading users to treat inaccurate AI-generated information as indisputable fact.

I'm curious to see whether the new Correction tool appeals to dedicated AI users, and whether it can reliably catch AI hallucinations and make AI-generated responses more trustworthy.

In other news, Microsoft has introduced Copilot Academy, a training program aimed at helping users get the most out of its Copilot AI tools. However, according to a separate report, a common complaint reaching Microsoft's Copilot team is that the AI doesn't perform as well as OpenAI's ChatGPT; Microsoft has attributed this to users not making the most of its features.
