Elon Musk’s Grok AI spreads election misinformation despite claims of being “the world’s most powerful AI” and secret access to X data for training


What you need to know

  • Grok is reportedly spreading false information about the upcoming elections.
  • X reportedly brushed off the issue after five secretaries of state flagged it, by which point the false information had already reached millions of users across social media platforms.
  • Minnesota's secretary of state urges voters to contact their state or local election officials to learn more about the election and voting process.

As a seasoned analyst with years of experience in the tech industry and a keen eye for detecting misinformation, I find myself deeply troubled by the recent wave of AI-generated disinformation. The spread of false election information, as seen with Grok, is not only alarming but also a danger to the democratic process.


As the 2024 presidential election approaches, it is crucial that voters receive reliable information. Regrettably, that isn't always the case. As advanced AI technology becomes more widespread, malicious actors are exploiting its capabilities to spread false information about the election, which may influence how voters make decisions.

In a letter to Elon Musk, the owner of X, five secretaries of state asked him to address an issue with Grok, the platform's AI chatbot, which has been found spreading false information about the upcoming elections. Meanwhile, Musk asserts that by December of this year Grok will surpass every other AI by every significant measure, and that it is currently being trained on the world's most powerful AI training system. (Source: Axios)

After President Biden ended his campaign for the White House, Grok circulated a misleading claim about ballot deadlines, suggesting that Vice President Kamala Harris had missed the ballot deadline in nine states: Alabama, Indiana, Michigan, Minnesota, New Mexico, Ohio, Pennsylvania, Texas, and Washington.

Grok is only available to users with Premium or Premium+ subscriptions, yet the chatbot's misleading information still spread widely across social media networks, reaching millions of users. Notably, it took 10 days for the erroneous information to be corrected, and X's response to this critical matter was largely dismissive.

In a statement, Minnesota Secretary of State Steve Simon indicated:

Having personally experienced the importance of exercising my right to vote, I feel compelled to emphasize that this presidential election year is no exception. It’s crucial for every citizen to obtain reliable information about casting their ballot. As someone who has encountered confusion and uncertainty in past elections, I urge everyone to contact their state or local election officials to learn the specifics on how, when, and where they can vote. Ensuring that our voices are heard is essential for a thriving democracy, and taking this simple step is one powerful way we can contribute to its continued success.

As AI tools become more sophisticated, it is getting harder to tell what's real.

What are Microsoft and OpenAI doing to prevent the spread of AI-generated misinformation?


If you haven't heard, Elon Musk has filed a fresh lawsuit against Sam Altman and OpenAI, alleging that they have strayed significantly from their founding mission and accusing them of illicit activities, including racketeering. According to Musk, he was lured into investing in OpenAI under the false pretense of a humanitarian mission.

Elsewhere, OpenAI has outlined measures to help users identify deepfakes and AI-generated content. For starters, the ChatGPT maker plans to use tamper-proof watermarking to make AI-generated content easier to identify.
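To give a rough sense of how tamper-evident labeling works in principle, here is a minimal Python sketch using a keyed hash. This is purely illustrative: OpenAI has not published the details of its watermarking scheme, and the key, function names, and approach below are assumptions, not its actual implementation.

```python
import hmac
import hashlib

# Hypothetical provider-held signing key; a real scheme would manage
# keys far more carefully and embed the mark in the content itself.
SECRET_KEY = b"provider-held signing key"

def watermark(content: bytes, key: bytes = SECRET_KEY) -> str:
    """Produce a keyed tag labeling content as AI-generated."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Check that the tag matches the content; any edit breaks the match."""
    return hmac.compare_digest(watermark(content, key), tag)

generated = b"Example AI-generated text"
tag = watermark(generated)
print(verify(generated, tag))       # intact content verifies
print(verify(b"edited text", tag))  # tampered content does not
```

The point of the sketch is the "tamper-proof" property: because the tag is computed over the exact bytes of the content with a secret key, altering the content (or forging a tag without the key) makes verification fail.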

Earlier this year, Microsoft CEO Satya Nadella expressed confidence that existing technology, including watermarking, deepfake detection, and content ID systems, could safeguard the U.S. presidential election against AI-generated deepfakes and disinformation.

It has previously been reported that Microsoft Copilot produced inaccurate information about elections, leading some researchers to argue that the problem is systemic. Even so, Microsoft says it remains committed to providing voters with reliable, factual election news through Bing ahead of the vote.

Microsoft President Brad Smith has also launched a new website featuring an interactive quiz called "Real or Not," designed to sharpen users' ability to distinguish authentic content from AI-generated content.


2024-08-07 12:10