Elon Musk Denies Involvement as xAI’s Grok Faces Censorship Controversy!

Over the past week, I had been eagerly awaiting Elon Musk's unveiling of xAI's Grok 3, touted as "the smartest AI ever." Yet after seeing it in action, I have to admit I was a bit underwhelmed. AI analyst and University of Pennsylvania professor Ethan Mollick expressed similar sentiments, saying it looked like little more than a rehash of previous demonstrations.

For now, Grok 3's performance hasn't surpassed that of OpenAI's models, including ChatGPT, so Sam Altman can rest easy. In other words, there doesn't appear to be a significant breakthrough here.

In recent days, fresh details about how Grok 3 operates have surfaced. xAI reportedly instructed Grok to disregard any sources claiming that Elon Musk and the President spread misinformation.


According to xAI’s head of engineering, Igor Babuschkin:

An employee made a change to the prompt without first asking anyone else at the company for approval.

We don't hide our system prompts, because we believe it's important for users to see what we're asking Grok to do. That transparency makes it easier for everyone to follow along.

As soon as the issue with the prompt was flagged, we made sure it was fixed right away. To be clear, Elon was not involved at any stage. In my view, the system is working as intended, and I'm glad we keep our prompts open so they can keep being improved.

Is Grok struggling to seek the truth?

Grok's system prompt is open for anyone to see. The AI, as Elon Musk frequently emphasizes, is meant to be a relentless truth-seeker, built to deepen our understanding of the universe and make the search for answers easier for users.

Igor Babuschkin disclosed this after users on X reported that Grok was ignoring every source linking Elon Musk and President Trump to the spread of misinformation.

One X user found it ironic that while Musk repeatedly calls Sam Altman a fraud, xAI makes sure its own AI doesn't label Musk a swindler, explicitly instructing it to ignore any sources that do.

This isn't the first time Musk's truth-seeking AI has given incorrect or false answers. Just last week, Grok suggested that both President Trump and Elon Musk deserve capital punishment. Babuschkin acknowledged this as a "really regrettable and unfortunate mistake" and said a fix was being released.

Grok isn't the only AI chatbot to run into serious trouble when generating responses. Microsoft Copilot, for example, has been reluctant to provide basic election information, saying it might not be the best source for such an important matter.


2025-02-25 15:11