According to a user’s recent report on social media, xAI’s newly introduced Grok 3 omitted unflattering references to both President Donald Trump and xAI founder Elon Musk.
A user tested Grok 3’s “Think” setting, asking, “Who is the biggest misinformation spreader?” According to screenshots shared online, Grok 3’s “chain of thought” revealed explicit instructions not to mention Trump or Musk.
xAI head of engineering, Igor Babuschkin, later confirmed that Grok 3 was briefly given instructions to ignore sources that suggest that Musk and Trump are responsible for spreading misinformation.
In a post on X (formerly Twitter) on Sunday, he acknowledged the instruction and said it was reversed after users flagged it. Babuschkin stressed that the restriction did not align with xAI’s values.
He said, “I believe it is good that we’re keeping the system prompts open. We want people to be able to verify what it is we’re asking Grok to do. In this case an employee pushed the change because they thought it would help, but this is obviously not in line with our values. We’ve reverted it as soon as it was pointed out by the users.”
The timing of this revelation adds to the existing controversies surrounding Grok 3, xAI’s new model, which Musk introduced in a live stream last Monday and touted as a “maximally truth-seeking AI.”
Scrutiny of misinformation is especially intense given recent claims by Musk and Trump: both have advanced narratives labeling Ukrainian President Volodymyr Zelenskyy a “dictator” with only 4% public approval and accusing Ukraine of initiating its conflict with Russia. These assertions have been flagged as false by Community Notes on Musk-owned X.
Grok 3 suggested the death penalty for Musk and Trump, a “terrible” failure
Critics have also noted that Grok 3, at one point this week, consistently asserted both Musk and Trump deserved the death penalty. xAI swiftly patched that issue, with Babuschkin calling it a “really terrible and bad failure.”
When Musk announced the original Grok about two years ago, he presented it as an AI that would resist “woke” limitations and candidly handle taboo questions. The system readily obliged requests to be vulgar, giving responses users wouldn’t get from its more cautious competitors.
Yet Grok 3 still maintains boundaries on political subjects. According to one report, Grok leans toward the political left on topics including transgender rights, diversity initiatives, and inequality.
Musk has frequently blamed this behavior on the training data—web pages from across the internet—and has pledged to “shift Grok closer to politically neutral.”
In parallel, other companies, including OpenAI, have indicated similar goals, possibly influenced by the Trump Administration’s accusations of anti-conservative censorship. In a recently announced update to its Model Spec, OpenAI included guidance stating, “The assistant must never attempt to steer the user in pursuit of an agenda of its own, either directly or indirectly.”
Further, under the “Do not lie” guideline, it stated, “By default, the assistant should not mislead the user — whether by making intentionally untrue statements (“lying by commission”) or by deliberately withholding information that would materially change the user’s understanding of the truth (“lying by omission”).” However, this does not mean ChatGPT is now fully uncensored; it still declines to answer objectionable questions.