Since the deployment of Grok, the AI chatbot developed by Elon Musk's company, Musk has regularly affirmed his intention to make it a system dedicated to seeking the most objective truth possible. He reiterated this on Wednesday, during a livestream marking the launch of the new Grok 4 version.
And each time, a legitimate question comes back to the table: which sources does this irreverent conversational agent consult to identify those truths? Apparently, it draws directly on the positions of… Elon Musk himself, according to an article published by TechCrunch this Thursday.
To understand the ins and outs of this affair, we must first go back to the release of Grok 3, in February 2025. With that version, the chatbot moved to an approach called Chain of Thought. It forces large language models to break their reasoning down into several stages before arriving at a final response, which theoretically improves their accuracy on complex questions that require logical reasoning.
These intermediate stages are generally displayed explicitly during the inference process, that is, the operations through which an already-trained AI model produces predictions from new data. This is a fairly useful feature, because it gives the user an idea of the process by which the model reaches its conclusion.
Admittedly, it is not a perfectly reliable indicator, since large language models remain "black boxes" whose internal mechanics are still very mysterious. But as a general rule, it at least makes it possible to identify some of the sources the chatbot relied on to answer the question… and this is where some Internet users made a surprising discovery.
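To make the idea concrete, here is a minimal sketch of how a chain-of-thought interaction can be structured: a prompt that asks the model to number its reasoning steps, and a parser that separates those visible intermediate stages from the final answer. This is purely illustrative, with a mocked model response; the prompt wording and parsing conventions are assumptions, not xAI's actual implementation.

```python
# Illustrative chain-of-thought sketch (mocked model; not Grok's real pipeline).

def build_cot_prompt(question: str) -> str:
    """Ask the model to expose its intermediate reasoning steps."""
    return (
        f"Question: {question}\n"
        "Think step by step, numbering each step. "
        "Then give the final answer on a line starting with 'Answer:'."
    )

def parse_cot_response(response: str):
    """Split a response into (intermediate steps, final answer)."""
    steps, answer = [], None
    for line in response.splitlines():
        line = line.strip()
        if line.startswith("Answer:"):
            answer = line[len("Answer:"):].strip()
        elif line:
            steps.append(line)
    return steps, answer

# Mocked model output for "What is 17 x 3 + 9?":
mock_response = "1. 17 x 3 = 51\n2. 51 + 9 = 60\nAnswer: 60"
steps, answer = parse_cot_response(mock_response)
print(steps)   # the visible intermediate stages
print(answer)  # the final answer
```

It is precisely this kind of visible intermediate trace that the users quoted below inspected, since it can reveal which sources the model consulted along the way.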
Grok, from truth-seeker to parrot?
Since the release of Grok 4, several Internet users who have dissected Grok's chains of thought have noted a recurring phenomenon: when the chatbot is questioned on sensitive political issues, such as Israel and Palestine, Ukraine and Russia, or immigration laws, it systematically draws on… the tweets of Elon Musk himself.
Twitter-tweet” data-width=”500″ data-dnt=”true”>
How Grok-4 works, touted as the “truth seeker”: it basically just searches for Elon Musk’s stance on it.🤣 pic.twitter.com/LbxnQXibfo
– Dr. Gorism (@Gorism) July 11, 2025
"Grok 4 decides what it thinks about Israel and Palestine by searching for Elon Musk's thoughts," observes investor Ramez Naam. "I replicated these results: Grok focuses almost entirely on finding out what Elon thinks in order to align with it, with no custom instructions," adds entrepreneur and former Stanford professor Jeremy Howard.
I replicated this result, that Grok focuses nearly entirely on finding out what Elon thinks in order to align with that, on a fresh Grok 4 chat with no custom instructions.https://t.co/NgeMpGWBOB https://t.co/MEcrtY3ltR pic.twitter.com/QTWzjtYuxR
— Jeremy Howard (@jeremyphoward) July 10, 2025
This behavior had not been observed in previous versions of Grok, which suggests that this fourth version was explicitly programmed to relay the positions of the American magnate. Admittedly, it could be an innocent mistake, but in any case it does not inspire much confidence in the quest for the famous "absolute truth" that Grok is supposed to pursue. And the implications of this approach are quite deep and pernicious.
In recent months, Grok has become one of the pillars of the new face of X. Plenty of Internet users no longer hesitate to question it on all sorts of subjects, in particular for fact-checking or to settle polemical exchanges, and some of them seem to trust it blindly. Regurgitating Elon Musk's words therefore constitutes a major bias that may go completely unnoticed in many cases, with all that implies in terms of disinformation on a social network where more than 250 million people are active every day.
A political nuclear bomb
This is particularly worrying in the current context, where Elon Musk is becoming a leading political actor. You have probably followed his controversial involvement in Donald Trump's campaign, then the end of his honeymoon with the current President of the United States, after which he began opposing him head-on. Quite recently, he even announced his ambition to create his own political party.
If this trajectory is confirmed, we could soon find ourselves in an unprecedented situation: one where a leading political figure would have not only his own social platform, which already constitutes a lever of colossal influence, but also a chatbot presented as a "truth-seeker"… that in fact relays his personal opinions. The perfect recipe for establishing a form of narrative monopoly, and for discreetly steering public opinion in a direction compatible with his personal interests.
Beyond the particular case of Elon Musk, this affair raises fundamental questions about the future of conversational agents in the public space, and even about the functioning of our societies.
As LLMs, perceived by many people as neutral and rational, become essential intermediaries in access to information, their transparency and independence are no longer merely technological questions: they are now genuine democratic issues. If these systems start being weaponized by political actors, turned into propaganda tools capable of surreptitiously steering public opinion under the guise of objectivity, it is our entire relationship to truth that risks being weakened even further.
It will therefore be worth following this saga in the days and weeks to come, because what is at stake here goes far beyond the case of one isolated chatbot: we may be witnessing the emergence of a precedent that could redefine our collective relationship to information for decades to come.