Since last week, Twitter/X users have had free access to Grok, the chatbot from xAI, Elon Musk's artificial intelligence company. The tool is meant to compete with OpenAI products such as the famous ChatGPT conversational agent and the DALL-E image generator. But unlike Sam Altman's company, xAI has imposed very few limits on its multimodal model, leaving users free to generate content that is sometimes questionable, or even downright problematic.
In the space of a few days, a flood of synthetic images generated by Grok has appeared across the web, and particularly on X. Most of these images showcase the tool's performance: it is capable of generating photorealistic images that closely match the user's textual prompts, even in scenarios that require a certain degree of abstraction, an exercise where even the most capable models often run into major problems.
I asked Grok and ChatGPT to generate a clock with the time 3:38. Grok was off by 2 hours and ChatGPT was in a whole other dimension pic.twitter.com/MQNxBl9jVy
— greg (@greg16676935420) December 13, 2024
But other users quickly went a little further. To stand out from OpenAI, with which the CEO of Tesla, X and SpaceX is in open conflict, the creators of Grok gave it an irreverent, unfiltered and sarcastic "personality". Above all, the xAI chatbot imposes no restrictions on sensitive topics, unlike the OpenAI and Google tools, which have safeguards limiting this type of request and the misinformation that can come with it.
A torrent of problematic diversions
It turns out that this also applies to images. Unlike DALL-E, which strives not to generate images of real people in order to avoid producing deepfakes, Grok has no problem producing realistic images of celebrities. Naturally, Internet users had a field day depicting public figures, particularly political ones, in completely absurd situations. This post by researcher Ari Kouts is an excellent illustration.
It seems that the rules are not very strict, and therefore we can do things like this pic.twitter.com/755BuWekaR
— Ari Kouts (@arikouts) December 11, 2024
The post above is obviously not intended to harm anyone's reputation; Ari Kouts' goal is simply to illustrate what Grok is capable of. But that is not necessarily the case for everyone. You only need to browse X for a few minutes to come across overtly vulgar, obscene, insulting or misleading examples, which we will not relay in this article for obvious reasons. And the ease with which an ill-intentioned user can now access this type of tool raises a myriad of uncomfortable legal and ethical questions.
For example, we can question the tool's legality under French and European law. The GDPR explicitly prohibits processing personal data, including a person's image, without their consent (see Articles 4, 5 and 17). Yet Grok seems to have no qualms about harvesting images from all over the web in order to repurpose them. The French Penal Code and Civil Code also protect individuals' right to their own image and reputation. But Grok in no way limits users' ability to produce potentially defamatory images; to a certain extent, one could even argue that it encourages them through its facetious "personality".
Is there a need for more specific legislation?
Admittedly, users did not wait for generative AI to create questionable content; it was already entirely possible with more traditional tools like the famous Photoshop. Moreover, Grok is far from the only AI-based system that can produce these kinds of images. But now anyone can do it in seconds, without the slightest technical skill, and these manipulations are increasingly encouraged by digital culture.
It will therefore be interesting to see whether lawmakers take a specific look at systems capable of harvesting and repurposing images of real people. Do these practices deserve stricter regulation, particularly in the run-up to major political events such as presidential elections? Or would that, on the contrary, amount to suppressing freedom of expression and innovation? These are questions that deserve serious study, at a time when generative AI tools are becoming more capable with each passing day.