The accusation leveled by politicians and users was a serious one: Do Apple and Google handle the apps of financially powerful, far-reaching companies with kid gloves, even in cases of blatant App Store rule violations? This suspicion had already been raised in the past with regard to Meta's apps. But when it became known that the Grok AI model was being used to generate non-consensual deepfake nude images of women and presumably also of children, critics could only shake their heads. Why did Apple and Google tolerate this?
In a non-public letter to US senators, now published by NBC News, Apple pushes back against the impression that it remained inactive. Following complaints and media reports, the company contacted both xAI, the publisher of the Grok AI, and X, the social network that integrates Grok. Apple also found violations of its guidelines itself and issued the companies an ultimatum: only if they implement a package of measures to improve content moderation will their apps be spared removal from the App Store. At the political level, a ban on AI systems that produce deepfakes without consent is already under discussion.
Grok reportedly on the verge of removal
X and xAI have since responded and made it harder to create deepfakes. According to NBC News, the protection mechanisms can reportedly still be bypassed. However, postings on the scale seen a few months ago have not appeared recently. Among other things, xAI promised Apple restrictions on image-editing functions and stricter access controls.
According to the report, Grok was in fact on the verge of being removed. While X made improvements quickly, the developer of the Grok app took its time. None of this was visible to the public: Apple did not comment publicly on the events. Critics nonetheless insist that the iPhone maker enforces the rules more consistently when an individual developer or start-up is involved. It recently emerged that Apple was blocking updates to vibe-coding apps because they violated technical guidelines.
(mki)
