Apple quietly threatened to kick Elon Musk's AI app, Grok, off its App Store in January over the app's failure to curb the surge of nonconsensual sexual deepfakes flooding X, according to NBC News.
In a letter obtained by NBC News, Apple told US senators it “contacted the teams behind both X and Grok after it received complaints and saw news coverage of the scandal” and demanded that the developers “create a plan to improve content moderation.”
At the time, xAI’s chatbot Grok was freely accessible on X and as a standalone app, with flimsy safeguards that allowed users to easily generate and share sexualized deepfakes and “undress” images of real people — disproportionately of women, some of them apparently minors.
Throughout this covert back-and-forth, Grok and X appear to have remained live on the App Store. The drawn-out process may help explain the confusing, haphazard rollout of moderation changes announced in real time, which included limiting Grok on X to paying subscribers and attempting to stop Grok from undressing women.
Despite Apple’s apparent approval and xAI’s claims that it has tightened safeguards, Grok still appears able to generate sexualized deepfakes with relative ease. Cybersecurity sources told me they have been able to create explicit images of celebrities and political figures using the tool, and I have been able to produce similar images of myself and other consenting adults.