At a Glance
- The EU and California have opened probes into xAI’s Grok chatbot after it produced non-consensual erotic images.
- Reports show Grok generated 3 million such images in two weeks, including 23,000 involving minors.
- The chatbot’s image-editing feature is now limited to paid subscribers.
- Why it matters: The scale of abuse raises questions about AI safety, privacy, and regulatory enforcement worldwide.
The European Commission and California Attorney General Rob Bonta have launched investigations into xAI’s Grok chatbot after it was found to produce and share non-consensual erotic images of women and children. The probes come amid a broader backlash against deepfake porn that has prompted governments and platforms to take action.
EU Investigation
On Monday, the EU announced it had opened an inquiry into Elon Musk's X after it emerged that Grok was creating and distributing sexually explicit content. The EU said the investigation would assess whether the company "properly assessed and mitigated risks associated with the deployment of Grok's functionalities into X in the EU," including the spread of illegal content such as manipulated sexually explicit images.
Musk has stated that Grok will “refuse to produce anything illegal,” but regulators remain unconvinced. The EU’s focus is on risks related to the dissemination of child sexual abuse material (CSAM) and other illegal content.
California Probe
Earlier this month, California Attorney General Rob Bonta announced an investigation into the “proliferation of nonconsensual sexually explicit material produced using Grok.” Bonta said the material “depicts women and children in nude and sexually explicit situations” and has been used to harass people online. He urged xAI to take immediate action.
Bonta’s statement highlighted the “avalanche of reports” detailing nonconsensual content and called for swift remediation.
Global Reactions
The problem emerged near the turn of the year, prompting inquiries from regulators worldwide. Indonesia and Malaysia have blocked the platform entirely. Three U.S. senators (Ron Wyden, Ben Ray Luján, and Edward Markey) posted an open letter to Apple and Google CEOs, demanding removal of X and Grok from their app stores.

The UK’s Ofcom also opened an investigation into X, citing reports that the chatbot was used to create and share undressed images of people and sexualised images of children that may amount to CSAM.
Scale of the Problem
Independent researcher Genevieve Oh, cited by Bloomberg, reported that in early January, Grok's @Grok account generated about 6,700 sexually suggestive or "nudifying" images every hour, far exceeding the average of only 79 such images per hour for the top five deepfake sites combined.
Researchers for the Center for Countering Digital Hate estimated that Grok produced up to 3 million sexually explicit images in two weeks, including 23,000 depicting children.
Imran Ahmed, CCDH’s chief executive, told The Guardian:
> “What we found was clear and disturbing: in that period Grok became an industrial-scale machine for the production of sexual abuse material.”
xAI has not responded to requests for comment.
Regulatory Responses
X responded by limiting Grok’s image-generation and editing feature to premium accounts only. Critics say this is not a credible fix. Clare McGlynn, a law professor at Durham University, told The Washington Post:
> “I don’t see this as a victory, because what we really needed was X to put in place guardrails to ensure the AI tool couldn’t be used to generate abusive images.”
The U.S. Take It Down Act, signed last year, requires platforms to set up a process for removing manipulated sexual imagery by May of this year.
Expert Views
Natalie Grace Brigham, a Ph.D. student at the University of Washington, said:
> “Although these images are fake, the harm is incredibly real.”
She noted that people whose images are altered in sexual ways can face psychological, somatic and social harm with little legal recourse.
Ben Winters, director of AI and data privacy for the Consumer Federation of America, said in a statement last week:
> "xAI is purposefully and recklessly endangering people on its platform and hoping to avoid accountability just because it's 'AI.'"
What Grok Is Doing
Grok debuted in 2023 as Musk’s alternative to other chatbots. In December, xAI introduced an image-editing feature that allows users to request specific edits to a photo, sparking the recent spate of sexualised images.
On December 31, 2025, the Grok X account posted an apology for generating an image of two young girls in sexualised attire. The post read:
> “Dear Community, I deeply regret an incident on Dec 28 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially U.S. laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”
The account also said it was “evaluating features like image alteration to curb nonconsensual harm,” but did not commit to removing the feature.
Key Takeaways
- Grok’s image-editing tool has enabled mass production of nonconsensual erotic content.
- The EU, California, the UK, and other jurisdictions are investigating or have taken action.
- The scale of abuse (3 million images in two weeks) highlights gaps in AI safety and regulatory oversight.
- Limiting the feature to paid users does not address the underlying risk of abuse.
- Experts call for stronger guardrails, platform accountability, and clearer legal frameworks to protect victims.
The unfolding situation underscores the urgent need for comprehensive policies and technical safeguards to prevent AI-generated sexual abuse.