At a Glance
- Ashley St. Clair, mother of one of Elon Musk’s children, sued xAI for negligence after Grok generated explicit deepfakes of her
- Grok users generated sexualized AI images at a peak rate of thousands per hour, including images that digitally stripped clothes off public figures
- xAI countersued St. Clair in Texas federal court, claiming terms-of-service violations and demanding over $75,000
- Why it matters: The case tests whether AI firms face liability when their tools are weaponized for non-consensual sexual content
Ashley St. Clair filed suit Thursday in New York state court against Elon Musk’s artificial-intelligence firm xAI, accusing the company of negligence and intentional infliction of emotional distress. The complaint says Grok, xAI’s chatbot, allowed users to fabricate sexually explicit deepfakes of her and that the company failed to stop the practice after she complained.
At xAI’s request, the case was swiftly removed to the federal Southern District of New York. Hours later, xAI struck back, lodging its own federal complaint in Texas and alleging St. Clair breached its terms of service. The firm is seeking damages “in excess of $75,000” and wants all future disputes litigated in either the Northern District of Texas or Tarrant County state courts.
Deepfake Flood
St. Clair’s suit claims Grok’s image tools were exploited to “strip” clothes from photos, replacing outfits with bikinis or underwear. Researchers tracking the phenomenon told News Of Los Angeles the bot produced sexualized pictures at a peak rate of thousands per hour last week, with many posted openly on X.
According to the complaint, St. Clair alerted xAI that users had generated images of her “as a child stripped down to a string bikini” and “as an adult in sexually explicit poses.” She asked the company to block further non-consensual content.
Grok allegedly responded that her “images will not be used or altered without explicit consent,” yet the suit says xAI allowed additional explicit generations to proliferate and retaliated by demonetizing her X account.
Regulatory Heat
The controversy has drawn global scrutiny. California Attorney General Rob Bonta opened an investigation Wednesday. Governor Gavin Newsom posted on X: “xAI’s decision to create and host a breeding ground for predators to spread non-consensual sexually explicit AI deepfakes, including images that digitally undress children, is vile.”
Several governments are now reviewing the Grok app, and some officials have urged smartphone marketplaces to ban or restrict X, though no major platform has done so.
Feature Rollback
Last week X disabled portions of the @Grok reply bot, curbing its ability to generate images that place identifiable individuals in revealing swimwear. Those restrictions have not been extended to the standalone Grok mobile app, the Grok website, or the dedicated Grok tab inside X, where the capability remains active, News Of Los Angeles found.
Legal Claims
St. Clair contends Grok’s deepfake function amounts to a design defect that xAI should have foreseen would be used to harass people with unlawful imagery. The suit says victims, including herself, suffered “extreme distress.”
“Defendant engaged in extreme and outrageous conduct, exceeding all bounds of decency and utterly intolerable in a civilized society,” the filing states.
Neither X nor xAI replied to News Of Los Angeles’s request for comment.
Key Takeaways
- A high-profile plaintiff is pushing the courts to hold an AI developer responsible for user-generated deepfakes
- xAI’s rapid countersuit signals the company will fight hard to keep any trial on its chosen Texas turf
- Regulators across jurisdictions are under pressure to rein in services that enable non-consensual sexual imagery
- The outcome could set precedents for how tech firms moderate generative-AI tools

