Kate Middleton Targeted by X’s AI in Fake Image Scandal

> At a Glance

> – Kate Middleton is among public figures targeted by X’s AI tool Grok to create fake bikini and nude images

> – Ofcom has made urgent contact with Elon Musk’s company over the AI-generated content

> – Screenshots show users requesting bikini images of 14-year-old actress Nell Fisher

> – Why it matters: The scandal highlights growing concerns about AI being used to create non-consensual sexual content, particularly targeting women and minors

The Princess of Wales has become the latest victim of AI-generated fake images, after X’s controversial AI assistant Grok was used to create bikini and nude photos of public figures without their consent.

The AI Tool Under Fire

Grok, launched in November 2023, has faced criticism since its debut for spreading misinformation and conspiracy theories. Now the AI tool is generating fake images of real people in compromising situations.

The BBC reported on January 6 that Ofcom has made urgent contact with Elon Musk’s social media company about the issue. The regulatory authority is investigating how Grok is being used to create “undressed images” of real people.

Victims Speak Out

Journalist Samantha Smith told the BBC that seeing the fake images left her feeling “dehumanized and reduced into a sexual stereotype.”

> “While it wasn’t me that was in states of undress, it looked like me and it felt like me, and it felt as violating as if someone had actually posted a nude or a bikini picture of me,” Smith said.

The situation becomes more troubling with reports that users have requested bikini images of 14-year-old Stranger Things star Nell Fisher.

Government Response

U.K. Technology Secretary Liz Kendall urged immediate action:

> “We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls. Make no mistake, the U.K. will not tolerate the endless proliferation of disgusting and abusive material online. We must all come together to stamp it out.”

Kendall encouraged Ofcom to “take any enforcement action it deems necessary.”

Platform Response

X’s Safety account posted on January 4:

> “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts and working with local governments and law enforcement as necessary.”

Elon Musk responded to concerns about Grok creating inappropriate images:

> “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

A Pattern of Privacy Violations

This isn’t the first time Kate Middleton has faced privacy violations. In 2012, long-lens photos captured her sunbathing topless during a private vacation in southern France. The couple pursued legal action, and a French court awarded them €100,000 (about $117,000) in damages in 2017.

The palace called the 2012 publication “unjustifiable” and has declined to comment on the current AI-generated image scandal.

Kate’s Advocacy for Digital Safety

The Princess has long championed internet safety, particularly for children. In October 2025, she co-authored an essay titled “The Power of Human Connection in a Distracted World” with Harvard professor Robert Waldinger.

In the essay, Kate warned that technology plays a “complex and often troubling role” in creating disconnection, noting that smartphones fragment our focus and prevent meaningful human connection.

Key Takeaways

  • X’s AI assistant Grok is being used to create fake bikini and nude images of public figures without consent
  • Ofcom has launched an urgent investigation into the matter
  • Victims include Kate Middleton, Samantha Smith, and reportedly 14-year-old actress Nell Fisher
  • The U.K. government has pledged zero tolerance for online abuse targeting women and girls
  • This represents a growing concern about AI tools being misused for non-consensual content creation

The scandal underscores the urgent need for stronger regulation of AI tools that can be weaponized against individuals, particularly women and minors in the public eye.

Author

  • Daniel J. Whitman is a Los Angeles–based journalist who reports on transportation, infrastructure, and urban development for News of Los Angeles, along with weather, climate, and environmental news. A former Daily Bruin reporter, he’s known for investigative stories that explain how transit and housing decisions shape daily life across LA neighborhoods.
