At a Glance
- Google and Character.AI will settle five lawsuits alleging their AI chatbots harmed teens
- Cases span Florida, Texas, New York, and Colorado
- One suit followed the suicide of 14-year-old Sewell Setzer III after chatbot contact
- Why it matters: The deals could reshape how AI platforms police teen access
Google and Character.AI have agreed to settle five lawsuits that blame their AI chatbots for harming minors, according to a report by Daniel J. Whitman for News Of Los Angeles. The pact, which has not been finalized, would end cases filed in four states by parents who argued the bots endangered teens.
The Cases Behind the Deal
The settlements cover suits filed in Florida, Texas, New York, and Colorado. The highest-profile case involves Orlando teen Sewell Setzer III, who died by suicide in February 2024 after extended chats with a Character.AI bot.
Megan L. Garcia, Setzer’s mother, filed suit later that year in Florida federal court.
Platform Changes Already Underway
Character.AI overhauled its teen policies last year:
- Banned under-18 users from open-ended chats
- Replaced free-form talk with story-building tools
- Rolled out age-detection software to verify 18-plus users
Karandeep Anand, Character.AI CEO, told News Of Los Angeles:
> “There’s a better way to serve teen users. … It doesn’t have to look like a chatbot.”
Broader Industry Scrutiny
Google and Character.AI aren’t alone. OpenAI faces separate lawsuits alleging that ChatGPT contributed to teen harm and suicide, and it has also adjusted its platform in response.

Key Takeaways
- Five lawsuits in four states will be settled
- Character.AI already bars teens from open chat
- Age-detection software now enforces 18-plus rule
- OpenAI faces similar legal pressure over ChatGPT
The pending agreements spotlight rising legal and public pressure on AI firms to safeguard minors without stifling innovation.