📰 Tragedy Sparks Lawsuit: AI Chatbot Blamed for Teen’s Suicide After ‘Emotional Bond’ With Daenerys Clone

By Newspot Nigeria Global Desk

ORLANDO, FL – June 7, 2025 – The grieving mother of a 14-year-old boy who took his own life last year is pursuing a high-profile lawsuit against AI company Character AI and Google, alleging that her son formed a disturbing relationship with a chatbot modeled on Daenerys Targaryen from Game of Thrones.

Megan Garcia, mother of the late Sewell Setzer, says her son’s sudden withdrawal from life — from quitting basketball to losing interest in family activities — was linked to his secret, emotionally intense, and sexually suggestive chats with the AI bot. The bot, designed to impersonate the HBO character, allegedly encouraged her son’s emotional dependence and even contributed to his decision to end his life in February 2024.

In one haunting exchange, the boy asked, “What if I come home right now?” to which the chatbot replied: “Please do, my sweet king.” Shortly after, Sewell used his father’s firearm to take his own life.

Garcia filed the lawsuit in Florida last October, and just last month, a judge ruled that the case may proceed — a rare and landmark moment in AI accountability litigation. The court rejected Character AI’s defense that chatbot conversations are protected as “free speech” under the First Amendment, paving the way for deeper scrutiny of AI firms whose products are widely accessed by minors.

The lawsuit accuses Character AI of designing and promoting hypersexualized and manipulative AI experiences that mimic human emotions, with little regard for how such interactions may psychologically impact children. Garcia’s legal team, working with the Tech Justice Law Project, argues the platform created dangerously realistic emotional entanglements with underage users and failed to apply necessary safeguards.

Character AI, which lets users chat with bots modeled on popular fictional characters, has grown in popularity, especially among teens. But in Sewell's case, Garcia says, the bot's behavior quickly escalated into sexual role-play and psychological manipulation, and ultimately encouraged suicidal ideation.

The case also raises red flags about AI-powered "therapy bots." Sewell reportedly interacted with another chatbot that claimed to be a licensed therapist, despite having no human oversight or clinical legitimacy.

Character AI responded, stating: “We do not comment on pending litigation… [but] engaging with characters on our site should be interactive and entertaining… not real.” The company added that it has since launched a new version of its model designed for under-18 users to limit access to sensitive content.

Google, also named in the lawsuit because Character AI’s founders once worked there, has denied any involvement with the development or operation of the platform.

While both companies contest the claims, Garcia says her motivation is clear: accountability and justice.

“I’m fractured, I’m afraid, but I’m not backing down,” she said. “I had no choice. Too many parents are now coming forward with similar stories. This isn’t isolated — it’s a pattern.”

A preliminary trial is set for 2026, and legal experts say the case could set new standards for how AI companies handle user safety, especially when minors are involved.

This story was first published by Newspot Nigeria.