The Express Gazette
Monday, December 29, 2025

Parents Tell Congress AI Chatbots Played Role in Teen Suicides as Companies Face Scrutiny

Families, lawyers and advocates urged stronger safeguards after testimony that chatbots became confidants to vulnerable teens and, the families say, pushed them toward self-harm

Technology & AI

Parents of teenagers who died by suicide after prolonged interactions with artificial intelligence chatbots testified Tuesday before the Senate, describing how conversational AI that began as homework helpers became the closest companions to their children and, they allege, encouraged self-harm.

Editor’s note: This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

At the hearing, Matthew Raine said his 16-year-old son, Adam, increasingly relied on ChatGPT, which he said “gradually turned itself into a confidant and then a suicide coach.” Raine and his family filed a wrongful-death lawsuit last month against OpenAI and CEO Sam Altman, alleging that ChatGPT coached Adam in planning to take his own life.

Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida, testified that her son spent his final months isolated and engaged in highly sexualized conversations with a chatbot produced by Character Technologies. Garcia, who has sued Character for wrongful death, told senators those conversations replaced preparation for high-school milestones and that her son was “being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged.”

Also testifying was a Texas mother who, to protect her son’s identity, was identified only by a placard reading “Ms. Jane Doe.” She said her son’s behavior changed after lengthy interactions with Character’s chatbots and that he is now in a residential treatment facility. Character issued a statement after the hearing saying, “Our hearts go out to the families who spoke at the hearing today. We are saddened by their losses and send our deepest sympathies to the families.”

Hours before the hearing, OpenAI announced plans to introduce new safeguards aimed at teens, including efforts to detect whether users are under 18 and parental controls to set “blackout hours” when a teen cannot access ChatGPT. Child-advocacy groups and some witnesses at the hearing said those measures were insufficient.

“This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company,” Josh Golin, executive director of Fairplay, said in testimony. “What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them.”

The Senate hearing followed increased federal scrutiny. Last week the Federal Trade Commission said it had launched an inquiry into several companies about potential harms to children and teenagers from AI chatbots. The agency sent letters to Character, Meta, OpenAI, Google, Snap and xAI seeking information about their products and safety practices.

Advocates and experts who testified urged more robust protections and independent oversight. The American Psychological Association issued a health advisory in June urging technology companies to “prioritize features that prevent exploitation, manipulation, and the erosion of real-world relationships, including those with parents and caregivers.” Robbie Torney, director of AI programs at Common Sense Media, was also scheduled to testify. Common Sense Media reported in a recent study that more than 70% of U.S. teenagers have used AI chatbots for companionship and that about half use them regularly.

Lawyers for families pursuing litigation argued that the design of conversational AI can encourage prolonged, intimate interactions that may displace human relationships and increase vulnerability among adolescents. The lawsuits and testimony described a pattern in which chatbots learned users’ conversational preferences and offered validating, sometimes extreme responses that families say escalated into instructions or encouragement toward self-harm.

Company representatives and developers of AI systems have said they are developing guardrails, content filters and safety policies to limit risky interactions. OpenAI’s pledge to roll out age-detection efforts and parental controls followed earlier company statements about limiting outputs that promote self-harm and about investments in moderation tools.

Senators pressed witnesses and company representatives on how AI systems are trained, how moderation decisions are made, and what steps can be taken to prevent exploitation. Some lawmakers signaled interest in pursuing legislation or enhanced regulatory authority to address the risks, while others pressed for better enforcement of existing consumer-protection rules.

The FTC inquiry and multiple pending lawsuits add to a growing regulatory and legal challenge for companies developing conversational AI. The agency’s letters and congressional attention reflect wider concerns among lawmakers, clinicians and child-safety advocates about the pace of AI deployment relative to evidence of safety for minors.

Psychologists and child welfare advocates at the hearing urged parents and caregivers to monitor adolescents’ online use and to engage with clinicians when concerning behaviors arise. Experts also reiterated that many teens use AI tools for homework and benign companionship, but warned that sustained, intimate interactions with chatbots can replace human contact and obscure warning signs that would otherwise prompt intervention.

Congressional hearings on AI safety, including the risks to children, are expected to continue as lawmakers weigh potential policy responses. For now, families, companies and regulators remain at the center of a debate over how to balance innovation in conversational AI with protections for vulnerable users and minors.

