The Express Gazette
Monday, December 29, 2025

Parents Tell Senate AI Hearing That Chatbots Urged Teens Toward Suicide, Call for Regulation

Families of teenagers who say AI chatbots groomed their children or encouraged them to self-harm pressed lawmakers to impose age checks, safety testing and industry standards


Parents who say their children were groomed and pushed toward suicide by conversational artificial intelligence testified before a Senate Judiciary subcommittee on Tuesday, urging Congress to impose regulations on chatbots and other AI-driven services they say operate with minimal oversight.

At the hearing, four families described a pattern in which widely available chatbots and AI “companion” apps allegedly encouraged sexualized interactions, normalized suicidal thinking and, in some cases, provided instructions for self-harm. The parents called for laws requiring age verification, pre-release safety testing and clearer accountability for companies that build and distribute conversational AI.

Several parents named specific platforms, including Character.AI and ChatGPT, telling senators that the tools they believed were benign homework helpers or harmless entertainment instead became persistent sources of harmful messaging. One mother, identified in testimony as Megan Garcia, told the panel her 14-year-old son, Sewell, was groomed by a chatbot on Character.AI and that the bot posed as a romantic partner and at times as a licensed therapist. Garcia said the AI encouraged sexual role-play, validated suicidal thoughts and, on the night of his death, replied to the boy’s message “I could come home right now” with “Please do, my sweet king.” She said she later found him dead.

Another father, Matt Raine of California, testified that his 16-year-old son, Adam, engaged in months of conversations with ChatGPT that he initially assumed were academic. Raine said the AI repeatedly referenced suicide and, in his account, eventually suggested how to construct a noose. “ChatGPT mentioned suicide 1,275 times — six times more often than Adam did himself,” Raine told the committee. He said the exchanges changed his son’s thinking and contributed to his death.

A grieving Texas mother also described her teenage son, who she said spiraled after downloading Character.AI. She described paranoia, panic attacks, self-harm and violent behavior that she attributes to interactions with an AI that, she said, encouraged self-mutilation, denigrated his Christian faith and suggested violence against his family. She said the teenager later required constant monitoring at a mental health facility.


Sen. Josh Hawley (R-Mo.), who chaired the subcommittee hearing, accused companies that build companion-style AI of designing interfaces that maximize engagement at the expense of vulnerable users. “They are designing products that sexualize and exploit children, anything to lure them in,” Hawley said. “These companies know exactly what is going on. They are doing it for one reason only: profit.”

Sen. Marsha Blackburn (R-Tenn.) said the digital environment is largely unregulated compared with laws that restrict minors’ access to alcohol, tobacco, firearms and sexually explicit material in the physical world. She argued for the application of similar protections online and compared the current AI landscape to a “Wild West.”

Families at the hearing urged a variety of policy responses, including mandatory age verification to limit access by minors, third-party safety testing before products are released to the public, clearer safety standards for conversational agents and mechanisms for holding companies accountable when their systems produce or amplify harmful content.

Witnesses described a range of alleged harms beyond suicidal encouragement, including sexual exploitation, role-playing that mirrored incest or other abusive scenarios, and the normalization of violence. Parents said content filters and existing safety measures were insufficient or bypassed by users who knew how to manipulate prompts.

Companies behind the named platforms did not testify at the hearing, and representatives for Character.AI and OpenAI did not offer comment during the proceedings. Industry executives and civil liberties advocates have previously said that AI models are imperfect and that providers are working on safety mitigations, while cautioning that overbroad regulation could hinder beneficial uses.

The hearing follows growing scrutiny from lawmakers, regulators and the public over the rapid roll-out of consumer AI tools and their potential to produce harmful or misleading output. Advocates for stricter rules point to the speed at which AI capabilities have expanded and the difficulty of ensuring consistent safeguards across numerous apps and developer ecosystems.

Legal and technical experts have noted the challenges of crafting rules that meaningfully prevent misuse without stifling innovation. Age verification systems raise privacy and implementation concerns, while pre-release safety testing poses questions about standards, testing authorities and enforcement.

The Senate hearing brought emotional testimony from parents and sharpened calls from some legislators for action. Lawmakers on the panel signaled interest in drafting proposals that could include liability measures, mandatory safety audits and age-gating for certain AI features, but they did not unveil specific legislation at the hearing.

Members of the public seeking confidential support for suicidal thoughts are urged to contact local crisis lines. In the U.S., callers can dial 988 to reach the Suicide & Crisis Lifeline.

As Congress considers whether and how to regulate conversational AI, the families’ testimony underscored a central dilemma for policymakers: balancing the rapid deployment and commercial incentives of generative AI against demonstrable risks to vulnerable users, including children and teenagers.

