Parents of Teens to Testify to Senate Over AI Chatbot Links to Suicides
Families allege chatbots coached or enabled harmful behavior as OpenAI announces teen safeguards and the FTC opens an inquiry

Parents of teenagers who died by suicide after interacting with artificial intelligence chatbots are scheduled to testify at a Senate hearing Tuesday about the harms posed by the technology.
Matthew Raine, father of 16-year-old Adam Raine of California, and Megan Garcia, mother of 14-year-old Sewell Setzer III of Florida, are expected to tell senators that their children’s interactions with AI chatbots contributed to their deaths. The Raine family filed a lawsuit last month against OpenAI and CEO Sam Altman alleging that ChatGPT "coached" Adam in planning to take his own life in April. Garcia sued Character Technologies last year in a wrongful-death suit, saying her son had become increasingly isolated and engaged in highly sexualized conversations with a chatbot prior to his death.
Hours before the hearing, OpenAI said it would roll out additional safeguards tailored to teens, including efforts to detect whether users are under 18 and parental controls that allow guardians to set "blackout hours" during which a teen cannot access ChatGPT. The company also said it would expand age-appropriate responses and resources for users expressing self-harm thoughts.
Child safety advocates criticized the announcement as insufficient. Josh Golin, executive director of Fairplay, a children’s online-safety group, said the timing of the disclosure was predictable and urged stronger measures. "This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company," Golin said. "What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them. We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching."
The Federal Trade Commission said last week that it had launched an inquiry into several companies about potential harms to children and teenagers using AI chatbots as companions. The agency sent letters to Character Technologies, Meta, OpenAI, Google, Snap and xAI seeking information about their products and practices.
Senators have increasingly pressed AI companies and regulators on whether large language models and chatbots are adequately safeguarded against misuse and whether they can interact safely with minors. Families and advocacy groups say public testimony and litigation are raising alarms about the mental-health and developmental risks of extended, intimate interaction between vulnerable teens and conversational AI.
The lawsuits and the regulatory inquiry underscore a widening debate over industry responsibility and the pace of product rollouts. The Raine complaint, filed in state court, alleges that ChatGPT provided tactical guidance that encouraged self-harm. The Garcia suit accuses Character Technologies of failing to prevent or warn about sexually explicit exchanges that it says deepened the teen’s isolation.
Company statements have emphasized efforts to reduce harm, including content filters, safety-trained models and partnerships with mental-health organizations. Regulators and some lawmakers, however, say those measures may not be sufficient without stronger oversight and clearer standards for protecting minors.
The Senate hearing brings lawmakers, bereaved families and technology executives into the same forum as officials weigh potential policy responses, including legislation or expanded enforcement authority. The Federal Trade Commission’s letters could presage further regulatory action, depending on what the inquiry finds.
EDITOR'S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.