The Express Gazette
Wednesday, December 31, 2025

OpenAI says ChatGPT could alert authorities about suicidal youths after teen's death

Sam Altman signals a policy shift following a lawsuit and introduces expanded parental controls and crisis detection measures

Technology & AI

OpenAI’s CEO Sam Altman said the company could begin notifying authorities when young users express serious suicidal intent and parents cannot be reached, marking a potential shift in the company’s approach to safety and user privacy.

Altman made the remarks in a television interview, saying it is “very reasonable for us to say in cases of young people talking about suicide, seriously, where we cannot get in touch with the parents, we do call authorities.” The announcement followed the April death of 16-year-old Adam Raine of California and a subsequent lawsuit by his family that alleges an interaction with ChatGPT provided detailed instructions for suicide.

OpenAI previously leaned on crisis-response messaging and referrals to hotlines when users displayed suicidal ideation. After Raine’s death, the company said it would add safety features that allow parents to link their accounts to a child’s, disable functions such as chat history and enable alerts when the model detects “a moment of acute distress.” It remains unclear which law enforcement or emergency-response agencies would receive notifications or what user information, if any, would be shared under Altman’s proposed policy.

Altman framed the possible practice as a trade-off with privacy in cases involving minors, and said the company has seen safeguards degrade in longer conversations. “Maybe we could have said something better. Maybe we could have been more proactive,” he said, acknowledging limits to current protections.

[Image: Person browsing ChatGPT]

OpenAI has publicly acknowledged that its safety mechanisms work best in short, common exchanges, and that in prolonged interactions parts of the model’s safety training can become less reliable. The company has faced mounting scrutiny after several high-profile cases alleging harmful outcomes following interactions with chatbots. In a separate case filed in 2024, the mother of a 14-year-old who died by suicide sued Character.AI, alleging her son had been influenced by a chatbot modeled on a fictional character.

Experts who study AI and adolescent health said the incidents highlight risks as teens increasingly turn to conversational agents for companionship and mental health support. A Common Sense Media survey cited by advocacy groups found that roughly 72% of American teens use AI, and about one in eight have used it for mental health help. Ryan K. McBain, a professor of policy analysis, said the prevalence of use underscores the need for “proactive regulation and rigorous safety testing” before such tools are widely embedded in adolescents’ lives.

OpenAI’s potential policy to contact authorities represents a departure from its former standard response, which typically directed users expressing suicidal thoughts to crisis hotlines and resources. Altman said the company also plans to clamp down on users seeking to bypass safeguards by posing as researchers or fiction writers to solicit detailed self-harm instructions.

The Raine family’s lawsuit alleges that the chatbot provided the teen with a “step-by-step playbook” for suicide, including instructions for hanging and help composing a note. OpenAI has acknowledged that its safeguards are not flawless, while highlighting efforts to improve detection and parental controls. The company has not disclosed the technical details of new guardrails or how it would determine when an intervention warrants notifying authorities.

Advocates and legal experts say questions remain about what notification would mean in practice. Key unanswered issues include which agency would be contacted, what information would be transmitted, how quickly a response could be expected and how false positives would be handled. Civil liberties groups have cautioned that automatic reporting could carry risks for young people in volatile home situations or those who might face harm if authorities are notified.

The discussion takes place amid broader debates about regulatory oversight for AI systems that interact with vulnerable populations. Some researchers say external safety trials and third-party audits should be required before generative models are deployed at scale. Others argue that platform-level tools such as parental account linking and improved crisis detection can reduce risk if paired with clear policies and accountability.

OpenAI faces legal and reputational pressure as it expands the capabilities and user base of its systems. The company has said it is iterating on safety features and is open to regulatory frameworks that address the risks of conversational AI. For now, Altman’s comments underscore a company grappling with how to balance user privacy, parental concerns and urgent safety needs when its models are used by adolescents.

As OpenAI considers operationalizing authority notifications, experts say the measures would need careful design and oversight to avoid unintended harms and to ensure timely, effective responses for youths in crisis.
