The Express Gazette
Thursday, January 1, 2026

OpenAI and Meta update chatbots to better assist teens in distress

Companies add parental controls and adjust response models after recent concerns about chatbot handling of users in crisis

SAN FRANCISCO — OpenAI and Meta announced changes to how their artificial intelligence chatbots respond to teenagers and other users showing signs of severe emotional distress, saying the updates are intended to route urgent conversations to more capable systems and give parents additional controls.

OpenAI said Tuesday it will roll out new controls that allow parents to link their accounts to a teen’s account, choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress.” The company said those changes will go into effect this fall. OpenAI also said that, regardless of a user’s age, its chatbots will redirect the most distressing conversations to more capable AI models that can provide a better response.

Meta said it is adjusting how its chatbots respond to teenagers and to users showing signs of mental or emotional distress, without providing a detailed public timeline for changes. Both companies framed the updates as steps to improve safety and to ensure that systems escalate situations that may require more skilled intervention.

The announcements come amid heightened scrutiny of how generative AI systems handle sensitive topics, and follow reports and legal actions that have questioned whether current chatbot responses are adequate in crisis situations. The companies did not disclose technical specifics about how the systems will detect distress or how escalation to other models will operate in real time.

In its description of the parental controls, OpenAI indicated it will give guardians the ability to disable selected features and to be alerted in acute situations; it did not specify which features can be disabled or how notifications will be delivered. Meta provided fewer public details in its announcement, saying only that it is modifying responses for teens and for users showing signs of distress.

Advocates for online safety and mental health professionals have for years urged technology firms to develop clearer protocols for detecting and responding to suicidal ideation and other acute risks in online interactions. Companies that operate conversational AI face a balancing act between protecting user privacy, avoiding overreach into family dynamics, and ensuring vulnerable users receive timely and appropriate help.

The statements from the two firms came a week after the parents of 16-year-old Adam Raine filed a lawsuit that has drawn attention to the role of automated systems in crisis responses. Company officials declined to discuss pending litigation.

Editor’s note: This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

