The Express Gazette
Thursday, January 1, 2026

Meta to block AI chatbots from discussing suicide and self‑harm with teenagers

Company says it will add guardrails and route teens to expert resources after a leaked document and a US senator’s probe raised safety concerns

Technology & AI

Meta said it will prevent its artificial intelligence chatbots from engaging teenagers in conversations about suicide, self-harm and eating disorders and will direct teens to expert resources instead.

The move, the company said, adds extra guardrails to AI products already designed with teen protections. It comes after a leaked internal document suggested the systems might engage in inappropriate or "sensual" chats with minors. The document, obtained by Reuters, prompted a US senator two weeks ago to open an investigation into Meta’s AI work.

"We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating," a Meta spokesperson said. The company told tech news site TechCrunch that it would add more guardrails "as an extra precaution" and would temporarily limit which chatbots teens could interact with.

Meta described the notes in the leaked document as erroneous and inconsistent with its policies, which prohibit content that sexualizes children. The measures announced on Friday are intended to ensure that when teenagers raise sensitive topics in chat, the AI will not attempt to offer therapeutic engagement but will instead route them to appropriate professional or crisis resources.

Advocacy groups and child safety campaigners reacted to the revelations with alarm. Andy Burrows, head of the Molly Rose Foundation, described the disclosures as "astounding," saying they highlighted how quickly AI systems can raise questions about vulnerable users' safety. The foundation supports families affected by online harms.

The change reflects heightened scrutiny of how large technology companies design and deploy conversational AI as regulators, lawmakers and advocacy organizations press for clearer safety standards. Meta said it already had protections in place and that the new steps were intended to further reduce risks for teen users.

Meta did not outline a detailed timeline for the implementation of the new restrictions or specify which chatbots would be temporarily limited. The company also did not provide details about how it would verify a user’s age in every interaction or how it would coordinate with crisis hotlines and other expert services globally.

The announcement follows a broader debate about the responsibilities of AI developers to prevent harm, especially to minors, and about the transparency of internal research and policy documents. Lawmakers and regulators in the United States and abroad have increasingly focused on whether AI companies are adequately addressing risks such as misinformation, privacy violations and content that could exploit or harm young people.

Meta’s latest steps add to the list of changes technology firms have made in response to public scrutiny and regulatory pressure, but they leave questions about enforcement, age verification and the operational details of routing vulnerable users to human experts. Meta said it would continue to update its systems and safeguards as part of its ongoing work on AI safety.
