OpenAI to Roll Out Parental Controls for ChatGPT as Lawsuit Over Teen’s Death Draws Scrutiny
New features will let parents link accounts, manage chatbot behavior and receive alerts if a child shows signs of acute distress

OpenAI said it will begin rolling out parental controls for its ChatGPT chatbot within the next month, a move it described as part of efforts to improve how its models recognize and respond to signs of mental and emotional distress. The announcement, posted Tuesday on the company’s blog, comes as OpenAI faces heightened scrutiny and a lawsuit alleging the chatbot played a role in a teenager’s death by suicide.
Under the new controls, parents will be able to link their account to a child’s account via an email invitation and shape how ChatGPT responds to that child’s prompts, OpenAI said. The company said parents will receive an alert if the chatbot detects that their child is in a “moment of acute distress,” and will be able to disable specific features, including memory and chat history.
In its blog post, OpenAI said the parental controls are meant to give families more agency over how the tool behaves for young users while the company improves detection and response capabilities for mental-health-related prompts. The company also said it will continue to work with external experts to refine the approach but did not specify additional features or a timeline beyond the coming month.
OpenAI had previously said it was considering allowing teens to add a trusted emergency contact to their accounts; the company’s most recent post did not outline concrete plans to implement that option. The company framed the parental-controls rollout as an initial step and said it expected to learn from usage and expert guidance as it refines safeguards.
The announcement follows mounting legal pressure and public concern over how AI chatbots handle sensitive mental-health topics. A lawsuit filed this year alleges that interactions with ChatGPT were a factor in a teenager’s death by suicide; the case has drawn attention from legislators, regulators and mental-health experts concerned about how conversational AI responds to vulnerable users. OpenAI has not admitted wrongdoing in that or other lawsuits, and the company said the parental controls are intended to reduce risk while it improves its models’ behavior.
Technology companies and mental-health advocates have said that chatbots present both opportunities and hazards: they can provide immediate information and support, but they may also produce responses that are misleading, insensitive or otherwise harmful if the underlying systems misinterpret a user’s emotional state. OpenAI’s move to add account-linking and alerting features aligns with measures other tech firms have adopted to create age-appropriate experiences and give guardians more oversight of minors’ online activity.
The parental-controls rollout raises questions about implementation, including how the system will detect distress, what thresholds will trigger alerts, how alerts will be delivered, and how the company will balance safety with privacy. OpenAI did not provide technical specifics in its post, including which signals the model will use to detect acute distress or how frequently alerts might be triggered.
Regulators in several countries have signaled interest in AI safety and transparency, and some lawmakers have called for stronger guardrails for systems that interact with children. The new features may figure in broader discussions about industry best practices, potential regulation and litigation that examines whether companies took adequate steps to prevent foreseeable harms.
OpenAI said the parental controls will be enabled progressively and that it will monitor outcomes and adjust policies and systems as it gathers data and feedback. The company encouraged families and experts to provide input as the features are deployed, but it did not commit to specific deadlines for additional measures beyond the initial rollout window.
As the company implements the new controls, the lawsuit and public debate are likely to keep attention focused on how AI services handle vulnerable users and how firms document and test safety features. OpenAI’s announcement signals a shift toward product-level family controls for a widely used conversational AI, while leaving several operational and policy questions unresolved as the features begin to appear in accounts next month.