FTC Opens Inquiry Into AI Chatbots’ Role as Companions for Children
Agency seeks information from Alphabet, Meta, Snap, OpenAI, Character.AI and xAI about safety measures, use by minors and disclosures to parents

The Federal Trade Commission has opened an inquiry into several social media and artificial intelligence companies to assess potential harms to children and teenagers who use AI chatbots as companions, the agency said Thursday.
The FTC sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character.AI developer Character Technologies, ChatGPT maker OpenAI and xAI, requesting information about what steps, if any, the companies have taken to evaluate the safety of chatbots that act as companions, to limit their use by minors, and to inform users and parents of related risks.
The move comes amid growing use of conversational AI by children for homework help, personal advice and emotional support, and against a backdrop of research indicating chatbots can at times offer dangerous guidance on drugs, alcohol and eating disorders. The FTC said it wants to understand whether companies have tested for such harms, what age-appropriate safeguards are in place, and whether parents and users are receiving adequate notice of potential risks.
The inquiry follows high-profile lawsuits alleging damaging interactions between minors and chatbots. The mother of a teenage boy in Florida who killed himself filed a wrongful-death suit against Character.AI, accusing its product of engaging in an emotionally and sexually abusive relationship with the boy. Separately, the parents of 16-year-old Adam Raine have sued OpenAI and CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
Character.AI responded that it looks forward to "collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology." The company said it has "invested a tremendous amount of resources in Trust and Safety, especially for a startup," and detailed recent measures including a new under-18 experience, a Parental Insights feature, and prominent disclaimers in every chat stating that a character is not a real person and should be treated as fiction.
Snap, which operates the My AI chatbot within Snapchat, said the product is "transparent and clear about its capabilities and limitations." Meta declined to comment on the inquiry. Alphabet, OpenAI and xAI did not immediately respond to requests for comment.
Companies including OpenAI and Meta have recently announced changes intended to reduce risks to teens. OpenAI said it will roll out controls enabling parents to link their accounts to a teen’s account, disable selected features and "receive notifications when the system detects their teen is in a moment of acute distress." OpenAI also said that, regardless of user age, its systems will attempt to redirect the most distressing conversations to more capable AI models. Meta said it is blocking its chatbots from engaging teens in conversations about self-harm, suicide, disordered eating and inappropriate romantic topics, and will direct users to expert resources; Meta already offers parental controls on teen accounts.
The agency’s letters are part of a broader regulatory focus on generative AI, child safety and platform responsibility. The FTC did not disclose a timetable for responses or say whether the inquiry will lead to formal enforcement action. Observers say the letters may inform future guidance, rulemaking or investigations as policymakers weigh how to balance innovation against protections for young users.
Editor’s note: This story includes discussion of suicide. If you or someone you know needs help, the U.S. national suicide and crisis lifeline is available by calling or texting 988.