The Express Gazette
Tuesday, December 30, 2025

FTC Opens Inquiry Into AI Chatbots Used as Companions Over Child Safety Concerns

Agency seeks information from Alphabet, Meta, Snap, Character.AI, OpenAI and xAI amid lawsuits and research showing harms to minors


The Federal Trade Commission has launched a formal inquiry into several major technology and artificial intelligence companies to assess the safety of AI chatbots when they are used as companions by children and teenagers.

The agency said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies (Character.AI), ChatGPT maker OpenAI and xAI. The FTC said it wants information on what steps, if any, the companies have taken to evaluate chatbot safety for companion use, to limit such products’ use by children and teens and to apprise users and parents of the risks associated with the chatbots.

The inquiry comes as children increasingly turn to AI chatbots for homework help, personal advice and emotional support, even though some studies and reports have shown chatbots can provide harmful guidance on topics such as drugs, alcohol and eating disorders. The move follows lawsuits and incidents that have raised questions about whether chatbots can pose emotional or physical risks to young users.

One wrongful-death lawsuit filed by the mother of a teenage boy in Florida alleges her son developed an emotionally and sexually abusive relationship with a chatbot and later died by suicide; the complaint names Character.AI. Separately, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI and CEO Sam Altman alleging ChatGPT assisted the California teen in planning and taking his own life earlier this year, according to court filings.

EDITOR'S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

Character.AI responded to the FTC announcement, saying it looked forward to "collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology." The company said it has invested significantly in trust and safety, including an "entirely new under-18 experience" and a Parental Insights feature, and that it places prominent disclaimers in every chat reminding users that a character is not a real person and that its responses should be treated as fiction.

Meta declined to comment on the inquiry. Alphabet, Snap, OpenAI and xAI did not immediately respond to requests for comment.

The FTC asked the companies for information including safety testing results, internal risk assessments, user age-verification processes, parental controls and any measures taken to prevent or mitigate harmful interactions with minors. Agency officials said the letters seek to clarify whether companies have evaluated the risks posed when chatbots are positioned as companions and what steps they have taken to limit minors’ exposure to potential harms.

Earlier this month, OpenAI and Meta announced changes to how their chatbots respond to teenagers who ask about suicide or show signs of severe emotional distress. OpenAI said it will roll out parental controls allowing parents to link accounts with their teens and choose which features to disable, and that parents can receive notifications when the system detects a teen in acute distress. The company also said it will attempt to redirect the most distressing conversations to more capable AI models, with the changes taking effect this fall.

Meta said it is blocking its chatbots from engaging teens on self-harm, suicide, disordered eating and certain romantic conversations and instead directs users to expert resources. Meta already offers parental controls on teen accounts.

Researchers and child welfare advocates have raised concerns for years that AI chatbots, when used as emotional companions, can produce inaccurate or dangerous guidance and may normalize risky behaviors. The FTC's inquiry reflects growing regulatory scrutiny of AI products that interact directly with consumers, particularly vulnerable populations such as children and teenagers.

The letters mark another step in federal oversight of generative AI as policymakers and courts grapple with the technology's rapid development and real-world impacts. The FTC, which enforces consumer-protection laws, did not say whether the inquiry will lead to enforcement action; such fact-finding requests often precede further regulatory or legal steps but can also be used to inform guidance, rulemaking or voluntary industry changes.

Companies named in the inquiry are expected to respond to the FTC's questions and provide documentation and data under the agency's request timeline. The outcome of the inquiry could influence how developers design safety features, parental controls and content-moderation policies for AI systems used by minors.

