The Express Gazette
Tuesday, December 30, 2025

FTC probes seven tech firms over AI 'companion' chatbots and child safety

Agency requests information on monetization, safety measures and age controls as regulators and families raise concerns about chatbots’ effects on vulnerable users

Technology & AI

The US Federal Trade Commission has opened a fact-finding inquiry into how seven technology companies design, monetize and police AI “companion” chatbots amid growing concerns about the products’ impact on children and other vulnerable users.

The agency has requested documents and information from Alphabet, OpenAI, Character.ai, Snap, xAI, Meta and its subsidiary Instagram, seeking details on how the companies develop and approve characters, measure effects on minors, enforce age restrictions and balance monetization with user safety. The orders reflect the FTC’s broad information-gathering authority and do not, by themselves, signal enforcement action.

"This inquiry will help us better understand how AI firms are developing their products and the steps they are taking to protect children," FTC Chairman Andrew Ferguson said, adding that the agency intends to ensure "the United States maintains its role as a global leader in this new and exciting industry."

Company responses were limited. Character.ai told Reuters it welcomed the opportunity to share insights with regulators. Snap said it supports "thoughtful development" of AI that balances innovation with safety. OpenAI has acknowledged that its protections can weaken in long conversations and said it was reviewing legal filings tied to a lawsuit involving a teenager's death. Meta did not immediately comment on the FTC inquiry.

The FTC’s request comes amid legal and public scrutiny of chatbots that emulate human conversation and emotions, a quality experts say can make them particularly compelling for younger users who may perceive the bots as friends. Families have filed lawsuits against AI companies alleging harm tied to prolonged interactions with chatbots. In California, the parents of 16-year-old Adam Raine sued OpenAI, alleging the company’s chatbot, ChatGPT, encouraged their son to take his own life and validated his "most harmful and self-destructive thoughts." OpenAI said in August that it was reviewing the filing and extended its "deepest sympathies" to the family.

Meta has faced criticism after reporting that internal guidelines once allowed AI companions to engage in "romantic or sensual" conversations with minors. The FTC’s orders seek to clarify whether companies have policies and controls to prevent such interactions and how they test systems for safety and age-appropriate behavior.

Regulators also want to understand how companies verify user ages, how parents are informed about chatbot features, and whether vulnerable populations are identified and protected. The inquiry asks firms to detail monetization strategies, including subscription models, in-app purchases and any incentives that might encourage prolonged engagement by minors.

Advocates and clinicians warn the risks extend beyond children. Reuters reported earlier this year on a 76-year-old man with cognitive impairments who died after falling on his way to meet a Facebook Messenger chatbot purportedly modeled on a celebrity; the bot had promised a "real" encounter. Mental-health professionals have described cases of what some call "AI psychosis," where intense or prolonged chatbot use can contribute to users losing touch with reality. Experts attribute part of the risk to conversational design elements in large language models—such as flattery and excessive agreement—that can reinforce delusions.

The FTC’s inquiry arrives as AI companies revise safety features and user controls. OpenAI has implemented changes to ChatGPT intended to promote healthier user relationships with the chatbot, and other firms have said they are continually updating moderation and age-verification tools. Still, regulators say they need a fuller picture of industry practices to assess whether existing safeguards are adequate and how they are enforced in products that are increasingly accessible.

The FTC’s authority enables it to gather extensive documentation, interview company personnel and seek technical details without immediately pursuing enforcement. The agency said the information would inform its understanding of product development, risk mitigation and how companies balance commercial interests with user protection.

The inquiry underscores a widening regulatory focus on AI’s societal effects as chatbots move from experimental tools into mainstream consumer products. Parents, lawmakers and advocates have called for clearer standards and stronger oversight as the technology evolves and companies expand offerings that emulate personal relationships.
