The Express Gazette
Monday, December 29, 2025

Experts warn AI toys for infants could affect brain development

As AI-driven products for babies enter the market, researchers and policymakers call for safeguards amid concerns about early development.


A wave of AI-powered products aimed at infants and toddlers is accelerating, even as policymakers weigh age-verification rules and schools discuss technology use. OpenAI has announced a partnership with Mattel to bring age-appropriate AI toys to market, and xAI has introduced Baby Grok, a chatbot for six-year-olds. Infant-directed versions may be next. Critics warn that such products could disrupt early brain development and call for immediate safeguards before more devices reach nurseries and daycare centers.

From birth, the human brain is primed for social interaction, and daily exchanges with caregivers help shape the neural networks that support bonding, language, emotion regulation and cognitive growth. These interactions are not optional; they are the biological foundation of learning. In the caregiver-infant dyad, babies develop through complex duets of touch, eye contact, words and coos. Beneath it all, neurons fire and oxytocin receptors activate, forming crucial connections. Researchers warn that derailing these processes risks compromising the foundation of human potential.

AI bots may sound and act human, but they are not. Infants arrive with an innate drive to engage socially, recognizing their caregiver’s voice as early as the third trimester and seeking connection from the moment of birth. Generative AI agents, by contrast, are built on large language models; they may simulate emotional content, but they lack the physical and physiological hallmarks of human beings. Even a chatbot marketed as mimicking a parent’s voice lacks the multisensory cues that babies rely on for learning, from scent and temperature to micro-expressions. The question is not only what a bot can do, but what a baby’s brain perceives as real.

One key concern is timing. Temporal contingency—the back-and-forth rhythm of communication—is vital to development. A caregiver’s smile prompts a smile in return; as the rhythm evolves, children learn how to navigate language and social interaction. AI interactions, however, often rely on mechanical precision or optimized rhythms, not the just-right variability that infants need. Will children raised with bots be prepared for the unpredictability of real human relationships?

Another concern is emotional complexity. Human caregivers offer emotional depth, including missteps and recovery. Children need to encounter difficult emotions—both theirs and others’—to learn to regulate them and build resilience. A chatbot, endlessly patient and consistent, does not provide the same emotional scaffolding.

Supporters say AI can help with early diagnosis of developmental delays, personalized learning and reduced burdens for families and teachers, but many experts caution that benefits must be weighed against potential developmental costs.

Advocates urge immediate transparency standards and dedicated research into AI’s impact on infants and toddlers before such products reach the market. Child-development organizations like Children and Screens have long called for evidence-based guardrails to protect kids’ health in the digital age. They argue that governments must draw clear boundaries around AI use with children under age 3, who are undergoing the most rapid and sensitive brain development of their lifetimes, and that no AI-powered product should enter a nursery or daycare without rigorous safety testing and oversight.

The argument is not merely hypothetical. Romania’s orphan crisis, in which thousands of children grew up in institutions without stable caregivers, showed that nurture matters as much as resources like food and shelter; the cognitive, social and emotional deficits those children suffered underscored the importance of human interaction in early life. Earlier this month, more than 150 scientists issued a global warning that AI threatens fundamental social processes that shape healthy humans. As the United Nations launches its first international panel on AI governance, researchers urge that babies be included in the conversation.

Kathy Hirsh-Pasek, a Brooklyn-based psychologist who is a professor at Temple University and a fellow at the Brookings Institution, has argued that babies do not need better bots, but better boundaries. Her call, echoed by other researchers, centers on creating norms and safeguards that ensure AI supports rather than supplants the essential early relationships that drive development. The field now faces a pivotal question: how to harness the benefits of AI while protecting the most vulnerable stage of human growth.

