Study finds Instagram teen-safety tools largely ineffective, researchers say
Researchers say 30 of 47 teen-safety tools are ineffective or no longer exist, while Meta defends protections as improving safety for young users.
A new study by U.S. researchers and child-safety groups concludes that Instagram's teen-safety tools fail to stop many harmful posts and interactions, including content related to suicide and self-harm and content encouraging risky behavior among underage users. The researchers tested 47 safety features tied to teen accounts and found that 30 were either substantially ineffective or no longer exist; only eight functioned effectively, and nine reduced harm but with notable limitations. The research, conducted by Cybersecurity for Democracy and supported by groups including the Molly Rose Foundation, was based on controlled, fake teen accounts set up to observe how the platform's protections performed in practice.
The study covers the period after Instagram introduced teen accounts in 2024, a move Meta said would bring "automatic safety protections and straightforward parental controls" and which was extended to Facebook and Messenger in 2025. Researchers said some of the most troubling examples included posts describing demeaning sexual acts and search autocomplete suggestions promoting suicide, self-harm, or eating disorders. They also reported that teen accounts could send offensive messages to one another and that the platform sometimes suggested adult accounts for young users to follow. In screen recordings shared with BBC News, researchers observed content from users who appeared to be under 13, including a video in which a young girl asks viewers to rate her attractiveness. The researchers described the platform's algorithm as incentivizing risky sexualized behavior for likes and views among younger users.
“These failings point to a corporate culture at Meta that puts engagement and profit before safety,” said Andy Burrows, chief executive of the Molly Rose Foundation, which advocates for stronger online-safety laws in the U.K. The foundation was created after the death of Molly Russell, a 14-year-old who took her own life in 2017 after viewing harmful content online. Burrows said the findings show that the company's teen-safety measures are not sufficient to curb the online risks faced by minors.
The researchers emphasized that the project tested not just content-moderation tools but a broader suite of safety features, including time-management prompts, content filters, and protective default settings for teen users. They reported that only eight tools worked effectively, meaning a substantial portion of content and interactions that violated Instagram's own safety rules could still reach young people. Beyond the direct content concerns, the study noted that even some tools that did reduce harm came with caveats and limitations that could blunt their impact in real-world use.
The researchers said the screen recordings they shared with BBC News included footage of adolescents who appeared to be under 13 posting videos and engaging with the platform in ways that could violate safety and age-appropriate-use guidelines. They argued that Instagram's current configuration could, in some cases, encourage or fail to deter unsafe behavior, and that the platform's algorithm and design choices may contribute to a permissive environment for young users seeking attention or engagement.
Meta responded by disputing the study's methodology and conclusions. A spokesperson said the report misrepresents how the company's teen-content settings operate and argued that teens placed under these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night. The spokesperson said parents have "robust tools at their fingertips" and that the company would continue improving its tools while welcoming constructive feedback. Meta also said a claim about the Take A Break feature, an app-time-management tool, was inaccurate: the feature had not been removed but rather integrated into other protections for teen accounts.
The dispute comes as Meta faces ongoing scrutiny of its approach to child safety online. In January 2024, Meta's chief executive, Mark Zuckerberg, testified before the U.S. Senate about safety policies and apologized to parents who said their children had been harmed by social media. Since then, the company says it has rolled out additional measures to safeguard young users, but critics argue its safety tools still leave gaps. Dr. Laura Edelson, co-director of Cybersecurity for Democracy, said the tools have a long way to go before they are fit for purpose, and the study's authors argue they lag behind both user behavior and platform incentives.
The study's authors acknowledge Meta's efforts to improve safety features and parental controls but stress that the reliability and reach of these tools need more rigorous validation, and perhaps a rethinking of how the platform approaches teen safety. They also cautioned against drawing conclusions about any single safety feature from isolated observations, noting that online safety involves policy, design, and user-education elements that interact in complex ways. They urged social media platforms, policymakers, and safety advocates to collaborate on improving both technical protections and user awareness to reduce young users' exposure to harmful content.
As the technology industry confronts pressure over child safety, the debate over Instagram's teen-safety tools illustrates the tension between engagement metrics and user protections. For researchers and safety advocates, the study underscores the need for independent, transparent evaluation of protective measures and for updates that keep pace with evolving online behavior among youth. Meta's response and its continued safety investments will be watched closely by lawmakers, parents, and researchers who want to know whether the technology sector can deliver effective protections without hindering legitimate expression or parental oversight.