Instagram rolls out AI-driven age enforcement in the UK
Meta uses facial‑analysis AI to flag accounts likely belonging to under-18s and automatically place them in Teen Accounts with parental controls.

Instagram began rolling out artificial intelligence in the United Kingdom to identify accounts that appear to belong to users under 18, even if the birthday listed on the profile suggests the user is older.
The rollout, which began Monday, is part of a broader safety push to shield younger users from predators and scams by ensuring those aged 13 to 17 remain in Instagram's Teen Account environment. The system uses AI working behind the scenes to assess signals that a user may be under 18, including facial features in photos and other indicators of age. When the software flags an account as likely belonging to someone under 18, the account is automatically enrolled in a Teen Account, Instagram's age-appropriate version of the app.
Teen Accounts are private by default: teens must approve new followers, their content is hidden from strangers, and messaging is restricted to people they follow or are already connected to. Content restrictions limit sensitive material in Explore and Reels, and hidden-word filters remove offensive phrases from comments and direct messages. A suite of parental supervision tools lets guardians see who their teen messages, view what topics they engage with, and set limits on time spent in the app. In some cases, users under 16 need a parent or guardian to change these settings.
Users flagged as under 18 can review the decision and appeal it within the app, either by submitting a selfie to be checked by the third-party age-verification firm Yoti or by providing a copy of an ID.
Instagram has already been testing similar age-detection measures in the United States, and the company said the capability will expand to the United Kingdom, Canada and Australia starting this week. Alongside the rollout, Instagram will begin sending UK parents information on how to talk with their teens about the importance of providing the correct age online.
The shift reflects a broader push by Meta, which has owned Instagram since 2012, to make its platforms safer for younger users. The company has argued that understanding age online is an industry‑wide challenge and that proactive safeguards help place teens in age‑appropriate online experiences. In recent years, Meta has introduced a series of safety measures, including banning messages from strangers to users under 18 on Instagram and Facebook Messenger and automatically hiding certain content related to suicide, self‑harm and eating disorders from under‑18 audiences.
Yet the policy has drawn skepticism from some experts, with critics arguing that 13 may be too young to use a smartphone or social media at all. Former U.S. Surgeon General Vivek Murthy has cautioned that social media at that age can harm self-worth and relationships, saying in 2023 that, based on the available data, he believed 13 was too early to join. Still, Meta says the new AI-driven enforcement aims to reduce risky interactions and steer teens toward safer, age-appropriate experiences.
As the technology scales, privacy advocates and digital-safety researchers will be watching how the age-detection tools perform in practice, how often false positives occur, and how transparent the appeals process remains for families navigating these safeguards. The move marks another step in the evolution of technology and AI policy as platforms balance safety, privacy and user autonomy.