New York outlines rules to block addictive algorithmic feeds for children under SAFE Act
Attorney General Letitia James proposes age‑verification and parental‑consent standards and limits on overnight notifications for users under 18

New York Attorney General Letitia James on Monday released proposed regulations to implement the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, setting standards for age verification and parental consent as the state moves to bar algorithmically curated social media feeds for users under 18.
The SAFE law, passed last year, prohibits social media companies from showing personalized, algorithmic feeds to minors unless a parent or guardian consents. Under the proposed rules, apps that now rely on algorithms to surface content — including services such as TikTok and Instagram — would be required to limit feeds for under‑18 users to posts from accounts those users explicitly follow. The law also bars companies from sending notifications to users under 18 between midnight and 6 a.m.
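Neither the statute nor the proposed rules prescribe an implementation, but the two restrictions reduce to straightforward gating logic. The Python sketch below is a minimal illustration of that logic, not a description of any platform's actual system; the User and Post structures, their field names and the rank_algorithmically hook are all hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, time

# Hypothetical data model; field names are illustrative only.
@dataclass
class User:
    age: int
    following: set = field(default_factory=set)
    has_parental_consent: bool = False

@dataclass
class Post:
    author_id: str
    created_at: datetime

NIGHT_START = time(0, 0)  # midnight
NIGHT_END = time(6, 0)    # 6 a.m.

def build_feed(user: User, candidates: list, rank_algorithmically) -> list:
    # Minors without parental consent get a reverse-chronological feed
    # limited to accounts they explicitly follow.
    if user.age < 18 and not user.has_parental_consent:
        followed = [p for p in candidates if p.author_id in user.following]
        return sorted(followed, key=lambda p: p.created_at, reverse=True)
    # Otherwise defer to the platform's personalized ranker (a stand-in).
    return rank_algorithmically(candidates, user)

def may_notify(user: User, now: datetime) -> bool:
    # Suppress notifications to minors between midnight and 6 a.m.
    if user.age >= 18 or user.has_parental_consent:
        return True
    return not (NIGHT_START <= now.time() < NIGHT_END)

The half-open interval in may_notify treats 6:00 a.m. itself as permitted, one of the boundary choices the final regulations would presumably settle.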
James's proposed regulations outline allowable methods for determining a user's age and for securing parental consent. The attorney general's office said companies may confirm age using a variety of existing methods, provided the methods are effective and protect user data. Examples cited include requesting an uploaded image or verifying an email address or phone number and checking that information against other data sources.
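Because the rules permit a variety of methods rather than mandating one, an implementation suggests a pipeline of interchangeable checkers. The sketch below is one hypothetical arrangement; the checker and purge callables are assumptions, standing in for whatever image-analysis or data-source lookups a platform actually uses.

def verify_age(submission, checkers, purge) -> bool:
    # Each checker returns an estimated age in years, or None if it
    # cannot make a determination from the submission.
    try:
        for check in checkers:
            estimated_age = check(submission)
            if estimated_age is not None:
                return estimated_age >= 18
        return False  # no method succeeded; treat the user as unverified
    finally:
        # Stand-in for the data-protection requirement: discard the
        # submitted verification material regardless of the outcome.
        purge(submission)

A platform might supply checkers such as an image-based age estimator followed by email and phone cross-referencing, ordered from least to most intrusive.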
The rules would also require that users under 18 who wish to receive algorithmic feeds or nighttime notifications first give companies permission to request consent from a parent or guardian. The attorney general's office said the proposal will be open for a 60‑day public comment period; once the regulations are finalized, social media companies will have 180 days to comply.
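The consent mechanism is thus a two-step handshake: the minor authorizes contact, and only then does the platform request consent from the guardian. A minimal sketch, with all names hypothetical:

def request_guardian_consent(minor, guardian_contact, send_request) -> str:
    # Step 1: the minor must first permit the platform to contact a
    # parent or guardian at all.
    if not minor.permits_consent_request:
        raise PermissionError("minor has not authorized a consent request")
    # Step 2: send the request (e.g., a signed confirmation link); the
    # restricted feature is enabled only after the guardian confirms.
    return send_request(guardian_contact)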
Supporters of the SAFE Act have argued that curated, data‑driven feeds contribute to rising rates of anxiety and depression among children and teenagers by increasing the amount of time young people spend on platforms and by surfacing engaging, attention‑holding content. "Children and teenagers are struggling with high rates of anxiety and depression because of addictive features on social media platforms," James said in releasing the rules.
Opponents include digital‑privacy and free‑speech advocacy groups, which have criticized online age‑check laws as intrusive and potentially damaging to privacy. Similar age assurance and verification measures have been introduced across the United States; the attorney general's office noted that more than 20 states have passed laws requiring age checks or other restrictions, many of which face legal challenges.
The attorney general's office pointed out that social media platforms have begun implementing various forms of age assurance in recent months. "The incorporation of age assurance methods into the infrastructure of social media platforms is a positive development that demonstrates the technical and financial feasibility of age assurance methods for these platforms," the office said, adding that voluntary measures have not achieved the protections for minors required by the SAFE Act.
Legal experts and industry representatives say the coming months will likely bring litigation and rulemaking debates over privacy, free expression and the technical specifics of age verification. The proposed regulations focus on which methods are permissible and how parental consent can be obtained and verified, but they do not prescribe a single mandatory technology. The attorney general's office said it will consider public input before finalizing the rules.