Export Compliance Daily is a service of Warren Communications News.
SAFE for Kids Act

New York AG's Age-Assurance Plan Elicits Privacy Concerns

Possible New York regulations aimed at protecting kids against addictive feeds raise significant privacy concerns, tech industry and consumer privacy groups agreed in comments reviewed Tuesday by Privacy Daily. The groups weighed in Monday on a Sept. 15 notice of proposed rulemaking (NPRM) from the state attorney general's office to implement the Stop Addictive Feeds Exploitation (SAFE) for Kids Act.

However, the Age Verification Providers Association (AVPA) said the NPRM “reflects a sophisticated understanding of age assurance and the importance of privacy-preserving technical design.”

New York Gov. Kathy Hochul (D) signed the SAFE Act into law in June 2024 (see 2406070065). On Sept. 15, 2025, Attorney General Letitia James (D) issued an NPRM, and comments were due Monday (see 2509150045). The AG's office now has until Dec. 1, 2026, to finalize the rules, which will take effect 180 days after the final rules are released.

The proposed rules offer guidance to social media companies on how to restrict their platforms' addictive features to avoid harming the mental health of children. They also ban platforms from sending notifications to minors between midnight and 6 a.m. without “verifiable parental consent.” The law grants the attorney general authority to seek injunctive relief and civil penalties of up to $5,000 per violation.

The proposed rules also address age assurance, as social media platforms must verify that a user is an adult before the user can access algorithmic feeds or nighttime notifications. The rules suggest that users upload a photo or video or submit their email address or phone number to verify their age. However, "companies may confirm a user’s age using a number of existing methods, as long as the methods are shown to be effective and protect users’ data."

In comments Monday, the Software & Information Industry Association (SIIA) urged the AG’s office “to reconsider its highly prescriptive approach in favor of one that follows emerging national norms.” In particular, the draft rules “for age assurance are unduly burdensome, pose significant privacy risks, and rely on an overly strict interpretation of commercial and technical feasibility,” the trade group said.

“For many platforms, such as a content review service that may only collect persistent identifiers, the rule would force them to collect significantly more 'Information Persistently Associated' with the user,” SIIA said. “This new, mandatory collection of sensitive information, such as government identification, biometric data, or financial information … is unnecessary for the service's core function.”

The verification process could reduce equitable access since some users “may not be able to comply due to circumstance,” added SIIA. “Individuals who lack certain forms of identification, those with intellectual disabilities, and those who are especially privacy-conscious may not be able to, or may not want to, provide the sensitive data required,” it said. “While the rule mandates alternative methods, removing the least privacy-invasive option forces users to choose among the remaining, potentially more intrusive, methods like biometric assessments.”

In addition, the proposed rule would problematically force websites to keep non-identifying data records about age determinations for at least 10 years, SIIA said. “This is an excessive length of time inconsistent with standard data minimization practices, and creates an unnecessary security risk.” The AG’s office should reduce it “to a more reasonable and defensible period that aligns with standard audit practices and the principles of data minimization.”

TechNet and the State Privacy & Security Coalition also raised concerns in joint comments. "In particular, the draft regulations expand the Act’s 'actual knowledge' trigger into constructive monitoring and impose data collection and retention duties that conflict with data minimization principles," they said.

The Center for Democracy & Technology recommended safeguards to mitigate potential privacy risks from mechanisms used to verify age and confirm parental consent. “Requiring users to prove their age to access content or services leads to more data collection and retention by already data-rich services,” CDT commented. “The data use restrictions and deletion requirements currently in the SAFE For Kids Act and the proposed regulations do not go far enough to ensure sensitive age assurance-related information will be protected. Strong privacy protections are particularly important because users forced to hand over identity information or other personally identifying data lose the ability to access the web anonymously.”

CDT suggested that the AG’s office “establish an expectation that covered operators will use the highest available privacy standards for each age assurance method they provide.” Additionally, it should “require platforms to provide users with alternative age estimation methods that do not use biometric data such as facial scans to estimate users’ ages."

Among other CDT suggestions, the rules “should specify that age assurance methods chosen by a covered platform should be narrowly tailored and proportional to the risk posed by the platform” and require covered entities “to minimize or entirely prevent linkability between where users provide age-related data and the issuer of that data.” Operators should also consider data minimization when verifying parental-child relationships, CDT said.

However, the AVPA dismissed others’ privacy concerns with the rules. Anchoring proposed requirements to standards like ISO/IEC 27566 and IEEE 2089.1 will “support high-accuracy, independently audited and proportionate age assurance without unnecessary data collection,” the age-verification association said. “Your inclusion of zero-knowledge proof options, strict deletion requirements, and safeguards against reidentification reflects best practice in privacy by design and is in the public interest.”

Common Sense Media generally supported the rules to implement the SAFE for Kids Act, though it recommended tweaks. The “landmark law” will remove “two design choices that are known to cause harm to kids while allowing full access to the internet and community of friends, family, and areas of interests for those same users,” Common Sense said.

Meanwhile, the New York Civil Liberties Union (NYCLU), the ACLU's New York affiliate, commented that “even the most fastidious regulation is unlikely to save the SAFE Act from the constitutional challenges on the horizon.” In the 18 “months since SAFE was signed, lower courts have considered similar statutes in multiple states, and almost no court has ratified any platform design feature that substantially infringes upon online expression,” NYCLU said.