Age-Assurance Laws May Raise More Questions Than Answers, Panel Says
It's unclear whether age-gating measures truly protect children online or merely raise new legal concerns, speakers said Thursday during Hogan Lovells' The Data Chronicles podcast, which focused on age assurance in the U.S. and U.K.
The U.S. approach is "fragmented but fast-moving," privacy and cybersecurity lawyer Scott Loughlin said. Data protection attorney Sophie Baum added that some measures require companies to predict or assume a user's age while allowing them to rely on self-certification. She noted a trend toward age-verification and age-estimation requirements that rely on IDs, digital identification, facial recognition and other mechanisms.
Laws to protect children online are gaining popularity at the state level, Baum said. Such measures might include regulation of design features, restrictions on messaging between unaffiliated user accounts or risk-mitigation requirements.
Risk-assessment rules are sparking a great deal of pushback on First Amendment grounds, Baum noted. Asked whether any law targeting content could face strict judicial scrutiny, she said the U.S. Supreme Court delivered part of the answer in Free Speech Coalition v. Paxton.
That decision upheld a Texas law requiring websites publishing sexually explicit content to verify that potential users are older than 18 (see 2506270041). Justices applied an intermediate rather than strict scrutiny standard, finding the law only incidentally burdensome to free speech, Baum said. It's unclear, however, whether that standard will also be applied in other contexts, such as age-appropriate design codes or laws targeting the algorithmic ordering of content, she added.
The U.K. Online Safety Act regulates illegal content and material that's legal but harmful to children, including such topics as eating disorders, hate and violence, said public law and policy lawyer Georgia Crawford. Determining what content falls into the latter category has prompted debate among service providers, although the Office of Communications has provided extensive guidance, she noted.
In addition, the U.K. GDPR requires data minimization, but age-gating mechanisms expand the amount of personal information collected, Loughlin said. Asked how to navigate that conflict, privacy and cybersecurity attorney Rob Fett said the ICO has been clear that when it comes to kids' data, the overriding objective is the best interests of the child. The watchdog would likely come down on the side of collecting the necessary information on that ground, he said.
Separately, a policy paper by the European Parliament's Renew Europe Group warned Wednesday that children and adolescents across the EU "are facing a growing mental health crisis driven by the unchecked influence of algorithmic techniques."
While key laws such as the Digital Services Act, GDPR and AI Act provide important tools and protections for minors online, "they seem to be not enough to fill the existing gaps in order to prevent children from being exposed to risks such as illegal content and psychological harm," the paper said.
Lawmakers recommended, among other things, that the EU require platforms that are accessible to kids to implement robust, easy-to-use age-verification mechanisms that respect EU privacy and data protection standards and data minimization rules. The EU should also consider harmonizing the minimum age at which children can access certain services, they wrote.
On Thursday, the European Parliament Internal Market and Consumer Protection Committee approved a report expressing concerns about major online platforms' failure to adequately protect minors. Lawmakers proposed an EU-wide digital minimum age of 16 for access to social media, video-sharing platforms and AI companions unless there's parental consent.