Character.AI to Adopt Age Assurance, Limit Access for Kids Under 18
Amid intensifying regulatory pressure over kids' online safety, Character.AI said Wednesday that it will roll out age assurance and remove “the ability for users under 18 to engage in open-ended chat with AI” on its platform by Nov. 25. The changes respond to questions raised by regulators and in recent news reports “about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens,” the chatbot platform said.
However, Common Sense Media said it would hold its applause until it sees how the policy change works.
“Between now and then, [Character.AI] will be working to build an under-18 experience that still gives our teen users ways to be creative -- for example, by creating videos, stories, and streams with Characters,” it said. Character.AI said it will start with a two-hour daily limit on open-ended chat and gradually reduce it ahead of the Nov. 25 cutoff.
Meanwhile, Character.AI said it has built age-assurance functionality and will combine it with “leading third-party tools including Persona,” an identity verification platform. Tech industry groups have previously resisted age-verification requirements, citing the privacy risks of collecting sensitive documents to confirm users are who they say they are.
Also, Character.AI said it's creating and funding an AI safety lab, “an independent non-profit dedicated to innovating safety alignment for next-generation AI entertainment features.” It added, “We're inviting a number of technology companies, academics, researchers and policy makers to join.”
“These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers,” Character.AI added. “But we believe they are the right thing to do.”
In August, a bipartisan group of 44 state attorneys general raised concerns about AI harming children in a National Association of Attorneys General (NAAG) letter to many AI companies, including Character.AI (see 2508250045). Earlier that month, Texas AG Ken Paxton (R) opened a probe into Character.AI and other chatbot platforms (see 2508180025). NAAG and the Texas AG office didn’t comment Thursday.
AI chatbots also are at the center of broad privacy concerns. A recent Duke University study found people are increasingly using general-purpose AI chatbots for emotional and mental health support, with many unaware that privacy regulations like the Health Insurance Portability and Accountability Act (HIPAA) fail to cover these sensitive conversations (see 2508070022).
Additionally, Meta AI users posting typically private information publicly on the app has raised questions about whether they know when they’re sharing AI queries with the world (see 2506120082). Also, Fast Company reported in July that Google indexed many ChatGPT conversations containing personal details.
Common Sense said in a statement Wednesday that the Character.AI decision to ban users under 18 “acknowledges” that “AI companion chatbot platforms pose unacceptable risks to young people.”
However, while Common Sense supports the policy change, “these good intentions aren't matched with clarity about how it will actually be enforced, and there's nothing that can be tested today to show that this works,” it said. “Our research shows over and over again that until we can independently test to see if safety features actually work, we’re just taking companies at their word -- and that’s not good enough when kids’ lives are at stake. We will be testing it when it launches in November and report on our findings.”
Also on Wednesday, a group led by western state AGs announced that the popular gaming platform Roblox will be an industry partner in the Attorney General Alliance’s youth online safety initiative (see 2510290020). Like Character.AI, Roblox has faced legal scrutiny from multiple state AGs.