Advocates Target AI Chatbot Privacy Concerns With 'People-First' Model
Consumer advocates released a model bill on AI chatbots Tuesday that aims to address growing privacy concerns around the technology. Consumer Federation of America, Electronic Privacy Information Center (EPIC) and Fairplay have already shared the "People-First Chatbot Bill" with lawmakers in several states and plan to talk to more legislators soon, EPIC counsel Kara Williams told Privacy Daily.
While the model bill wouldn't ban chatbots, it “gives lawmakers a straightforward approach to address the harms caused by chatbot products that have been developed and deployed by tech companies with little oversight or transparency,” the advocates said in a preface to the proposal. “The bill addresses key data privacy violations present in almost all commercially available chatbots and sets clear limits on the use of users’ chat logs for harmful practices like targeted advertising … It restricts the use of personal data to significantly reduce the ability for chatbots to employ dangerous and manipulative companion-like features.”
The groups added that the model bill is meant to complement existing authorities, including laws on privacy and on unfair and deceptive acts and practices.
Vermont Rep. Monique Priestley (D) recently listed “People-First AI Companions” as a pending draft for 2026 (see 2512080015). Priestley told us she has been reviewing multiple circulating models of chatbot bills; she initially based her draft on a Young People Alliance model and is now updating it with language from EPIC's. “I'll be working from multiple models to get something finalized for early January.”
EPIC and the other advocates shared the People-First model “with some state lawmakers already but only really people we have existing relationships with,” Williams emailed us. “We're planning to do a wider push to get it out to new lawmakers” and “states we haven't worked with before” after the bill is publicly released, she said. The EPIC official added that the organization also plans to speak with lawmakers who previously filed bills on this subject “because, unfortunately, we've seen some chatbot legislation circulating that we think has constitutional problems and wouldn't withstand legal challenges, so we'll try to encourage those lawmakers to switch to this model instead.”
So far, the consumer privacy groups have received interest from state lawmakers in Arizona, New Mexico, Utah, Vermont and Virginia, Williams said. The groups haven’t “shared the model with chatbot providers specifically, although we have had technologists review it for feasibility,” she said.
Under the EPIC model bill, companies would be prohibited from using chats for targeted advertising and from using personal data “from outside of chatbot interactions to inform chatbot outputs.” Also, the proposal would limit businesses’ ability to use personal data or chats for profiling users. Additionally, it would ban companies from using data input by minors for training chatbots, while requiring that companies seek opt-in consent before doing so with data from users 18 and older.
The model bill would treat chatbots as products subject to liability for injuries. It would provide a private right of action for violations of privacy, data security and disclosure protections, with proposed penalties of at least $5,000 per violation of the privacy and security requirements. Government regulators could also enforce the proposed law, which would additionally give rulemaking authority to the state’s attorney general.
Moreover, the proposed text says that a “chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of the personal data and chat logs maintained by the chatbot provider.”
AI chatbots have been at the center of broad privacy concerns. A recent Duke University study found people are increasingly using general-purpose AI chatbots for emotional and mental health support, with many unaware that privacy regulations like HIPAA fail to cover these sensitive conversations (see 2508070022). Additionally, Meta AI users have posted typically private information publicly on the app, raising questions about whether they know when they’re sharing AI queries with the world (see 2506120082).
Earlier this year, California Gov. Gavin Newsom (D) signed legislation concerning AI chatbots that could be used by children, while vetoing a separate bill (see 2510140010). Meanwhile, amid intensifying regulatory pressure over kids' online safety, the chatbot company Character.AI said in October that it will limit children’s ability to have open-ended chats with AI on its platform (see 2510300015).