‘Early Warning Shot’

Rosenworcel: TCPA Provided Key Authority Against Voice Cloning of Biden

Statutory language in the 1991 Telephone Consumer Protection Act allowed the FCC to act against those responsible for illegal voice cloning during the New Hampshire presidential primary election (see 2408210039), Chairwoman Jessica Rosenworcel said Wednesday.


Working with New Hampshire Attorney General John Formella (R), the FCC proposed a $2 million fine against Lingo Telecom for allegedly distributing robocalls that imitated President Joe Biden’s voice and told consumers not to vote. The agency settled with the carrier for $1 million and issued a $6 million fine against Steve Kramer, the political consultant allegedly responsible for the calls.

Rosenworcel said the agency relied on the TCPA’s prohibitions against calls made with artificial or prerecorded voices. “When this happened, I turned to my colleagues and said, ‘OK, this is the early warning shot,’” she said during a panel with FTC Chair Lina Khan and Consumer Financial Protection Bureau Director Rohit Chopra at the Aspen Cyber Summit. “We have got to figure out what law prevents this because I don’t want to live in a world” where voice-cloning fraud is permissible. The FCC, Rosenworcel said, sent a “very clear, early signal” in this case, but “we’re going to have a lot of work to do because it’s so cheap and easy to do this, and as we near the election, we all have to be on guard for more of this fake stuff coming our way.”

There are no statutory exemptions for AI technology, said Khan. The FTC is focused on using its existing authority against unfair and deceptive practices, she said, but it’s an “open question” how thoroughly agencies can use existing statutes to enforce against AI-driven deception. For example, AI technology raises new questions about companies using surveillance tactics to charge individual consumers discriminatory prices, said Khan. She shared two potential examples: consumers charged higher food prices because allergies make them dependent on certain products, and an airline charging passengers higher fares because it knows their travel plans.

It’s good to update laws regularly, but agencies have “derivative laws” for addressing unfair and deceptive practices, said Chopra. AI isn’t the first novel, transformative technology, he said: “We’re going to use what we’ve got, and we’re not going to pretend that we’re powerless and helpless.”

Rosenworcel said there’s a lot of “hand-wringing” and “pearl-clutching” in Washington when it comes to AI, but transparency is something that can be better regulated. Consumers have a right to know when they’re communicating with chatbots, listening to AI robocalls or viewing AI-manipulated media in political ads, she said: “That baseline of transparency will allow us to understand its effect on our civic life and economy” and allow consumers to make “informed choices.”

Khan addressed a question about Communications Decency Act Section 230 and the prospect of imposing new liability for AI-related harm. Section 230 needs some “fine-tuning,” she said: There shouldn’t be a system where the entities “enabling and facilitating” the law-breaking in a “way that is somewhat foreseeable” are, “at the end of the day, totally able to walk away from the situation.”