Telecom Groups: FTC Should Focus AI Rules on Scammers, Not Industry
The FTC should narrow the scope of its online impersonation rule, preventing unnecessary liability for broadband and wireless providers, NCTA, CTIA, USTelecom and the Consumer Technology Association told the agency in comments posted through Wednesday. Consumer advocates urged the agency to make the rule broad enough to stop companies from turning a blind eye to scams.
The FTC in February requested public comment on new rules for AI-driven impersonation and other online scams (see 2402150045). The agency said it’s committed to detecting, deterring and stopping impersonation fraud.
As drafted, the rule includes “overly broad” provisions that could mean liability for companies conducting legitimate business activity, NCTA commented. The association said the rule’s means-and-instrumentalities (M-and-I) standard could be used to penalize service providers’ legal activity. M-and-I is a concept the FTC uses to impose liability on merchants that knowingly expose consumers to deceptive or misleading products and services. NCTA urged the agency to apply liability only to those “who intentionally enable impersonation schemes.”
The FTC asked stakeholders if the rule's definitions are “ambiguous” or could be “improved,” noted CTIA. CTIA recommended “targeted” edits. For instance, the agency should make clear that liability doesn’t apply to companies that unknowingly allow fraudsters to use their services, said CTIA. Fraudsters, like anyone else, use legitimate goods and services such as phones, broadband, banks and software, said CTIA: “There is no reason to place these legitimate companies in the crosshairs of law enforcement, absent reasons to think that they are deliberately facilitating and encouraging fraud.”
The agency should apply a “reasonableness standard” to service providers, Consumer Reports commented. Companies that know or have reason to know they’re providing services for scams must take “reasonable measures” to protect consumers, said CR. The “actual knowledge standard” NCTA proposed would “incentivize companies to essentially put their heads in the sand,” said CR. “Companies that wish to avoid liability could choose not to gather any information that would reveal that their products or services are being used for fraud.”
The FTC’s proposal to ban companies from providing goods and services used for unlawful impersonation “is necessary to efficiently combat AI-empowered impersonation fraud,” a group of consumer advocates told the agency. Center for American Progress, Consumer Action, Consumer Federation of America, Electronic Privacy Information Center, National Association of Consumer Advocates, National Consumer Law Center and National Consumers League signed. If the agency holds AI developers liable for facilitation of illegal impersonation, developers will have “strong incentive to implement use restrictions and design their products with safety in mind,” the groups said.
Public Knowledge and the Electronic Frontier Foundation offered joint comments in support of consumers and innovation. Many technologies like AI have lawful and unlawful uses, and it’s “essential” for the rule to target bad actors “who knowingly facilitate impersonation scams while not stifling innovation or unduly restricting the development and use of technologies with substantial legitimate uses,” PK and EFF said. Selling a “general-purpose tool” with awareness it might be used for illegal activity shouldn’t by itself constitute knowledge under the new rule, they said: “Holding the sellers or developers of these tools liable for the misuse of their products, absent specific knowledge of unlawful use, would be detrimental to technological progress and consumer welfare.”
The proposed rule includes “vague” language that “will create confusion and deter innovation,” CTA commented. The association urged the FTC to clarify that the new rule applies only to goods and services that are “inherently misleading and designed to deceive.” Trying to hold service providers liable for third-party content created by bad actors could conflict with Communications Decency Act Section 230, which is designed to protect companies from liability for third parties' bad behavior, said CTA.
The commission has been “consistent throughout” the rulemaking proceeding that the M-and-I standard for liability wouldn’t apply to “those that simply serve as intermediaries of the impersonation, such as phone and internet service providers,” said USTelecom. The association suggested the FTC further refine the rule language to avoid “any misinterpretations.” Clarity in the text is “increasingly important in light of the ongoing interest of federal and state policymakers to adopt legislation to address AI concerns,” said USTelecom.