Witnesses Say Congress Should Remain ‘Tech Neutral’ on AI Copyright
Copyright concerns related to AI can be addressed using existing law and litigation, so Congress should avoid new legislation, legal experts told the House Intellectual Property Subcommittee during a hearing Wednesday.
Copyright law should “continue to operate in a technology-neutral fashion,” said George Mason University law professor Sandra Aistars. She argued the Supreme Court’s 1991 decision in Feist Publications v. Rural Telephone Service can be “readily applied” when courts are trying to decide if a work is “sufficiently original.”
The high court ruled in that case that Rural couldn’t copyright the phone directory information Feist copied because it didn’t meet a test for minimal originality. If Congress establishes liability for AI training models that use similar data, it could harm legitimate inventors and creators, said Aistars.
Similar questions about computer-generated material have been answered dating back at least 60 years, said Joshua Landau, senior counsel-innovation policy at the Computer and Communications Industry Association. The basic question is whether the creation was conceived and made through human ingenuity, he said: Material created entirely by AI can’t be protected.
Adding another layer of regulation for AI creations could put the U.S. at a competitive disadvantage, said Claire Laporte, an IP fellow at Ginkgo Bioworks: “Other economies are not worried so much about who the inventor is. They’re just worried about patenting inventions that should be patented.”
Only humans can be granted copyright protection, and AI issues can be handled through existing statutes and federal litigation, said Kristelia Garcia, law professor at Georgetown University. She noted policymakers are still learning a lot about AI-generated creations, so legislation now would be premature.
The witnesses essentially recommended Congress “do nothing,” which “sometimes is Congress’ best fallback position,” said House Judiciary Committee ranking member Jerry Nadler, D-N.Y. He noted that commentators have drawn attention to the competition issue, particularly with Chinese rivals.
It’s “reasonable” to have concerns that overregulation requiring AI disclosure could inhibit “legitimate inventions,” said Laporte: AI-related engineering has so much human involvement that inhibiting inventions is a greater risk than underregulating technology.
“We shouldn’t do anything here?” Nadler asked. Laporte recommended Congress eliminate the Patent and Trademark Office’s inventorship guidance for AI-assisted inventions. The guidance potentially opens a “Pandora’s box of potential litigation complications” that could cause problems for perfectly legitimate inventions that meet all statutory criteria, she said. It “doesn’t make any sense” to treat AI-related inventions differently from other inventions. The PTO didn’t comment.
Laporte made a “powerful case” in her testimony, said Rep. Zoe Lofgren, D-Calif., and Congress should “tread carefully.” If copyright liability is imposed on machine-learning systems for data collection, it could put the U.S. at a “tremendous disadvantage” against China and other nations, said Lofgren. However, she said monetary compensation is needed when copyrighted works are used. In that instance, a compulsory license system, often employed with IP, would be helpful, said Garcia.
Everyone agrees creators of original work should be compensated when AI systems use their material, said Chairman Darrell Issa, R-Calif. Aistars agreed Congress should resolve liability questions when it comes to AI training models. However, she said it’s “counterproductive” to deny IP protection for otherwise protectable works created by humans who use AI.