AI Regulation Hearing Tuesday

AI Needs Equivalent of Accountants and Audits: NTIA's Davidson

Much like the accountants and audit standards that safeguard financial systems, the generative AI universe needs an ecosystem of organizations, rules and people to oversee the technology and ensure it works as promised, NTIA Administrator Alan Davidson said during a talk Thursday at the MIT-IBM Watson AI Lab in Cambridge, Massachusetts. Davidson said the federal government sorely lacks the technical expertise it needs to wrestle with AI-related policy questions. While the government's technical knowledge is improving, "a huge gap" remains, he said. Rep. Suzan DelBene, D-Wash., said Thursday that the U.S. is falling behind other nations in AI policy development (see 2409120035).

The Senate Judiciary Privacy Subcommittee, meanwhile, plans a Tuesday hearing that will gather AI experts' views on government regulation of the technology. The hearing will begin at 2 p.m. in 226 Dirksen. It will include testimony from former Google AI research scientist Margaret Mitchell and former OpenAI technical staffer William Saunders, the Judiciary Committee said Thursday. Also on the docket: David Harris, California Initiative for Technology and Democracy senior policy adviser, and Helen Toner, Georgetown University Center for Security and Emerging Technology director-strategy and foundational research grants.

Rep. Maria Elvira Salazar, R-Fla., filed Thursday a companion to the Senate's Nurture Originals, Foster Art and Keep Entertainment Safe Act (S-4875). The measure would establish liability for sharing AI-driven content without the original creator's consent (see 2408010019). The NO FAKES Act "will strengthen federal protections for your individual right to your voice and likeness and protect our ability to express ourselves creatively for the world to see," Salazar said.

Much of Davidson's talk and a panel discussion afterward centered on the advantages -- and risks -- that come with open-weight AI models. NTIA in July recommended against imposing immediate restrictions on the wide availability of open model weights in the largest AI systems (see 2407300019). Model weights are the core numerical parameters of an AI system; open-weight models make those parameters publicly available, while closed models keep them proprietary. Davidson said that while some assume open-weight models are inherently riskier, the NTIA report tries to make clear they have benefits, too.

One big challenge is the lack of good ways to measure or assess the marginal risk or benefit of models, Davidson said, adding that more R&D in measuring and quantifying such risks is needed. Risk assessment ultimately is a sector-by-sector issue, Davidson argued, because broad, overarching rules don't work well across different applications of generative AI. For example, using generative AI to write a school paper and using it to make medical decisions carry very different societal stakes.

Ensuring transparency in how AI models are trained and how they operate is essential for understanding their benefits and risks, Davidson said. But that transparency is declining amid increasing competitive pressures, he said, calling the growing opacity of AI models "deeply concerning."

Open-weight models need to be accessible to researchers and businesses so they can experiment, said Asu Ozdaglar, MIT Schwarzman College of Computing deputy dean-academics.