
Fears Shouldn't Dominate AI Policymaking, Speakers Say

Concerns about a “doom” scenario from AI and risks from generative AI are overstated, Adam Thierer, senior fellow-technology and innovation team at the R Street Institute, said during a Broadband Breakfast webinar Wednesday. “Things have gotten really out of control, and we’re being led around by a lot of people who have Terminatoresque fantasies floating through their heads,” Thierer said. Other speakers said AI poses potential risks but could have widespread benefits. The discussion comes as policymakers explore controls (see 2311150054), with the FCC looking at the technology's benefits and threats (see 2311150042).


“Dystopian dread is just dripping off of every page of what is being written about AI these days,” Thierer said. In congressional hearing rooms “we hear crazy scenarios” that are “usually influenced directly” by science fiction, he said. “We’ve got to move away from that,” he added.

The risk of an existential threat from AI has “an extremely low probability,” Thierer said. He advised focusing on “day-to-day” risks that we know exist, including concerns about privacy and data collection, bias and discrimination, cybersecurity, national security and law enforcement, and child safety. AI raises many of the same issues policymakers have been fighting over for 25 years, he said.

The U.S. can't do anything to stop China, Russia or any nation “from developing their own computational or algorithmic capabilities,” Thierer said. “We need to focus on the real problems that we know exist … using existing governance mechanisms,” he said.

Camille Crittenden, a tech researcher at the University of California, Davis, said AI “is a double-edged sword,” isn't a “monolithic kind of thing” and is used “in all kinds of applications from all kinds of companies and nonprofit organizations.” She said some people “can imagine how to use it for quite nefarious purposes,” but “we really need AI for all kinds of applications in health and climate, and education.” There have been calls for a six-month moratorium on AI development, but it’s not clear that would do any good, she added.

Crittenden, who has worked on AI matters for California, said a report this week (see 2311210045) looks at how the state's government can use AI and at the need for potential guardrails. That focus makes sense because the state’s government is so large, she said. “We would love to see California lead the way in at least investigating ... the options and possibilities,” she said.

AI's effect on education is proving very challenging, with students losing the ability to think critically, said Robert Clapperton, associate professor at the Creative School at Toronto Metropolitan University. Educators need to get better at teaching students how to use AI “to make them smarter” rather than letting it do their work for them, he said. “That is not happening on the level that it needs to,” he said.

“Everything is at risk in terms of academic integrity,” Clapperton said. “We can’t tell whether or not students are putting in prompts” and using AI, he said. Schools are making little progress figuring out how to detect that generative AI is being used, he said. Deployed correctly, AI can change how students learn and ... assess how they’re progressing through a course, he said.

Banning AI won’t work, and most schools don’t have the budget for ChatGPT detection software, Clapperton said. Use of generative AI is also almost impossible to detect, he added.

Program sponsors include Utopia Fiber, Comcast, Nokia, Google Fiber, BroadbandNow, Michael Baker International and the Institute for Local Self-Reliance.