Debate Said Needed on Threat of Artificial Intelligence to Humanity
Before artificial intelligence technology advances too far, policymakers, researchers and regulators must talk about what is and isn’t acceptable, speakers said Tuesday at an Information Technology and Innovation Foundation event. Artificial intelligence can bring many benefits to humans, but poorly constructed AI carries significant risks, said Machine Intelligence Research Institute Executive Director Nate Soares. Others said artificial intelligence is part of the evolution of humanity, and restrictions shouldn’t be put on a technology while researchers are still discovering its capabilities.
By 2050, a team of robots is expected to play soccer in the World Cup against human players, said Carnegie Mellon University Computer Science professor Manuela Veloso. The robots also will be able to speak other languages and walk down the street, she said. The machines will be part of humankind, Veloso said. AI probably could solve many medical and other problems, and such breakthroughs could arrive without any warning sign, so safety is paramount, said University of California-Berkeley Electrical Engineering & Computer Sciences professor Stuart Russell. It may take 150 years before AI technologies become a real concern, or it may happen within the next five years, he said.
A number of successful visionaries, including Stephen Hawking and Bill Gates, have said AI is a significant threat to humanity, said ITIF Vice President Daniel Castro. Public consciousness of AI concerns has risen, but proponents argue the technology contributes to significant advancements such as Apple’s Siri and Google’s autonomous vehicles, Castro said. People just accept technology and don’t think about it, said Ronald Arkin, Georgia Tech School of Interactive Computing associate dean for research and space planning, who has presented concerns before the U.N. about lethal autonomous weapons (see 1504090032).
Not all technology predictions have come true or even come close, so there's no need to worry about AI for a long time, said ITIF President Robert Atkinson. Cal-Berkeley's Russell said not enough work is being done to ensure AI isn't a threat to the human race, saying humans can’t agree on which values are important. There's no way to ensure that when programming a machine “we like what it does,” he said. A Carnegie Mellon professor argues that if AI robots take over, that’s just evolution, Arkin said.
“It would be a shame for humans to not make good use of this technology,” said Carnegie Mellon's Veloso. Texting while driving leads to accidents, she said, but no one advocates removing texting capabilities from cellphones because the benefits outweigh the risks. The benefits of AI likewise may outweigh the problems, Veloso said. Texting is a poor comparison, Russell said, because the concern isn't a domestic robot accidentally falling on someone, though that could happen. A reasonable comparison is to nuclear weapons and climate change, where something going wrong plays out on a global scale, he said. If someone had gone back to the late 19th century, when the internal combustion engine was being developed, and warned that burning coal would lead to global warming and an irreversible rise in carbon dioxide, we might not be dealing with climate change now, Russell said. Like climate change, the threat from AI would be irreversible, he said.
Georgia Tech's Arkin said he didn’t want to paint either a utopian or dystopian future for AI, but there's a need to start acting carefully on AI policies and regulations as soon as possible. Soares agreed AI research doesn’t have to stop, given the extraordinary benefits, but said research is needed on AI safety and how to contain the technology. What happens if a robot begins to do something humans don’t like? Russell asked. The robot is carrying out a goal it was given, and it may outsmart the individual who programmed it, spreading itself across the Web and into other machines to ensure the job it was programmed to do gets done, Russell said.
There already is a lot of research on AI safety, Veloso said. Robots are taught not to run over people or into walls, she said. The challenge is that safety and trust are experience-based concepts, she said. Arkin disagreed with Veloso that all of their robotics research colleagues are concerned with AI safety. “We’re an arrogant crew that think it knows what is best, but we need help,” Arkin said. Having doubt about what researchers are actually creating isn't part of the field's general regime, he said. Safety standards must exist, he said.