The administration should promote data-sharing when updating national artificial intelligence strategy (see 1812040056), industry groups and Amazon said in comments to the Networking and Information Technology Research and Development Program posted last week. Microsoft and the Electronic Privacy Information Center (EPIC) voiced support for “de-identification” techniques for anonymous data gathering and sharing. Access to large data sets is essential for AI and machine learning research and development, Amazon said. The e-commerce platform also argued against policies and regulations that might “hamper” tech R&D. The Information Technology Industry Council called data the “gasoline that fuels AI engines,” cautioning that data and privacy concerns must be considered. Sharing data would allow industry to better train algorithms, ITI said. In 2018, the U.S. didn’t properly fund AI R&D, the Information Technology and Innovation Foundation’s Center for Data Innovation said, citing better-financed competition from China, France and the U.K. Exposing source code for AI technology wouldn’t be useful, ITIF said, arguing “transparency guarantees neither accurate nor unbiased results.” The Software & Information Industry Association highlighted passage of the National Quantum Initiative Act, which President Donald Trump signed last week, authorizing $1.2 billion over five years for quantum activities across the federal government. People have a right to transparency, including data on human decision-making and the identities of groups behind the technology, EPIC said, offering 12 core principles endorsed by more than 200 experts and 50 NGOs. No group should be able to maintain “secret” profiling systems, and groups should be obligated to terminate a system if “human control of the system is no longer possible,” EPIC said. Echoing comments from EPIC, Microsoft backed de-identification data-sharing, or methods that preserve confidentiality, privacy and security.
“However, AI systems that are used in contexts that involve people would need access to data about people to make informed decision[s],” Microsoft said.
Congress should include facial-recognition regulations in any new federal privacy law, Microsoft President Brad Smith said Thursday. Microsoft is one of the companies leading the way in facial-recognition technology (see 1811210044). Smith also blogged about six principles for regulating the technology. Microsoft supports transparency measures and third-party testing of the technology.
Artificial intelligence will benefit society enormously and doesn’t pose the humanity-threatening, science fiction-based scenarios Tesla CEO Elon Musk warned about, said Information Technology and Innovation Foundation President Robert Atkinson in a Fox Business opinion Tuesday. Atkinson dismissed Musk’s Terminator-like scenarios in which robotics control humans, while playing up the benefits of autonomous vehicles and smartphones. Tesla didn't comment.
The California Public Utilities Commission plans a March 20-21 summit on wildfire technology in Sacramento, the agency said Monday. Tech industry, academia and state and local governments will discuss tools to better manage wildfires, the CPUC said. The summit’s website says topics will include: artificial intelligence-based “visual recognition technology to analyze satellite imagery to determine vegetation risks in proximity to utility lines; Machine learning and automation inspections for increased regulatory compliance assurance; State-wide deployment of weather stations and cameras paired with meteorology and fire behavior modeling; and Widespread adoption of LiDAR and advanced imaging for vegetation management and infrastructure inspections.”
Participants for a Nov. 30 FCC artificial intelligence and machine learning event (see 1811070039) include academics and tech executives. Arizona State University professor Subbarao Kambhampati will give the keynote. Chairman Ajit Pai will moderate two panels. Kambhampati, Microsoft Director-Technology Policy Carolyn Nguyen and MIT-IBM Watson AI Lab Director David Cox are on the first. Participants for the second are CTA Senior Manager-Government Affairs Michael Hayes, Qualcomm Senior Director-Engineering Yongbin Wei, Nokia Lab Leader Chris White, Frame.io Data Head Matthew Ruttley and Georgetown University Medical Center Chief Data Scientist Subha Madhavan.
Consumer protection for artificial intelligence systems is a lot harder for the FTC without clear visibility into system decision-making, said Electronic Frontier Foundation Tech Projects Director Jeremy Gillula Tuesday during the agency’s seventh policy hearing. Some companies have made an effort, but it’s an ongoing problem, he said. Consumers and researchers might not necessarily need every detail about machine learning and artificial intelligence decisions, said Google Brain Team Senior Staff Research Scientist Martin Wattenberg. Google isn't giving “the full matrix of every weight in the neural network, but we’re giving them information that’s useful at the level that they want in terms of a concept that they’re actually interested in.” Wattenberg emphasized progress made in coming up with ways to understand these systems: “They no longer need to be considered black boxes.” Google recommended practices for fair artificial intelligence use, which cover interpretability, privacy and security. Computer & Communications Industry Association Competition and Regulatory Policy Director Marianela Lopez-Galdos questioned whether laws that focus on consumer welfare are sufficient to address machine learning issues.
The FCC plans a Nov. 30 forum with artificial intelligence and machine learning experts, Chairman Ajit Pai announced Wednesday. The public discussion, at FCC headquarters, will focus on the technology’s impact on the communications industry. Pai has been eyeing such an event (see 1806140049), saying now that "because so much of AI intersects with the Commission’s technological and engineering work, we want to explore what it means for the future of communications."
The U.S. military should have access to the best technology, and Microsoft will continue bidding on military contracts, while weighing in on ethical debates about autonomous weapons, Microsoft President Brad Smith blogged Friday. Microsoft will advocate for artificial intelligence and other new technologies to be used “responsibly and ethically,” he said. The company recently came under fire internally for bidding on the DOD’s $10 billion Joint Enterprise Defense Infrastructure cloud project. Microsoft employees are free to decline assignments on projects they disagree with, Smith said.
Government should help industry demonstrate how artificial intelligence algorithms produce specific results by providing access to government data, Intel Global Privacy Officer David Hoffman and Global Director-Privacy Riccardo Masucci blogged Monday. New legislative efforts on AI technology should support the free flow of data, and increased automation shouldn't mean less privacy, they wrote.
NTIA Administrator David Redl urged stakeholders to communicate with the White House as it develops an artificial intelligence strategy. In Brussels Thursday, Redl cited the White House Select Committee on Artificial Intelligence’s comment solicitation. The effort will “ensure that U.S. R&D investments remain at the cutting edge,” Redl said. On NTIA's effort to develop privacy principles (see 1810120053), the goal should be to “provide high levels of consumer protection while giving business legal clarity and flexibility to innovate,” he said.