The Commerce Department’s unclear rollout of an export control on geospatial imagery software is causing industry confusion and could lead to broad, unintended impacts on exports of certain artificial intelligence software, industry representatives said in interviews last week. “For these companies who now have to supply software without AI, it's like supplying a human body without blood,” said Sanjay Kumar, CEO of the World Geospatial Industry Council. “It will make it impossible for some companies to continue doing business how they were before.” The interim final rule released in January was criticized by industry for unclear definitions that made it difficult for some in the AI field to determine whether the rule applies to them. “That lack of definitions is creating, at best, confusion,” said Jennifer O’Bryan, chair of Commerce’s Sensors and Instrumentation Technical Advisory Committee. Some terms in the rule, such as deep convolutional neural networks, rotational normalization and rotational patterns, “desperately need some sort of definition to make sure that they don't intrude on certain purely commercial technology spaces that use similar AI software,” said O’Bryan, government affairs director for SPIE, an international society for optics and photonics. Stakeholders wish Commerce had issued the rule for public comment before it took effect, saying industry expertise is critical for export controls that involve complex technologies. “To put out an interim rule or a draft rule and have it go into effect immediately before there's any period of public comment just seems incongruous with the principles that we grow up with in this democracy,” said Barbara Ryan, World Geospatial Industry Council policy adviser. The department's Bureau of Industry and Security declined comment.
The FTC Wednesday cautioned companies about risks to consumers of artificial intelligence technology used to make predictions, recommendations or decisions. AI could enable “unfair or discriminatory outcomes” or the perpetuation of existing socioeconomic disparities, blogged Bureau of Consumer Protection Director Andrew Smith. Companies using AI should take care not to mislead consumers about the nature of an interaction, he said, citing “engager profiles” of attractive potential mates used to induce customers to sign up for a dating service. How companies get data is important: Secretly collecting sensitive data could give rise to an FTC action, Smith said, noting Facebook misled consumers last year by telling them they could opt in to facial recognition even though the setting was on by default. Companies that make automated decisions based on third-party information may have to provide consumers with an “adverse action” notice, explaining their right to see the information reported about them and to correct inaccuracies.
A New York dealer employed the first known sales robot at a car dealership, said Promobot, a manufacturer of autonomous service robots. At Primary Motors in Commack, New York, the Promobot robot will greet visitors, request contact information and ascertain the customer's interests, said the robot maker. It can give information on available cars, pricing options and best solutions for a customer, and it can help a customer schedule a test drive. After a consultation, the robot refers visitors to a sales manager based on customer requirements, said the company, pitching the process as a way to have fewer associates handling the same number of customers. The robot communicates using artificial intelligence and can move independently while avoiding obstacles. Robot owners can add new expressions, create customized movements, emotions and applications, said Promobot, whose bots work in airports and museums, on security teams, in doctors' offices, and as building managers around the world.
Industry groups applauded OMB for prioritizing innovation in its draft artificial intelligence regulatory guidance and suggested voluntary standards as an alternative to regulation, while the Center for Democracy & Technology criticized the draft guidance for its lack of concrete suggestions. The OMB guidance "aims to support a U.S. approach to free-market capitalism, federalism, and good regulatory practices,” said the request for comment. The draft memorandum gives agencies guidance on developing regulatory and nonregulatory approaches to AI and asks them to “consider ways to reduce barriers to the development and adoption of AI technologies.” Comments were due Friday. The guidance provides a “strong foundation for federal agencies to craft regulations while maintaining the regulatory flexibility that companies need to innovate,” the Computer & Communications Industry Association said. CCIA recommended any new regulations be “technology-neutral, and apply to outcomes rather than prescribing specific technical practices.” The Information Technology Industry Council applauded the agency’s “emphasis on using a risk-based approach to AI regulation” and guidance that federal agencies consider AI applications individually rather than applying horizontal regulation. ITI supports the draft memo’s “emphasis on using voluntary standards as an alternative to new regulation.” BSA|The Software Alliance agreed with the guidance that “the goal of AI governance should be to promote innovation and enhance public trust.” Nonregulatory policy approaches, like sector-specific frameworks and voluntary consensus standards, “can play a critically important role” in promoting AI applications, BSA said.
CDT approves of the agency’s common-sense approach but said it “offers little guidance beyond our understanding of the basic tenets of sound policymaking and good administrative processes.” CDT sought concrete steps agencies should take in regulating and said “agencies should consider and clarify existing legal and policy approaches first.” ACT|The App Association supports voluntary consensus standards for AI applications. It recommended the U.S. ensure such standards promote a balanced approach to standard-essential patent licensing, saying some owners of SEPs subject to fair, reasonable and nondiscriminatory licensing terms are abusing their unique positions (see 2002140026).
Balanced intellectual property protections are needed to ensure artificial intelligence technology fulfills its potential, the Computer & Communications Industry Association commented Friday. “Widespread availability of patents on AI generated inventions would lead to less innovation by placing ordinary creativity into the realm of monopoly and chilling the rationale to pursue such creativity,” CCIA told the World Intellectual Property Organization as it works through a draft issues paper on AI and IP. Commenting on the European Commission’s planned AI white paper, Information Technology Industry Council CEO Jason Oxman said it will be “a milestone for Europe’s regulatory vision on how to advance innovation and help European companies thrive.” It’s important the EU not look only at potential AI harms but also consider the potential “social harms of limiting the use of AI, which may decrease its positive impact,” ITI wrote.
Apple emphasized privacy Thursday in the U.S. launch of redesigned Maps. The app is “deeply integrated” with apps customers use daily and with the iPhone, iPad, Mac and Apple Watch, said the company. No sign-in is required, and the app isn’t connected to an Apple ID, it said. Personalized features, such as suggesting a departure time to make the next appointment, use on-device intelligence, it said. Data collected are associated with random identifiers that continually reset. Through “fuzzing,” Maps obscures a user’s location on Apple servers. Flight and transit details are among many features.
House Science Committee ranking member Frank Lucas, R-Okla., introduced legislation that would establish a national science and technology strategy targeting Chinese threats. Citing the risk of losing the U.S. lead in quantum information science, artificial intelligence and advanced manufacturing, Lucas introduced the Securing American Leadership in Science and Technology Act during a hearing Wednesday. It would authorize “doubling of basic research funding over the next 10 years at the Department of Energy, the National Science Foundation, the National Institute of Standards and Technology, and the National Oceanic and Atmospheric Administration.” He said in his opening remarks: “To support the industries of the future, we need workers with STEM skills at all levels.” Chair Eddie Bernice Johnson, D-Texas, didn’t sign onto the bill, which has gotten only Republican support. “I do not want to cause any confusion about where I stand. I remain as firmly committed as ever to our investments across all fields of science and engineering as well as the humanities,” she said in prepared remarks, citing the need to maintain U.S. competitive advantages. America should be gearing policy and legislation to compete “effectively” in the 2030s, testified ex-Google CEO Eric Schmidt, founder of Schmidt Futures. He cited AI trends suggesting China will overtake the U.S. in five to 10 years. On changes in R&D since 2000, “China has accounted for almost one-third of the total global growth,” testified National Science Board Chair Diane Souvaine. Georgia Institute of Technology Executive Vice President-Research Chaouki Abdallah urged a commitment “to the long-term increase and certainty in federal investment.”
Don't ban facial recognition technology due to privacy concerns (see 2001140063), said one witness for Wednesday's House Oversight Committee hearing. Industry "has taken many steps to ensure the safe and responsible deployment," testified Information Technology and Innovation Foundation Vice President Daniel Castro. "Congress should focus on steps to improve oversight and accountability of commercial use of facial recognition technology." Castro said "even narrow bans can have unintended consequences, given the widespread integration of facial recognition technology." Committee staff noted such tech is "increasingly in home security systems, social media sites, shopping malls, and elsewhere for advertising, security, access, photo and video data identification, and accessibility." Committee members of both parties hope to have a bipartisan bill. Chairwoman Carolyn Maloney, D-N.Y., expects one to be introduced and marked up “in the very near future.” Facial identification tech is "just not ready for prime time," said Maloney. “Despite these concerns, we see facial recognition technology being used more and more.” It's "completely unregulated at the federal level,” she noted. Ranking member Jim Jordan, R-Ohio, appreciates "your willingness to work with us on this legislation," he told Maloney at the start of his opening remarks (see 21-minute mark). “All sides are trying to work together." It's a “powerful technology,” with a market of some $9 billion expected by 2022: “We understand and appreciate the great promise that this technology holds.” For him, "the urgent issue” to “tackle ... is reining in the government's unchecked use of this technology when it impairs our freedoms and our liberties.” He cited the First and Fourth amendments. “This issue transcends politics,” Jordan said. He fears a “patchwork of laws” arising from localities. 
Studies find "significant variance" between facial recognition algorithms, said National Institute of Standards and Technology Information Technology Laboratory Director Charles Romine. "Some produce significantly fewer errors than others." Don't "think of facial recognition as either always accurate or always error prone," he said. The staff memo said the committee held two 2019 hearings on the subject.
“Industries of the future,” including artificial intelligence, next-generation wireless and infrastructure, will be the topic of a Jan. 15 Senate Commerce Committee hearing, the committee said Wednesday. National Institute of Standards and Technology Director Walter Copan, National Science Foundation Director France Cordova, U.S. Chief Technology Officer Michael Kratsios, and FCC Commissioners Mike O’Rielly and Jessica Rosenworcel are scheduled to testify. The hearing follows a 10 a.m. executive session in 216 Hart.
The White House proposed 10 artificial intelligence regulatory principles to govern private sector AI, U.S. Chief Technology Officer Michael Kratsios announced Tuesday. The principles focus on ensuring public engagement, limiting regulatory overreach and promoting trustworthy AI, Kratsios said.