The National Institute of Standards and Technology should set voluntary artificial intelligence standards to help industry avoid “bias and discrimination,” four House members wrote to NIST Thursday. Republicans Anthony Gonzalez of Ohio and Pete Olson of Texas and Democrats Suzan DelBene of Washington and Darren Soto of Florida signed. NIST plans to develop AI standards (see 1907020078). The agency didn’t comment Friday.
The Patent and Trademark Office will accept public comment through Jan. 10 on artificial intelligence technology’s impact on intellectual property law and policy, PTO announced Tuesday.
The Council of Europe is exploring the need for global laws on artificial intelligence, it said. The human rights organization is taking part in an Internet Governance Forum event this week in Strasbourg during which representatives from governments, international organizations, business, civil society, academia and the IT sector will discuss internet policy issues such as risks to human rights from advanced technologies. A CoE ad hoc committee on AI (CAHAI), which includes representatives from the organization's 47 members, met Nov. 18-20 to consider the feasibility of rules on developing, designing and using AI based on CoE standards on human rights, democracy and the rule of law. The implications for human rights and democracy "are manifold and we need to be able to answer the challenges for individuals but also for the whole society," emailed Jan Kleijssen, CoE director-information society, action against crime. The meeting "clearly revealed the high level of interest paid by member States to CAHAI." Participants agreed to a feasibility study that will map work on AI already done within the CoE and other bodies, plus relevant legally binding and soft-law instruments. The exercise is expected to help identify the main risks and opportunities from the development and use of AI, Kleijssen said. Participants will look at what principles should be applied to creating and using AI, and consider what the most suitable legal framework is. They discussed the impact of AI on people and society, plus different AI policies, particularly those of the U.S., France, Germany and Russia. The panel will report to the CoE Committee of Ministers in May on its progress, and will launch a "comprehensive consultation to build a legal framework that answers to the need and expectations of the citizens."
Asked whether, given numerous ongoing AI activities, it will be difficult to set any sort of global rules, Kleijssen noted EU common standards on respect for human rights, rule of law and democracy to which all CoE members are committed. The council is part of global efforts to address challenges of using digital technologies, including AI, and cooperates closely with other organizations such as the EU, UN and Organisation for Economic Co-operation and Development.
The U.S. and the EU need to cooperate and defend innovation against technological threats from adversaries like China, U.S. Chief Technology Officer Michael Kratsios said Thursday in Portugal. He accused China of pushing an authoritarian state that favors “censorship over free expression and citizen control over empowerment.” China also uses technology to surveil and control its citizens and steals intellectual property from the U.S., he said, noting the importance of 5G and artificial intelligence. “If we don’t act now, Chinese influence and control of technology will not only undermine the freedoms of their own citizens, but all citizens of the world,” he said. The Chinese embassy in D.C. didn't comment.
The Senate Homeland Security Committee passed legislation Wednesday to provide artificial intelligence expertise to federal agencies and promote AI policy research (see 1905090019). The Artificial Intelligence in Government Act (S-1363/HR-2575), from Sens. Brian Schatz, D-Hawaii; Cory Gardner, R-Colo.; Rob Portman, R-Ohio; and Kamala Harris, D-Calif., awaits House consideration. The legislation would “provide training and resources that will be critical to the ongoing effort to modernize the federal government’s IT infrastructure,” BSA|The Software Alliance said.
Amazon came under scrutiny again over privacy issues related to its artificial intelligence data gathering. Bloomberg reported Thursday that dozens of Amazon workers based in India and Romania review select video clips captured by Amazon’s Cloud Cam, and the clips are used to train AI algorithms to distinguish between real threats and false alarms. It said at one point, on a typical day, some Amazon auditors were each annotating about 150 video recordings, which were typically 20 to 30 seconds long. Snippets for review came from employee tests and from Cloud Cam owners who submit clips to troubleshoot issues such as inaccurate notifications and video quality. Bloomberg said Amazon’s Cloud Cam user terms don’t spell out explicitly that human beings are training the algorithms behind the motion detection software and that some clips have included activity “homeowners are unlikely to want shared, including rare instances of people having sex.” Although Amazon has tight security policies -- employees in India aren’t allowed to use their mobile phones -- "that hasn’t stopped other employees from passing footage to non-team members," the article said. An Amazon spokesperson emailed Thursday that the company takes privacy seriously and puts Cloud Cam customers “in control of their video clips. Only customers can view their clips, and they can delete them at any time by visiting the Manage My Content and Devices page.” Customers can share a specific clip using the feedback option in the Cloud Cam app to help improve the service, she said. “When a customer chooses to share a clip, it may get annotated and used for supervised learning to improve the accuracy of Cloud Cam’s computer vision systems.
For example, supervised learning helps Cloud Cam better distinguish different types of motion so we can provide more accurate alerts to customers.” A FAQ on Amazon’s customer service page detailing who can view Cloud Cam clips says: “Only you or people you have shared your account information with can view your clips, unless you choose to submit a clip to us directly for troubleshooting. Customers can also choose to share clips via email or social media.”
Since 5G is nascent and won't be ubiquitous for a while, smart cities deploying advanced technology can't rely solely on that standard for tech advancement, ISP and other executives told local telecom officials. "5G is going to be great, a lot of your cities will start seeing pilots soon" from carriers, said Patti Zullo, Charter Communications' Spectrum Enterprise senior director-smart cities. Until it's the protocol of choice, "5G will not have that much effect in your city," she added Tuesday at NATOA in Tampa. With IoT and cities, "we’re not at a point of mass adoption" yet, said Comcast Vice President-IP Services Patti Loyack. Municipalities don't seek to become smart cities to get more data, Zullo noted. "They have plenty of data," she said. "They need to look at the data and have some 'aha' moments." Smart tech can help localities "tackle major problems," but such deployments haven't advanced fully, said US Ignite Executive Director Bill Wallace. Places deploying it could do real-time diagnostics and have open data, he said. "None of this is easy, and none of this will happen overnight." Moving to such a model is "hard work and it requires leadership to really take a leap of faith," Wallace said. "A lot of them take a while to pay back," he said of such investments, and "for citizens to see the benefit."
Policymakers should be aware that existing law already applies to artificial intelligence activity, said the U.S. Chamber of Commerce Monday. Written by the chamber’s Technology Engagement Center and Center for Global Regulatory Cooperation, the document offers 10 principles for governing AI use and regulation. It recognizes the need to build public trust in AI development, risk-based approaches to AI governance, public-private investment in research, the need for an “AI-ready” workforce, the promotion of open and accessible government data and “robust and flexible” privacy regimes. It also supports intellectual property frameworks that protect innovation, a commitment to cross-border data flow and the need for following international standards.
The Trump administration will request nearly $1 billion in nondefense spending on artificial intelligence R&D for FY 2020, U.S. Chief Technology Officer Michael Kratsios announced Tuesday. “This uniquely American ecosystem must do everything in its collective power to keep America’s lead in the AI race and build on our success,” Kratsios said at an Information Technology and Innovation Foundation Center for Data Innovation event. “Our future rests on getting AI right.” He noted the federal government in 2016 spent a billion dollars on AI R&D, including defense spending. AI will support jobs, drive economic growth and advance national security, he said. He contrasted U.S. strategy with authoritarian regimes' use of the technology, saying the administration’s AI strategy is rooted in rule of law. Other countries are using the technology to “surveil their population, limit free speech, and violate fundamental rights,” he said. President Donald Trump's February executive order mandates agencies to “prioritize investments” in AI R&D (see 1902110054).
Examining ways that a “natural person can contribute” to an artificial intelligence invention and qualify to be a “named inventor” on the patent is one of about a dozen issues on which the Patent and Trademark Office seeks comment, said a Federal Register notice Tuesday. PTO “has been examining AI inventions for decades and has issued guidance in many areas,” it said. It now wants to “engage with the innovation community and experts in AI to determine whether further guidance is needed,” it said. The goal is to “promote the predictability and reliability of patenting such inventions and to ensure that appropriate patent protection incentives are in place to encourage further innovation in and around this critical area,” it said. PTO has taken no position on AI patenting, nor is it “predisposed to any particular views,” the agency said. Comments are due Oct. 11, and PTO prefers they be emailed to AIPartnership@uspto.gov.