Export Compliance Daily is a Warren News publication.

OECD Members Sign AI Principles; Stakeholders Seek More Details

Governments around the world should promote public-private investment in research and development to spur innovative and safe application of artificial intelligence technology, the Organisation for Economic Co-operation and Development said Wednesday. OECD’s 36 members, including the U.S., and six other countries signed a set of AI principles at its annual Ministerial Council Meeting in Paris.


Privacy advocates in interviews expressed hope that future documents address more in-depth issues, as well as the need for regulation or legislation. An OECD staffer said that will happen. Wednesday's release was expected (see 1905150022).

The principles, which aren’t legally binding, call for transparent and “responsible disclosure” of AI system operations. This will help ensure people “understand when they are engaging with them and can challenge outcomes,” OECD said. The document urges continuous assessment and management of AI systems to ensure safety throughout machine lifetimes. The systems should include appropriate safeguards, like “enabling human intervention where necessary -- to ensure a fair and just society,” OECD said. AI system operators should be “held accountable for their proper functioning in line with” the principles, OECD said.

AI designs should respect the rule of law, human rights and democratic values, the multinational group said. The organization also recommended fostering "accessible AI ecosystems with digital infrastructure and technologies, and mechanisms to share data and knowledge." The paper urged AI workforce training and cross-border data sharing, and called for "inclusive growth, sustainable development and well-being."

Next month, the organization will release an overview of AI with additional information, said OECD Head-Digital Economy Policy Anne Carblanc in an interview. And late this year or early next, it will issue what she called "practical guidance" that will more fully flesh out the new 12-page report. It will be "a living document" with "best practices for implementation of the recommendations" and wide input from stakeholders, she said. "It will provide a number of illustrations when it comes to implementing the principles at the national level."

It's the first significant international commitment to “common” AI principles, said Deputy Assistant to the U.S. President for Technology Policy Michael Kratsios. He noted the document reaffirms a commitment to strengthen public trust, protect civil liberties and remain true to democratic principles. AI innovation is “an essential tool to drive economic growth, empower workers, and lift up quality of life for all,” he said.

Center for Democracy & Technology Vice President-Policy Chris Calabrese said he generally agrees with the release, particularly the first half, which is devoted to AI principles. "I thought the document could have been stronger in talking about how we will enable and shape the environment to do these policy things" that could incorporate such principles, he added. "We need to also recognize that the government has a crucial role in solving the explainability, redress, bias issues." The document "asks the right questions, particularly around values" to keep in mind on AI, Calabrese said. "I just wish there was a little more translating that into action in the second half of the document," which discusses some applications of the technology and how to promote it. There’s not "a whole lot of connection between all the issues" and how "policies of nations are going to address them," he added.

OECD should keep in mind that privacy is part of broader human rights, said Human Rights Watch's Amos Toh, a senior researcher on AI and human rights. Privacy is "mentioned in passing in this document," he noted. The release raises "a lot more questions than it provides answers" to, and Toh hopes for "practical guidance on some of these questions." The document doesn't spell out mechanisms, such as remedies, for those harmed by AI, he said. On the systematic risk management section, "it's not clear to me what principles are going to guide this" approach: "It really begs the question of what foundational principles" will be used.

OECD's Carblanc said existing laws and rules already apply to AI. There's "a need for kind of an inventory of what is already applicable" law- and rule-wise, she said. "For the moment, it’s premature" to recommend government mandates here, the staffer said. "Once we are finished with the in-depth work on what does it mean, really, with the privacy communities, then we are able to provide something more meaningful" on fleshing out issues like privacy and data security, Carblanc said.

The European Commission also supported the principles, which will be up for discussion at the upcoming G20 Leaders’ Summit in Japan. The international standards are designed to “ensure AI systems are designed to be robust, safe, fair and trustworthy,” OECD said.

With companies quickly moving to develop AI tech, there's "mounting concern about whether it will be used for good ends and in a way that is protective of consumers," said Robert Silvers, who co-chairs Paul Hastings' AI practice group, in a statement to us. "It's significant that so many countries have now agreed to a framework for responsible use. The details of implementation will matter a huge amount, but this is an important starting point for aligning governments and industry on some basic rules of the road. People want to know that their data will be protected, that they'll be kept safe, that someone will be held accountable if an algorithm makes a decision that hurts them. These principles form the governance architecture for trustworthy and human-focused AI."