Harmful Economic Impact?

EC AI Draft Divides Stakeholders Over Biometric ID, Regulatory Burden

Tech sector and civil society groups continue to disagree over aspects of a proposed EU AI regulation, commenters said. The draft Artificial Intelligence Act (AIA) seeks to create trust in AI technologies via a risk-based approach that categorizes them as prohibited, high-risk or limited risk (see 2104210003). The proposal got generally positive though mixed reviews when it emerged in April. A European Commission consultation that closed Friday showed public interest and consumer groups remain worried about the measure's potential impact on human rights, while technology companies are concerned the law is too broad and could be onerous.


One key area of contention remains the use of remote biometric identification in public spaces. The draft "purports to specifically prohibit certain systems" such as those that use "real-time biometric identification in public area for law enforcement," said the Electronic Privacy Information Center. But the proposal's prohibitions are "too narrowly drafted, applying solely to a small subset of harmful systems," and biometric categorization systems should be "fully banned," it said. The measure should "comprehensively prohibit the use of remote biometric identification in publicly accessible spaces for any purpose," said European Digital Rights.

The proposal outlaws AI used to intentionally distort behavior in ways that cause someone physical or psychological harm, said the European Consumer Organisation. However, "AI that manipulates, discriminates, misleads or otherwise harms consumers in a manner that causes economic and societal harm is not covered." The Centre for Democracy & Technology Europe said the proposal "rightly classifies biometric surveillance by law enforcement in publicly accessible places as an 'unacceptable risk,' but the derogations include some of the highest risks to human rights and will swallow the rule."

The risk-based approach may not even be appropriate, said AlgorithmWatch, an advocacy group that analyzes the impact of automated decision-making systems on society. Instead of categorizing AI applications under the proposed three risk levels, every system deployed by public or private actors should undergo a risk assessment, with risk levels determined case by case, it said.

Tech sector concerns focused more on flexibility and potential burdens on companies.

Many of the measure's requirements aren't feasible or "assume best practices or standards that do not yet exist," said Facebook. The Information Technology Industry Council wants clearer definitions of AI and AI "providers" and said the law should "enshrine strict and meaningful safeguards to allow for responsible deployment of real-time, remote biometric identification for national security or law enforcement purposes." TechUK urged the European Parliament and Council to "explicitly clarify" the distinction between biometric identification and biometric authentication, saying this will ensure that "uses of facial recognition with implications for fundamental rights are restricted, while applications that merely compare the one-to-one likeness of a person to a document should not be subject to the same requirements."

Imposing a blanket ban on some uses and fields of AI, including social manipulation and real-time biometric identification systems, "doesn't align at all with a scaled risk management approach," said ACT|The App Association. Google and BSA|The Software Alliance urged the EC to allocate responsibility among AI stakeholders in a way that reflects the complexity of the ecosystem. The act should focus on outcomes and processes rather than being prescriptive, said Microsoft.

The Consumer Technology Association voiced concern that the full impact of the costs from AIA "may not be fully appreciated." It cited a July Center for Data Innovation study saying AIA will cost the European economy 31 billion euros ($36.5 billion) over the next five years "and reduce AI investments by almost 20 percent." Unquantifiable costs the EC omitted from its impact assessment include deterring investment in European AI startups, slowing digitization of the economy and encouraging "brain drain" as European entrepreneurs move to countries with fewer bureaucratic hurdles, CTA said.

Separately, the Department of Homeland Security unveiled an artificial intelligence and machine learning strategic plan June 30, identifying research it will carry out to understand the opportunities and risks of AI and ML and their impact on DHS missions.