‘Know Your Venn Diagram’

Practical Compliance Advised as ADMT Regulation Heats Up

Businesses face a raft of incoming California regulations on automated decision-making technology (ADMT) from a variety of sources, but privacy lawyers said this week that resulting compliance plans need not be elaborate.

While this week’s big headline was final administrative approval of the California Privacy Protection Agency’s ADMT rules, which take effect Jan. 1 (see 2509230036), California Civil Rights Council rules on employers’ use of automated decision systems (ADS) become effective Oct. 1 (see 2506300056). In addition, the California Legislature recently passed a bill (SB-7) focused on employers’ use of ADS (see 2509150026), though it still requires the signature of Gov. Gavin Newsom (D) before it becomes law.

“You have to be mercilessly practical and know your Venn diagram,” advised Kelley Drye privacy attorney Alysa Hutnik in an interview Wednesday. “So much of this is really about not discriminating and having some granularity so that you can reasonably monitor for discrimination.” Regulators have zeroed in on areas where automated decisions could significantly impact people’s lives, such as their employment and financial stability, she said. And they give credit to companies that try to comply, she added. As such, she advised, “Get a quick-and-dirty first [effort] and then make it better over time.”

Hogan Lovells privacy attorney Katy Milner said the growing amount of California regulation around automated decisions “absolutely speaks to the grip this topic has on our discourse.” In a Tuesday email to Privacy Daily, she said, “Regulators want to understand how businesses are using ADMT. They are also trying to look ahead to future uses. They have the challenge of crafting policies that address both the knowns and unknowns about ADMT.” Milner predicted that states “with robust privacy regimes are going to continue to have particular interest in this topic, as so much of AI policy centers around the use of people’s data.”

As for where to focus, Hutnik noted that the CPPA’s ADMT rules “are probably the most granular” regarding what analysis is necessary because they “go in tandem with the risk-assessment reports” that the agency will also soon require. Therefore, she advised starting with the CPPA rules and then building in the Civil Rights Council measures and the new legislation as “subparts, so that you’re not doing separate exercises.”

Milner agreed that the CPPA’s ADMT rules are “well deserving of client focus,” as they add “significant new [guardrails] on how businesses can use ADMT and what rights consumers have in connection to this use.” They “address much more than just the use of the technology for employment decisions,” she added. “The rules define ‘significant decision’ to include decisions on financial and lending services, housing, education, and healthcare services -- in addition to employment.”

Businesses should plan now, since some dates in the CPPA rules, such as a Jan. 1 deadline to conduct a comprehensive risk assessment before using ADMT for a significant decision, are coming up fast, Milner said.

As for the Civil Rights Council rules that take effect in a week, Milner said companies using AI tools for hiring or employment-related decisions should be prepared to comply by the Oct. 1 effective date. “There are new record-keeping obligations to note,” the lawyer said. “But largely, this law clarifies that discrimination based on protected characteristics is unlawful -- whether done by a human or by a machine.”

Civil Rights Council Rules

The council’s updated Fair Employment and Housing Act (FEHA) regulations address AI usage in employment-related decisions. They clarify that using an ADS may violate state law if it negatively affects applicants or employees based on gender, race, disability or other protected characteristics. The rules also require employers and covered entities to preserve employment records, including automated-decision data, for at least four years from the date the record was created or the date of the personnel action involved, whichever is later. In addition, the new rules say ADS assessments, including tests, questions or puzzle games that elicit information about a disability, may constitute an unlawful medical inquiry.

Hutnik advised focusing on complying with the rules’ pre-notice requirements. “What is visible is always going to get most attention, and it’s the easiest [for regulators] to do a sweep on.” Companies should review the rules, then consider who must be consulted to determine whether the business is using any covered ADS technology with California employees, she said.

Fox Rothschild privacy attorney Odia Kagan boiled down the council’s rules in an Aug. 20 blog post: “If artificial intelligence results in employment discrimination, employer needs to realize it is still discrimination. But a bias audit can help.”

Erin Connell and other Orrick attorneys blogged on Aug. 12 that businesses should understand what the updated FEHA rules require -- and what they don’t. “While the FEHA and its implementing regulations already prohibited discrimination based on protected characteristics -- including discrimination effectuated by artificial intelligence -- the amendments now explicitly state that existing anti-discrimination protections apply to discrimination occurring through the use of an” ADS.

The Civil Rights Council rules don’t require bias testing, noted the Orrick attorneys. “However, if a covered entity faces a discrimination claim related to its use of an ADS, the regulations provide that evidence (or lack of evidence) of anti-bias testing or similar proactive efforts to avoid unlawful discrimination is relevant to assessing the validity of the claim. So, while not required, bias testing may be helpful for employers and other covered entities seeking to defend against a discrimination claim.”
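The rules don’t prescribe a testing methodology, but one common first-pass check is to compare selection rates across demographic groups against the EEOC’s “four-fifths” rule of thumb. Below is a minimal Python sketch of that calculation, offered only as a hypothetical illustration with made-up data and group labels, not as a method required by the FEHA regulations or the CPPA rules:

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rates from (group, selected) pairs."""
    totals, hits = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's rate divided by the highest group's rate.
    Under the EEOC's four-fifths rule of thumb, a ratio below
    0.8 flags potential adverse impact for closer review."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical outcomes of an automated resume screen:
# (group label, passed the screen?)
outcomes = ([("A", True)] * 48 + [("A", False)] * 52
            + [("B", True)] * 30 + [("B", False)] * 70)

rates = selection_rates(outcomes)
for group, ratio in sorted(impact_ratios(rates).items()):
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f}, ratio={ratio:.2f} ({flag})")
```

An impact ratio below 0.8 is a screening signal that warrants closer review, not a legal finding of discrimination; a defensible bias audit would also account for sample sizes, statistical significance and job-relatedness.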

“How the regulations will be interpreted and enforced remains to be seen,” the lawyers added. “By their plain language, the regulations do not materially expand a covered entity’s obligation not to discriminate under California law. Instead, they codify existing judicial interpretation of the FEHA while imposing new recordkeeping requirements. Nonetheless, employers and other covered entities should expect that government enforcement agencies and plaintiffs alike will be watching closely and ready to challenge any alleged AI-related discrimination.”