October 20, 2021

Volume XI, Number 293

Employers Beware: The EEOC is Monitoring Use of Artificial Intelligence

Earlier this month, the Equal Employment Opportunity Commission (EEOC) held a webinar on artificial intelligence (AI) in the workplace. Commissioner Keith Sonderling explained that the EEOC is monitoring employers’ use of such technology in the workplace to ensure compliance with anti-discrimination laws. The agency recognizes the potential for AI to mitigate unlawful human bias, but is wary of rapid, undisciplined implementation that may perpetuate or accelerate such bias. Sonderling remarked that the EEOC may use Commissioner charges—agency-initiated investigations unconnected to an employee’s charge of discrimination—to ensure employers are not using AI in an unlawful manner, particularly under the rubric of disparate impact claims.

The EEOC’s interest in this topic is not new.  The agency previously held a public meeting in October 2016 discussing the use of big data in the workplace and the implications for employment law practitioners.  But the most recent webinar likely reflects the EEOC’s response to a November 2020 letter, authored by ten U.S. Senators, asking the agency to focus on employers’ use of artificial intelligence, machine-learning, and other hiring technologies that may result in discrimination.  We previously blogged about this letter here.

Many attorneys and AI commentators agree that AI, such as automated candidate sourcing, resume screening, or video interview analysis, is not a panacea for employment discrimination.  The technology, if not carefully implemented and monitored, can introduce and even exacerbate unlawful bias.  This is because algorithms generally rely on a set of human inputs, such as resumes of high-performing existing employees, to guide their analysis of candidates.  If those inputs lack diversity, the algorithm may reinforce existing institutional bias at breakneck speed.  This can lead to claims of disparate impact discrimination.  The EEOC would most assuredly take a heightened interest in any such claims.

Although the EEOC has flagged these issues, it has not yet issued written guidance on the use of AI in employment decisions.  In his remarks, Sonderling confirmed that the most relevant guidance document is over 40 years old.  He was referring to the EEOC’s 1978 Uniform Guidelines on Employee Selection Procedures.  That guidance, written in the wake of the 1960s civil rights movement, outlines different ways employers can show that employment tests and other selection criteria are job-related and consistent with business necessity.  Although dated, the same principles that justified the validity of selection procedures in the 1970s can guide employers using AI today.  One such method, called the 80% rule, explains that a selection rate for any race, sex, or ethnic group which is less than eighty percent (80%) of the selection rate for the group with the highest selection rate constitutes a “substantially different rate of selection,” indicating possible disparate impact.  According to the Uniform Guidelines, this rule of thumb may be used by employers to test AI tools prior to implementation and to regularly audit such tools after implementation.
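The 80% rule described above can be reduced to simple arithmetic. As a minimal sketch (the function name, group labels, and counts below are hypothetical, for illustration only), an employer auditing an AI screening tool might compare each group's selection rate against 80% of the highest group's rate:

```python
def four_fifths_check(selection_counts):
    """Apply the 80% (four-fifths) rule of thumb from the EEOC's 1978
    Uniform Guidelines to per-group selection outcomes.

    selection_counts: dict mapping group name -> (selected, applicants).
    Returns a dict mapping group name -> (selection_rate, flagged),
    where flagged is True when the group's rate falls below 80% of the
    highest group's rate, indicating possible disparate impact.
    """
    rates = {g: sel / apps for g, (sel, apps) in selection_counts.items()}
    highest = max(rates.values())
    return {g: (rate, rate < 0.8 * highest) for g, rate in rates.items()}

# Hypothetical pass-through rates from an AI resume-screening tool:
results = four_fifths_check({
    "Group A": (48, 80),  # 60% selection rate (highest)
    "Group B": (30, 75),  # 40% rate, below 80% of 60% (i.e., 48%)
})
```

Here "Group B" would be flagged, because its 40% selection rate is less than four-fifths of "Group A"'s 60% rate. The same comparison can be run before deployment on historical data and repeated as a periodic audit after implementation.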

Employers should be mindful of the EEOC’s awareness on this topic and the availability of Commissioner charges to uncover disparate impacts without the need for an employee charge.  As Commissioner Sonderling explained, job applicants and employees are often unaware that they were excluded from a certain job because of flawed or improperly calibrated AI software.  As a result, Commissioner charges may become an important tool for the EEOC.  To avoid being the target of such investigations, employers should focus on careful implementation of AI, which includes auditing such tools to validate any AI-influenced decision-making.  As this is an emerging field, employers should also stay abreast of developments in the law and consult with employment counsel when deciding whether and how to use AI in the workplace.

Copyright © 2021, Hunton Andrews Kurth LLP. All Rights Reserved. National Law Review, Volume XI, Number 264

About this Author

Daniel J. Butler, Associate
Labor & Employment Litigation, Hunton Andrews Kurth, Miami, FL

Dan advises and represents businesses facing complex employment law issues.

As part of his litigation practice, Dan represents employers in state and federal courts in discrimination, harassment, and retaliation lawsuits, whistleblower claims, and wage and hour collective actions. He also has experience representing companies before state and federal administrative agencies, including the Florida Commission on Human Relations and the Equal Employment Opportunity Commission.

To help clients avoid litigation, Dan regularly performs internal investigations and...

305-810-2519
Kevin J. White, Partner
Employment Lawyer, Hunton AK

Kevin co-chairs the firm’s labor and employment team and has a national practice that focuses on complex employment litigation and employment advice and counseling.

In particular, Kevin has extensive experience representing clients in the retail, energy and financial services industries in discrimination class action litigation, governmental agency systemic discrimination investigations, and wage and hour litigation. Other significant aspects of his practice include conducting internal investigations, advising clients regarding executive...

202-955-1886 (direct)