May 20, 2022


EEOC, DOJ Warn Artificial Intelligence in Employment Decisions Might Violate ADA

The U.S. Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ), on May 12, 2022, issued guidance advising employers that the use of artificial intelligence (AI) and algorithmic decision-making processes to make employment decisions could result in unlawful discrimination against applicants and employees with disabilities.

The new technical assistance from the EEOC highlights issues the agency thinks employers should consider to ensure such tools are not used to treat job applicants and employees in ways that the agency says might constitute unlawful discrimination under the Americans with Disabilities Act (ADA). The DOJ jointly issued similar guidance to employers under its authority. Further, the EEOC provided a summary document designed for use by employees and job applicants, identifying potential issues and laying out steps employees and applicants can take to raise concerns.

The EEOC identified three “primary concerns”:

  • “Employers should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools;

  • Without proper safeguards, workers with disabilities may be ‘screened out’ from consideration in a job or promotion even if they can do the job with or without a reasonable accommodation; and

  • If the use of AI or algorithms results in applicants or employees having to provide information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or medical exams.”

The EEOC outlined examples of when an employer might be held liable under the ADA. For instance, an employer may be found to have discriminated against individuals with disabilities by using a pre-employment test—even if that test was developed by an outside vendor. In such a case, employers may have to provide a “reasonable accommodation,” such as giving the applicant extended time or an alternate test.

The EEOC also identified a number of “promising practices” that employers should consider to mitigate the risk of ADA violations connected to their use of AI tools. Among other “promising practices,” the EEOC recommends:

  • Telling applicants or employees what steps any evaluative process includes (e.g., if there is an algorithm being used to assess an employee) and providing a way to request a reasonable accommodation.

  • Using algorithmic tools that have been designed to be accessible to individuals with as many different types of disabilities as possible.

  • Describing in plain language and accessible format the traits that an algorithm is designed to assess, the method by which the traits are assessed, and the variables or factors that may affect a rating.

  • Ensuring that the algorithmic tool only measures abilities or qualifications that are truly necessary for the job, even for people who are entitled to on-the-job reasonable accommodations.

  • Ensuring that the necessary abilities or qualifications are measured directly rather than by way of characteristics or scores that are correlated with the abilities or qualifications.

  • Asking an algorithmic tool vendor to confirm that the tool does not ask job applicants or employees questions likely to elicit information about a disability or seek information about an individual’s physical or mental impairment or health, unless the inquiries are related to a request for reasonable accommodation.

The technical assistance applies to the growing use of AI and algorithmic decision-making tools in recruitment, such as screening resumes and administering computer-based tests, and in other employment decisions, such as pay and promotions, the EEOC stated. It is not meant to establish new policy but to explain existing principles for the enforcement of the ADA and previously issued guidance, the EEOC stated.

The new assistance comes after EEOC Chair Charlotte A. Burrows launched the agency’s Artificial Intelligence and Algorithmic Fairness Initiative in October 2021 to examine the use of AI, machine learning, and other emerging technologies in the context of federal civil rights laws.

A growing number of jurisdictions, including Illinois and New York City, have also begun to pass laws regulating the use of certain types of AI and algorithmic decision-making tools in employment decisions.

© 2022, Ogletree, Deakins, Nash, Smoak & Stewart, P.C., All Rights Reserved. National Law Review, Volume XII, Number 133

About this Author

Jennifer Betts
Shareholder

Jenn Betts represents and counsels employers on complex traditional labor and employment matters. She has defended numerous employment class and collective actions for clients in a wide array of industries, including retailers, manufacturers, banks, and the energy sector.

Jenn also has broad National Labor Relations Act experience, having tried numerous unfair labor practice trials in front of NLRB administrative law judges involving claims such as workforce terminations, allegedly unlawful policies,...

412-246-0153