EEOC Issues Guidance on the Interplay between the Use of Artificial Intelligence in Employment Decisions and the ADA (US)
Wednesday, May 18, 2022

Many businesses use artificial intelligence (“AI”), algorithms, software, and other forms of technology to make employment-related decisions. Employers now have an array of computer-based tools at their disposal to assist them in hiring employees, monitoring job performance, determining pay or promotions, and establishing the terms and conditions of employment. As such, many employers rely on different types of software that incorporate algorithmic decision-making and AI at a variety of stages of the employment process.

For example, some employers use resume scanners that prioritize applications using certain keywords, and some use video interviewing software to evaluate candidates based on their facial expressions and speech patterns. Further, some employers use “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements. In addition, some employers use testing software that creates “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit.” Others use employee-monitoring software that rates employees based on their keystrokes or other task-based factors. Employers may use these tools in a benign attempt to be more efficient, increase objectivity, or decrease the potential effects of implicit bias. However, the use of these tools may inadvertently disadvantage job applicants and employees with disabilities and may even violate the Americans with Disabilities Act (“ADA”).
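To make the screening concern concrete, the following minimal sketch (a hypothetical Python example, not any vendor's actual product; the keywords and cutoff are invented) shows how a naive keyword-based resume screen could reject a qualified applicant who describes equivalent experience in different terms, for example because the work was performed with assistive technology:

```python
# Hypothetical illustration only; not any vendor's actual screening product.
# A naive keyword screen: rank applicants by how many required keywords appear
# in the resume text and automatically reject anyone below a cutoff.

REQUIRED_KEYWORDS = {"customer service", "data entry", "scheduling"}  # invented examples
CUTOFF = 2  # minimum number of keyword matches needed to advance

def keyword_score(resume_text: str) -> int:
    """Count how many required keywords appear in the resume."""
    text = resume_text.lower()
    return sum(1 for kw in REQUIRED_KEYWORDS if kw in text)

def advances(resume_text: str) -> bool:
    """Reject any applicant whose score falls below the cutoff."""
    return keyword_score(resume_text) >= CUTOFF

# An applicant who performed comparable work with assistive technology may
# phrase it differently and be rejected before a human ever reads the resume.
resume = "Managed client appointments using a screen reader and voice dictation."
print(advances(resume))  # False: screened out despite relevant experience
```

In this hypothetical, the applicant's relevant experience never reaches a human reviewer simply because it is described in terms the screen does not expect, which is the kind of unintended “screen out” addressed by the guidance discussed below.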

Accordingly, on May 12, 2022, the U.S. Equal Employment Opportunity Commission (“EEOC”) released guidance advising employers that the use of AI and algorithmic decision-making tools to make employment decisions could result in unlawful discrimination against applicants and employees with disabilities. The EEOC’s technical assistance discusses potential pitfalls the agency wants employers to be aware of to ensure such tools are not used in discriminatory ways. Specifically, the guidance outlines how existing ADA requirements may apply to the use of AI in employment-related decisions and offers “promising practices” for employers to help with ADA compliance when using AI decision-making tools. This guidance is not meant to be new policy but rather is intended to clarify existing principles for the enforcement of the ADA and previously issued guidance.

The ADA and analogous state laws prohibit covered employers from discriminating against qualified employees and applicants based on known physical or mental disabilities, and also require employers to provide those individuals with reasonable accommodations for their disabilities. According to the EEOC, one of the most common ways that an employer’s use of AI or other algorithmic decision-making tools could violate the ADA is if the employer fails to provide a reasonable accommodation that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm. Further, ADA violations may arise if an employer relies on an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though that individual is able to do the job with a reasonable accommodation. Moreover, employers may violate the ADA if they use an algorithmic decision-making tool that runs afoul of the ADA’s restrictions on disability-related inquiries and medical examinations.

With these issues in mind, the EEOC identified a number of “promising practices” that employers should consider to help mitigate the risk of ADA violations connected to their use of AI tools. Specifically, in order to comply with the ADA when using algorithmic decision-making tools, the EEOC recommends the following best practices:

1. Employers must provide reasonable accommodations when legally required, and the EEOC recommends the following practices that will help employers meet this requirement:

  • Training staff to: (i) recognize and process requests for reasonable accommodation as quickly as possible, including requests to retake a test in an alternative format, or to be assessed in an alternative way, after the individual has already received poor results; and (ii) develop or obtain alternative means of rating job applicants and employees when the current evaluation process is inaccessible or otherwise unfairly disadvantages someone who has requested a reasonable accommodation because of a disability.

  • If the algorithmic decision-making tool is administered by an entity with authority to act on the employer’s behalf (such as a testing company), employers should ask the entity to send all requests for accommodation promptly to be processed by the employer in accordance with ADA requirements. In the alternative, the EEOC advises employers to enter into an agreement with the third party requiring it to provide reasonable accommodations on the employer’s behalf, in accordance with the employer’s obligations under the ADA.

2. Employers should reduce the chances that algorithmic decision-making tools will disadvantage individuals with disabilities, either intentionally or unintentionally. According to the EEOC, employers can do this by:

  • Using algorithmic decision-making tools that have been designed to be accessible to individuals with as many different kinds of disabilities as possible, thereby decreasing the chances that such individuals will be unfairly disadvantaged in the assessments;

  • Informing all job applicants and employees who are being assessed that reasonable accommodations are available for individuals with disabilities, and providing clear and accessible instructions for requesting such accommodations; and

  • Describing, in plain language and in accessible formats, the traits that the algorithm is designed to assess, the method by which those traits are assessed, and the variables or factors that may affect the rating.

3. Employers may also seek to minimize the chances that algorithmic decision-making tools will assign poor ratings to individuals who are able to perform the essential functions of the job, with a reasonable accommodation if one is legally required. Employers may accomplish this goal by:

  • Ensuring that the algorithmic decision-making tools only measure abilities or qualifications that are truly necessary for the job—even for people who are entitled to an on-the-job reasonable accommodation; and

  • Ensuring that necessary abilities or qualifications are measured directly, rather than by way of characteristics or scores that are merely correlated with those abilities or qualifications, as illustrated in the sketch following this list.

4. Before adopting an algorithmic decision-making tool, employers should ask the vendor to confirm that the tool does not ask job applicants or employees questions that are likely to elicit information about a disability or seek information about an individual’s physical or mental impairments or health, unless such inquiries are related to a request for reasonable accommodation.
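The distinction between direct measures and correlated proxies, referenced in the third practice above, can be illustrated with another brief hypothetical sketch (the metrics, names, and numbers below are invented for illustration and do not come from the EEOC guidance). A monitoring tool that rates workers on keystroke volume alone can penalize someone who produces accurate work through voice dictation or other assistive technology, while a rating based on completed output does not:

```python
# Hypothetical illustration only; contrasts a proxy metric with a direct
# measure of the ability that actually matters for the job.

from dataclasses import dataclass

@dataclass
class WorkSample:
    keystrokes_per_minute: float   # proxy: raw typing activity
    reports_completed: int         # direct: actual work produced
    error_rate: float              # direct: quality of that work

def proxy_rating(sample: WorkSample) -> float:
    """Rate the worker on keystroke volume alone; a worker who relies on
    voice dictation may type little yet produce excellent work."""
    return sample.keystrokes_per_minute

def direct_rating(sample: WorkSample) -> float:
    """Rate the worker on completed, accurate output, which is the ability
    actually necessary for the job."""
    return sample.reports_completed * (1.0 - sample.error_rate)

worker = WorkSample(keystrokes_per_minute=12.0, reports_completed=9, error_rate=0.02)
print(proxy_rating(worker))   # 12.0: looks poor if judged on typing activity
print(direct_rating(worker))  # 8.82: reflects the work actually produced
```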

As technology continues to develop and the use of AI in employment decision making becomes even more prevalent, the EEOC will likely expand on its guidance regarding employers’ use of AI and how it intersects with both the ADA and other federal anti-discrimination laws. As always, we will continue to update you on any new developments that may arise.

 
