From The Jetsons to Reality, or Almost: What Employers Need to Know About Robots and AI in the Workplace
Many readers will remember The Jetsons – a futuristic world in which sophisticated robots in both the home and the workplace could work, think, learn, and interact with humans. While The Jetsons’ rendering of the “future” has not come to fruition, robots and artificial intelligence (AI) have made and continue to make their way into the modern workplace at breakneck speed, creating unprecedented opportunities and challenges for employers in nearly every sector of the economy. This series will explore those challenges, a topic of considerable importance to employers but one that has been overshadowed by the cost savings and potentially positive economic impact that robots and AI can bring to a workplace. As the use of robots and AI in the workplace has increased and will continue to do so, employers must be proactive about identifying, understanding, and mitigating risks and areas of potential exposure. The future is coming, and in many ways is already here.
This article provides both an introduction to the series and an overview of AI, while briefly discussing the proliferation of robots and AI in the workplace. Next, we analyze the implications of AI in two areas already rife with legal exposure for employers and particularly ripe for legal exposure in this brave new world: employee hiring and firing.
In general terms, AI refers to the science and engineering of making intelligent machines capable of performing tasks that typically require human or natural intelligence, such as the ability to learn, problem-solve, and plan. The result of these advances is systems that are capable of performing tasks that were once the exclusive province of humans.
When you hear the words robots and AI, you may think of the manufacturing sector, an area of the economy that has historically used robots in large part due to the predictable, routine, and physical nature of the tasks involved. While those jobs may be the most vulnerable to displacement due to AI, the number of robots in use worldwide – across all industries – has multiplied three-fold in the last 20 years, to 2.25 million. Indeed, the reach of AI has been expansive, spanning industries such as aircraft maintenance, hospitality, healthcare, food service, transportation and even independent professional baseball league umpiring in the form of an automated strike zone.
As the use of AI continues to expand and increase, the statistics relating to AI’s effect – in the form of job creation and destruction – are significant:
According to a September 2018 World Economic Forum (WEF) report, by 2025, machines will perform more current work tasks than humans, compared to the 71 percent performed by humans today.
WEF estimates that 75 million jobs may be eliminated because of AI, while 133 million new roles may emerge as a result of the technology.
McKinsey Global (MG) reports that by 2030, 400 million workers worldwide could be displaced by AI.
Moreover, between 75 and 375 million of those workers will need to change occupational categories and learn new skills to remain part of the workforce.
Big-picture implications aside, AI has affected and will continue to affect employers on a smaller, day-to-day scale, presenting new areas of potential exposure.
Employee Hiring Implications
A prime area of potential exposure for employers using or considering using AI is employee hiring. AI is transforming hiring, eliminating the need for HR departments to pore over and pre-screen countless applications to identify qualified candidates, schedule interviews, and answer applicant questions. According to a December 2016 article in the Harvard Business Review, AI is being used by businesses to screen out up to 70 percent of job applicants without any of the candidates having interacted with a human being. AI is also being used to analyze existing employee data to predict an applicant’s future success, working to more closely match candidates with existing top performers and the employer’s culture.
AI startup pymetrics, for example, offers game-based recruiting tools designed to help companies “predict the right person for the job.” Touting companies such as Unilever, Randstad, Coty, and Accenture as some of its 50+ global clients, pymetrics creates custom algorithms based on qualities exemplified by a company’s top performers and evaluates applicants based on how they score on the desired qualities through a series of online games.
Another example is HireVue, a platform that offers assessment algorithms that pick up on more than 20,000 visual and audio cues – facial expressions and body language – during video interviews and compares them with data collected from existing top performers. One of the world’s largest hotel chains currently uses this technology for call center positions.
The potential benefits of AI-enhanced hiring seem obvious – it saves time and can remove human bias that may cause a decision-maker to prefer one candidate over others (e.g., a shared alma mater or a mutual love of a particular sports team), leading to improvements in the fit and diversity of corporate teams. pymetrics, for example, advertises “bias-free algorithms,” a 75 percent reduction in hiring time, and twice the hiring yield. Users of HireVue have seen similar improvements – one major hotel chain has reduced time-to-hire from six weeks to seven days, and Unilever has reported a 16 percent increase in new-hire diversity.
But the use of AI in employee hiring can be problematic. Even if an algorithm appears facially neutral or does not intentionally screen out candidates in classes protected by state and federal anti-discrimination laws, it could still have a disparate impact, forming the basis for an employment discrimination lawsuit against the company. Seemingly innocuous criteria – a geographic requirement that applicants live in specified zip codes or a certain distance from the employer’s facility, for instance – can produce a disparate impact, excluding members of protected classes. For this reason, pymetrics advertises that it “[m]ethodically remove[s] bias from algorithms by iterative algorithm auditing processes, ensuring lack of bias.” HireVue claims to be similarly attuned, applying a comprehensive procedure to eliminate bias in each algorithm it builds.
An employer using or considering using AI in recruitment should follow suit, continually monitoring, stress-testing, auditing and fine-tuning its algorithms (either itself or through the third-party algorithm provider it selects) to ensure that they do not have a disparate impact on any protected class. If the employer determines that the algorithm does have such an impact, the employer should either change the algorithm to eliminate the disparate impact or ensure – and be prepared to defend with hard evidence – that the criteria being utilized are justified by legitimate business requirements.
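One common first-pass test for the kind of disparate impact described above is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures. The sketch below illustrates the arithmetic with hypothetical group labels and counts of our own choosing; a ratio below 0.8 is a screening signal warranting investigation, not a legal conclusion.

```python
# Hypothetical sketch of a four-fifths (80 percent) rule check.
# Group names and applicant counts are illustrative, not real data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the screen."""
    return selected / applicants

def four_fifths_ratios(rates: dict) -> dict:
    """Compare each group's selection rate to the highest group's rate.

    Under the EEOC's Uniform Guidelines, a ratio below 0.8 is a
    common red flag for potential disparate impact.
    """
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Illustrative screening outcomes from a hypothetical algorithm.
rates = {
    "group_a": selection_rate(48, 100),  # 0.48 selection rate
    "group_b": selection_rate(30, 100),  # 0.30 selection rate
}

ratios = four_fifths_ratios(rates)
flagged = {g: r for g, r in ratios.items() if r < 0.8}
# group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so the
# criteria driving that outcome should be reviewed and defended.
```

A failing ratio does not itself establish discrimination, but it tells the employer which criteria to re-examine and, if retained, to justify with hard evidence of business necessity.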
At an even more basic level, employers should endeavor to understand how the algorithm works and how it makes its decisions. If the employer is partnering with a third-party provider, some of this information may be proprietary and/or confidential. Nevertheless, the employer should understand the process and be able to articulate why an applicant was rejected. Lack of this information, or failure of the employer to actively test or audit the algorithm, may hamstring an employer’s defense in an employment discrimination lawsuit filed by a rejected candidate.
Employee Termination Implications
AI also raises important implications for employers in the area of termination. As part of a voluntary reduction in force or a mass layoff, employers identify which position(s) will be targeted for elimination – typically a deliberate, detailed, and time-consuming process through which an employer ensures the criteria applied to the selection are business-related, objective, and legally defensible. In this regard, employers often use various methods to test the criteria being applied to ensure that their application does not have a disparate impact on protected classes. As with hiring and recruitment, employers need to be certain that – to the extent AI is contributing to any downsizing analysis – the methodologies, algorithms, and outcomes are tested and sound.
Interestingly, some employers have minimized these areas of exposure by taking an entirely different approach: forgoing RIFs/layoffs and instead retraining employees whose jobs have been displaced by automation to perform new and different tasks, many of which were created by automation itself. Bulk grocery e-commerce giant Boxed is a prime example. In 2018, Boxed replaced 75 percent of its warehouse jobs at its Union, New Jersey fulfillment center with robots. Instead of firing the displaced workers, Boxed announced that it would retrain the employees for new jobs. Along the same lines, e-commerce behemoth Amazon recently announced that it would commit $700 million to retrain 100,000 of its US workers by 2025, a move meant to help them pursue new paths at the company, which has embraced automation and AI in recent years.
AI is having a real, noticeable, and expanding effect in today’s workplace. Those employers that are best able to anticipate the challenges and leverage the benefits will have a competitive advantage as the role of AI continues to proliferate.
Many of the jobs created by AI – and the degrees required to perform them – may not have existed 20 years ago (and may not even exist today). If members of protected classes are eliminated as a result, this may be an additional area of potential exposure for employers.