Applying FTC’s Observations on Finding Diverse Talent in Manufacturing Industry
Thursday, July 8, 2021

Tour a manufacturing facility where product demand exceeds labor supply and you will likely see owners working long hours alongside their workforce to fill orders while trying a variety of approaches to staffing vacancies. Employers large and small share the same experience: difficulty recruiting, hiring, and retaining diverse talent in today's environment.

Over the years, employers and employment recruiters have successfully relied upon technology and artificial intelligence (AI) for sourcing and automating the recruitment process to obtain qualified candidates. AI may be used in the screening process, for example, to identify which candidates have the requisite experience in a particular field. How does a company make more strategic business decisions to enhance Diversity, Equity and Inclusion (DEI) initiatives, while trying to attract talent? Are there any unintended consequences of relying upon algorithms, AI, and resulting data analytics in staffing decision-making or other employment-related decisions? As the Federal Trade Commission (FTC) notes, careful steps need to be taken to minimize exposure to potential discrimination claims by adversely affected individuals, e.g., women or other members of protected groups.
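To make the screening step concrete, below is a minimal, hypothetical sketch of the kind of rules-based resume screen an AI recruiting tool might automate. The keywords, fields, and candidates are illustrative assumptions, not any particular vendor's product.

```python
# Hypothetical sketch: keyword-based resume screening of the kind an AI
# recruiting tool might automate. Keywords and candidate records are
# illustrative assumptions only.

REQUIRED_KEYWORDS = {"cnc", "machining", "blueprint reading"}  # assumed role requirements

def has_requisite_experience(resume_text: str) -> bool:
    """Flag a candidate whose resume mentions every required keyword."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

candidates = [
    {"name": "A", "resume": "10 years CNC machining, blueprint reading"},
    {"name": "B", "resume": "Warehouse logistics and forklift operation"},
]
shortlist = [c["name"] for c in candidates if has_requisite_experience(c["resume"])]
print(shortlist)  # ['A']
```

Even a screen this simple can embed bias, for example if the required keywords track credentials that are unevenly distributed across protected groups, which is precisely the risk the FTC highlights below.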

This challenge may be particularly acute in some sectors of manufacturing, where companies can struggle to attract a strong pool of qualified talent, especially for entry-level and skilled positions.

The FTC has reminded companies that, despite the advances in AI, even “neutral” technology can produce racially biased results, creating unintended disparities for individuals of color. What can be done to avoid such unfavorable outcomes?

Based upon the FTC’s experience with data analytics, its hearing on algorithms, AI, and predictive analytics, along with its enforcement activities, the agency offers some observations for using AI in a “truthful, fair and equitable” way. These include:

  • Start off on solid ground, using complete data. Think about improving your data. Design models to account for any shortcomings or data gaps.

  • Let transparency and truthfulness guide what is said and done. In this instance, the FTC gives the following example: “Let’s say an AI developer tells clients that its product will provide ‘100% unbiased hiring decisions,’ but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination and an FTC law enforcement action.”

  • Periodically monitor outcomes to reduce risk. Testing may be needed before deployment and periodically thereafter to make certain a well-intentioned algorithm does not create unintended bias, whether based on race, gender, or any other protected characteristic (see the sketch after this list).

  • If AI is doing more harm than good, revisit it; the FTC can challenge a “neutral” model that turns out to produce unfair results.
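
One way to operationalize the monitoring point above is the EEOC's “four-fifths” rule of thumb, under which a group's selection rate below 80% of the highest group's rate is a common flag for adverse impact. The sketch below assumes hiring-funnel counts by group are available; the group labels, counts, and threshold handling are illustrative, not a compliance test.

```python
# Minimal sketch, assuming hiring-funnel counts by group are available.
# Applies the EEOC "four-fifths" rule of thumb: a group's selection rate
# below 80% of the highest group's rate is a common flag for adverse impact.
# Group labels and counts are hypothetical.

from typing import Dict

def selection_rates(applicants: Dict[str, int], selected: Dict[str, int]) -> Dict[str, float]:
    """Selection rate per group: hires divided by applicants."""
    return {g: selected[g] / applicants[g] for g in applicants}

def impact_ratios(applicants: Dict[str, int], selected: Dict[str, int]) -> Dict[str, float]:
    """Each group's selection rate relative to the highest-rate group."""
    rates = selection_rates(applicants, selected)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

applicants = {"group_a": 200, "group_b": 150}  # hypothetical counts
selected = {"group_a": 60, "group_b": 27}

for group, ratio in impact_ratios(applicants, selected).items():
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

A ratio below 0.8 does not itself establish discrimination, but it is a conventional trigger for closer statistical and legal review of the model and its inputs.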

Finally, accountability is key. Companies need to take responsibility for the AI they use; the FTC undoubtedly has that expectation.
