As Facial Recognition Technology Surges, Organizations Face Privacy, Cybersecurity, and Fraud Concerns
Wednesday, July 28, 2021

Facial recognition technology has become increasingly popular in recent years in the employment and consumer spaces (e.g., employee access, passport check-in systems, payments on smartphones), and in particular during the COVID-19 pandemic. As the need arose to screen persons entering a facility for symptoms of the virus, including temperature, thermal cameras, kiosks, and other devices embedded with facial recognition capabilities were put into use. However, many have objected to the use of this technology in its current form, citing problems with its accuracy, and now, more alarmingly, there is growing concern that "Faces Are the Next Target for Fraudsters," as summarized in a recent Wall Street Journal ("WSJ") article.

In the last year, there has been an uptick in hackers trying to "trick" facial recognition technology in a myriad of settings, such as fraudulently claiming unemployment benefits from state workforce agencies. The majority of states now use facial recognition technology to verify eligible citizens, ironically enough, in order to prevent other types of fraud. As discussed in the WSJ article, ID.me, Inc., a firm that provides facial recognition software to 26 states to help verify individuals eligible for unemployment benefits, saw more than 80,000 attempts to fool government identification facial recognition systems between June 2020 and January 2021. Hackers of facial recognition systems use a myriad of techniques, including deepfakes (AI-generated images), special masks, or even holding up images or videos of the individual the hacker is looking to impersonate.

Fraud is not the only concern with facial recognition technology. Despite its appeal for employers and organizations, there are concerns regarding the accuracy of the technology, as well as significant legal implications to consider. First, there are growing concerns regarding the accuracy and biases of the technology. A recent report by the National Institute of Standards and Technology studied 189 facial recognition algorithms, considered a "majority of the industry." The report found that most of the algorithms exhibit bias, falsely identifying Asian and Black faces 10 to more than 100 times as often as White faces. Moreover, false positives are significantly more common in women than in men, and more elevated in the elderly and children than in middle-aged adults.

In addition, several U.S. localities have already banned the use of facial recognition by law enforcement, other government agencies, and/or private and commercial users. The City of Baltimore, for example, recently banned the use of facial recognition technologies by city residents, businesses, and most of the city government (excluding the city police department) until December 2022. Council Bill 21-0001 prohibits persons from "obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology." Likewise, in September 2020, the City of Portland, Oregon, became the first city in the United States to ban the use of facial recognition technologies in the private sector, citing, among other things, a lack of standards for the technology and wide ranges in accuracy and error rates that differ by race and gender. Failure to comply can be painful: the Ordinance provides persons injured by a material violation a cause of action for damages or $1,000 per day for each day of violation, whichever is greater.

Finally, companies looking to implement facial recognition technologies must consider their obligations under laws such as Illinois' Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA). The BIPA addresses a business's collection of biometric data from both customers and employees, including, for example, facial recognition, fingerprints, and voice prints. The BIPA requires informed consent prior to the collection of biometric data, mandates protection obligations and retention guidelines, and creates a private right of action for individuals aggrieved by BIPA violations, which has resulted in a flood of BIPA class action litigation in recent years. Texas, Washington, and California have similar requirements, New York is considering a BIPA-like privacy bill, and New York City recently created BIPA-like requirements for retail and hospitality businesses concerning biometric collection from customers. Additionally, states are increasingly amending their breach notification laws to add biometric information to the categories of personal information that require notification, including 2020 amendments in California, D.C., and Vermont. Moreover, there are a myriad of data destruction, reasonable safeguard, and vendor requirements to consider, depending on the state, when collecting biometric data.

Takeaway

Facial recognition and other biometric data-related technology is booming, and it continues to reach facets of life that are hard to even contemplate. The technology brings innumerable potential benefits as well as significant data privacy and cybersecurity risks. Organizations that collect, use, and store biometric data increasingly face compliance obligations as the law attempts to keep pace with technology, cybersecurity crimes, and public awareness of data privacy and security. Creating a robust privacy and data protection program, or regularly reviewing an existing one, is a critical risk management and legal compliance step.