February 21, 2020

Privacy Tip #223 – Navigating Individual Data Privacy in a World with AI

The same week that the National Institute of Standards and Technology released its Privacy Framework [view related post], highlighting what a conundrum privacy has become, news articles also covered a new technology, Clearview AI, that allows someone to snap a picture of anyone walking down the street and instantly find out that person's name, address, and "other details." What does that mean? Does it mean they automatically know my salary, my bank account balance, my prescription medications or health issues, my political affiliation, or what I buy at the drug store or grocery store? All of this information says a lot about me. Some people don't care, though I am not sure why. There just does not seem to be any respect for, or interest in, the protection of individual privacy. It's not that people have things to hide; it's that this kind of surveillance is reminiscent of some darker days of humanity, such as World War II era Germany.

It is comforting to see that privacy advocates are warning us about Clearview AI. Clearview AI has obtained this information, including facial recognition data of individuals, by scraping popular websites such as LinkedIn, Facebook, YouTube, and Venmo, and is storing that biometric information in its system and sharing it with others. According to Clearview AI, its database is for use only by law enforcement and security personnel, and it has helped law enforcement solve crimes. That is obviously very positive. However, privacy advocates point out that the app may return false matches, could be used by stalkers and other bad actors, and could enable mass surveillance of the U.S. population. That is obviously very negative.

There has always been a tension between using technology for law enforcement and national security, which, frankly, we all want, and using it for purposes that are less clear and may invite abuse, which we don't want. Clearview AI is collecting facial images of millions of people without their consent, and those images may be used for good or bad purposes. This is where public policy and data ethics must play a part. The NIST Privacy Framework can help in determining whether the on-the-spot collection, use, and disclosure of facial recognition data protects the privacy and dignity of individuals. Technological capabilities should be used for good purposes, but technology is moving fast, and data ethics, privacy considerations, and the potential for abuse are not always being weighed, including with facial recognition applications. Perhaps the Privacy Framework can help shape that discussion, which is why its release is so timely and important.

This article was co-authored by guest contributor Victoria Dixon. Victoria is a Business Development & Marketing Coordinator at Robinson+Cole and is not admitted to practice law.

Copyright © 2020 Robinson & Cole LLP. All rights reserved.

About this Author

Linn F. Freedman, Robinson Cole Law Firm, Cybersecurity and Litigation Law Attorney, Providence
Partner

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She provides guidance on data privacy and cybersecurity compliance to a full range of public and private clients across all industries, such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine, and charitable organizations. Linn is a member of the firm's Business Litigation Group and chairs its Data Privacy + Cybersecurity Team. She is also a member of the Financial Services Cyber-Compliance Team (CyFi ...

401-709-3353