October 20, 2019

Facial Recognition Technology - Good or Bad?

As of June 2019, law enforcement agencies have been working with the City of Perth on a 12-month trial of facial recognition software. The trial involves installing the software in 30 CCTV cameras and is part of the Federal Government’s Smart Cities plan, which aims to increase interconnectivity and build intelligent, technology-enabled infrastructure throughout Australia.

The software can detect clothing colour and gender and track movement speed and patterns, and some cameras can also detect heat.

If the trial is successful, the new facial recognition technology may be installed across all cameras in Perth’s network.

Meanwhile in NSW, transport minister Andrew Constance has raised the idea of facial recognition software being used by commuters to access public transport, by linking it to their Opal account. Mr Constance suggested that it would create “frictionless transport payments” that may become available “in the not too distant future”.

It has been proposed that the system would operate on a subscription model similar to Netflix, with public transport users paying a weekly or monthly fee for unlimited travel.

Although this technology could help NSW manage the 4.7 percent increase in public transport commuters over the past year, the idea raises concerns. Misidentification of commuters would likely lead to inefficiencies, most likely at a cost to the commuter.

London has had less success with facial recognition software for law enforcement. Following trials conducted by the London Metropolitan Police, researchers from the University of Essex, who were given privileged access to the trials, found that members of the public were misidentified as potential criminals 80% of the time. This suggests that the technology may not yet be mature enough for large-scale applications, where the multiplier effect of misidentification is significant.

Use of facial recognition software often raises questions about the ethics of artificial intelligence. For example, San Francisco (neighbouring Silicon Valley, the AI capital) has banned the use of facial recognition software by police and other law enforcement agencies. The ban cites potential risks of abuse, although its critics have argued that, rather than banning the technology, the focus should be on regulators finding a way to balance the usefulness of facial recognition against the need to prevent its abuse.

As facial recognition and artificial intelligence technologies develop and become more prevalent in everyday activities, it will be increasingly important for technology providers, in conjunction with regulators, to consider, develop and adhere to ethical guidelines that prevent misidentification and abuse. At the same time, the technology must be accurate enough not to burden those affected by misidentification.

 

Co-Author: Jacqueline Patishman

Copyright 2019 K & L Gates

About this Author

Cameron Abbott
Partner

Mr. Abbott is a corporate lawyer who focuses on technology, telecommunications and broadcasting transactions. He assists corporations and vendors in managing their technology requirements and contracts, particularly large outsourcing and technology procurement issues, including licensing terms for SAP and Oracle and major system integration transactions.

Mr. Abbott partners with his clients to ensure market-leading solutions are implemented into their businesses. He concentrates on managing and negotiating complex technology solutions, which...

+61.3.9640.4261
Senior Attorney

Ms. Aggromito is a senior lawyer in the Melbourne commercial technology and sourcing team, focusing on IT, privacy and data protection.

+61.3.9205.2027