Law enforcement is using a facial recognition app with huge privacy issues

Technological development worldwide has been immense, and many ideas once thought to belong in science fiction have become reality. Facial recognition is one of them.

A little-known start-up called Clearview AI has released facial recognition software that is proving very popular with law enforcement agencies. The New York Times reports that about 600 such agencies in the USA and Canada have subscribed to the software.

The software itself is versatile. Once a photo has been uploaded, even one taken at a questionable angle, the software runs it against a database of over three billion images scraped from across the internet, including Facebook and YouTube.
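The article does not describe Clearview's internals, but matching an uploaded photo against a large database typically means comparing numerical "embeddings" of faces and picking the closest one. Below is a minimal, hypothetical sketch of that idea using plain cosine similarity; the toy vectors and the `best_match` helper are illustrative stand-ins, not Clearview's actual method.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query, database):
    """Return (index, score) of the database embedding closest to the query."""
    scores = [cosine(query, emb) for emb in database]
    idx = max(range(len(scores)), key=scores.__getitem__)
    return idx, scores[idx]

# Toy "face embedding" database of three entries.
db = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
idx, score = best_match([0.6, 0.8], db)  # closest to the third entry
```

Even this toy version hints at the accuracy problem the article raises: the system always returns *some* best match, so a threshold on the similarity score is needed to decide whether the match is trustworthy at all.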

However, with the software now prevalent in police departments, questions of privacy have arisen as well. Even as Clearview's software helps the police solve cases, people are registering their discomfort: the software gives the police a relatively unimpeded way to stalk people, intimidate protestors and otherwise abuse the system.

In addition, the software is reportedly only 75 percent accurate, meaning people could be harassed over faulty matches. Further, it has not been tested by the US government's National Institute of Standards and Technology. Another concern is that gender and racial biases could be baked into the way the software identifies people, or that it could fail to identify people of certain races at all.


In fact, the software has already been noted to violate the policies of many of the websites it collects images from. Facebook, for instance, prohibits third parties from collecting users' images in bulk. The social networking giant is reportedly looking into the situation and has promised to take appropriate steps if it finds Clearview at fault.

One reason all of this snowballed into an issue of recent relevance is a lack of oversight by the authorities concerned. Public opinion on whether Clearview's software should be integrated with local law enforcement was never sought. To make matters worse, Clearview's ability to protect and safeguard its database of images has never been tested, nor has the company properly detailed how it intends to use that database. Notably, Clearview itself was unknown to the public until late 2019.

Put in perspective, it is clear why people are questioning the legality and capabilities of Clearview and of software that is touted as helping society. Cities like San Francisco have already banned government use of facial recognition over the same issues. It remains to be seen what will happen as the public's voice against facial recognition software grows.


Sreemitra Somanchi
A tech enthusiast with an itch to write. She is interested in consumer technology in any form factor. She lives on the Google side of the world.
