Rudimentary facial recognition technology (FRT) was pioneered in the 1960s, and even then governments quickly grasped its implications for policing and security. If a computer could quickly identify suspects, an extremely labor-intensive aspect of policing could be automated, saving money in the long run.
Since the 1960s, computing power has increased dramatically, and FRT has proliferated with it. FRT now touches many aspects of our lives, from the innocuous unlocking of our phones to more sinister surveillance and security-state applications. As usual, the law hasn’t kept pace with this explosion of potentially invasive technology.
Like any other surveillance technology, FRT is ripe for abuse. FRT also has biases baked in by design: most algorithms are trained primarily on white and male faces, making them less accurate for anyone who is not white or male. This can lead to false identifications, wrongful arrests, and lengthy and expensive legal battles to clear the names of the accused. We know of three cases, all involving Black men, in which the accused were exonerated after being wrongly charged with crimes based on FRT matches.
This shouldn’t be taken as a call to increase the accuracy of FRT algorithms. Quite the opposite: a fully accurate FRT system would be a waking nightmare for privacy rights, a 1984-style dystopia in which the government could track your every movement throughout the day.
Facial recognition technology also has troubling implications for protest rights. Law enforcement often justifies the installation of cameras that may be used for FRT by citing “public safety purposes,” but then quickly turns those cameras on protesters to identify them. This is likely to have a chilling effect on peaceful protest: protesters fear that being identified could invite retribution from political or ideological opponents. These concerns precipitated a moratorium by Amazon, IBM, and Microsoft on sales of FRT to law enforcement, as well as a new law in Virginia tightening restrictions on the use of facial recognition technology by local law enforcement agencies.
And it’s not rare for law enforcement, federal or local, to take advantage of this technology. A recent GAO report revealed that 20 of the 42 federal agencies surveyed use FRT. The GAO summarized the risks of the technology and expressed concern over the insecurity of citizens’ private data. Without pressure to stop its use, FRT will continue to proliferate.
FRT can also quickly entangle people in a Kafkaesque bureaucratic nightmare. Unemployment recipients across the U.S. have been denied benefits due to ID.me’s flawed facial recognition models. At a recent meeting of the Massachusetts legislature on facial recognition, Registrar Ogilvie of the Registry of Motor Vehicles (RMV) testified that 20% of applications to the RMV were flagged as suspicious in a preliminary facial recognition screen using the Rekognition system. Those 260,000 flagged applications were then reviewed by the State Police, who determined that just 497 of them were actually fraudulent.
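To put those numbers in perspective, here is a quick back-of-the-envelope check (a sketch in Python; the figures come from the testimony above, but the variable names and the inferred total are our own):

# Back-of-the-envelope check of the RMV figures cited above.
flagged = 260_000   # applications flagged as suspicious by the FRT screen
fraudulent = 497    # flagged applications the State Police confirmed as fraudulent
flag_rate = 0.20    # share of all applications that were flagged

total_screened = flagged / flag_rate   # implies roughly 1.3 million applications
false_alarms = flagged - fraudulent    # flags that turned out to be innocent
false_alarm_rate = false_alarms / flagged

print(f"Applications screened (implied): {total_screened:,.0f}")
print(f"False alarms: {false_alarms:,} ({false_alarm_rate:.2%} of all flags)")
# Output: 1,300,000 screened; 259,503 false alarms (99.81% of all flags)

In other words, more than 99.8% of the applications the automated screen flagged were false alarms.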
The private companies that supply databases for FRT algorithms are not so upstanding either. Clearview AI has been implicated in multiple scandals, from providing its database and FRT tools to ICE, to scraping unwitting social media users’ photos to populate that database. BuzzFeed recently reported that some 2,200 law enforcement agencies in the US have used Clearview AI’s facial recognition technology, and, even more concerning, officers often used it without the permission of their superiors. The NYPD alone conducted 5,100 searches with Clearview AI through the company’s trial program.
Nineteen municipalities have adopted facial recognition ordinances, but these do not always prevent the use of FRT. The San Francisco Police Department was sued for its alleged use of a network of 400 private surveillance cameras to spy on protesters in 2020, despite an ordinance banning the use of facial surveillance. Boston’s facial recognition ban contains a loophole allowing the use of databases, programs, and technology provided by other government entities.
This is why we need a federal moratorium on the use of facial recognition and biometric technologies. A moratorium would give legislators time to craft proper regulation of our government’s use of a technology that has enabled the wholesale roundup of Uyghurs in China and contributed to the wrongful arrest of at least three innocent Black men here in the US. Our freedom is threatened by any surveillance technology that allows us to be tracked everywhere we go, and FRT is especially dangerous in this regard. You can send a letter to your lawmakers telling them to support a federal moratorium on FRT here.
Learn more about FRT here.
2 replies on “Tell Congress to Put an End to FRT”
I’m in complete agreement with the views and concerns that your organization fights for. My concern, and I apologize if I’m wrong in saying this, is that only minorities are defended in this article, when this violation of our rights is definitely “color blind” and, when these mistakes are made, they will continue in every community. I’m not trying to marginalize the problems in our communities or minimize the disadvantage of another to make this point.
FRT algorithms themselves are not “color blind.” The current systems are significantly more likely to misidentify people of color than white people.
But we also emphasize in the article that even if they were “color blind”, FRT would be a “waking nightmare” for all people.