Retail Use of FRT is as Problematic as Ever

Private retailer use of facial recognition technology has been in the news lately, with a story out of Detroit of a young Black girl being ejected from a private business due to an FRT misidentification. Lamya Robinson was kicked out of a roller skating rink after facial recognition technology the business was using “identified” her as having been part of a fight there before. The only problem: Robinson had never actually been to that skating rink. Her mother is quoted as saying, “To me, it’s basically racial profiling.” And she’s right: FRT is the same old racial profiling, with a 21st-century, high-tech veneer of objectivity.

Tech-world biases are baked into the technology itself, which is often trained on databases filled primarily with white faces. As a result, FRT misidentifies Black and brown faces more often than white faces. It’s a fallacy to believe that surveillance like this guarantees safety; the real question is safety for whom? FRT misidentifications—whether in public or private settings—can lead to dangerous contact between marginalized communities and law enforcement. There have been multiple cases of Black men being wrongfully imprisoned over false FRT identifications. That’s not safety; that’s mass criminalization, and it harms communities.

Facial recognition technology is inaccurate and unsafe for large portions of our population, including women, LGBTQ people, and people of color. Retailers who use FRT are knowingly choosing to create environments that are not just unwelcoming but unsafe for marginalized communities. And oftentimes, shoppers have no idea what they’re walking into. Even worse, shoppers may have no choice—consider people who live in food deserts or in other communities with few options for where to shop. The proliferation of FRT in retail settings will only contribute to the mass criminalization of marginalized communities and feed our already racist policing system. We need to draw a line in the sand here and now: it’s simply not okay for retailers to use facial recognition technology. Sign the petition and shame the naughty list of retailers: here.

Tell Congress to Put an End to FRT

Rudimentary facial recognition technology (FRT) was pioneered in the 1960s, and even then the government quickly realized the implications FRT could have for policing and security. If a computer could quickly look up the identity of a suspect, it could effectively automate an extremely labor-intensive aspect of policing—saving money in the long run.

Since the 1960s, computing power has increased dramatically, and FRT has proliferated along with it. It now touches many aspects of our lives, from the innocuous unlocking of phones to more sinister surveillance and security-state applications. As usual, the law hasn’t kept pace with this explosion of potentially invasive technology.

Much like any other surveillance technology, FRT is ripe for abuse. Its biases are baked in by design: most algorithms are trained primarily on white, male faces, making them less accurate for non-white, non-male people. That inaccuracy can lead to false identifications, wrongful arrests, and lengthy, expensive legal battles to clear the names of the accused. We know of three cases, all involving Black men, in which the men were exonerated after being wrongly charged with crimes based on FRT matches.
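To make that bias argument concrete, here is a minimal, hypothetical sketch of how auditors measure this kind of disparity: compute the false match rate separately for each demographic group. The scores, group labels, and threshold below are invented for illustration and are not drawn from any real system; real audits, such as NIST’s demographic-effects testing or the Gender Shades study, use large labeled datasets and a vendor’s actual match scores.

```python
# Hypothetical sketch of a demographic error audit for a face matcher.
# All scores, group labels, and the threshold are invented for
# illustration; they do not come from any real FRT system.
from collections import defaultdict

# Each record: (similarity_score, same_person, demographic_group)
comparisons = [
    (0.95, False, "lighter_skinned_male"),
    (0.42, False, "lighter_skinned_male"),
    (0.38, False, "lighter_skinned_male"),
    (0.93, False, "darker_skinned_female"),
    (0.96, False, "darker_skinned_female"),
    (0.41, False, "darker_skinned_female"),
    # ... thousands more labeled pairs in a real evaluation
]

THRESHOLD = 0.90  # scores at or above this count as a "match"

false_matches = defaultdict(int)
impostor_pairs = defaultdict(int)

for score, same_person, group in comparisons:
    if not same_person:            # impostor pair: two different people
        impostor_pairs[group] += 1
        if score >= THRESHOLD:     # system wrongly declares a match
            false_matches[group] += 1

# A higher false match rate for one group means more misidentifications
# for that group, and downstream, more wrongful stops and arrests.
for group, total in sorted(impostor_pairs.items()):
    print(f"{group}: false match rate = {false_matches[group] / total:.0%}")
```

The toy numbers don’t matter; the method does. A system can report impressive overall accuracy while hiding sharply unequal error rates once results are broken out by group.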

This shouldn’t be taken as a call to increase the accuracy of FRT algorithms. Quite the opposite: fully accurate FRT systems would be a waking nightmare for privacy rights, a literal 1984-style dystopia in which the government could track all of your movements throughout the day.

Facial recognition technology also has concerning implications for protest rights. Oftentimes, law enforcement justifies the installation of cameras that may be used for FRT by claiming it’s for “public safety purposes,” then quickly turns those same cameras on protesters to identify them. This is likely to have a chilling effect on peaceful protest, since protesters fear that being identified could invite retribution from political or ideological opponents. These concerns precipitated a moratorium by Amazon, IBM, and Microsoft on sales of FRT to law enforcement, as well as a new law in Virginia tightening restrictions on the use of facial recognition technology by local law enforcement agencies.

And it’s not rare for law enforcement—federal or local—to take advantage of this tech. A recent GAO report revealed that 20 of 42 surveyed federal agencies use FRT. The GAO summarized the risks of the technology and expressed concern about the insecurity of citizens’ private data. Without pressure to stop, its use will only continue to proliferate.

FRT can also entangle people in Kafkaesque bureaucratic nightmares. Unemployment recipients across the U.S. have been denied benefits due to ID.me’s flawed facial recognition models. At a recent Massachusetts legislative hearing on facial recognition, Registrar Ogilvie of the Registry of Motor Vehicles (RMV) testified that 20% of applications to the RMV were flagged as suspicious in a preliminary screen using Rekognition’s system. Those 260,000 flagged applications were then reviewed by the State Police, who determined that just 497 were actually fraudulent. In other words, more than 99.8% of the applications the system flagged were false alarms.

The private companies that supply databases and algorithms for FRT are not so upstanding either. Clearview AI has been implicated in multiple scandals, from providing its database and FRT tools to ICE, to scraping unwitting social media users’ photos to populate the lineup of its database. BuzzFeed recently reported that 2,200 law enforcement agencies in the US have used Clearview AI’s facial recognition technology. Even more concerning, officers often used it without the permission of their superiors; the NYPD alone conducted 5,100 searches with Clearview AI through the company’s trial program.

Nineteen municipalities have adopted ordinances restricting facial recognition technology, but these do not always prevent its use. The San Francisco Police Department was sued for allegedly using a network of 400 private surveillance cameras to spy on protesters in 2020, despite a local ordinance banning facial surveillance. Boston’s facial recognition ban contains a loophole allowing the use of databases, programs, and technology provided by another government entity.

This is why we need a federal moratorium on the use of facial recognition and other biometric technology. A moratorium would give legislators time to craft proper regulation of our government’s use of a technology that has enabled the wholesale roundup of Uyghurs in China and contributed to the wrongful arrest of at least three innocent Black men here in the US. Any surveillance technology that allows us to be tracked everywhere we go threatens our freedom, and FRT is especially dangerous in this regard. You can send a letter to your lawmakers telling them to support a federal moratorium on FRT here.

Learn more about FRT here.