Because Facial Recognition Tech Just Isn’t Sketchy Enough, Cops Are Now Running Searches Using AI-Generated Faces
from the two-wrongs-in-search-of-a-right-(to-violate) dept
Facial recognition tech is probably improving as time goes on. Given enough providers, controversy, and individuals who definitely want this tech to stop being so terrible at correctly identifying women and minorities, anything is possible.
Rather than wait for the tech to catch up to the promises made by promotional materials, cops are apparently moving ahead with efforts that will cause even more problems for future tech adoption.
We’re already afflicted by at least one tech company that believes it’s perfectly OK to stock its database of billions of photos with any content not locked down on the internet. Beyond that, there are the problems inherent to the systems themselves, which aggravate biased policing by doing their most accurate work when gazing upon the faces of white males.
Now there’s this, which is the sort of thing that’s just a lawsuit waiting to happen. Here’s Paige Collins and Matthew Guariglia of the EFF with more details:
A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running this through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging this rendered image into face recognition software to build a suspect list.
Parabon has been offering its DNA-to-face services for years. It’s very proud of its ability to generate faces using nothing but DNA info. It has tons of cases listed on its site and provides links to news coverage of investigations aided by its ability to generate a DNA-based analogue for police suspect sketches.
Perusing the site, it’s immediately noticeable that lots of the DNA-based speculations look very little like the person arrested or charged. But that’s not really all that problematic. Parabon’s “snapshots” aren’t meant to be definitive descriptions of criminal suspects. They’re simply meant to contribute to ongoing investigations by giving cops something to post or hand out when asking people if they’ve seen anyone resembling these speculative pictures.
It’s not great, but it’s not Parabon’s fault if cops decide to go a step or two further than the purpose for which these “snapshots” were intended.
But cops who apparently have zero concern about stacking AI speculation on top of AI speculation in their investigations are making things demonstrably worse by using Parabon’s “snapshots” for purposes they were never intended to serve.
Wrong + wrong never equals right.
This more than doubles the magnitude of unintended consequences by feeding pure speculation to an algorithm that likely already has problems accurately identifying people, especially if those people are minorities or women. That it might perform better on white males doesn’t matter much when it’s just questionable AI being examined by other questionable AI.
But cops rarely care if they violate laws, much less a private party’s rules of engagement. Any case closed by violating rights and/or terms of service will be treated as a success story. Every failure will simply be considered the acceptable cost of doing police business, even if the failure results in an expensive lawsuit settlement.
If one cop shop is doing this, the odds are several have done the same thing. The only difference is they haven’t been caught yet.
But it gets worse. Here’s more from the EFF:
In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, had brown eyes, and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first reporting of a detective running an algorithmically-generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess, now face recognition (a technology known to misidentify people) will create a “most likely match” for that face.
Further, software provided by US company Vigilant Solutions enables law enforcement to create “a proxy image from a sketch artist or artist rendering” to enhance images of potential suspects so that face recognition software can match these more accurately.
These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance capable of identifying and tracking people on a massive scale. Combining this with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests, and exacerbates existing harms on communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing.
There are no federal rules that prohibit police forces from undertaking these actions. And despite the detective’s request violating Parabon NanoLabs’ terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not give an accurate face of a suspect, and deploying these untested algorithms without any oversight places people at risk of being a suspect for a crime they didn’t commit. In one case from Canada, Edmonton Police Service issued an apology over its failure to balance the harms to the Black community with the potential investigative value after using Parabon’s DNA phenotyping services to identify a suspect.
EFF continues to call for a complete ban on government use of face recognition—because otherwise these are the results. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested and how many more reckless schemes like this one need to be perpetrated before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already taken the step to ban government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up or Congress needs to act before more people are hurt and our rights are trampled.
Filed Under: ai, dna to face, evidence, facial recognition, police
Companies: parabon nanolabs