03 April 2024

DNA-To-Face Services



Because Facial Recognition Tech Just Isn’t Sketchy Enough, Cops Are Now Running Searches Using AI-Generated Faces

from the two-wrongs-in-search-of-a-right-(to-violate) dept

Facial recognition tech is probably improving as time goes on. Given enough providers, controversy, and individuals who definitely want this tech to stop being so terrible at correctly identifying women and minorities, anything is possible.

Rather than wait for the tech to catch up to the promises made by promotional materials, cops are apparently moving ahead with efforts that will cause even more problems for future tech adoption.

We’re already afflicted by at least one tech company that believes it’s perfectly OK to stock its database of billions of photos with any content not locked down on the internet. Beyond that, there are the problems inherent to the systems themselves, which aggravate biased policing by doing their most accurate work when gazing on the faces of white males.

Now there’s this, which is the sort of thing that’s just a lawsuit waiting to happen. Here’s Paige Collins and Matthew Guariglia of the EFF with more details:

A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running this through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging this rendered image into face recognition software to build a suspect list.

Parabon has been offering its DNA-to-face services for years. It’s very proud of its ability to generate faces using nothing but DNA info. It has tons of cases listed on its site and provides links to news coverage of investigations aided by its ability to generate a DNA-based analogue for police suspect sketches.

Perusing the site, it’s immediately noticeable that lots of the DNA-based speculations look very little like the person arrested or charged. But that’s not really all that problematic. Parabon’s “snapshots” aren’t meant to be definitive descriptions of criminal suspects. They’re simply meant to contribute to ongoing investigations by giving cops something to post or hand out when asking people if they’ve seen anyone resembling these speculative pictures.

Sure, there’s always a chance this may result in a wrongful arrest or detention, but the company makes it clear these are nothing more than a best guess based on DNA profiles. It’s not great, but it’s not Parabon’s fault if cops decide to go a step or two further than the purpose for which these “snapshots” were intended.

That’s on the cops themselves. Parabon does not encourage this sort of use of its DNA snapshots. But cops who apparently have zero concern about stacking AI speculation on top of AI speculation during investigations are making things demonstrably worse by using Parabon’s “snapshots” for purposes they were never intended to serve. Wrong + wrong never equals right.

This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess, now face recognition (a technology known to misidentify people) will create a “most likely match” for that face.

We already know facial recognition tech is flawed. This more than doubles the magnitude of unintended consequences by feeding pure speculation to an algorithm that likely already has problems accurately identifying people, especially if those people are minorities or women. That it might perform better on white males doesn’t matter much when it’s just questionable AI being examined by other questionable AI.

As the EFF notes, the cited search was a violation of Parabon’s terms of service. But cops rarely care if they violate laws, much less a private party’s rules of engagement. Any case closed by violating rights and/or terms of service will be treated as a success story. Every failure will simply be considered the acceptable cost of doing police business, even if the failure results in an expensive lawsuit settlement.

What’s exposed here is the tip of the iceberg. If one cop shop is doing this, the odds are several have done the same thing. The only difference is they haven’t been caught yet. Coverage like this may make it clear cops prefer power to responsibility, but the best deterrent remains meaningful consequences for their careless actions. And, to date, this country has shown — via law enforcement agencies and the judicial system that handles lawsuits resulting from their abuses of power — that it will almost always consider it better to let people suffer abuse than to actually hold cops accountable.

Companies: parabon nanolabs


Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas

MARCH 22, 2024
In keeping with law enforcement’s grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. 
A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running this through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging this rendered image into face recognition software to build a suspect list.
Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor’s image through the software to generate hits. Further, software provided by US company Vigilant Solutions enables law enforcement to create “a proxy image from a sketch artist or artist rendering” to enhance images of potential suspects so that face recognition software can match these more accurately.
Since 2014, law enforcement agencies have also sought the assistance of Parabon NanoLabs—a company that alleges it can create an image of the suspect’s face from their DNA. Parabon NanoLabs claims to have built this system by training machine learning models on the DNA data of thousands of volunteers paired with 3D scans of their faces. It is currently the only company offering phenotyping, and only in concert with a forensic genetic genealogy investigation. The process has yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software.
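To make the claimed technique concrete, here is a minimal sketch, in Python, of what supervised trait prediction from genotype data could look like in principle. Everything in it (the synthetic SNP encoding, the trait labels, the random-forest model) is an illustrative assumption, not Parabon’s actual, unaudited pipeline.

```python
# Illustrative sketch only -- not Parabon's pipeline, which has never been
# independently audited. Uses synthetic data: SNP genotypes encoded as
# 0/1/2 allele counts, paired with one coarse trait label per "volunteer."
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: 1,000 volunteers, 200 SNPs each.
X_train = rng.integers(0, 3, size=(1000, 200))
y_train = rng.choice(["brown", "blue", "green"], size=1000)  # eye color label

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A "prediction" for one crime-scene sample is just a probability
# distribution over trait classes -- a guess, not an identification.
crime_scene_sample = rng.integers(0, 3, size=(1, 200))
print(dict(zip(model.classes_, model.predict_proba(crime_scene_sample)[0])))
```

Even with real training data, the output of a model like this is a probability distribution over coarse traits such as eye color; turning that into one specific 3D face means layering assumptions on top of statistics, which is exactly the leap scientists say DNA cannot support.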
Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. Not only is this full-on dice-roll policing, it also threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face.
But it gets worse.

In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, had brown eyes, and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first reporting of a detective running an algorithmically-generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess, now face recognition (a technology known to misidentify people) will create a “most likely match” for that face.
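A rough way to see why chaining the two systems is worse than either alone: treat each stage as a probabilistic filter, so their failure rates compound. The numbers below are invented purely for illustration; no reliability figures have been published for this combined pipeline.

```python
# Toy arithmetic with made-up numbers: error compounds across the two
# speculative stages, so the pipeline is less reliable than either part.
p_render_resembles_suspect = 0.5  # hypothetical: DNA render is "close enough"
p_correct_match_good_input = 0.9  # hypothetical: recognition hit rate on a real photo

p_correct_end_to_end = p_render_resembles_suspect * p_correct_match_good_input
print(f"Chance the 'most likely match' is the actual suspect: {p_correct_end_to_end:.0%}")
# => 45% -- and every confident-looking "match" is still a real person
# who merely resembles a computer's guess.
```

Under those invented numbers the best case is worse than a coin flip stacked on a guess, yet the software will still return a confident-looking candidate list either way.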

These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance capable of identifying and tracking people on a massive scale. Combining this with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests, and exacerbates existing harms on communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing. 

There are no federal rules that prohibit police forces from undertaking these actions. And despite the detective’s request violating Parabon NanoLabs’ terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not give an accurate face of a suspect, and deploying these untested algorithms without any oversight places people at risk of being a suspect for a crime they didn’t commit. In one case from Canada, Edmonton Police Service issued an apology over its failure to balance the harms to the Black community with the potential investigative value after using Parabon’s DNA phenotyping services to identify a suspect.

EFF continues to call for a complete ban on government use of face recognition—because otherwise these are the results. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested and how many more reckless schemes like this one need to be perpetrated before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already taken the step to ban government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up or Congress needs to act before more people are hurt and our rights are trampled.

