06 July 2021

New GAO Report via Techdirt/WaPo: Use of Facial Recognition Tech Deserves Some Actual Oversight

A source of reliable information, with some unexpected findings -

Federal Watchdog Finds Lots Of Facial Recognition Use By Gov't Agencies, Very Little Internal Oversight


from the getting-a-real-'Wild-West'-vibe-from-this dept

 
The upside is that this state of affairs has prompted at least one federal government oversight entity to do some actual oversight. The Government Accountability Office (GAO) has released its report [PDF] on federal agencies' use of facial recognition tech, and it contains a couple of surprises and, unfortunately, several of the expected disappointments. (via the Washington Post)

For instance, while we expect law enforcement agencies like the FBI, DEA, ATF, and TSA to use facial recognition tech, the report notes that a total of 20 agencies own or use the tech. That list also includes some unexpected agencies, like the IRS, US Postal Service, the FDA, and NASA.

There's also a surprising number of Clearview users among federal agencies, which seems unwise given the company's history of being sued, investigated, exposed as dishonest, and just kind of terrible in every way. Of the 20 agencies that admitted using this tech, ten have used or have contracts with Clearview, outpacing other third-party offerings by a 2-to-1 margin.

What are these agencies using this tech for? Mainly criminal investigations... This includes identifying people who may have committed criminal acts during last summer's nationwide anti-police violence protests.

One of the agencies on this list is the US Postal Inspection Service, which used Clearview to identify suspects who damaged USPS property or stole mail. The US Capitol Police also used Clearview to "generate leads" following the January 6th attack on the US Capitol.

That's what's known. There's a lot that's unknown, thanks to federal agencies apparently not caring who's doing what with whatever facial recognition tech they have access to.

Thirteen federal agencies do not have awareness of what non-federal systems with facial recognition technology are used by employees. These agencies have therefore not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy. Most federal agencies that reported using non-federal systems did not own systems. Thus, employees were relying on systems owned by other entities, including non-federal entities, to support their operations.

Yay! Your federal tax dollars at work, putting citizens at risk of being misidentified right into holding cells or deportation or whatever. The less you know, I guess...

Then there's mind-boggling stuff like this:

Officials from another agency initially told us that its employees did not use non-federal systems; however, after conducting a poll, the agency learned that its employees had used a non-federal system to conduct more than 1,000 facial recognition searches.

The line between "we don't do this" and "we do this pretty much nonstop" is finer than I thought.

The CBP, which has used this tech for years, says it's still "in the process of implementing a mechanism to track" the use of non-federal facial recognition systems by its employees. So far, the CBP has come up with nothing better than hanging up a couple of clipboards...

Beyond revealing how careless and cavalier these thirteen government agencies are about the use and deployment of unproven tech, their sullen shrugs are also possibly admissions of criminal activity.

When agencies use facial recognition technology without first assessing the privacy implications and applicability of privacy requirements, there is a risk that they will not adhere to privacy-related laws, regulations, and policies. There is also a risk that non-federal system owners will share sensitive information (e.g. photo of a suspect) about an ongoing investigation with the public or others.

The GAO closes its depressing report with 26 recommendations -- thirteen of them being "start tracking this stuff, you dolts."

The second set -- which brings it to two recommendations per failing federal agency -- is to assess the risks of the tech, including possible violations of privacy laws and the harm done when these systems misidentify people.

There's no good news in this report.

Agencies are using unproven, sometimes completely unvetted tech without internal or external oversight. They've rolled out these programs well ahead of the required Privacy Impact Assessments and without internal tracking/reporting measures in place. The only pleasant surprise is that this hasn't resulted in more false arrests and detainments. But that definitely can't be attributed to the care and diligence of the agencies using this tech, because the GAO really wasn't able to find much evidence of that. It does, at least, put the issue on the radar of Congress members who haven't been paying much attention to this tech's drift toward ubiquity.

Filed Under: 4th amendment, accountability, facial recognition, federal government, gao, oversight, surveillance

 

 
