FOIA'ed DOJ Report Points Out The Downsides Of Relying On 'Predictive Policing' To Fight Crime
from the making-tradeoffs-without-considering-the-majority-of-stakeholders dept
The Electronic Privacy Information Center (EPIC) has obtained a DOJ report on predictive policing via a FOIA lawsuit. The document dates back to 2014, but it shows the DOJ already had concerns about the negative side effects of predicting where crime may occur using data detailing where crime has happened.
The report [PDF] contains some limited data from trial runs of predictive policing efforts. One of these tests ran from 2009 to 2012 in Shreveport, Louisiana. Using historic property crime data, along with 911 calls and the number of residents on parole or probation, the analytic software attempted to predict where future crime might occur and where police presence might be increased to prevent crime.
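To make the mechanics concrete, here's a rough sketch of how a hot-spot model built on those kinds of inputs might work. This is a hypothetical toy, not the system used in Shreveport: the per-cell features (historic property crime, 911 call volume, parole/probation counts) mirror the inputs the report describes, but the model, the grid, and every number below are invented for illustration.

```python
# Hypothetical sketch of a hot-spot predictor, NOT the Shreveport system.
# Features per grid cell mirror the inputs the DOJ report describes:
# historic property crime, 911 call volume, residents on parole/probation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cells = 500  # imaginary grid cells dividing up the city

# Synthetic features for the prior period
past_crime = rng.poisson(3.0, n_cells)   # property crimes per cell
calls_911 = rng.poisson(10.0, n_cells)   # 911 calls per cell
parolees = rng.poisson(1.0, n_cells)     # residents on parole/probation
X = np.column_stack([past_crime, calls_911, parolees])

# Synthetic outcome: did the cell see property crime the next month?
# Deliberately correlated with past crime, as real records tend to be.
y = rng.random(n_cells) < 1 / (1 + np.exp(-(past_crime - 3)))

model = LogisticRegression().fit(X, y)

# Rank cells by predicted risk; flag the top 5% for increased presence.
risk = model.predict_proba(X)[:, 1]
hot_spots = np.argsort(risk)[-n_cells // 20:]
print("cells flagged for extra patrols:", sorted(hot_spots.tolist()))
```

Note that nothing in a model like this can tell whether its inputs reflect where crime actually happened or merely where officers chose to look -- which is exactly the problem the report flags below.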
The results were inconclusive:
The RAND Corporation evaluated the program and found that property crime decreased by approximately 35% in the first four months of the seven-month evaluation period as compared with the control districts. After those first four months, however, the SPD reduced its intervention efforts, and property crime reverted back to the same level as the control districts. The RAND evaluation concluded that additional research should be done.
More research was underway when the report was written, but those results had yet to be compiled at the time of this publication. Other efforts not involving the DOJ reported similar results: an immediate drop in the targeted type of crime. But there's no data in the report indicating whether this produced long-term declines in criminal activity, or whether targeting one area simply pushed criminal activity elsewhere.
The report notes that almost all federal law enforcement agencies are (or were) using some sort of analytics to determine crime hot spots and areas where enforcement should be targeted. As of 2014, none of them were using actual "predictive policing" software, but all were engaged in some sort of crime-modeling.
The DOJ says predictive policing efforts show promise but come with numerous downsides. Efforts like these could conceivably result in more efficient law enforcement activity, directing already-depleted resources to the areas most in need of attention. The DOJ even theorizes that swamping "high crime areas" with additional officers might somehow result in better relationships with those communities. But that theory doesn't square with the downsides noted in the report, which indicate flooding certain areas with more cops will only increase the tension between these public servants and the public they serve.
Part of the problem is "garbage in/garbage out." If law enforcement agencies have historically engaged in biased policing and enforcement efforts, all predictive policing software does is tell those officers they were correct to do so. Junk data created by biased policing will only generate biased predictions.
Legal authorities provide some guidance about the degree to which race, national origin, and other protected or immutable characteristics may be considered. Critics have noted that proxies for protected characteristics, or for socioeconomic characteristics, can make their way into analyses as well. Even when the variables seem neutral, any model is susceptible to importing any biases reflected in the underlying data.
Biased policing is a problem everywhere. In the US, it has been the rule rather than the exception, resulting in dozens of consent decrees with the DOJ meant to eliminate bias and restrict unnecessary use of force. Feeding a bunch of unjustified stops and arrests into a system wholly reliant on the data it's given turns supposedly neutral software into a confirmation-bias generator.
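That feedback loop is easy to demonstrate with a toy simulation. In the hypothetical sketch below, two areas have identical true crime rates, but one starts with more recorded incidents because it was historically over-policed; since patrols follow the records and the records follow the patrols, the initial skew never washes out. (The allocation rule and every number here are invented for illustration.)

```python
# Hypothetical feedback-loop simulation: two areas with the SAME true
# crime rate, but area 0 starts with more recorded incidents because it
# was historically over-policed. Watch the skew perpetuate itself.
import random

random.seed(1)
TRUE_CRIMES = 30         # actual crimes per month, identical in both areas
recorded = [60.0, 30.0]  # skewed history inherited by the "predictor"
PATROLS = 10             # total patrols to allocate each month

for month in range(24):
    # The "predictive" step: allocate patrols in proportion to the
    # recorded history -- past data is all the model ever sees.
    share = recorded[0] / sum(recorded)
    patrols = [PATROLS * share, PATROLS * (1 - share)]
    for area in (0, 1):
        # Crimes only enter the record if a patrol is there to see them,
        # so detection probability scales with patrol presence.
        detect_prob = min(1.0, 0.08 * patrols[area])
        detected = sum(random.random() < detect_prob
                       for _ in range(TRUE_CRIMES))
        recorded[area] += detected

print(f"after 2 years: area0={recorded[0]:.0f} recorded, "
      f"area1={recorded[1]:.0f} recorded (true rates were identical)")
```

Run it and the roughly two-to-one disparity in the records persists month after month, even though the underlying crime rates never differ: the software "confirms" exactly the bias it was fed.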
Equally troubling is the unavoidable outcome of predictive policing: the permission to punish people for things other people did.
There is also a fundamental question about what decisions should be based on historical, broad-based data rather than on individualized conduct. It would make little sense to deploy resources without an understanding of where they are most needed, and fewer concerns have been raised about the potential for misuse of data for these purposes (although, at a basic level, additional police deployment can mean additional law enforcement scrutiny for individuals who live in those areas).
In practical terms, living in the wrong zip code -- or even on the wrong end of a block -- turns people into suspected criminals, even when there's no evidence they've committed any crimes. You can't mend a broken community relationship by flooding an area with cops just because crimes were committed there at some point in the past. Predictive policing allows cops to view everyone in certain areas as inherently suspicious, which isn't going to make residents feel any better about the influx of officers in their neighborhoods.
Despite the DOJ's caveats, law enforcement agencies are still looking to predictive policing to solve their problems. As far as the limited data shows, it's at best a temporary fix. But those looking for a decline in crime numbers seem willing to ignore the long-term negative effects of focusing on areas where crime might be happening based on little more than where crime once was.
Filed Under: doj, predictive policing