Predicting more biased policing in Baltimore

Police Commissioner Darryl De Sousa and Mayor Catherine Pugh recently announced measures meant to reduce violent crime in Baltimore, including “predictive policing,” which they plan to have fully operational in the Eastern and Western districts by June 1 — less than two months from now. But the method poses its own set of bias issues, which Baltimoreans should be aware of before the Baltimore Police Department implements it.

Commissioner De Sousa’s planned model of predictive policing, which has come into fashion throughout the U.S. of late, relies on data and algorithms to predict where crime is likely to occur in the immediate future. Commanders then dispatch police officers to the specified locations to deter those crimes from occurring. For example, if the data show that cars are broken into at a particular time in a particular neighborhood, officers will be deployed to that neighborhood at that time to deter break-ins.
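To make the mechanics concrete, here is a minimal Python sketch of how a hotspot-style model might rank neighborhoods from historical incident reports. The data, the scoring rule and the function name are illustrative assumptions, not the BPD’s or any vendor’s actual system.

```python
from collections import defaultdict

# Hypothetical incident reports: (neighborhood, hour of day).
# Real vendor systems ingest far richer data; this is only a toy.
reports = [
    ("Eastern", 22), ("Eastern", 22), ("Eastern", 23),
    ("Western", 22), ("Canton", 14),
]

def hotspot_scores(reports, hour):
    """Rank neighborhoods by reported incidents at or near an hour."""
    scores = defaultdict(int)
    for place, h in reports:
        if abs(h - hour) <= 1:  # crude one-hour window
            scores[place] += 1
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# At 10 p.m., "Eastern" ranks highest, so patrols would be sent there.
print(hotspot_scores(reports, hour=22))  # [('Eastern', 3), ('Western', 1)]
```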

There are two big problems with such technologies. The first is that they have not been proven to work. The second is even bigger: The data are based largely on crime reports and crime statistics, so they measure only what has been reported. Property crime, for instance, is reported more often than violent crime (which is notoriously underreported in communities that distrust law enforcement). As a result, the data are skewed. Moreover, crime statistics largely reflect policing patterns, meaning where officers actually police. Deploying officers based on those statistics simply returns them to where they already concentrate their time, pushing them into the same over-policed and over-criminalized communities.
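This feedback loop is easy to see in a stylized simulation. In the Python sketch below, with numbers assumed purely for illustration, two neighborhoods have identical true crime rates, but one starts out more heavily reported; sending patrols wherever the data point then makes the imbalance grow each round.

```python
# Stylized feedback loop (all numbers assumed): two neighborhoods with
# identical true crime rates but unequal initial reporting.
true_rate = {"A": 10, "B": 10}   # actual crimes per period
reported  = {"A": 8,  "B": 2}    # reports skew toward A at the start

for period in range(5):
    # Deploy to wherever the data rank highest.
    target = max(reported, key=reported.get)
    # Only crimes occurring where officers patrol get recorded.
    reported[target] += true_rate[target]
    share = reported["A"] / sum(reported.values())
    print(f"period {period}: patrols to {target}, "
          f"A's share of the data = {share:.0%}")
```

Even though A and B have the same true crime rate, A’s apparent share of crime climbs from 80 percent toward 100 percent, because the data record only what officers are positioned to see.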

In a nutshell, biased inputs result in biased outputs, including stops, searches, arrests and criminal records. As Andrew Guthrie Ferguson, a professor at the University of the District of Columbia’s David A. Clarke School of Law, warns in his must-read book, “The Rise of Big Data Policing,” the “underlying fear” of predictive policing based on biased data is that “the history of unjust policing practices will be justified by technological spin.”

Baltimore’s consent decree with the U.S. Department of Justice requires the BPD to disclose any new policing technology to the public prior to its use. The BPD monitor and the court overseeing the decree should be required to approve this technology because it will be utilized in the very same communities the DOJ concluded have been burdened with racially discriminatory policing practices. Before Commissioner De Sousa carries through with this plan, he, Mayor Pugh and the monitor should gather community input on many questions, including the following:

What predictive policing model(s) does Commissioner De Sousa plan to implement? There are different models sold by different companies that carry different risks. What data inputs will be employed in the model(s)? What are the BPD’s intended uses of the technology? What are the goals and how will they be measured?

How does predictive policing comport with the enhanced Fourth Amendment protections that the consent decree mandates for Baltimore’s residents? For example, the decree dictates that officers cannot detain someone based solely on an individual’s “attempt to avoid contact with an officer.” What impact will predictive policing — which brings officers and their discretion to “target” communities — have on these protections?

Relatedly, the monitoring team and the BPD are drafting and will implement several new policies, including one on impartial policing. How does predictive policing mesh with officer training on, for example, implicit bias? How will biased predictions exacerbate the biases that officers bring to their jobs? When an algorithm directs officers to a specific location, what attitudes and biases will they carry as they arrive? How will they see and interpret the actions and inactions of everyone who lives there?

Reams of empirical data show that officers see and interpret black men, women and children as they do no one else. The DOJ found this to be painfully true in Baltimore. Predictive policing risks replicating and reinforcing these biases. How can the BPD and the monitoring team account for these risks while drafting policies that aim to usher in bias-free policing?

Certainly, the need to reduce violence in Baltimore is urgent. However, the BPD, its monitor and city residents must examine the consequences of deploying any policing tool, particularly one with the risks that predictive policing presents. Quite simply, residents must know what predictive policing is and what it is not.

Michael Pinard is the Francis & Harriet Iglehart Professor of Law and co-director of the Clinical Law Program at the University of Maryland Francis King Carey School of Law. His email is mpinard@law.umaryland.edu; Twitter: @ProfMPinard.
