Category: Criminal Procedure


Predictive Policing and Reasonable Suspicion (Part II)

Last week, I blogged about law enforcement’s use of automated predictions. There, the “Super Cruncher” system mines data to highlight high-crime areas so that police departments can best allocate their resources. What if those predictions provided the sole basis for an officer’s stop and frisk of a particular person? Suppose the computer suggested that a particular corner was a red-hot zone. When the officer saw someone standing at that corner at midnight, he credited the computer’s prediction and stopped and frisked the person, revealing an illegal firearm. Would the computer’s prediction form the basis of reasonable suspicion if the person standing on the corner did nothing else to raise concerns about illegality? Last week, I suggested that the retail question would likely be straightforward. The computer’s prediction about a location could not be said to imply anything revealing about a particular person in that location, right?

Professor Orin Kerr brought to my attention a recent case that, while not exactly on point, is nonetheless illuminating about the value of automated judgments in evaluating a stop for Fourth Amendment purposes. In United States v. Esquivel-Rios, a trooper spotted a car with temporary Colorado tags and called in the tag. The dispatcher told him that the automated system found that the tags were not registered (as the dispatcher explained, the system did not “return the tag”). The dispatcher also cautioned the trooper that Colorado tags “usually do not return.” Said another way, the dispatcher qualified the system’s finding that the tags were not officially on file (and thus could be fraudulent) with the warning that Colorado tags usually did not show up in the system. Why that was so was never explained to the trooper. Nonetheless, the trooper pulled over the defendant and got consent to search the car. It turned out the defendant had a pound of meth in a secret glove compartment. In challenging the constitutionality of the stop, the defendant argued that the trooper relied on an unreliable automated finding that could not support a finding of reasonable suspicion. In other words, the computer’s “no tags” determination did not amount to particularized suspicion because the system’s findings as to Colorado tags were not reliably revealing of criminality.

The opinion began by noting that a “maniacally all-knowing, all-seeing” HAL 9000 computer in the government’s hands would raise Fourth Amendment concerns. The Tenth Circuit did not say more about that point, but I take the court to be saying that computers making “pre-crime,” Minority Report-ish adjudications about individuals implicate constitutional concerns; procedural due process is certainly at issue. After making that threshold point, the court got down to business, exploring whether the trooper had reasonable suspicion to stop the defendant based on the computer’s “no return” finding and the dispatcher’s qualification of that finding. As the court explained, although reasonable suspicion demands far less than probable cause, there must still be some particularized suspicion of criminality. Concerns about the quality of evidence can be offset with quantity: worries about a system’s reliability diminish if other, independent indicia of criminality exist. The trooper, however, relied only on the database report to justify his stop.

The computer’s “no return” hit, the court suggested, could have been enough for reasonable suspicion if the system were reliable. Such a finding would concern a specific individual, not a particular location as I discussed in my initial post. The court’s point is well-taken: in that case, relying on the computer’s finding to support a stop would be permissible because the finding would relate to evidence about the specific defendant (or his car). Here, though, the court explained, the trooper had reason to doubt that the computer hit meant anything suspicious about the car’s tags. That Colorado tags usually do not return hits could mean that Colorado is having bureaucratic problems inputting temporary tags into the system; it could mean that some, most, or a vanishingly small number of “no return” findings say something about the tags’ verifiability. What goes into the database affects the reasonableness of the seizure relying upon it: garbage in, garbage out.

The court noted, relying on Professor Kerr’s work, that reasonable suspicion is not a statistical determination, much as probable cause is not. But in this case the database had reliability problems, and as the sole reason for the stop, its finding had to be assessed with an eye to its statistical value. With its concern about the reliability of the computer’s finding made clear, the court remanded the case to the district court to reconsider the constitutionality of the stop and the evidence found as a result. The Tenth Circuit’s decision makes a lot of sense. It also suggests that to support reasonable suspicion, computer adjudications must bear indicia of reliability and must relate to a specific individual rather than a location.
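To see why reliability and base rates matter so much here, consider a minimal Bayes’ rule sketch. All the numbers below are hypothetical; the opinion offers no statistics, and the sketch simply illustrates the “garbage in, garbage out” worry:

```python
# Hypothetical illustration of the court's base-rate worry: what a
# "no return" hit implies depends on how often legitimate Colorado
# temporary tags are missing from the database.

def p_fraud_given_no_return(p_fraud, p_nr_if_fraud, p_nr_if_legit):
    """Bayes' rule: P(fraudulent tag | database returns nothing)."""
    p_no_return = p_fraud * p_nr_if_fraud + (1 - p_fraud) * p_nr_if_legit
    return p_fraud * p_nr_if_fraud / p_no_return

# Assume 1% of cars carry fraudulent tags, and fraudulent tags almost
# never return (99% "no return"). If legitimate Colorado temp tags also
# usually fail to return (say 60% of the time), a hit is barely informative:
print(p_fraud_given_no_return(0.01, 0.99, 0.60))  # ~0.016

# If the database were reliable (legitimate tags return 99% of the time),
# the very same hit would be strong evidence:
print(p_fraud_given_no_return(0.01, 0.99, 0.01))  # 0.5
```

Under the first set of assumptions, a “no return” hit barely moves the needle; under the second, the identical hit makes fraud quite likely. In rough quantitative terms, that is the difference between a database that could support reasonable suspicion on its own and one that could not.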

 


Brandon Garrett on the Court’s Recent Criminal Procedure Decisions

Professor Brandon Garrett has an interesting post on three important criminal procedure cases from the past Term (including his take on Maryland v. King):

With the past Term’s Supreme Court decisions behind us, commentators, scholars, and judges are still processing the implications of the major decisions on race, voting rights, and same-sex marriage. Understandably less noticed have been three decisions with real implications for criminal justice. In cases concerning the procedural barriers to relief when evidence of innocence arises after conviction, the expanded collection and storage of DNA, and the conduct of police interrogations, the Court issued rulings that bear on the accuracy of our criminal justice system.

First, the Court continues to recognize that innocence should be an important consideration for federal judges reviewing prisoners’ habeas petitions. In McQuiggin v. Perkins, the Court recognized for the first time that evidence of a prisoner’s innocence can provide an exception to the restrictive one-year statute of limitations imposed in 1996 by Congress in the Antiterrorism and Effective Death Penalty Act (AEDPA). However, the Court somewhat gratuitously emphasized that this innocence exception would be “severely confined” and that the class of prisoners able to show that a jury presented with new evidence would be likely not to convict may be quite small.

Moreover, the Court still has not recognized an outright constitutional claim of innocence. Innocence is merely a “gateway” to excuse complex procedural barriers, but it is not a stand-alone ground for relief in federal courts. More than two decades into the DNA era, judges are now far more aware than in the past that prisoners can prove their outright innocence of serious crimes. But as I describe in Convicting the Innocent, judges have only slowly and reluctantly loosened their grip on technical rules that make it extremely difficult for even innocent convicts to secure their freedom.

Second, although DNA testing continues to reshape the criminal justice system, the Supreme Court’s decision this Term in Maryland v. King may encourage some of the worst tendencies in law enforcement’s use of DNA. The Court endorsed police taking DNA from people at the time of arrest for purposes of “identification,” but also to permanently enter that DNA in the national databank to search against any number of past and future unsolved crimes. Given my interest in using DNA to potentially free the innocent, one might expect that I would welcome any and all expansion of DNA databanks. However, I co-authored an amicus brief with Erin Murphy taking the other side and offering a detailed explanation of our thinking. We argued that the federal government and states should absolutely invest in collecting DNA from serious criminals, and in using DNA to potentially free the innocent. But taking DNA from vast numbers of mere arrestees, who have not been convicted of any crime, is counterproductive. It is a serious burden on the privacy of vast numbers of people, including innocent people who are cleared after arrest. By the same token, taking DNA from arrestees has not been shown to improve crime fighting; in fact, it can dilute the power of DNA databases.


Predictive Policing and Technological Due Process

Police departments have increasingly been crunching data to identify criminal hot spots and to allocate policing resources to address them. Predictive policing has been around for a while without raising too many alarms. Given the daily proof that we live in a surveillance state, such policing seems downright quaint. Putting more police on the beat to address likely crime is smart. In such cases, software is not making predictive adjudications about particular individuals. But might governmental systems someday assign us risk ratings, predicting whether we are likely to commit crime? We certainly live in a scoring society. The private sector is madly scoring us. Individuals are denied the ability to open bank accounts; they are identified as strong potential hires (or not); they are deemed “waste” unworthy of special advertising deals; and so on. Private actors don’t owe us any process, at least as far as the Constitution is concerned. On the other hand, if governmental systems made decisions about our property (perhaps licenses denied due to a poor risk score), liberty (watch-list designations leading to liberty intrusions), and life (who knows, with drones in the picture), due process concerns would be implicated.

What about systems aimed at predicting high-crime locations, not particular people? Do those systems raise the sorts of concerns I’ve discussed as Technological Due Process? A recent NPR story asked whether algorithmic predictions about high-risk locations can form the basis of a stop and frisk. If someone is in a hot zone, can that fact alone amount to reasonable suspicion to stop him? During the NPR segment, law professor Andrew Guthrie Ferguson talked about the possibility that the computer’s prediction about a location may inform an officer’s thinking. An officer might credit the computer’s prediction and view everyone in a particular zone differently. Concerns about automation bias are real. Humans defer to systems: surely a computer’s judgment is more trustworthy, given its neutrality and expertise? Fallible human beings, however, build the algorithms, investing them with their biases, and the systems may be filled with incomplete and erroneous information. Given that reality, police departments would be wise to train officers about automation bias; such training has proven effective in other contexts. The constitutional question of the reasonableness of a stop and frisk would of course be addressed at the retail level, but pre-commitments like training would provide wholesale protection, helping avoid unconstitutional stops and wasted police resources.

H/T: Thanks to guest blogger Ryan Calo for drawing my attention to the NPR story.


Thoughts on the Zimmerman Trial

I did not follow the trial carefully, so I don’t feel qualified to comment on the jury verdict.  There are two legal aspects of the case, though, that I can talk about.

1.  I agree with Eugene Volokh’s point that Florida should reconsider its law allowing a six-person jury to hear felony cases.  The Supreme Court’s decision in Williams v. Florida (1970), upholding the constitutionality of criminal juries smaller than twelve in state trials, falls in the category of “wrong, but settled.”  State lawmakers should still weigh the fact that a larger jury will be more diverse and will tend to inspire more confidence, though it will, of course, increase the cost of a trial.

2.  I am uneasy when a state acquittal is followed by the threat of a federal prosecution for the same act.  This practice is constitutional because of the Supreme Court’s decision in Bartkus v. Illinois, which held that, under the “dual sovereignty” doctrine, the Double Jeopardy Clause is not violated by successive state and federal prosecutions for the same act.  There is a powerful irony in this decision.  It reflected Felix Frankfurter’s view that incorporation was mostly wrong and that the states should be able to run their criminal justice systems free from federal constitutional restraints.  The Supreme Court’s liberals (Brennan, Black, Douglas, and Warren) dissented.  Yet Bartkus became a powerful weapon for liberals seeking to right wrongs perpetrated in the Jim Crow South by, in effect, overturning verdicts from all-white racist juries.  The continuing vitality of Bartkus (as opposed to other criminal procedure decisions from the 1950s) reflects the influence of the Civil Rights Movement on constitutional law, though I wonder whether the decision should be revisited.

Race, Justice, and the Political Economy of Vigilantism

A few thoughts in the wake of the Zimmerman verdict (and related matters):

1) The New Yorker’s Amy Davidson stated last night, “I still don’t understand what Trayvon Martin was supposed to do” once he knew he was menaced.  Gary Younge similarly asked, “What version of events is there for that night in which Martin gets away with his life?”

Cord Jefferson, in a way, provides a practical response to that question:

To stay alive and out of jail, brown and black kids learn to cope. They learn to say, “Sorry, sir,” for having sandwiches in the wrong parking lot. They learn, as LeVar Burton has, to remove their hats and sunglasses and put their hands up when police pull them over. They learn to tolerate the indignity of strange, drunken men approaching them and calling them and their loved ones a bunch of [n______]. They learn that even if you’re willing to punch a harasser and face the consequences, there’s always a chance a police officer will come to arrest you, put you face down on the ground, and then shoot you execution style. Maybe the cop who shoots you will only get two years in jail, because it was all a big misunderstanding. You see, he meant to be shooting you in the back with his taser.

Yahdon Israel writes about similar coping mechanisms in Manhattan, and the fallback tactic of avoidance.  He notes that “Although Columbia [University] is in Harlem, power wills that there is no Harlem in Columbia. Rather than walk through, the people of Harlem are more comfortable with walking around Columbia to get to the other side because they know where they don’t belong.”


Badge = Deference & Submission

We know that, in theory, citizens have some rights vis-à-vis the police. But in practice, does it make sense simply to submit to any person waving a badge? Reason magazine features a story in which that seems to be the lesson:

A group of state Alcoholic Beverage Control agents clad in plainclothes approached [Daly], suspecting the blue carton of LaCroix sparkling water to be a 12-pack of beer. Police say one of the agents jumped on the hood of her car. She says one drew a gun. Unsure of who they were, Daly tried to flee the darkened parking lot. “They were showing unidentifiable badges after they approached us, but we became frightened, as they were not in anything close to a uniform,” she recalled Thursday in a written account of the April 11 incident. . . . That led to Daly spending a night and an afternoon in the Albemarle-Charlottesville Regional Jail.

This story also suggests a wider range of opportunities for abuse of the discretion granted to officers.


Probabilistic Crime Solving

In our Big Data age, policing may shift its focus from catching criminals to stopping crime before it happens. That might sound like Hollywood “Minority Report” fantasy, but not to researchers hoping to leverage data to identify future crime areas. Consider, as an illustration, a research project sponsored by the Rutgers Center on Public Security. According to Government Technology, Rutgers professors have obtained a two-year, $500,000 grant to conduct “risk terrain modeling” research in U.S. cities. Working with police forces in Arlington, Texas; Chicago; Colorado Springs, Colorado; Glendale, Arizona; Kansas City, Missouri; and Newark, New Jersey, the team will analyze an area’s history of crime alongside data on “local behavioral and physical characteristics” to identify the locations with the greatest crime risk. As Professor Joel Caplan explains, data analysis “paints a picture of those underlying features of the environment that are attractive for certain types of illegal behavior, and in doing so, we’re able to assign probabilities of crime occurring.” Criminals tend to shift their activity to different locations to evade detection; the hope is to detect their next move before they get there. Mapping techniques will systematize what is now just a matter of instinct or guesswork, the researchers explain.
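For readers curious about the mechanics, here is a minimal sketch of the layered risk-surface idea behind risk terrain modeling. Everything in it is assumed for illustration: the layer names, the weights, and the random data stand in for whatever measured environmental features and calibration the Rutgers team actually uses.

```python
import numpy as np

# Minimal sketch of risk terrain modeling (hypothetical layers/weights):
# each environmental feature contributes a "risk layer" over a grid of
# city cells, and the layers combine into a composite risk surface.

rng = np.random.default_rng(0)
grid = (50, 50)  # the city divided into 50 x 50 cells

# Hypothetical risk layers, each scaled to 0..1 (in practice these would
# be measured features: bar density, vacant lots, past incident counts).
layers = {
    "bars": rng.random(grid),
    "vacant_lots": rng.random(grid),
    "past_incidents": rng.random(grid),
}
weights = {"bars": 0.5, "vacant_lots": 0.2, "past_incidents": 0.3}

# Composite risk surface: a weighted sum of the layers.
risk = sum(w * layers[name] for name, w in weights.items())

# Flag the top 1% of cells as candidate hot spots for patrol allocation.
threshold = np.quantile(risk, 0.99)
hot_cells = np.argwhere(risk >= threshold)
print(f"{len(hot_cells)} candidate hot-spot cells out of {risk.size}")
```

The questions that follow are really about this pipeline’s inputs and outputs: where the layer data comes from, who chooses and audits the weights, and what happens to the people standing in the flagged cells.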

Will reactive policing give way to predictive policing? Will police departments someday station officers at probabilistic targets to prevent criminals from ever acting on criminal designs? The data inputs and algorithms are crucial to the success of any Big Data endeavor. Before diving in headlong, we ought to ask about the provenance of the “local behavioral and physical characteristics” data. Will researchers be given access to live feeds from CCTV cameras and data-broker dossiers? Will they be mining public- and private-sector databases along the lines of fusion centers? Because these projects involve state actors who are bound neither by the federal Privacy Act of 1974 nor by federal restrictions on the collection of personal data, do state privacy laws limit the sorts of data that can be collected, analyzed, and shared? Does the Fourth Amendment have a role in such predictive policing? Is this project just the beginning of a system in which citizens receive criminal risk scores? The time is certainly ripe to talk more seriously about “technological due process” and the “right to quantitative privacy” for the surveillance age.