Archive for the ‘Criminal Procedure’ Category
posted by Danielle Citron
Tomorrow, if you are in the D.C. area in the afternoon, Georgetown Law’s American Criminal Law Review, American Civil Liberties Union, and Criminal Law Association are hosting my brilliant colleague David Gray to talk about his article, A Spectacular Non Sequitur: The Supreme Court’s Contemporary Fourth Amendment Exclusionary Rule Jurisprudence (forthcoming in ACLR). His lecture will focus on the Exclusionary Rule and the recent cases involving the Fourth Amendment. Location: Hotung 1000. Starts at 3:30 p.m. Will be worth it, indeed. Professor Gray is an illuminating and dynamic speaker.
posted by UCLA Law Review
Volume 61, Discourse
Not the Last Word, but Likely the Last Prosecution: Understanding the U.S. Department of Justice’s Evaluation of Whether to Authorize a Successive Federal Prosecution in the Trayvon Martin Killing, by Adam Harris Kurland (p. 206)
posted by UCLA Law Review
Volume 61, Discourse
The Two-Tiered Program of the Tribal Law and Order Act, by Seth J. Fortin (p. 88)
posted by Danielle Citron
Last week, I blogged about law enforcement’s use of automated predictions. There, the “Super Cruncher” system mines data to highlight high-crime areas so that police departments can best allocate resources. What if those predictions provided the sole basis of an officer’s stop and frisk at a particular location? Suppose the computer suggested that a particular corner was a red-hot zone. When the officer saw someone standing at that corner at midnight, he credited the computer’s prediction and stopped and frisked the person, revealing an illegal firearm. Would the computer’s prediction form the basis of reasonable suspicion supposing that the person standing on the corner did nothing else to raise any concerns about illegality? Last week, I suggested that the retail question would likely be straightforward. The computer’s prediction about a location could not be said to reveal anything about a particular person in that location, right?
Professor Orin Kerr brought a recent case to my attention that, while not exactly on point, is nonetheless illuminating about the value of automated judgments in evaluating a stop for Fourth Amendment purposes. In United States v. Antonio Esquivel-Rios, a trooper pulled over a defendant driving a car with temporary Colorado tags. When the trooper initially called in the tag, the dispatcher told him that the automated system found that the tags were not registered (as the dispatcher explained, the system did not “return the tag”). The dispatcher also cautioned the trooper that Colorado tags “usually do not return.” Said another way, the dispatcher qualified the system’s finding that the tags were not officially on file (and thus could be fraudulent) with the warning that Colorado tags usually did not show up in the system. Why that was the case for Colorado tags was not explained to the trooper. Nonetheless, the trooper pulled over the defendant and got consent to search the car. It turns out the defendant had a pound of meth in a secret compartment. In challenging the constitutionality of the stop, the defendant argued that the trooper relied on an unreliable automated finding that could not support a finding of reasonable suspicion. In other words, the computer’s “no tags” determination did not amount to particularized suspicion because the system’s findings as to Colorado tags were not reliably revealing of criminality.
The opinion began by noting that a “maniacally all-knowing, all-seeing” HAL 9000 computer in the government’s hands would raise Fourth Amendment concerns. The Tenth Circuit did not say more about that point, but I take the court to be saying that computers making “pre-crime,” Minority Report-ish adjudications about individuals implicate constitutional concerns; procedural due process is certainly at issue. After making that threshold point, the court got down to business to explore whether the trooper had reasonable suspicion to stop the defendant based on the computer’s “no return” finding and the dispatcher’s qualification of that finding. As the court explained, although reasonable suspicion demands far less than probable cause, there must be some particularized suspicion of criminality. Concerns about the quality of evidence can be offset with quantity, that is, something more suggesting criminality. Worries about a system’s reliability can diminish if there are other independent indicia of criminality. The trooper, however, relied only on the database report to justify his stop. The computer’s “no return” hit, the court suggested, could have been enough for reasonable suspicion if the system was reliable. In that event, the computer’s finding would concern the specific individual, not a particular location as I suggested in my initial post. The court’s point is well-taken. It would have been permissible to rely on the computer’s finding to support a stop because that finding would relate to evidence about the specific defendant (or his car). In this case, the court explains, the trooper had reason to doubt that the computer hit meant something suspicious about the car’s tags.
That Colorado usually does not return hits could mean that Colorado is having bureaucratic problems inputting temporary tags into the system; it could mean that some, most, or a vanishingly small number of “no return” findings say something about the tags’ verifiability. What goes into the database affects the reasonableness of the seizure relying upon it: garbage in, garbage out. The court notes, relying on Professor Kerr’s work, that reasonable suspicion is not a statistical determination, much as probable cause isn’t. But in this case, the database had reliability problems, and as the sole reason for the stop, it had to be assessed with an eye to its statistical value. With its concern about the reliability of the computer’s finding made clear, the court remanded the case to the district court to reconsider the constitutionality of the stop and the evidence found as a result of it. The Tenth Circuit’s holding makes a lot of sense, indeed. It also suggests that computer adjudications must bear indicia of reliability and must relate to a specific individual (rather than a location) to support reasonable suspicion.
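The statistical point can be made concrete with a toy Bayesian sketch (my own illustration; every number here is invented for exposition and is not drawn from the case or the court’s opinion):

```python
# Toy illustration of why an unreliable "no return" hit carries little weight.
# posterior_fraud answers: given a "no return," how likely is the tag fraudulent?
def posterior_fraud(prior_fraud, p_noreturn_if_fraud, p_noreturn_if_legit):
    """Bayes' rule: P(fraud | no return)."""
    numerator = p_noreturn_if_fraud * prior_fraud
    denominator = numerator + p_noreturn_if_legit * (1 - prior_fraud)
    return numerator / denominator

# Reliable database: legitimate tags almost always show up in the system.
reliable = posterior_fraud(prior_fraud=0.01, p_noreturn_if_fraud=1.0,
                           p_noreturn_if_legit=0.01)   # roughly 0.50

# Unreliable database: most legitimate Colorado temporary tags fail to return.
unreliable = posterior_fraud(prior_fraud=0.01, p_noreturn_if_fraud=1.0,
                             p_noreturn_if_legit=0.60)  # roughly 0.017
```

With a reliable system, a hit raises the chance of fraud from 1 percent to about 50 percent; when most legitimate tags also fail to return, the same hit barely moves the needle. That is the court’s garbage-in, garbage-out worry in numerical form.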
posted by Danielle Citron
With the Supreme Court’s decisions from the past Term behind us, commentators, scholars, and judges are still processing the implications of the major rulings on race, voting rights, and same-sex marriage. Understandably less noticed have been three decisions with real implications for criminal justice. In cases concerning the procedural barriers to relief when evidence of innocence arises after conviction, the expanded collection and storage of DNA, and the conduct of police interrogations, the Court issued rulings that bear on the accuracy of our criminal justice system.
First, the Court continues to recognize that innocence should be an important consideration for federal judges reviewing prisoners’ habeas petitions. In McQuiggin v. Perkins, the Court recognized for the first time that evidence of a prisoner’s innocence can provide an exception to the restrictive one-year statute of limitations imposed by Congress in 1996 in the Antiterrorism and Effective Death Penalty Act (AEDPA). However, the Court somewhat gratuitously emphasized that this innocence exception would be “severely confined” and that the class of prisoners able to show that a jury presented with the new evidence would be likely not to convict may be quite small.
Moreover, the Court still has not recognized an outright constitutional claim of innocence. Innocence is merely a “gateway” to excuse complex procedural barriers, but innocence is not a stand-alone ground for relief in federal courts. More than two decades into the DNA era, judges are now far more aware than in the past that prisoners can prove their outright innocence of serious crimes. But as I describe in Convicting the Innocent, judges have only slowly and reluctantly loosened their grip on technical rules that make it extremely difficult for even innocent convicts to secure their freedom.
Second, although DNA testing continues to reshape the criminal justice system, the Supreme Court’s decision this term in Maryland v. King may encourage some of the worst tendencies in the law enforcement use of DNA. The Court endorsed police taking DNA from people at the time of arrest for purposes of “identification,” but also to permanently enter that DNA in the national databank to search against any number of past and future unsolved crimes. Given my interest in using DNA to potentially free the innocent, one might expect that I would welcome any and all expansion of DNA databanks. However, I co-authored an amicus brief with Erin Murphy taking the other side and offering a detailed explanation of our thinking. We argued that the federal government and states should absolutely invest in collecting DNA from serious criminals, and in using DNA to potentially free the innocent. But taking DNA from vast numbers of mere arrestees, who have not been convicted of any crime, is counterproductive. It is a serious burden on the privacy of vast numbers of people, including innocent people who are cleared after arrest. By the same token, taking DNA from arrestees has not been shown to improve crime fighting; in fact, it can dilute the power of DNA databases.
posted by Danielle Citron
Police departments have been increasingly crunching data to identify criminal hot spots and to allocate policing resources to address them. Predictive policing has been around for a while without raising too many alarms. Given the daily proof that we live in a surveillance state, such policing seems downright quaint. Putting more police on the beat to address likely crime is smart. In such cases, software is not making predictive adjudications about particular individuals. Might someday governmental systems assign us risk ratings, predicting whether we are likely to commit crime? We certainly live in a scoring society. The private sector is madly scoring us. Individuals are denied the ability to open up bank accounts; they are identified as strong potential hires (or not); they are deemed “waste” not worthy of special advertising deals; and so on. Private actors don’t owe us any process, at least as far as the Constitution is concerned. On the other hand, if governmental systems make decisions about our property (perhaps licenses denied due to a poor scoring risk), liberty (watch list designations leading to liberty intrusions), and life (who knows with drones in the picture), due process concerns would be implicated.
What about systems aimed at predicting high-crime locations, not particular people? Do those systems raise the sorts of concerns I’ve discussed as Technological Due Process? A recent NPR story asked whether algorithmic predictions about high-risk locations can form the basis of a stop and frisk. If someone is in a hot zone, can that very fact amount to reasonable suspicion to stop someone in that zone? During the NPR segment, law professor Andrew Guthrie Ferguson talked about the possibility that the computer’s prediction about the location may inform an officer’s thinking. An officer might credit the computer’s prediction and view everyone in a particular zone in a different way. Concerns about automation bias are real. Humans defer to systems: surely a computer’s judgment is more trustworthy given its neutrality and expertise? Fallible human beings, however, build the algorithms, investing them with bias, and the systems may be filled with incomplete and erroneous information. Given the reality of automation bias, police departments would be wise to train officers to guard against it; such training has proven effective in other contexts. In the longer term, making pre-commitments to training would help avoid unconstitutional stops and wasted resources. The constitutional question of the reasonableness of the stop and frisk would of course be addressed on a retail level, but it would be worth providing wholesale protections to avoid wasting police time on unwarranted stops and arrests.
H/T: Thanks to guest blogger Ryan Calo for drawing my attention to the NPR story.
posted by Gerard Magliocca
I did not follow the trial carefully, so I don’t feel qualified to comment on the jury verdict. There are two legal aspects of the case, though, that I can talk about.
1. I agree with Eugene Volokh’s point that Florida should reconsider its law allowing a six-person jury to hear felony cases. The Supreme Court’s decision (from 40 years ago) upholding the constitutionality of criminal juries smaller than 12 in state trials falls in the category of “wrong, but settled.” State lawmakers should still think about the fact that a larger jury will be more diverse and tend to inspire more confidence, though, of course, it increases the cost of a trial.
2. I am uneasy when a state acquittal is followed by the threat of a federal prosecution for the same act. This practice is constitutional because of the Supreme Court’s decision in Bartkus v. Illinois, which held that the Double Jeopardy Clause is not violated by consecutive state and federal prosecutions for the same act under the “dual sovereignty” doctrine. There is a powerful irony in this decision. It reflected Felix Frankfurter’s view that incorporation was mostly wrong and that the states should be able to run their criminal justice system free from federal constitutional restraints. The Supreme Court’s liberals (Brennan, Black, Douglas, and Warren) dissented. Yet Bartkus became a powerful weapon for liberals seeking to right wrongs perpetrated in the Jim Crow South by, in effect, overturning verdicts from all-white racist juries. The continuing vitality of Bartkus (as opposed to other criminal procedure decisions from the 1950s) reflects the influence of the Civil Rights Movement on constitutional law, though I wonder if this decision should be revisited.
posted by Frank Pasquale
A few thoughts in the wake of the Zimmerman verdict (and related matters):
1) The New Yorker’s Amy Davidson stated last night, “I still don’t understand what Trayvon Martin was supposed to do” once he knew he was menaced. Gary Younge similarly asked, “What version of events is there for that night in which Martin gets away with his life?”
Cord Jefferson, in a way, provides a practical response to that question:
To stay alive and out of jail, brown and black kids learn to cope. They learn to say, “Sorry, sir,” for having sandwiches in the wrong parking lot. They learn, as LeVar Burton has, to remove their hats and sunglasses and put their hands up when police pull them over. They learn to tolerate the indignity of strange, drunken men approaching them and calling them and their loved ones a bunch of [n______]. They learn that even if you’re willing to punch a harasser and face the consequences, there’s always a chance a police officer will come to arrest you, put you face down on the ground, and then shoot you execution style. Maybe the cop who shoots you will only get two years in jail, because it was all a big misunderstanding. You see, he meant to be shooting you in the back with his taser.
Yahdon Israel writes about similar coping mechanisms in Manhattan, and the fallback tactic of avoidance. He notes that, “Although Columbia [University] is in Harlem, power wills that there is no Harlem in Columbia. Rather than walk through, the people of Harlem are more comfortable with walking around Columbia to get to the other side because they know where they don’t belong.”
posted by Danielle Citron
At Slate, Barry Friedman and Dahlia Lithwick have a provocative new piece entitled “What’s Left? Have Progressives Abandoned Every Cause Save Gay Marriage?” Great read.
posted by Frank Pasquale
We know that, in theory, citizens have some rights vis-a-vis police. But in practice, does it make sense to simply submit to any person waving a badge? Reason magazine features a story where that seems to be the lesson:
A group of state Alcoholic Beverage Control agents clad in plainclothes approached [Daly], suspecting the blue carton of LaCroix sparkling water to be a 12-pack of beer. Police say one of the agents jumped on the hood of her car. She says one drew a gun. Unsure of who they were, Daly tried to flee the darkened parking lot. “They were showing unidentifiable badges after they approached us, but we became frightened, as they were not in anything close to a uniform,” she recalled Thursday in a written account of the April 11 incident. . . . That led to Daly spending a night and an afternoon in the Albemarle-Charlottesville Regional Jail.
This story also suggests a wider range of opportunities for abuse of the discretion granted to officers.
posted by Danielle Citron
In our Big Data age, policing may shift its focus away from catching criminals to stopping crime from happening. That might sound like Hollywood “Minority Report” fantasy, but not to researchers hoping to leverage data to identify future crime areas. Consider as an illustration a research project sponsored by the Rutgers Center on Public Security. According to Government Technology, Rutgers professors have obtained a two-year, $500,000 grant to conduct “risk terrain modeling” research in U.S. cities. Working with police forces in Arlington, Texas; Chicago; Colorado Springs, Colorado; Glendale, Arizona; Kansas City, Missouri; and Newark, New Jersey, the team will analyze an area’s history of crime along with data on “local behavioral and physical characteristics” to identify the locations with the greatest crime risk. As Professor Joel Caplan explains, data analysis “paints a picture of those underlying features of the environment that are attractive for certain types of illegal behavior, and in doing so, we’re able to assign probabilities of crime occurring.” Criminals tend to shift their activity to different locations to evade detection. The hope is to detect the criminals’ next move before they get there. Mapping techniques will systematize what is now just a matter of instinct or guesswork, the researchers explain.
Will reactive policing give way to predictive policing? Will police departments someday station officers outside probabilistic targets to prevent criminals from ever acting on criminal designs? The data inputs and algorithms are crucial to the success of any Big Data endeavor. Before diving in headlong, we ought to ask about the provenance of the “local behavioral and physical characteristics” data. Will researchers be given access to live feeds from CCTV cameras and data broker dossiers? Will they be mining public and private sector databases along the lines of fusion centers? Because these projects involve state actors who are bound neither by the federal Privacy Act of 1974 nor by federal restrictions on the collection of personal data, do state privacy laws limit the sorts of data that can be collected, analyzed, and shared? Does the Fourth Amendment have a role in such predictive policing? Is this project just the beginning of a system in which citizens receive criminal risk score assessments? The time is certainly ripe to talk more seriously about “technological due process” and the “right to quantitative privacy” for the surveillance age.
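To make the idea concrete, here is a minimal sketch of what grid-based risk terrain scoring might look like (my own simplification; the feature names, weights, and cells are invented for illustration and are not the Rutgers team’s actual model):

```python
# Invented weights for environmental features thought to attract crime.
WEIGHTS = {"prior_incidents": 0.5, "bars_nearby": 0.3, "vacant_lots": 0.2}

def risk_score(cell_features):
    """Weighted sum of a map cell's environmental features."""
    return sum(WEIGHTS[f] * cell_features.get(f, 0) for f in WEIGHTS)

# Each key is a cell in the city grid; values are counts of each feature there.
grid = {
    "A1": {"prior_incidents": 4, "bars_nearby": 2, "vacant_lots": 0},
    "B2": {"prior_incidents": 1, "bars_nearby": 0, "vacant_lots": 3},
    "C3": {"prior_incidents": 0, "bars_nearby": 1, "vacant_lots": 1},
}

# Rank cells by risk so patrols can be allocated to the top of the list.
ranked = sorted(grid, key=lambda c: risk_score(grid[c]), reverse=True)
```

A real model would estimate the weights from historical data rather than stipulate them, and would validate whether high-scoring cells actually predict out-of-sample crime. The point of the sketch is only that the output is a ranking of places, not a judgment about any person.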
posted by Babak Siavoshy
This is a follow-up to my previous post on the Supreme Court’s recent decision in Maryland v. King, which upheld, over a scathing dissent by Justice Scalia, the constitutionality of DNA searches of arrestees for “serious offenses” under Maryland’s public safety statute. One open question after King is how the majority’s rule would apply to other states’ DNA collection statutes, which permit DNA collection for a broader range of offenses than does Maryland’s statute.
The King majority repeatedly limited its holding to DNA searches that followed arrests for a “serious offense.” But what counts as a serious offense? This is a live question in Haskel v. Harris, the ACLU’s challenge to California’s DNA collection law (Prop. 69). According to the ACLU, California’s law would permit DNA collection for arrests on suspicion of “simple drug possession, joyriding, or intentionally bouncing a check.” An en banc panel of the Ninth Circuit is considering the case in light of Maryland v. King. If the ACLU’s characterization is correct, then California’s law may not survive intact under King’s “serious offense” limiting principle.
While the task of determining the seriousness of an offense as a triggering condition for a legal rule can be difficult–particularly in light of the patchwork of criminal laws that forms the quilt of our fifty-state, federalist system–it is not outside the province of what courts do. For instance, in Carachuri-Rosendo v. Holder, 130 S. Ct. 2577 (2010), the Supreme Court had to decide whether state or federal standards should apply in determining whether a person convicted of a second state drug possession offense committed an “aggravated felony” under the immigration laws, and was therefore subject to automatic deportation. (The Court ultimately held the drug possession conviction was not an aggravated felony).
Is the Fourth Amendment transsubstantive (and should it be)?
More generally, King’s “serious offense” principle raises questions about whether the Fourth Amendment is, or remains, transsubstantive. The Supreme Court has previously suggested the Fourth Amendment is transsubstantive–namely, that all other things equal, the Fourth Amendment applies the same way regardless of the severity of the underlying crime that’s being investigated. (Though I’m not familiar with the scholarship on this issue, it appears scholars agree this is the governing rule: see here and here).
Maryland v. King: Are suspicionless DNA searches permissible for crime solving or suspect identification? (Probably, both).
posted by Babak Siavoshy
I’m thrilled to be guest-blogging at Concurring Opinions just in time for Maryland v. King, the Supreme Court’s decision today on the constitutionality of DNA testing. The Court held (5-4) that the police’s collection (and testing) of King’s DNA after his arrest for a violent crime, and pursuant to Maryland’s public safety statute, was a reasonable search under the Fourth Amendment.
Though the case’s up/down holding is straightforward enough, the majority’s rationale is not. Read in a vacuum, the majority opinion reads as a full-throated legal and policy defense of the government’s use of DNA to verify a suspect’s identification at various stages following a lawful arrest.
But this case—and indeed, the Maryland statute at issue—was not merely about the use of DNA testing for a routine purpose ancillary to police investigations, such as verifying a suspect’s identification. It was, instead, also about the use of suspicionless DNA searches as part of the police’s quintessential activity: investigating and solving crimes. And that is precisely the conduct which the majority’s opinion authorizes. (Do read Justice Scalia’s dissent, which argues this point persuasively).
In that vein, here’s what I take to be the majority’s honest holding: the government can engage in suspicionless and warrantless DNA searches of a suspect, including for investigating and solving crimes, in at least one context—when the subject of the search is lawfully arrested for a serious, even if unrelated, offense; and the search is performed as part of a routine, bounded, post-arrest procedure.
The police’s legitimate need to verify an arrestee’s identity, which takes up most of the majority opinion, is weak justification for this exception to the individualized suspicion and warrant requirements. The more plausible arguments supporting the majority’s holding are, instead, to be found (one might argue, buried) in Part V of the opinion: Suspicionless DNA searches are permissible in this context because persons lawfully arrested for violent crimes have diminished privacy rights, and DNA swabbing and testing within the bounds of the Maryland statute is (the Court says) relatively unintrusive in light of those diminished rights. (The Court calls these the “circumstances” of “diminished expectations of privacy [and] minimal intrusions,” citing McArthur, 531 U. S., at 330).
The Court’s decision in King is important and will have potentially far-ranging effects on how police conduct investigations. Even assuming the Court reached the right result (a question I haven’t addressed in this post), the case’s key question merited a more direct and forthright discussion than the majority opinion provides. For this reason alone, the majority invited, indeed, deserved, every quip and jab in Justice Scalia’s dissent.
The case raises some (but perhaps not so many) interesting doctrinal questions, which I’ll explore in a later post.
posted by Danielle Citron
A new casebook co-authored by University of Virginia law professor Brandon Garrett and my brilliant colleague Lee Kovarsky is the first to comprehensively cover habeas corpus, particularly exploring the topics of post-conviction review, executive and national security detention litigation, and the detention of immigrants. The book, just published by Foundation Press, is titled “Federal Habeas Corpus: Executive Detention and Post-conviction Litigation.”
The privilege of habeas corpus — which ensures that a prisoner can challenge an unlawful detention, such as for a lack of sufficient cause or evidence — has grown increasingly complex and important. Just this week, the Supreme Court decided important habeas cases recognizing an innocence exception to habeas time limits and making it easier for state inmates to use habeas corpus to challenge the ineffectiveness of their trial lawyers. (See Garrett and Kovarsky on “Two Gateways to Habeas.”)
Here is an excerpt of an interview of Professor Garrett and Professor Kovarsky posted on the UVA website:
“In writing this casebook, our goal was to create the subject,” Garrett said. “There is something deep connecting different parts of habeas corpus that are often taught in far-flung parts of courses or are not taught at all. Habeas corpus is now an extremely valuable and exciting course to teach, and we thought the subject demanded a rich set of teaching materials.”
Garrett, who has taught habeas corpus at UVA Law for eight years, co-wrote the book with Kovarsky, a 2004 Virginia Law graduate and a leading habeas and capital litigator who joined the University of Maryland’s Francis King Carey School of Law as an assistant professor in 2011.
“A few years ago, I started talking to Lee about habeas corpus,” Garrett said. “Lee writes insightful scholarship about habeas corpus, and is also a longtime habeas practitioner; he still works on high-profile death penalty cases in Texas. I sent him my course materials because he was starting teaching as a law professor at Maryland. And he immediately said that this should be a casebook.”
Kovarsky said he and Garrett decided to work together on the project to identify — and establish — a habeas canon that was “divorced from any immediate political, ideological or institutional objective.”
“The decisional law and academic literature is polluted with too much erroneously accepted wisdom about the [writ of habeas corpus’] essence and, by implication, its limits,” he said. “That accepted wisdom, in turn, fuels legally substantial narratives that are, in many ways, best explored, challenged and modified in a classroom.”
Traditionally, Garrett said, law schools have taught habeas corpus as a short segment in federal courts or criminal adjudication courses rather than as a full class. Yet these brief segments, he said, are no longer sufficient.
The law of habeas corpus became significantly more complicated after Congress passed the Antiterrorism and Effective Death Penalty Act in 1996, in the wake of the Oklahoma City bombing and the first World Trade Center bombing.
posted by Robert Gellman
Privacy advocates have disliked the third-party doctrine at least from the day in 1976 when the Supreme Court decided U.S. v. Miller. Anyone who remembers the Privacy Protection Study Commission knows that its report was heavily influenced by Miller. My first task in my long stint as a congressional staffer was to organize a hearing to receive the report of the Commission in 1977. In the introduction to the report, the Commission called the date of the decision “a fateful day for personal privacy.”
Last year, privacy advocates cheered when Justice Sonia Sotomayor’s concurrence in U.S. v. Jones asked if it was time to reconsider the third-party doctrine. Yet it is likely that it would take a long time before the Supreme Court revisits and overturns the third-party doctrine, if ever. Sotomayor’s opinion didn’t attract a single other Justice.
Can we draft a statute to overturn the third-party doctrine? That is not an easy task, and it may be an unattainable goal politically. Nevertheless, the discussion has to start somewhere. I acknowledge that not everyone wants to overturn Miller. See Orin Kerr’s The Case For the Third-party Doctrine. I’m certainly not the first person to ask the how-to-do-it question. Dan Solove wrestled with the problem in Digital Dossiers and the Dissipation of Fourth Amendment Privacy.
I’m going at the problem as if I were still a congressional staffer tasked with drafting a bill. I see right away that there is precedent. Somewhat remarkably, Congress partly overturned the Miller decision in 1978 when it enacted The Right to Financial Privacy Act, 12 U.S.C. § 3401 et seq. The RFPA says that if the federal government wants to obtain records of a bank customer, it must notify the customer and allow the customer to challenge the request.
The RFPA is remarkable too for its exemptions and weak standards. The law only applies to the federal government and not to state and local governments. (States may have their own laws applicable to state agencies.) Bank supervisory agencies are largely exempt. The IRS is exempt. Disclosures required by federal law are exempt. Disclosures for government loan programs are exempt. Disclosures for grand jury subpoenas are exempt. That effectively exempts a lot of criminal law enforcement activity. Disclosures to GAO and the CFPB are exempt. Disclosures for investigations of crimes against financial institutions by insiders are exempt. Disclosures to intelligence agencies are exempt. This long – and incomplete – list is the first hint that overturning the third-party doctrine won’t be easy.
We’re not done with the weaknesses in the RFPA. A customer who receives notice of a government request has ten days to challenge the request in federal court. The customer must argue that the records sought are not relevant to the legitimate law enforcement inquiry identified by the government in the notice. The customer loses if there is a demonstrable reason to believe that the law enforcement inquiry is legitimate and a reasonable belief that the records sought are relevant to that inquiry. Relevance and legitimacy are weak standards, to say the least. Good luck winning your case.
Who should get the protection of our bill? The RFPA gives rights to “customers” of a financial institution. A customer is an individual or partnership of five or fewer individuals (how would anyone know?). If legal persons also receive protection, a bill might actually attract corporate support, along with major opposition from every regulatory agency in town. It will be hard enough to pass a bill limited to individuals. The great advantage of playing staffer is that you can apply political criteria to solve knotty policy problems. I’d be inclined to stick to individuals.
posted by Deven Desai
Some day we might do away with pretext traffic stops, because some day autonomous vehicles will be common. At ReInvent Law Silicon Valley, David Estrada of Google X made the pitch for laws to give autonomous vehicles a bright future. He went to the core reasons, such as fuel sustainability and faster commutes. He also used the tear-jerking commercial that showed the true benefit of enabling those who cannot drive to drive. I have heard that before. But I think David also said that the cars are required to obey all traffic laws.
If so, that has some interesting implications.
I think that once autonomous vehicles are on the road in large numbers, the police will not be able to claim that some minor traffic violation justified pulling someone over and then searching the car. If a stop is made, then – as in the disputes over Tesla’s test-drive logs – the car will have rich data to verify that it was obeying the law.
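To make the point concrete, here is a minimal sketch of the kind of check a vehicle’s own logs would permit. The record format, field names, and thresholds are my own assumptions for illustration; no real vehicle exposes this exact API.

```python
# Hypothetical telemetry check: did the car stay at or under the posted
# speed limit during a given time window? All names here are illustrative.
from dataclasses import dataclass

@dataclass
class TelemetryRecord:
    timestamp: float        # seconds since trip start
    speed_mph: float        # measured vehicle speed
    speed_limit_mph: float  # posted limit at the car's position

def obeyed_speed_limit(log, start, end):
    """Return True if every record within [start, end] is at or under the limit."""
    window = [r for r in log if start <= r.timestamp <= end]
    return all(r.speed_mph <= r.speed_limit_mph for r in window)

log = [
    TelemetryRecord(0.0, 28.0, 30.0),
    TelemetryRecord(1.0, 29.5, 30.0),
    TelemetryRecord(2.0, 27.0, 30.0),
]
print(obeyed_speed_limit(log, 0.0, 2.0))  # True: no record exceeds the limit
```

The same trail that exonerates a driver against a pretextual claim would, of course, also document any real violation – the data cuts both ways.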
These vehicles should also alter current government income streams. Such shifts are often not obvious at first, but they hit home quickly. For example, when cell phones appeared, colleges lost the income from the high rates they charged for a phone in a dorm room – which had been a decent revenue stream. If autonomous vehicles obey traffic laws, income from traffic violations should go down. Cities, counties, and states will have to find new ways to make up that revenue. Insurance companies should see much lower income as well.
I love to drive. I will probably not like giving up that experience. Nonetheless, consider the benefits: reduced traffic accidents, fewer drunk drivers, and more mobility for the elderly and the young. Imagine a car that handled shuttling kids from soccer, ballet, music, and so on – picking you up, dropping you home, and then gathering the kids while you cooked a meal (yes, should I have kids, I hope to cook for them). The time efficiency is great. Plus, one might subscribe to a car service so that a $10,000-$40,000 car does not sit in disuse most of the day. Add to all that a world where law enforcement is better used and insurance is less needed, and I may have to give in to a world where driving myself is a luxury.
posted by UCLA Law Review
Volume 60, Discourse
|Edifying Thoughts of a Patent Watcher: The Nature of DNA||Dan L. Burk and David H. Kaye|
posted by Frank Pasquale
Last month the actor Forest Whitaker was stopped in a Manhattan delicatessen by an employee. Whitaker is one of the pre-eminent actors of his generation. . . Since the Whitaker affair, I’ve read and listened to interviews with the owner of the establishment. He is apologetic to a fault and is sincerely mortified. He says that it was a “sincere mistake” made by a “decent man” who was “just doing his job.” I believe him.
We can forgive Whitaker’s assailant. Much harder to forgive is all that makes Whitaker stand out in the first place. New York is a city, like most in America, that bears the scars of redlining, blockbusting and urban renewal. The ghost of those policies haunts us in a wealth gap between blacks and whites that has actually gotten worse over the past 20 years. But much worse, it haunts black people with a kind of invisible violence that is given tell only when the victim happens to be an Oscar winner.
The “invisible violence” extends from the newsmagazine of NYC’s billionaire mayor to his law enforcement policies. Implicit bias is pervasive. We need not accuse any particular person of evil intent to observe the corrosive structures that reinforce it.
posted by Danielle Citron
Privacy leading light Alan Westin passed away this week. Almost fifty years ago, Westin started his trailblazing work helping us understand the dangers of surveillance technologies. Building on the work that Warren and Brandeis started in “The Right to Privacy” in 1890, Westin published Privacy and Freedom in 1967. A year later, he took his normative case for privacy to the trenches. As Director of the National Academy of Science’s Computer Science and Engineering Board, he and a team of researchers studied governmental, commercial, and private organizations using databases to amass, use, and share personal information. Westin’s team interviewed 55 organizations, from local law enforcement and federal agencies like the Social Security Administration to direct-mail companies like R.L. Polk (a predecessor of our behavioral advertising industry).
The 1972 report, Databanks in a Free Society: Computers, Record-Keeping, and Privacy, is a masterpiece. With 14 case studies, the report made clear the extent to which public and private entities had been building substantial computerized dossiers of people’s activities and the risks to economic livelihood, reputation, and self-determination. It demonstrated the unrestrained nature of data collection and sharing, with driver’s license bureaus selling personal information to direct-mail companies and law enforcement sharing arrest records with local and state agencies for employment and licensing matters. Surely influenced by Westin’s earlier work, some data collectors, like the Kansas City Police Department, talked to the team about privacy protections, suggesting the need for verification of source documents, audit logs, passwords, and discipline for improper use of data. Westin’s report called for data collectors to adopt ethical procedures for data collection and sharing, including procedural protections such as notice and chance to correct inaccurate or incomplete information, data minimization requirements, and sharing limits.
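The protections the Kansas City Police Department described – verified sources, audit logs, passwords, discipline for misuse – still anchor data-protection practice today. A minimal sketch of the audit-log idea follows; the class, field names, and record format are my own illustrations, not anything from the 1972 report.

```python
# Illustrative sketch: a record store that logs every access to personal
# data with who asked, when, and for what purpose - the audit-trail
# protection Westin's interviewees described. Names are hypothetical.
import datetime

class AuditedRecordStore:
    def __init__(self, records):
        self._records = records
        self.audit_log = []  # append-only trail of every access

    def read(self, record_id, user, purpose):
        # Log the access before returning the record, so even a
        # failed lookup leaves a trace for later discipline or review.
        self.audit_log.append({
            "record_id": record_id,
            "user": user,
            "purpose": purpose,
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return self._records.get(record_id)

store = AuditedRecordStore({"A-17": {"name": "Jane Doe"}})
store.read("A-17", user="officer42", purpose="licensing check")
print(len(store.audit_log))  # 1 entry recorded
```

The design choice is deliberate: the log lives alongside the data and is written on every read, so improper use can be reconstructed after the fact rather than merely forbidden in advance.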
Westin’s work shaped the debate about the right to privacy at the dawn of our surveillance era. His change-making agenda was front and center in the Privacy Act of 1974. In the early 1970s, nearly fifty congressional hearings and reports investigated a range of data privacy issues, including the use of census records, access to criminal history records, employers’ use of lie detector tests, and the military and law enforcement’s monitoring of political dissidents. State and federal executives spearheaded investigations of surveillance technologies, including a proposed National Databank Center.
Just as public discourse was consumed with the “data-bank problem,” the courts began to pay attention. In Whalen v. Roe, a 1977 case involving New York’s mandatory collection of prescription drug records, the Supreme Court strongly suggested that the Constitution contains a right to information privacy based on substantive due process. Although it held that the state prescription drug database did not violate the constitutional right to information privacy because it was adequately secured, the Court recognized an individual’s interest in avoiding disclosure of certain kinds of personal information. Writing for the Court, Justice Stevens noted the “threat to privacy implicit in the accumulation of vast amounts of personal information in computerized data banks or other massive government files.” In a concurring opinion, Justice Brennan warned that the “central storage and easy accessibility of computerized data vastly increase the potential for abuse of that information, and I am not prepared to say that future developments will not demonstrate the necessity of some curb on such technology.”
What Westin underscored so long ago, and what Whalen v. Roe signaled, is that technologies used for broad, indiscriminate, and intrusive public surveillance threaten liberty interests. Last term, in United States v. Jones, the Supreme Court signaled that these concerns have Fourth Amendment salience. Concurring opinions indicate that at least five justices have serious Fourth Amendment concerns about law enforcement’s growing surveillance capabilities. Those justices insisted that citizens have reasonable expectations of privacy in substantial quantities of personal information. In our article “The Right to Quantitative Privacy,” David Gray and I seek to carry forward Westin’s insights (and those of Brandeis and Warren before him) into the Fourth Amendment arena, as the five concurring justices in Jones suggested. More on that to come, but for now, let’s thank Alan Westin for his extraordinary work on the “computerized databanks” problem.
February 24, 2013 at 10:18 am Posted in: Criminal Procedure, Current Events, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Law Enforcement)
posted by UCLA Law Review
Volume 60, Discourse
|Discovery From the Trenches: The Future of Brady||Laurie L. Levenson||74|