Category: Empirical Analysis of Law

Scientists Manqués?

Ever wonder why Richard Posner has gotten so interested in pragmatism? Well, James R. Hackney’s book Under Cover of Science: American Legal-Economic Theory and the Quest for Objectivity suggests that he’s right to be looking for a post-scientific discourse for the style of law & economics he advances. Here’s an abstract of Hackney’s work:

The current dominant strand of legal economic theory is what is commonly referred to as law and economics (but more appropriately labeled “law and neoclassical economics”). [This movement] gained its claim to objectivity based on the philosophical premises of logical positivism and the analytic philosophy movement generally. . . . In understanding the claim of objectivity in the law and neoclassical economics movement and why that claim can no longer be sustained (in part due to new conceptions of science and developments in philosophy) it is crucial that legal-academics have a fuller understanding of developments in science and how they shape our general cultural ethos.

Hackney synthesizes a wide variety of CLS and socio-economic critiques to show how “law and economics often cloaks ideological determinations—particularly regarding the distribution of wealth—under the cover of science.” Toward the end of the book he tentatively points a way forward for the discipline, urging greater humility about theoretical claims and greater reliance on empirical work. In other words, the cure for scientism is genuine science.

I have some sympathy with this perspective, and new awareness of “uniformity costs” in both law and legal scholarship backs up Hackney’s position. But the problem of “scientism” may extend beyond law and neoclassical economics…


From Right-of-Reply to Norm-of-Trackback

One of the things I love about the blogosphere is the way that comments let readers correct you or turn your attention to something you may have missed. One of my recent posts on copyright law illustrates how this process can work. James Grimmelmann has suggested that this right to comment, and to trackback to one’s own post upon linking to another’s post, is a big victory for free speech. While right-of-reply laws may be stymied by Miami Herald v. Tornillo, these innovations let everyone have their say.

Should the mainstream media adopt similar norms? Consider the case of a recent WSJ commentary entitled “The Innocence Myth,” arguing that the rate of false convictions is very low. You can find critiques of it online if you google “innocence myth,” and the WSJ does publish some skeptical letters to the editor. But my colleague Michael Risinger is about to publish a piece that he believes definitively refutes the WSJ piece. As he argues:

If one is at all serious about trying to determine the empirical truth about the magnitude of the wrongful conviction problem, one must make an attempt to associate the denominator with the same kind of cases represented in the numerator. . . . In an article now in galleys at Northwestern Law School’s Journal of Criminal Law and Criminology, I have tried to do just that. Using only DNA exonerations for capital rape-murders from 1982 through 1989 as a numerator, and a 407-member sample of the 2235 capital sentences imposed during this period, this article shows that 21.45%, or around 479 of those, were cases of capital rape murder. Data supplied by the Innocence Project of Cardozo Law School and newly developed for this article show that only two-thirds of those cases would be expected to yield usable DNA for analysis. Combining these figures and dividing the numerator by the resulting denominator, a minimum factually wrongful conviction rate for capital rape-murder in the 1980’s emerges: 3.3%.
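
The arithmetic in the quoted abstract can be reproduced in a few lines. The excerpt does not state the numerator, so the sketch below backs it out from the stated 3.3% minimum rate; that implied exoneration count is an inference, not a figure from the article:

```python
# Sketch of the denominator arithmetic in Risinger's abstract (quoted above).
capital_sentences = 2235      # capital sentences imposed 1982-1989
rape_murder_share = 0.2145    # sample share that were capital rape-murders
usable_dna_share = 2 / 3      # cases expected to yield usable DNA

rape_murders = capital_sentences * rape_murder_share   # around 479
denominator = rape_murders * usable_dna_share          # around 320

# The excerpt gives only the 3.3% minimum rate; working backward, that
# implies a numerator of roughly 10-11 DNA exonerations.
implied_numerator = denominator * 0.033
```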

The WSJ has so far failed to publish Prof. Risinger’s letter to the editor, and claims a policy against allowing responses to commentaries. But wouldn’t it at least behoove the Journal to provide a link to Risinger’s work after this opinion piece? I don’t see how this could hurt, especially given the time already devoted to screening letters to the editor. The Journal could make the links unobtrusive, as it does in this fantastic article on predatory debt collectors.

I hope that more of the mainstream media (MSM) follows the lead of the Washington Post, which provides great links to blogs (and opportunities for comment) on virtually all of its online articles (including editorials). Perhaps “opening up” the letters to the editor section in this way will be a bit of a burden at the beginning. But as technology makes these online forums more permeable, the usual excuse of “space constraints” (for shutting out diverse views) will be less and less convincing.


The Death of Fact-finding and the Birth of Truth

Today’s Supreme Court decision in Scott v. Harris is likely to have profound long-term jurisprudential consequences. At stake: whether trial courts, or appellate courts, are to have the last say on what the record means. Or, more grandly, does litigation make findings of fact, or truth?

The story itself is pretty simple. Victor Harris was speeding on a Georgia highway. Timothy Scott, a state deputy, attempted to pull him over, along with other officers. Six minutes later, after a high-speed chase captured on a camcorder on Scott’s car, Scott spun Harris’ car off the road, leading to an accident. Harris is now a quadriplegic. He sued Scott for using excessive force in his arrest. On summary judgment, the District Court denied Scott’s qualified immunity defense; the Eleventh Circuit affirmed.

Justice Scalia, writing for the majority, noted that the “first step is . . . to determine the relevant facts.” Normally, of course, courts take the non-moving party’s version of the facts as given. [Or, to be more precise, the district court resolves factual disputes in favor of the non-moving party.] But here, the videotape “quite clearly contradicts the version of the story told by respondent and adopted by the Court of Appeals.” Notwithstanding a disagreement with Justice Stevens on whether that statement was accurate (“We are happy to allow the videotape to speak for itself.” Slip Op. at 5), the Court proceeded to reject the nonmoving party’s version of the facts. To do so, it relied on the ordinary rule that the dispute of facts must be “genuine”: the Respondent’s version of the facts is “so utterly discredited by the record that no reasonable jury could have believed him.” (Slip Op. at 8).

Let’s get a bias out of the way. At the Court’s suggestion, I watched the video. I lean toward Justice Stevens’ view: “This is hardly the stuff of Hollywood. To the contrary, the video does not reveal any incidents that could even be remotely characterized as ‘close calls.’” Such a dispute over a common story immediately highlights the most serious problem with the Court’s opinion: we all see what we want to see; behavioral biases like attribution and availability lead to individualized views of events. Where the majority sees explosions, Justice Stevens sees “headlights of vehicles zooming by in the opposite lane.” (Dissent at 2, n.1 – and check out the rest of the sentence for a casual swipe against the younger members of the court.) It brings to mind the Kahan/Slovic/Braman/Gastil/Cohen work on the perceptions of risk: each Justice saw the risk of speeding through his or her own cultural prism.

But even if I agreed with the majority on what the videotape shows, the Court’s opinion is disruptive to fundamental principles of American Law. Justice Stevens suggests that the majority is acting like a jury, reaching a “verdict that differs from the views of the judges on both the District court and the Court of Appeals who are surely more familiar with the hazards of driving on Georgia roads than we are.” (Dissent at 1). There are several problems with such appellate fact finding based on videotape that the Court ignores.


Libertarians Against Subjectivism

Some commenters on my post on the Value of Pets took me to task for being too quick to discount individuals’ extraordinary attachment to their companion animals. I found some support in unlikely quarters–Will Wilkinson’s critique of “happiness research,” which recently appeared on the Cato Institute’s website. It is the most comprehensive recent comment on the literature of subjective well-being that I’ve seen, and it raises all sorts of interesting questions for those who are trying to expand the boundaries of economic analysis.

A little background: A growing number of economists have begun to question traditional measurements of well-being, such as GDP or income, and have focused instead on self-reported “subjective well-being” from interviewed subjects. “Happiness research” has come up with some counterintuitive findings, reporting extraordinary levels of life dissatisfaction in apparently prospering liberal democracies.

Wilkinson takes these social scientists to task for failing to fully describe “the dependent variable—the target of elucidation and explanation—in happiness research.” He claims there are four main possibilities:

(1) Life satisfaction: A cognitive judgment about overall life quality relative to expectations.

(2) Experiential or “hedonic” quality: The quantity of pleasure net of pain in the stream of subjective experience.

(3) Happiness: Some state yet to be determined, but conceived as something not exhausted by life satisfaction or the quality of experiential states.

(4) Well-being: Objectively how well life is going for the person living it.

Wilkinson provides some great arguments for questioning 1 and 2 as hopelessly subjective desiderata for public policy. He quotes Wayne Sumner, a Toronto philosopher, on 2: “Time and philosophical fashion have not been kind to hedonism . . . Although hedonistic theories of various sorts flourished for three centuries or so in the congenial empiricist habitat, they have all but disappeared from the scene. Do they now merit even passing attention[?]” “Life satisfaction” also comes in for heavy criticism, as epiphenomenal of various uncontrollable variables: “people have different standards for assessing how well things are going, and they may employ different standards in different sorts of circumstances.”

Of course, Wilkinson and I go in entirely different directions at this point: he tries to argue that the whole line of research is useless, while I think inconsistencies like the ones he points out demonstrate the necessity of more objective and virtue-oriented accounts of well-being. (Or, to be more precise, Wilkinson (like Freud) appears to believe that debates over happiness may ultimately best be settled by brain analysis, while I tend to think the direction of Aristotelian theorists like Seligman & Nussbaum is the way to go.) But his perspective does demonstrate that even those most committed to the idea of individual liberty as a public policy goal are not necessarily wedded to the type of subjectivity in value that would underlie societal recognition of the more extreme claims of pet-owners mentioned in that post.


Docketology

As I previously have discussed here and here, I’ve been working on a project examining when trial courts write opinions. With the help of statistician co-authors, I have investigated trial court dockets, trying to account for various factors that might lead a contested matter to either be explained through a traditional written opinion or issued in a brief order. Our resulting draft, “Docketology, District Courts, and Doctrine”, is now available from SSRN or from Selected Works. Here is an abstract:

Empirical legal scholars have traditionally modeled judicial opinion writing by assuming that judges act rationally, seeking to maximize their influence by writing opinions in politically important cases. Support for this hypothesis has come from reviews of published opinions, which find that civil rights and other “hot” topics are more likely to be discussed than other issues. This orthodoxy comforts consumers of legal opinions, because it suggests that opinions are largely representative of judicial work.

The orthodoxy is substantively and methodologically flawed. This paper starts by assuming that judges are generally risk averse with respect to reversal, and that they provide opinions when they believe that their work will be reviewed by a higher court. Judges can control risk, and maximize leisure, by writing in cases that they believe will be appealed. We test these intuitions with a new methodology, which we call docketology. We have collected data from 1000 cases in 4 different jurisdictions. We recorded information about every judicial action over each case’s life.

Using a hierarchical linear model, our statistical analysis rejects the conventional orthodoxy: judges do not write opinions to curry favor with the public or with powerful audiences, nor do they write more when they are younger, seeking to advance their careers. Instead, judges write more opinions at procedural moments (like summary judgment) when appeal is likely and fewer opinions at procedural moments (like discovery) when it is not. Judges also write more in cases that are later appealed. This suggests that the dataset of opinions from the trial courts is significantly warped by procedure and risk aversion: we cannot look at opinions to capture what the “Law” is.

These results have unsettling implications for the growing empirical literature that uses opinions to describe judicial behavior. They also challenge the meaning of doctrine, as we show that the vast majority of judicial work – almost 90% of substantive orders, and 97% of all judicial actions – is not fully reasoned, and is read only by the parties. Those rare orders that are explained by opinions are, at best, unrepresentative. At worst, they are true black sheep – representing moments and issues where the court is most obviously rejecting traditional patterns and analyses.
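
The core docketology computation, the share of orders at each procedural moment that are explained by a full opinion, can be sketched in a few lines. The docket entries below are invented for illustration, not data from the paper:

```python
# Toy illustration of the docketology counting exercise: for each procedural
# stage, what fraction of contested matters get a fully reasoned opinion
# rather than a brief order? (Entries here are made up.)
from collections import defaultdict

entries = [
    ("summary judgment", True), ("summary judgment", True),
    ("summary judgment", False),
    ("discovery", False), ("discovery", False),
    ("discovery", False), ("discovery", True),
]

counts = defaultdict(lambda: [0, 0])   # stage -> [opinions, total orders]
for stage, has_opinion in entries:
    counts[stage][1] += 1
    counts[stage][0] += int(has_opinion)

rates = {stage: ops / total for stage, (ops, total) in counts.items()}
# Higher opinion rates at appeal-prone moments (summary judgment) than at
# low-appeal-risk moments (discovery) would match the paper's hypothesis.
```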

I am very interested in receiving comments on this paper, particularly before the late summer, when we plan to submit it to the law reviews!

[Nit-seekers beware: there is one typo in the SSRN abstract. (Don't go find it, just trust me, it is there.) For what it is worth, I basically agree with Kevin Heller that SSRN should give users more control over author-submitted papers to make revision easier. ]


Neuroscience and Law

Jeffrey Rosen has a fascinating article in this week’s New York Times Magazine. While the article is balanced and careful, the “buy me, read me” headlines and several of the researchers that Rosen quotes suggest that a law-and-neuroscience revolution is brewing. I want to add my voice to the skeptics Rosen quotes, though with a different perspective. To my mind, recent findings in neuroscience will change law only at the margins; their main contribution will be to confirm the central tenets of legal realism, and they will thus have only minor effects on most legal concerns.



Lewis Libby (66%) Guilty

Price for Lewis (Scooter) Libby Charges at intrade.com

While Scott may not be “100% sold on either side” of the Libby trial, two-thirds of traders on the prediction markets think that he will be found guilty of lying on at least one charge against him. This is a notable upswing from last June, when I wrote about the market and its intricacies. I then pointed out that prices for these “conviction contracts” include a discount for the likelihood of a plea, so that in the months before trial, prices for conviction are likely to be depressed. I think that the rise in the price of Libby’s conviction stock demonstrates the point. Although there were few surprises at trial, traders raised the likelihood of conviction by over 20% after it began, representing the end of the plea discount and the market’s real estimation of the likelihood of conviction. Notably, traders don’t seem to think that a mistrial is terribly likely, although the likelihood of conviction has decreased from 75% at the beginning of the jury’s deliberations.
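
The plea-discount logic can be made concrete with a toy model. Assume, as the post suggests, that a conviction contract pays out only on a conviction at trial, so a plea leaves it worthless; the probabilities below are hypothetical illustrations, not market data:

```python
# Toy model of the "plea discount" in a conviction contract.
p_plea = 0.25               # hypothetical pre-trial chance of a plea bargain
p_convict_at_trial = 0.67   # hypothetical chance of conviction if tried

# Before trial, the price reflects both the plea risk and the trial outcome.
pretrial_price = (1 - p_plea) * p_convict_at_trial   # about 0.50

# Once trial begins, the plea risk vanishes and the price snaps to the
# market's real estimate of conviction, producing a jump with no new news.
trial_price = p_convict_at_trial
rise = trial_price / pretrial_price - 1              # about a 33% jump
```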

For what it is worth, I tend to agree with Scott that the length of the jury’s deliberations is a mark of seriousness and worth. If we wanted a quick and summary answer, we wouldn’t use a jury, we’d flip a coin.


Pie Charts: The Prime Evil

I’ve been busy working on edits for my recent paper, which attempts to present lots of data in relatively clear ways. I’ve gotten some great comments from readers. None more so than this one, in response to a proposed figure providing some descriptive statistics:

Pie charts are bad! They are ugly and provide the reader no visual assistance in comparing categories.

I had no idea if this was a generally accepted view among experts in the visual display of quantitative information. Extensive research suggested that it was:

One of the prevailing orthodoxies of this forum – one to which I whole-heartedly subscribe – is that pie charts are bad and that the only thing worse than one pie chart is lots of them.

The thing I don’t get is why: pie charts seem to be a very common form of data presentation; and folks are accustomed to measuring the area of slices of pie, so the visuals convey important data. What’s wrong with a slice of pie? (And, more importantly, are dot plots really better?)
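
For the curious, here is a minimal sketch (invented numbers, matplotlib assumed) rendering the same categories both ways; the dot plot puts every value at a position on a common scale, which is easier to compare than pie-slice angles:

```python
# Same data as a pie chart and as a dot plot, side by side (toy data).
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

labels = ["Discovery", "Summary judgment", "Trial", "Other"]
shares = [42, 31, 9, 18]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
wedges, _ = ax1.pie(shares, labels=labels)     # angles: hard to compare
ax2.plot(shares, range(len(labels)), "o")      # positions on a common axis
ax2.set_yticks(range(len(labels)))
ax2.set_yticklabels(labels)
fig.savefig("pie_vs_dot.png")
```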


Law Profs Who Code


Law professors who write about the Internet tend to develop facts through a combination of anecdote and secondary-source research, through which information about the conduct of computer users, the network’s structure and architecture, and the effects of regulation on innovation is intuited, developed through stories, or recounted from others’ research. Although I think a lot of legal writing about the Internet is very, very good, I’ve long yearned for more “primary source” analysis.

In other words, there is room and need for Internet law scholars who write code. Although legal scholars aren’t about to break fundamental new ground in computer science, the hidden truths of the Internet don’t run very deep, and some very simple code can elicit some important results. Also, there is a growing cadre of law professors with the skills needed to do this kind of research. I am talking about a new form of empirical legal scholarship, and empiricists should embrace the perl script and network connection as parts of their toolbox, just as they adopted the linear regression a few decades ago.

I plan to talk about this more in a subsequent post or two, but for now, let me give some examples of what I’m describing. Several legal scholars (or people closely associated with legal scholarship) are pointing the way for this new category of “empirical Internet legal studies”.

  • Jonathan Zittrain and Ben Edelman, curious about the nature and extent of filtering in China and Saudi Arabia, wrote a series of scripts to “tickle” web proxies in those countries to analyze the amount of filtering that occurs.
  • Edelman has continued to engage in a particularly applied form of Internet research, for example see his work on spyware and adware.
  • Ed Felten—granted, a computer scientist not a law professor—and his graduate students at Princeton have investigated DRM and voting machines with a policy bent and a particular focus on applied, clear results. Although the level of technical sophistication found in these studies is unlikely to be duplicated in the legal academy soon, his methods and approaches are a model for what I’m describing.
  • Journalist Kevin Poulsen created scripts that searched MySpace’s user accounts for names and zip codes that matched the DOJ’s National Sex Offender Registry database, and found more than 700 likely matches.
  • Finally, security researchers have set up vulnerable computers as “honeypots” or “honeynets” on the Internet, to give them a vantage point from which to study hacker behavior.
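
To give a flavor of how simple this code can be: a Poulsen-style match is essentially a join of two datasets on (name, zip code). The sketch below uses invented records, not real registry or MySpace data:

```python
# Minimal sketch of record matching across two datasets on (name, zip).
# All names and records here are fabricated for illustration.
profile_records = [
    ("john doe", "30303"),
    ("jane roe", "10001"),
    ("sam smith", "60601"),
]
registry_records = [
    ("john doe", "30303"),
    ("alex jones", "90210"),
]

registry_keys = set(registry_records)
likely_matches = [rec for rec in profile_records if rec in registry_keys]
# Real matching would need fuzzier logic (nicknames, misspellings), which is
# why such results are "likely" matches rather than confirmed ones.
```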

What are other notable examples of EILS? Let’s keep with the grand Solovian tradition, and call this a Census. Is this sub-sub-discipline ready to take off, or should we mere lawyers leave the coding to the computer scientists?


Replicability, Exam Grading, and Fairness

What does it mean to grade fairly?

At my law school, and presumably elsewhere, law students aggrieved by a grade can petition that it be changed. Such petitions are often granted in the case of mathematical error, but usually denied if the basis is that on re-reading, the professor would have reached a different result. The standard of review for such petitions is something like “fundamental fairness.” In essence, replicability is not an integral component of fundamental fairness for these purposes.

Law students may object to this standard, and its predictable outcome, asserting that if the grader cannot replicate his or her outcomes when following the same procedure, then the total curve distribution is arbitrary. On this theory, a student should at the least have the right to a new reading of their test, standing alone and without the time pressure that full-scale grading puts on professors.

To which the response is: grading is subjective, and not subject to scientific proof. Moreover, grades don’t exist as platonic ideals but rather as distributions among students: only when reading many exams side by side can such a ranking be observed. We wouldn’t even expect that one set of rankings would be very much like another: each is sort of like a random draw of a professor’s gut reactions to the test on that day.
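
That “random draw” intuition can be made concrete with a small simulation: model each reading of an exam as the student’s true quality plus independent reader noise, and ask how well the rankings from two readings agree. The noise level below is an illustrative assumption, not a measurement of actual grading:

```python
# Simulate two independent gradings of the same exams and compare rankings.
import random

random.seed(0)
n = 100
true_quality = [random.gauss(0, 1) for _ in range(n)]

def one_reading(noise_sd):
    # Each reading = true quality + that day's reader noise.
    return [q + random.gauss(0, noise_sd) for q in true_quality]

def rank(scores):
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0] * len(scores)
    for position, i in enumerate(order):
        ranks[i] = position
    return ranks

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

r1, r2 = rank(one_reading(1.0)), rank(one_reading(1.0))
agreement = pearson(r1, r2)
# When noise rivals true quality, the two rankings agree only partially,
# which is the replicability worry in a nutshell.
```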

This common series of arguments tends to engender cynicism among high-GPA and low-GPA students alike. To the extent that law school grading is underdetermined by work, smarts and skill, it is a bit of a joke. The importance placed on these noisy signals by employers demonstrates something fundamentally bitter about law – the power of deference over reason.
