Category: Empirical Analysis of Law


Update on Plea Bargains and Prediction Markets

In Let Markets Help Criminal Defendants, I wrote that “If I were running a public defender service, I’d consider setting up an online prediction market for the conviction of my clients.” I still think this is a good idea, but someone has suggested a serious problem that would have to be remedied before the scheme could work.

Right now, prediction market contracts on judicial events, like the conviction of Lewis Libby, pay off at 100 for conviction and at 0 for any other ending of the charges, including a plea. This creates noise that renders the prices useless for criminal defendants trying to decide whether they ought to plead. As I didn’t fully appreciate before, traders must be estimating the probability of conviction discounted by the likelihood of a plea, so prices sit below the market’s actual estimate of a guilty verdict independent of a plea. If the current price of Libby’s “stock” is .40, that does not mean conviction at trial is 40% likely. It means that traders think it is 60% likely that Libby will win at trial, receive a mistrial, obtain a dismissal, be granted a pardon, or plead. I imagine that the likelihood of a plea accounts for a large share of that figure.

If traders thought that conviction prices affected defendant behavior, then presumably they’d put in sell orders at prices above those at which rational defendants would plead. This would put downward pressure on prices and make the entire system useless from defense counsel’s perspective.

For my system to work, you’d have to exclude the possibility of a plea (i.e., nullify all bets if there is a plea). Of course, this still would create some dynamic tension, as bettors presumably would become eager to invest time and trade only as pleas become less likely – near trial, or in jurisdictions like Philadelphia, where the District Attorney has a no-plea policy. But the resulting prices would be more informative than those offered by the current contracts.
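A toy calculation makes the pricing problem concrete. The probabilities below are hypothetical, not drawn from any actual market; the point is only that a contract voided on a plea recovers the conditional conviction probability that defense counsel actually cares about.

```python
# Toy illustration with hypothetical probabilities -- not market data.
p_plea = 0.50                  # chance the case ends in a plea
p_convict_given_no_plea = 0.80 # chance of conviction if there is no plea
                               # (acquittal, mistrial, dismissal, and pardon
                               # are lumped into the remaining 20%)

# Current contract: pays 1 on conviction, 0 on any other outcome (plea
# included). Its price is the *unconditional* conviction probability.
price_current = (1 - p_plea) * p_convict_given_no_plea   # 0.40

# Proposed contract: all bets are nullified (refunded) on a plea, so its
# price reflects conviction probability *conditional on no plea*.
price_plea_voided = p_convict_given_no_plea              # 0.80

print(f"current contract:     {price_current:.2f}")
print(f"plea-voided contract: {price_plea_voided:.2f}")
```

With these (made-up) numbers, the two contracts trade at .40 and .80 respectively: the same underlying case, but only the second price tells a defendant anything useful about her odds at trial.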


Setting the Bar, and the Limits of Empirical Research

Larry Ribstein and Jonathan Wilson are debating the merits of a strong, exclusionary, state bar.

Wilson’s position is pro-Bar:

Deregulating lawyers as punishment or retribution for a profession that has lost its way would be a recipe for disaster. Deregulating the practice of law would open the floodgates to fraud of every conceivable variety and would only compound the problems that the readers of these pages see in our civil justice system.

Ribstein, naturally, is pro-market:

Big law firms provide a strong reputational “bond” . . . Lawyers can be certified by private organizations, including existing bar associations, which can compete with each other by earning reputations for reliability. . . . We could have stricter pleading rules, or require losers to pay winners’ fees. Or how about this: let anybody into court, but adopt a loser pays rule for parties that come into court represented by anything less than a lawyer with the highest possible trial certificate . . . Even if only licensing would effectively deal with this problem, the licensing scheme should be designed specifically to protect the courts. Instead of requiring the same all-purpose license to handle a real estate transaction and to prosecute a billion-dollar class action, we could have a special licensing law for courtroom practice, backed by tight regulation of trial lawyers’ conduct – something like the traditional barrister/solicitor distinction in the UK.

Josh Wright has picked up the thread of the discussion at TOTM, and suggests that empirical evidence would inform this debate. Unfortunately, as both Larry and he note, there is a paucity of useful studies on point:

If I recall, the Federal Trade Commission has recently been involved in some advocacy efforts in favor of limiting the scope of unauthorized practice of law statutes. My sense is that a number of states must have relaxed unauthorized practice of law restrictions (I think Arizona is one), or similarly relaxed restrictions on lawyer licensing, such that one could directly test the impact of these restrictions on consumers in terms of prices and quality of service. There must be work on this somewhere.

Solove and I have gone around on this question before (see here for the powerful pro-licensing position, and here and here for Solove’s “response”).

Generally, I like Josh’s intuition. It would be quite useful to look to Arizona, or other natural experiments, to help us assess the utility of the Bar Exam and other licensing barriers. Surely, there is no reason in the abstract to preserve an ancient system that keeps lawyer fees artificially high, diverts millions of dollars from law students to Barbri, and causes no end of mental anguish, simply because it provides a new jurisprudential lens!
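To gesture at what such a study might look like, here is a minimal difference-in-differences sketch. Everything in it is hypothetical – the data file, the variable names, the assumption that we can observe prices cleanly – and it captures only the design that Josh’s Arizona example invites, not any actual study.

```python
# A minimal difference-in-differences sketch with hypothetical data:
# compare legal-service prices in a state that relaxed unauthorized-practice
# restrictions (the "treated" state) against states that did not, before
# and after the rule change.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per transaction, with columns
#   price   -- fee charged for a standardized service (e.g., a simple will)
#   treated -- 1 if the transaction occurred in the deregulating state
#   post    -- 1 if it occurred after the rule change
df = pd.read_csv("legal_service_prices.csv")

# The coefficient on treated:post is the diff-in-diff estimate of the price
# effect of deregulation. Quality, as argued below, is the hard part.
model = smf.ols("price ~ treated + post + treated:post", data=df).fit()
print(model.summary())
```

Note that the easy half of the question is on the left-hand side of that regression: price. The hard half is what happens when you try to swap in “quality” as the dependent variable, which is where my skepticism comes in.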

But I’m quite skeptical that this is an answerable question, at least in the short term. My thinking is informed somewhat by the new Malcolm Gladwell New Yorker essay about basketball. Although Gladwell extols the virtues of statistical analysis (instead of anecdote, judgment, and valuing the joy of watching Allen Iverson triumph despite his height), the lesson I took from the piece was that:

Most tasks that professionals perform . . . are surprisingly hard to evaluate. Suppose that we wanted to measure something in the real world, like the relative skill of New York City’s heart surgeons. One obvious way would be to compare the mortality rates of the patients on whom they operate—except that substandard care isn’t necessarily fatal, so a more accurate measure might be how quickly patients get better or how few complications they have after surgery. But recovery time is a function as well of how a patient is treated in the intensive-care unit, which reflects the capabilities not just of the doctor but of the nurses in the I.C.U. So now we have to adjust for nurse quality in our assessment of surgeon quality. We’d also better adjust for how sick the patients were in the first place, and since well-regarded surgeons often treat the most difficult cases, the best surgeons might well have the poorest patient recovery rates. In order to measure something you thought was fairly straightforward, you really have to take into account a series of things that aren’t so straightforward.

I know how I would test the direct cost of legal services in Pennsylvania, and I’ve no doubt that it would go down if I (by fiat) abolished the state bar. But I have no good idea of how we can measure lawyer “quality.” To take something as obvious as criminal defense, some really good public defenders will lose every case for a year, but take comfort in having not lost on the top count of a single indictment. Saying that a public defender who went 0 for 50 in 2005 was a less “good” attorney than a prosecutor who went 50-0 would be a real problem. Facts drive litigation, and that makes empirical investigation of lawyer quality as a quantitative matter hard. And that is for attorneys who perform in public. How do you evaluate the relative strength of deal counsel on a gross level? Count the typos in the documents? Talk with the business folks, and ask who got in the way less? [Obviously, deal counsel can be very good and very bad; the point is that we need metrics that are easily coded by, say, research assistants.]

So here is the question for our readers. Can you design an empirical project that measures both litigation and transactional practice quality as a function of licensing?


Empirical Studies at ALEA

Bill Henderson (at the ELS Blog) has a very useful round-up of empirical papers presented at the recent ALEA conference. Blog-traveller Kate Litvak comes in for special praise:

Kate Litvak [presented] “The Effect of the Sarbanes-Oxley Act on Non-US Companies Listed in the U.S.,” which was an extremely well-done event study that used a natural experiment approach to capture the market reaction to SOX (it was generally negative). In the last couple of years, Kate, who does not have a PhD, has spent a lot of time learning sophisticated econometric techniques. It really showed. Very impressive (and easy to follow) presentation.

To be frank, I’ve been quite skeptical of studies showing a negative relationship between SOX and equity prices, on several grounds: (1) my practice experience managing the creation of event studies that dealt with changing legal regimes suggested that results are rarely as robust as one might hope; (2) the passage and eventual implementation of SOX were so drawn out that event studies would seem hard to perform; and (3) the debate is quite politicized, with folks already disposed to dislike federalization of corporate law leading the charge on the empirical front as well. But, having read Kate’s paper, I’m inclined to rethink my position. It is well worth a read.
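For readers who haven’t run one: the basic event-study machinery is simple, even if getting it right (as Kate apparently did) is not. The sketch below uses a hypothetical data file and a single illustrative event date; real SOX studies must wrestle with many candidate dates – committee votes, passage, SEC implementation – which is exactly the drawn-out-passage problem noted above.

```python
# A minimal event-study sketch with hypothetical data -- not Kate's design.
# Abnormal return = actual return minus the return predicted by a market
# model estimated on a pre-event window.
import pandas as pd
import statsmodels.api as sm

# Hypothetical file: daily returns for one cross-listed firm ("firm") and
# a market index ("market"), indexed by date.
returns = pd.read_csv("daily_returns.csv", index_col="date", parse_dates=True)

# Estimate the market model on pre-event data: r_firm = a + b * r_market + e
estimation = returns.loc[:"2002-01-01"]
X = sm.add_constant(estimation["market"])
market_model = sm.OLS(estimation["firm"], X).fit()

# Cumulative abnormal return (CAR) in a short window around one candidate
# event date (here, an illustrative window near SOX's passage).
window = returns.loc["2002-07-22":"2002-07-28"]
predicted = (market_model.params["const"]
             + market_model.params["market"] * window["market"])
car = (window["firm"] - predicted).sum()
print(f"CAR around passage: {car:.4f}")
```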


Nominally Empirical Evidence of Unraveling in the Law Review Market

In a previous post, I observed that “the time for submitting law review articles is creeping backwards.” I then hypothesized that “we are experiencing what Alvin Roth called the ‘unraveling’ of a sorting market.” This is bad news:

Authors may not be able to get any sense at all of the “market value” of their article (loosely reflected, the myth goes, by multiple offers at a variety of journals). Conversely, journals feeling pressure to move quickly will increasingly resort to proxies for quality like letterhead, prior publication, and the eminences listed in the article’s first footnote (which tell you who an author’s friends and professional contacts are).

At the end of that post, I promised to “explore empirical evidence that this is in fact an unraveling market problem (as opposed to anecdote, to the extent possible).” As it turns out, this was a hard promise to deliver on. There simply isn’t data out there – at least none that I’ve been able to find – that collects historical information about the submission process to law reviews. This is somewhat surprising. Law professors are insular, interested in navel gazing, and well-motivated to do anything other than grading. Moreover, the process of submission is an economically consequential activity. But only recently, in two works-in-progress, has there been any attempt to systematically get at this problem. See here, and here.

I thought I’d make a modest contribution to the field by contributing some data from Temple in this recent submission season, and by asking our readers to contribute their experience as well. The sample size is tiny; the respondents are self-selecting. This is, therefore, Co-Op’s second “very non-scientific survey” this week. It’s a trend! The data is not meant to suggest any definite conclusions, but rather to help researchers with hypothesis formation. But I’ll offer some grand thoughts at the end of this post anyway.



Reefer Madness At The FDA

One of the most troubling behaviors of the current administration is its repeated willingness to manipulate the distribution of empirical data with which it disagrees. From global warming to crime, the government seems more interested in promoting its policy preferences than in transparently reporting the results of the research it performs or supports. The administration has a legitimate right to advocate for its positions. But if it wants to argue that marijuana ought to be illegal, as the FDA did last week in its Inter-Agency Advisory Regarding Claims That Smoked Marijuana Is A Medicine, it seems to me the better policy – both from an honesty and a credibility point of view – is to concede the facts that cut against you, and make your case anyway. In its press release last week, the FDA asserted that:

A past evaluation by several Department of Health and Human Services (HHS) agencies, including the Food and Drug Administration (FDA), Substance Abuse and Mental Health Services Administration (SAMHSA) and National Institute for Drug Abuse (NIDA), concluded that no sound scientific studies supported medical use of marijuana for treatment in the United States.

True as this may be, a 1999 review of studies by the Institute of Medicine suggests that marijuana offers potential therapeutic value for pain relief, control of nausea and vomiting, and appetite stimulation. The same report notes that “until a non-smoked, rapid-onset cannabinoid drug delivery system becomes available…there is no clear alternative” to smoking. Why can’t the administration concede the existence of this data review by another federally chartered body?

It seems to me that the administration is driven by a decision, made ex ante, that marijuana ought to be illegal. If it were truly interested in investigating the utility of the drug, it wouldn’t make serious research into its value exceedingly difficult. So the federal government ignores data suggesting the value of marijuana. It makes it hard to generate more research on marijuana. And it is therefore able to rail against the many states that have legalized marijuana for medical purposes. There are reasons to believe that, if the government allowed the debate to flourish – by sharing the data that do exist and promoting the production of new data – its position might become weaker. But if marijuana is in fact effective as a medicine, perhaps the FDA should approve it. And if the government’s real argument is something other than efficacy – that the drug is very likely to be misused, for example, or that its increased availability will lead to a rise in DUI cases – then it should make that case instead.

In some respects, this approach to policy debate reminds me of an argument made by death penalty opponents: that the death penalty is bad policy because it is expensive. But why is it expensive? Because opponents litigate these cases very aggressively. There are many good reasons why some people may oppose the death penalty. But it seems to me that when the people complaining about the cost of capital punishment are the people generating the expense, one should at least be skeptical. I’m not denying that the expense argument might mask a deeper claim: perhaps these cases are so expensive, and require so many appeals, because the state fails to provide excellent counsel in the first instance. But if this is true, wouldn’t a more logical solution to the cost problem be a requirement that states spend money on quality counsel up front, to save in the long haul? In the end, the real claim underneath cost is fairness: the quality of a person’s lawyer should not determine whether he receives a death sentence. That may not “sell” as well to certain voters, but it is the more honest argument.

As for reefer, when government is making the arguments, I think we have a right to expect honesty. The FDA’s dubious pronouncement appears driven primarily by the administration’s emotional hatred of marijuana. Personally, I’d prefer FDA decisions to be grounded in evidence-based research rather than simply madness.


The Most Cited Cases in Administrative Law

Some empirical research is more blog-worthy than essay-worthy. Running citation counts in Westlaw’s Allfeds database over lunch may be an example.

Others have observed that Chevron v. NRDC may become the most cited case of any kind by federal courts, displacing Erie v. Tompkins. It has garnered 7909 citations, far ahead of the next most cited case in administrative law, Universal Camera Corp. v. NLRB (substantial evidence), with 4801 citations. Following that, it’s a tight race between Mathews v. Eldridge (due process), with 4293 citations, and Citizens to Preserve Overton Park v. Volpe (hard look), with 4227. The scope-of-judicial-review case that has underperformed is MVMA v. State Farm (arbitrary and capricious), with 2276 citations, fewer than the somewhat quaint Goldberg v. Kelly’s (due process) 2377 citations and the narrow-issue-area Abbott Labs v. Gardner’s (ripeness) 2910 citations. Chevron has also stolen a lot of Vermont Yankee v. NRDC’s (rulemaking) glory – it has 1059 citations. But my not-so-dark-horse candidate for the silver medal in the future is Lujan v. Defenders of Wildlife (standing), with 3775 cites. Not too bad for a case from 1992, and I suspect that the government has installed a shift-F4 macro for the case on every one of its attorneys’ computers.


What does Chevron deference have to do with the Appellate Body of the WTO?

Other than that administrative law and trade law are the two subjects my students endure from me, the connection between Chevron v. NRDC and the GMO dispute between the United States and Europe is tenuous. Perhaps we can broadly characterize both as vectors through which the federal government vindicates its policies via judicial review – be it domestic or international. But perhaps not.

How often does the United States prevail in these fora?

Orin Kerr, a terrible writer, but a perspicacious empiricist, found that in 1995 and 1996 agency interpretations received Chevron deference 73% of the time in the courts of appeals (not online, but see 15 Yale J on Reg at 30). Cass Sunstein and Thomas Miles are now at work on a larger study of Chevron deference over a longer period of time, involving three characteristic government agencies – and even if their results show less deference to agency interpretations, the conventional wisdom is that the United States loses nowhere more often than it loses in the WTO.

But that’s not how the USTR calculates it. In its view, “the Administration’s record in WTO cases involving the United States is 13 wins and 10 losses in three and a half years, a 56% success rate. From 1995-2000, the U.S. record was 18 wins and 15 losses, a 54% success rate.”

Wins – or “wins,” as Joost Pauwelyn usefully reminds us – aren’t as hard to come by for fearsome American government litigators as one might think, no matter the forum. I find the apples-and-oranges comparison interesting, although not rigorous.
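For concreteness, here is the arithmetic behind the two sets of figures being compared – the Chevron rate is Kerr’s, and the WTO records come from the quoted USTR statement:

```python
# Win rates compared in this post. The Chevron figure is Kerr's (1995-96
# courts of appeals); the WTO win-loss records are from the quoted USTR
# statement, which reports 56% and 54% (apparently truncating the decimals).
chevron_deference_rate = 0.73          # agency wins under Chevron
wto_recent = 13 / (13 + 10)            # 13 wins, 10 losses -> ~56.5%
wto_1995_2000 = 18 / (18 + 15)         # 18 wins, 15 losses -> ~54.5%
print(f"Chevron: {chevron_deference_rate:.1%}; "
      f"WTO (recent): {wto_recent:.1%}; "
      f"WTO (1995-2000): {wto_1995_2000:.1%}")
```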


Qualitative Empirical Legal Research

A big welcome to the blogosphere for the new Empirical Legal Research Blog. I applaud the empirical move because I think this sort of research adds substantial value to our understanding of how law functions both internally and within society. As I’ve suggested in a comment over there, however, I do think that many people in the legal academy have come to conflate the idea of empirical work with quantitative work. As people in coordinate social science disciplines well know (because they, unlike most vanilla JDs, have had formal methodological training), the concept of empirical work includes both quantitative and qualitative work. This is not to say that the quantitative and qualitative camps are always so cozy. Number crunchers sometimes think qualitative work is too squishy or subjective. The qualitative folks sometimes think that the use of numbers creates a false aura of objectivity. But many serious empirical scholars – particularly those trained in recent years – understand that both types of work are necessary to further the grand project of increasing human knowledge. I hope the folks over at the new blog take qualitative work seriously. I suspect that in the next few years we’ll see qualitative researchers gain a stronger footing within the legal academy. At least I hope so.