Category: Empirical Analysis of Law


Greiner and Pattanayak: The Sequel

In a draft essay, Service Delivery, Resource Allocation and Access to Justice: Greiner and Pattanayak and the Research Imperative, Tony Alfieri, Jeanne Charn, Steve Wizner, and I reflect on Jim Greiner and Cassandra Pattanayak’s provocative article reporting the results of a randomized controlled trial evaluating legal assistance to low-income clients at the Harvard Legal Aid Bureau. (The Greiner and Pattanayak article was the subject of a Concurring Opinions symposium last March.) Studying the outcomes of appeals from initial denials of unemployment insurance benefit claims, Greiner and Pattanayak asked: what difference does legal representation make? Their answer is that “an offer of HLAB representation had no statistically significant effect on the probability that a claimant would prevail, but that the offer did delay the adjudicatory process.” That is, not only was an offer of legal assistance immaterial to the case outcome; it may actually have harmed clients’ interests by delaying resolution.
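For readers who want the mechanics, the headline finding boils down to a comparison of win rates across randomized groups. Here is a minimal sketch in Python, with wholly invented numbers; the actual study used real HLAB data and more careful methods.

```python
# A minimal sketch, with invented numbers, of the core comparison in a
# randomized trial like Greiner and Pattanayak's: do claimants who are
# offered representation prevail at a different rate than those who are not?
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

prevailed = np.array([70, 66])     # hypothetical wins: offer group, no-offer group
group_size = np.array([100, 100])  # hypothetical group sizes

z, p = proportions_ztest(prevailed, group_size)
print(f"win rates {prevailed[0]/group_size[0]:.2f} vs "
      f"{prevailed[1]/group_size[1]:.2f}, p = {p:.3f}")
# A large p-value is all "no statistically significant effect" means: the
# observed gap is within the range that chance alone could produce.
```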

The Greiner and Pattanayak findings challenge our intuition, our experience, and our deeply held professional belief that lawyer representation of indigent clients in civil matters is fundamental to the pursuit of justice. Our first reaction is that the study must have fatal conceptual or methodological flaws – that the researchers studied the wrong thing in the wrong way. Even when we learn that the study is credible and well designed, we doubt that this kind of research is a worthwhile use of our time or money relative to serving needy clients. Finally, and perhaps most importantly, we worry that the published results will serve only as fodder for the decades-long political assault on legal services for the poor.

If replicated across venues, however, studies like Greiner and Pattanayak’s can tell us a great deal about individual representation, program design, and systemic access-to-justice questions. In fact, we cannot make genuine progress in any of these areas – much less marshal the case for more robust legal aid investments and a right to counsel in some civil cases – without better evidence of when, where, and for whom representation makes a difference. Fortunately, developments in law schools and the profession, together with a growing demand for evidence-driven policymaking, provide the support, infrastructure, and incentive for such research. For these reasons, we urge legal services lawyers and clinical law professors to collaborate on an expansive empirical research agenda.

 


Dockets and Data Breach Litigation

Alessandro Acquisti, Sasha Romanosky, and I have a new draft up on SSRN, Empirical Analysis of Data Breach Litigation.  Sasha, who’s really led the charge on this paper, has presented it at many venues, but this draft is much improved (and is the first public version).  From the abstract:

In recent years, a large number of data breaches have resulted in lawsuits in which individuals seek redress for alleged harm resulting from an organization losing or compromising their personal information. Currently, however, very little is known about those lawsuits. Which types of breaches are litigated, which are not? Which lawsuits settle, or are dismissed? Using a unique database of manually-collected lawsuits from PACER, we analyze the court dockets of over 230 federal data breach lawsuits from 2000 to 2010. We use binary outcome regressions to investigate two research questions: Which data breaches are being litigated in federal court? Which data breach lawsuits are settling? Our results suggest that the odds of a firm being sued in federal court are 3.5 times greater when individuals suffer financial harm, but over 6 times lower when the firm provides free credit monitoring following the breach. We also find that defendants settle 30% more often when plaintiffs allege financial loss from a data breach, or when faced with a certified class action suit. While the compromise of financial information appears to lead to more federal litigation, it does not seem to increase a plaintiff’s chance of a settlement. Instead, compromise of medical information is more strongly correlated with settlement.
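A quick methodological gloss: “binary outcome regressions” here means models like logistic regression, whose exponentiated coefficients yield the odds ratios quoted above. Here is a minimal sketch along those lines; the variables and effect sizes are invented to echo the abstract’s figures, not the paper’s actual specification.

```python
# A hedged sketch of a "binary outcome regression": a logistic model of
# whether a breach is litigated. Data are simulated; the variable names and
# effect sizes are invented to mirror the abstract, not the authors' actual
# dataset or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
financial_harm = rng.integers(0, 2, n)
credit_monitoring = rng.integers(0, 2, n)
# Invented effects: harm raises the odds of suit about 3.5x (exp(1.25)),
# while credit monitoring lowers them about 6x (exp(-1.8)).
logit_p = -1.0 + 1.25 * financial_harm - 1.8 * credit_monitoring
sued = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"sued": sued, "financial_harm": financial_harm,
                   "credit_monitoring": credit_monitoring})
fit = smf.logit("sued ~ financial_harm + credit_monitoring", data=df).fit(disp=0)
print(np.exp(fit.params))  # exponentiated coefficients are the odds ratios
```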

A few thoughts follow after the jump.



R.I.P. Larry Ribstein

Larry Ribstein, who died earlier this week, was a galvanic force as a scholar and blogger.  I join those who’ve expressed sadness and loss at his untimely passing.  I figured I’d add two comments.

First, as others have commented, Larry always told you when he thought you were being an idiot.  When I presented one of my early empirical papers at an otherwise warm-and-friendly Canadian Law and Economics conference, Larry provided comments from the audience that had me wanting to go back to running fire drills at Cravath.  My god, how he schooled me!  But he was basically right, and it was business, not personal.  Some years later, he provided crucial encouragement on a new (better?) empirical paper.  Praise felt twice as good coming from him.  What a teacher he must have been!

Second, I’ve recently read his book (coauthored with Erin O’Hara) The Law Market.  I think it’s simply amazing – provocative, and in some ways as mind-opening as Stuntz’s Collapse of American Criminal Justice.  Law and economics has lost a great and unique voice.


CELS VI: Half a CELS is Statistically Better Than No CELS

[Photo caption: Northwestern’s Stained Glass Windows Made Me Wonder Whether Some Kind of Regression Was Being Proposed]

As promised, I’m filing a report from the Sixth Annual Conference on Empirical Legal Studies, held 11/4-11/5 at Northwestern Law School.  Several attendees approached me and remarked on my posts from CELS V, IV, and III. That added pressure, coupled with my missing half of the conference due to an unavoidable conflict, has delayed this post substantially.  Apologies!  Next time, I promise to attend from the opening ceremonies until they burn the natural law figure in effigy.  Next year’s conference is at Stanford.  I’ll make a similar offer to the one I’ve made in the past: if the organizing committee pays my way, I promise not only to blog the whole thing, but to praise you unstintingly.  Here’s an example: I didn’t observe a single technical or organizational snafu at Northwestern this year.  Kudos to the organizing committee: Bernie Black, Shari Diamond, and Emerson Tiller.

What I saw

I arrived Friday night in time for the poster session.  A few impressions.  Yun-chien Chang’s Tenancy in ‘Anticommons’? A Theoretical and Empirical Analysis of Co-Ownership won “best poster,” but I was drawn to David Lovis-McMahon & N.J. Schweitzer’s Substantive Justice: How the Substantive Law Shapes Perceived Fairness.  Overall, the trend toward professionalization in poster display continues unabated.  Even Ted Eisenberg’s poster was glossy and evidenced some post-production work — Ted’s posters at past sessions were, famously, not as civilized. Gone are the days when you could throw some powerpoint slides onto a board and talk about them over a glass of wine!  That said, I’m skeptical about poster sessions generally.  I would love to hear differently from folks who were there.

On Saturday, bright-eyed and caffeinated, I went to a Juries panel, where I got to see three pretty cool papers.  The first, by Mercer/Kadous, was about how juries are likely to react to precise/imprecise legal standards.  (For a previous version, see here.) Though the work was nominally about auditing standards, it seemed generalizable to other kinds of legal rules.  The basic conclusion was that imprecise standards increase the likelihood of plaintiff verdicts, but only when the underlying conduct is conservative but deviates from industry norms.  By contrast, if the underlying conduct is aggressive, jurors return fewer pro-plaintiff verdicts.  Unlike most such projects, the authors permitted a large number of mock juries to deliberate, which added a degree of external validity.  Similarly worth reading was Lee/Waters’ work on jury verdict reporters (bottom line: reporters aren’t systematically pro-plaintiff, as the CW suggests, but they are awfully noisy measures of what juries are actually doing).  Finally, Hans/Reyna presented some very interesting work on the “gist” model of jury decisionmaking.

At 11:00, I had to skip a great paper by Daniel Klerman whose title alone was worth the price of admission – The Selection of Thirteenth-Century Disputes for Litigation.  Instead, I went to Law and Psychology III.  There, Kenworthey Bilz presented Crime, Tort, Anger, and Insult, a paper which studies how attribution and perceptions of dignitary loss mark a psychological boundary between crime and tort cases.  Bilz presented several neat experiments in service of her thesis, among them a priming survey – people primed to think about crimes complete the word “ins-” as “insult,” while people primed to think about torts complete it as “insurance.”  (I think I’ve got that right – the paper isn’t available online, and I’m drawing on two-week-old memories.)

At noon, Andrew Gelman gave a fantastic presentation on the visualization of empirical data.  The bottom line: wordles are silly and convey no important information.  Actually, Andrew didn’t say that.  I just thought that coming in.  What Andrew said was something more like “can’t people who produce visually interesting graphs and people who produce graphs that convey information get along?”

Finally, I was the discussant at an Experimental Panel, responding to Brooks/Stremitzer/Tontrup’s Framing Contracts: Why Loss Framing Increases Effort.  Attendees witnessed my ill-fated attempt to reverse the order of my presentation on the fly, leading me to neglect the bread in the praise sandwich.  This was a good teaching moment about academic norms. My substantive reaction to Framing Contracts is that it was hard to know how much the paper connected to real-world contracting behavior, since the kinds of decision tasks the experimental subjects were asked to perform were stripped of the relational and reciprocal norms that characterize actual deals.

CELS: What I missed

The entire first day!  One of my papers with the cultural cognition project, They Saw a Protest, apparently came off well.  Of course, there was also tons of great stuff not written from within the expanding cultural cognition empire.  Here’s a selection: on lawyer optimism; on public housing, enforcement and race; on probable cause and hindsight judging; and several papers on Iqbal, none of which appear to be online.

What did you see & like?


Reversal Rates, Reconsidered

What is the meaning of an appellate court’s “reversal rate”?  Opinions vary.  (My view, expressed succinctly, is “basically nothing.”) However conceived, we ought at least to be measuring reversal correctly.  But two lawyers at Hangley Aronchick, a Philadelphia law firm, think that scholars (and journalists) have conceptualized reversal in entirely the wrong way.

According to John Summers and Michael Newman, we’ve forgotten that every case the Supreme Court takes implicitly carries with it shadow cases from other circuits that have ruled on the same issue — that is, the Supreme Court doesn’t just “reverse” the circuit on direct appeal; it also affirms (or reverses) coordinate circuits while resolving a split.  Thus, both our numerator and our denominator have been wrong.  They’ve written up the results of this pretty interesting approach to reversal in a paper you can find blurbed here.   Among the highlights: (1) reversal is less common than is commonly supposed; (2) the Court doesn’t predictably follow the majority of circuits; (3) there are patterns of concordance between circuits in analyzing issues; and (4) even under the new approach, the Ninth Circuit is still the least loyal agent of the Supreme Court.
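To make the accounting concrete, here is a toy sketch of the idea as I understand it, using wholly invented cases: each decision resolving a split scores the circuit on direct appeal once, plus an implicit affirmance or reversal for every circuit that took a side.

```python
# A toy version of the split-adjusted accounting, with wholly invented cases.
# Each decision counts the circuit on direct appeal once, and every "shadow"
# circuit that took a side in the split gets an implicit affirmance or reversal.
from collections import Counter

cases = [
    {"below": "9th", "reversed": True,  "with_court": ["7th"],        "against": ["2d"]},
    {"below": "7th", "reversed": False, "with_court": ["4th"],        "against": ["9th"]},
    {"below": "2d",  "reversed": True,  "with_court": ["5th", "7th"], "against": []},
]

affirmed, reversed_ = Counter(), Counter()
for c in cases:
    (reversed_ if c["reversed"] else affirmed)[c["below"]] += 1  # direct appeal
    for circ in c["with_court"]:
        affirmed[circ] += 1   # implicitly affirmed when the Court adopts its rule
    for circ in c["against"]:
        reversed_[circ] += 1  # implicitly reversed when its rule is rejected

for circ in sorted(set(affirmed) | set(reversed_)):
    total = affirmed[circ] + reversed_[circ]
    print(f"{circ}: reversal rate {reversed_[circ] / total:.2f} (n={total})")
```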

I think that this method has real promise, and I bet that folks who are interested in judicial behavior will want to check it out.


In Praise of Complexity

Earlier this month, right here on this very blog, Dave Hoffman pontificated about two of my favorite subjects: empirical legal studies and baseball. Primarily, Dave wondered whether empirical legal research might face the same problem as sabermetric baseball analysis: inaccessible complexity. I won’t rehash his argument because he did a very good job of explaining it in the original post. Although I completely agree with his conclusion that empirical legal studies should seek to be more accessible (a point I always note at the end of the introductions to my empirical work), I disagree with his contention that empirical legal studies might face widespread incomprehensibility due to growing complexity. Because I think it is a helpful analogy, I’ll borrow Dave’s example of advanced statistics in baseball.


Law’s Arbitrary Endpoints

For many purposes, a season is an arbitrary endpoint for measuring a baseball player’s success.  To extract utility from performance data over time, you need to pick endpoints that make sense in light of what you are measuring.  Thus, if we want to know how much to discount a batter’s achievements by luck, it might not make sense to look seasonally – because there’s no good reason to expect that luck is packaged in April-to-October chunks.  Nonetheless, sabermetricians commonly do talk about BABIP seasonally — thus, Aaron Rowand had an “unusually lucky 2007,” and has since regressed himself off of a major league payroll.  Jayson Werth, similarly, is feeling the bite of lady luck this “season.”  (For pitchers, the analysis makes more sense, since the point of BABIP is that pitchers can’t control outcomes once the ball hits a bat.  Thus, the Phillies’ fifth starter is supposedly not nearly as good as his haircut suggests he ought to be.)
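For non-baseball readers, BABIP is just (H - HR) / (AB - K - HR + SF), computed over whatever window you choose. A quick sketch with fabricated game logs shows how much the statistic can move as you slide the endpoints.

```python
# BABIP = (H - HR) / (AB - K - HR + SF), over whatever window you pick.
# The game logs below are fabricated purely to show endpoint sensitivity.
import pandas as pd

games = pd.DataFrame({
    "H":  [2, 0, 1, 3, 0, 1, 2, 0, 1, 1],
    "HR": [0, 0, 0, 1, 0, 0, 1, 0, 0, 0],
    "AB": [4, 4, 5, 4, 3, 4, 5, 4, 4, 3],
    "K":  [1, 2, 0, 1, 1, 0, 2, 1, 0, 1],
    "SF": [0, 0, 1, 0, 0, 0, 0, 0, 1, 0],
})

def babip(g: pd.DataFrame) -> float:
    return (g.H.sum() - g.HR.sum()) / (g.AB.sum() - g.K.sum() - g.HR.sum() + g.SF.sum())

print("all ten games:", round(babip(games), 3))
print("rolling five-game windows:",
      [round(babip(games.iloc[i:i + 5]), 3) for i in range(6)])
# The same batter looks lucky or unlucky depending entirely on where you
# draw the endpoints; a "season" is just one window among many.
```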

This bias toward artificial endpoints affects legal studies, though less obviously.  There aren’t legal seasons.  (It’s always a time to weep, to bill, to work, to reap.)  But we still organize our analyses around units which might not exactly track the underlying item of interest.  We want to study disputes, but we look at records of filings and verdicts (which are a smaller unit in time than the object of study).  We wish to examine ideological voting patterns on the Court, but we organize our study by Term.  We want clear signals of young lawyer quality, but we look at grades in law school, covering (mostly) the first three semesters.  We want to know how law schools influence hiring practices, but we look at deadline-generated nine-month hiring reports.  Different slices of these numbers may produce quite different results — heck, one of the reasons that USNews obtains variable rankings is that it keeps moving the endpoints of the analysis in ways that are perfectly unclear.

There’s no complete solution to the endpoint problem – at least, not one that’s easily compatible with the project of data-driven legal analysis.  It’s important, therefore, to be especially careful when reading studies that take advantage of convenient legal periods.  A prime example is the Supreme Court’s “Term.”  I have no good reason to expect that the Justices’ behavior changes meaningfully from one Term to another — absent an intervening change in personnel.  So, Term analysis is convenient, but I bet it misleads.  Comparing the performance of a Circuit from one Term to another is similarly odd — whatever the value of the “reversal rate” inquiry, it surely doesn’t turn on Terms!

This set of cautions might be extended to a more general one, directed at folks who are interested in doing empirical work but haven’t yet begun to collect data. If your outcome of interest is measured monthly, seasonally or yearly, consider whether that unit of measurement reflects something true about the data, or is merely a convenience.  If it’s the latter, proceed with caution.  Obviously, this isn’t at all a novel caution, but the persistence of the error suggests it can’t be made often enough.

Assessing Medicaid Managed Care

The Washington Post has featured two interesting pieces recently on Medicaid managed care. Christopher Weaver reported on a battle between providers and insurers in Texas. Noting that “federal health law calls for a huge expansion of the Medicaid program in 2014,” Weaver shows how eager insurers are to enroll poor individuals in their plans. Each enrollee would “yield on average $7 a month profit,” according to recent calculations. Cost-cutting legislators see potential fiscal gains, too, once the market starts working its magic.

There’s only one problem with those projections: it turns out that “moving Medicaid recipients into managed care ‘did not lead to lower Medicaid spending during the 1991 to 2003 period,'” according to a report published by the National Bureau of Economic Research this month. Sarah Kliff is surprised to find that this is “the first national look at whether Medicaid managed care has actually done a key thing that states want it to do.”


The Future of Empirical Legal Studies

[Photo caption: Kenesaw Mountain Landis would have hated both sabermetrics and ELS.]

Reading these two articles on the problems of complexity for sabermetrics, I wondered if the empirical legal studies community is coming soon to a similar point of crisis. The basic concern is that sabermetricians are devoting oodles of time to ever-more-complex formulae which add only a small amount of predictive power, but which make the discipline more remote from lay understanding, and thus less practically useful.   Basically: the jargonification of a field.  Substituting “law” into Graham MacAree‘s article on the failings of sabermetrics, we get the following dire warning:

“Proper [empirical legal analysis] is something that has to come from the top down ([law]-driven) rather than the bottom up (mathematics/data-driven), and to lose sight of that causes a whole host of issues that are plaguing the field at present. Every single formula must be explainable without recourse to using ridiculous numbers. Every analyst must be open to thinking about the [law] in new ways. Every number, every graph in an [ELS] piece must tell a [legal] story, because otherwise we’re no longer writing about the [legal system] but indulging in blind number-crunching for its own sake. …

Surveying the field, I no longer believe that those essential precepts hold sway over the [ELS] community. Data analysis methods are being misapplied and sold to readers as the next big thing. Articles are being written for the sake of sharing irrelevant changes in irrelevant metrics. Certain personalities are so revered that their word is taken as gospel when fighting dogma was what brought them the respect they’re now given in the first place. [ELS] is in a sorry state.

How do we fix it? Well, the answer seems simple. [ELS] shouldn’t be so incomprehensible so as not to call up the smell of [a courtroom, or the careful drafting of the definition clauses in a contract, or the delicate tradeoffs involved in family court practice, or the importance of situation sense]. Statistics shouldn’t be sterile and clean and shiny and soulless. They shouldn’t just be about [Law]; they should invoke it. Otherwise, they run the risk of losing the language which makes them so special.”

Note: this is an entirely different concern from Leiter’s odd 2010 critique that ELS work was largely mediocre.  The problem, rather, is that the trend is toward ever more complex and “accurate” models, often built without the input of people with legal training, and with insufficient attention to how such models will be explained to lawyers, judges, and legal policymakers.  (See also all of Lee Epstein’s work.)


Assessing Twiqbal

Several months ago, the FJC put out a well-publicized study assessing the effects of Twombly and Iqbal on motions practice.  It concluded that there was little reason, overall, for concern that the Supreme Court’s new pleading jurisprudence had worked a revolutionary change down below.  Lonny Hoffman (Houston) has just released an important new paper which questions the methods and conclusions of the FJC’s work.  He pulls no punches:

“This paper provides the first comprehensive assessment of the Federal Judicial Center’s long-anticipated study of motions to dismiss for failure to state a claim after Ashcroft v. Iqbal. Three primary assessments are made of the FJC’s study. First, there are reasons to be concerned that the study may be providing an incomplete picture of actual Rule 12(b)(6) activity. Even if the failure to capture all relevant motion activity was a non-biased error, the inclusiveness problem is consequential. Because the study was designed to compare over time the filing and grant rate of Rule 12(b)(6) motions, the size of the effect of the Court’s cases turns on the amount of activity found. Second, even if concerns are set aside that the collected data may be incomplete, it is a misreading of the FJC’s findings to conclude that the Court’s decisions are having no effect on dismissal practice. The FJC found that after Iqbal, a plaintiff is twice as likely to face a motion to dismiss. This sizeable increase in the rate of Rule 12(b)(6) motion activity represents a marked departure from the steady filing rate observed over the last several decades and means, among other consequences, added costs for plaintiffs who have to defend more frequently against these motions. The data regarding orders resolving dismissal motions show even more dramatically the consequential impacts of the Court’s cases. There were more orders granting dismissal with and without leave to amend, and for every case category examined. Moreover, the data show that after Iqbal it was much more likely that a motion to dismiss would be granted with leave to amend (as compared to being denied) both overall and in the three largest case categories examined (Civil Rights, Financial Instruments and Other). Employment Discrimination, Contract and Torts all show a trend of increasing grant rates. In sum, in every case type studied there was a higher likelihood after Iqbal that a motion to dismiss would be granted. Third, because of inherent limitations in doing empirical work of this nature, the cases may be having effects that the FJC researchers were unable to detect. Comparing how many motions were filed and granted pre-Twombly to post-Iqbal cannot tell us whether the Court’s cases are deterring some claims from being brought, whether they have increased dismissals of complaints on factual sufficiency grounds, or how many meritorious cases have been dismissed as a result of the Court’s stricter pleading filter. Ultimately, perhaps the most important lesson to take away from this last assessment of the FJC’s report is that empirical study cannot resolve all of the policy questions that Twombly and Iqbal raise.”
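To see the shape of the underlying inquiry, and its limits, here is a stripped-down pre/post grant-rate comparison with invented counts. Note what it cannot do: as Lonny emphasizes, no such comparison speaks to claims that were never filed at all.

```python
# A stripped-down version of the pre/post comparison at issue, with invented
# counts of 12(b)(6) motions granted and denied before Twombly and after Iqbal.
import numpy as np
from scipy.stats import chi2_contingency

#               granted  denied
pre_twombly = [    120,    180]   # hypothetical
post_iqbal  = [    190,    160]   # hypothetical

table = np.array([pre_twombly, post_iqbal])
chi2, p, dof, expected = chi2_contingency(table)
rates = table[:, 0] / table.sum(axis=1)
print(f"grant rate {rates[0]:.2f} pre vs {rates[1]:.2f} post, p = {p:.4f}")
# Even a clean difference here is silent about selection: the mix of suits
# filed, and of motions brought, shifts when the pleading standard shifts.
```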

I should disclose that I provided Lonny comments on an earlier draft, and overall I think he’s done an incredible (and generally very fair) job.  One thing to think about, as always when evaluating litigation data, is the degree to which we would expect to see any results at all given case selection effects.  That Lonny does observe such substantively significant changes notwithstanding selection tells us something about how dramatic the Twiqbal decisions really were.