Archive for the ‘Empirical Analysis of Law’ Category
posted by David Schwartz
Before delving into the substance of my first post, I wanted to thank the crew at Concurring Opinions for inviting me to guest blog this month.
Recently, I have been thinking about whether empirical legal scholars have or should have special ethical responsibilities. Why special responsibilities? Two basic reasons. First, nearly all law reviews lack formal peer review. The lack of peer review potentially permits dubious data to be reported without differentiation alongside quality data. Second, empirical legal scholarship has the potential to be extremely influential on policy debates because it provides “data” to substantiate or refute claims. Unfortunately, many consumers of empirical legal scholarship — including other legal scholars, practitioners, judges, the media, and policy makers — are not sophisticated in empirical methods. Even more importantly, subsequent citations of empirical findings by legal scholars rarely take care to explain the study’s qualifications and limitations. Instead, subsequent citations often amplify the “findings” of the empirical study by over-generalizing the results.
My present concern is about weak data. By weak data, I don’t mean data that is flat out incorrect (such as from widespread coding errors) or that misuses empirical methods (such as when the model’s assumptions are not met). Others previously have discussed issues relating to incorrect data and analysis in empirical legal studies. Rather, I am referring to reporting data that encourages weak or flawed inferences, that is not statistically significant, or that is of extremely limited value and thus may be misused. The precise question I have been considering is under what circumstances one should report weak data, even with an appropriate explanation of the methodology used and its potential limitations. (A different yet related question for another discussion is whether one should report lots of data without informing the reader which data the researcher views as most relevant. This scattershot approach has many of the same concerns as weak data.)
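To make the "weak data" worry concrete, here is a hypothetical (not drawn from any study mentioned in the post): a small sample can show a sizable observed effect whose confidence interval still spans zero, i.e., a result that is not statistically significant yet tempts a reader toward an over-generalized inference. The numbers below are invented for illustration.

```python
# Illustration of "weak data": a notable observed difference whose 95%
# confidence interval includes zero, so the data cannot support the
# inference a casual reader might draw from the point estimate alone.
import math

def two_prop_ci(x1, n1, x2, n2, z=1.96):
    """95% CI for the difference of two proportions (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff, (diff - z * se, diff + z * se)

# Hypothetical numbers: 12 of 20 represented parties "win" vs. 8 of 20 not.
diff, (lo, hi) = two_prop_ci(12, 20, 8, 20)
print(f"observed difference: {diff:+.2f}")
print(f"95% CI: ({lo:+.2f}, {hi:+.2f})")  # interval includes zero
```

A 20-percentage-point observed difference looks striking in a headline, but with samples this small the interval runs from a modest negative effect to a very large positive one; reporting only the point estimate invites exactly the amplified "findings" the post describes.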
posted by Dave Hoffman
For some time, I’ve been mulling over how closely parties can tailor the rules of civil procedure to their own purposes. That is: can parties write enforceable contract terms which state that if they sue each other, the ordinary procedural rules won’t apply? Do such contracts exist? For example, parties might contract to be able to take 5 depositions in a case instead of the default 10. Or they might dispose of the rules of hearsay. The literature on this topic of private procedure arguably started with the Scott/Triantis piece, Anticipating Litigation in Contract Design, and has gotten new momentum from Bone, Kapeliuk/Klement, Dodge, and Drahozal/Rutledge. My contribution, freshly up on SSRN, ended up being slightly more empirical than I’d expected — though I guess this won’t surprise any of our long-time readers. In Why Is Privatized Procedure So Rare?, I try to explain why there is actually so little private procedure in places we’d expect to see it:
“Increasingly we hear that civil procedure lurks in the shadow of private law. Scholars suggest that the civil rules are mere defaults, applying if the parties fail to contract around them. When judges confront terms modifying court procedures — a trend said to be explosive — they seem all-too-willing to surrender to the inevitable logic of efficient private ordering.
How concerned should we be? This Article casts a wide net to find examples of private contracts governing procedure, and finds a decided absence of evidence. I search a large database of agreements entered into by public firms, and a hand-coded set of credit card contracts. In both databases, clauses that craft private procedural rules are rare. This is a surprising finding given recent claims about the prevalence of these clauses, and the economic logic which makes them so compelling.
A developing literature about contract innovation helps to explain this puzzle. Parties are not rationally ignorant of the possibility of privatized procedure, nor are they simply afraid that such terms are unenforceable. Rather, evolution in the market for private procedure, like innovation in contracting generally, is subject to a familiar cycle of product innovation. Further developments in this field will not be linear, uniform and progressive; they will be punctuated, particularized and contingent.”
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by Jason P. Nance entitled School Security Considerations After Newtown. Professor Nance writes that strict school security measures may be ineffective but have a balkanizing effect:
On December 14, 2012, and in the weeks thereafter, our country mourned the deaths of twenty children and six educators who were brutally shot and killed at Sandy Hook Elementary School in Newtown, Connecticut. Since that horrific event, parents, educators, and lawmakers have understandably turned their attention to implementing stronger school security measures to prevent such atrocities from happening again. In fact, many states have enacted or proposed legislation to provide additional funds to schools for metal detectors, surveillance cameras, bulletproof glass, locked gates, and law enforcement officers. Because increased security measures are unlikely to prevent someone determined to commit a violent act at school from succeeding, funding currently dedicated to school security can be put to better use by implementing alternative programs in schools that promote peaceful resolution of conflict.
The events at Newtown have caused all of us to deeply consider how to keep students safe at school. A natural response to this atrocity is to demand that lawmakers and school administrators invest our limited public funds into strict security measures. But this strategy is misguided. Empirical evidence suggests that these additional investments in security equipment and law enforcement officers may lead to further disparities along racial and economic lines. Further, it is imperative that all constituencies understand that there are more effective ways to address violence than resorting to coercive measures that harm the educational environment. Indeed, schools can make a tremendous impact in the lives of students by teaching students appropriate ways to resolve conflict and making them feel respected, trusted, and cared for. These are the types of schools that can make a real difference in the lives of students.
February 11, 2013 at 10:45 am Tags: Civil Rights, Education, Policy, school security, schools Posted in: Civil Rights, Education, Empirical Analysis of Law, Law Rev (Stanford), Politics
posted by Dave Hoffman
Intrigued by the goings on at CELS VII? Join the revolution. Andrew Martin asked me to post the following:
Title: Conducting Empirical Legal Scholarship Workshop, May 22-24, 2013
On Wednesday, May 22, 2013 through Friday, May 24, 2013, Lee Epstein and Andrew Martin will be teaching their annual Conducting Empirical Legal Scholarship workshop. This workshop will be held in Los Angeles, and is co-sponsored by USC Gould School of Law and Washington University Law. There is more information available about the workshop here:
The Conducting Empirical Legal Scholarship workshop is for law school and social science faculty interested in learning about empirical research. The instructors provide the formal training necessary to design, conduct, and assess empirical studies, and to use statistical software (Stata) to analyze and manage data. Participants need no background or knowledge of statistics to enroll in the workshop. Topics to be covered include research design, sampling, measurement, descriptive statistics, inferential statistics, and linear regression.
posted by Dave Hoffman
[CELS VII, held November 9-10, 2012 at Stanford, was a smashing success due in no small part to the work of chief organizer Dan Ho, as well as Dawn Chutkow (of SELS and Cornell) and Stanford's organizing committee. For previous installments in the CELS recap series, see CELS III, IV, V, and VI. For those few readers of this post who are data-skeptics and don’t want to read a play-by-play, resistance is obviously futile and you might as well give up. I hear that TV execs were at CELS scouting for a statistics-geek reality show, so think of this as a taste of what’s coming.]
Unlike last year, I got to the conference early and even went to a methods panel. Skipping the intimidating “Spatial Statistics and the GIS” and the ominous “Bureau of Justice Statistics” panels, I sat in on “Internet Surveys” with Douglas Rivers, of Stanford/Hoover and YouGov. To give you a sense of the stakes, half of the people in the room regularly use mTurk to run cheap e-surveys. The other half regularly write nasty comments in JELS reviewer forms about using mTurk. (Oddly, I’m in both categories, which would’ve created a funny weighting problem if I were asked my views.) The panel was devoted to the proposition “Internet surveys are much, much more accurate than you thought, and if you don’t believe me, check out some algebraic proof. And the election.” Two contrasting data points. First, as Rivers pointed out, all survey subjects are volunteers, and thus it’s a bit tough to distinguish internet convenience samples from some oddball scooped up by Gallup’s 9% survey response rate. Second, and less comfortingly, 10-15% of the adult population has a reading disability that makes self-administration of a survey prompt online more than a bit dicey. I say: as long as the disability isn’t biasing with respect to contract psychology or cultural cognition, let’s survey on the cheap!
Lunch next. Good note for presenters: avoid small pieces of spinach/swiss chard if you are about to present. No one will tell you that you’ve spinach on a front tooth. Not even people who are otherwise willing to inform you that your slides are too brightly colored. Speaking of which, the next panel I attended was Civil Justice I. Christy and I presented Clusters are Amazing. We tag-teamed, with me taking 9 minutes to present 5 slides and her taking 9 minutes to present the remaining 16 or so. That was just as well: no one really wanted to know how our work might apply more broadly anyway. We got through it just fine, although I still can’t figure out an intuitive way to describe spectral clustering. What about “magic black box” isn’t working for you?
posted by Stanford Law Review
Continuing our dialog on antitrust enforcement, the Stanford Law Review Online has just published an Essay by Daniel A. Crane entitled The Obama Justice Department’s Merger Enforcement Record. Professor Crane responds to Jonathan Baker and Carl Shapiro’s criticism of his earlier Essay:
My recent Essay, Has the Obama Justice Department Reinvigorated Antitrust Enforcement?, examined the three major areas of antitrust enforcement—cartels, mergers, and civil non-merger—and argued that, contrary to some popular impressions, the Obama Justice Department has not “reinvigorated” antitrust enforcement. Jonathan Baker and Carl Shapiro have published a response, which focuses solely on merger enforcement. Baker and Shapiro’s argument that the Obama Justice Department actually did reinvigorate merger enforcement is unconvincing.
Jon Baker and Carl Shapiro are smart, effective economists for whom I have great respect. I have few quarrels with how they or the Obama Administration in general conduct antitrust enforcement. The point of my essay was that antitrust enforcement has become largely technocratic and independent of political ideology. I have heard nothing that dissuades me from that view.
Read the full article, The Obama Justice Department’s Merger Enforcement Record by Daniel A. Crane, at the Stanford Law Review Online.
September 6, 2012 at 3:03 pm Tags: Antitrust, merger enforcement, mergers, Obama administration, Policy Posted in: Antitrust, Corporate Law, Current Events, Empirical Analysis of Law, Law Rev (Stanford), Politics
posted by Dave Hoffman
Symposiast Jim Greiner passes along the following call for applications:
Working together across the lines of scholarship and practice, a group of researchers and field professionals in access to civil justice (A2J) in the United States is soliciting applications to attend a two-day Workshop to be held in Chicago, Illinois on December 7-8, 2012. The Workshop opens with a poster session and town hall meeting on the afternoon of Friday, December 7. This open session, held in conjunction with the National Legal Aid and Defender Association annual meetings, will bring together scholars and practitioners from many perspectives to identify and explore access to justice research needs. On the following day, Saturday, December 8, the Workshop will convene a smaller, closed session to push forward the work of revitalizing A2J research. We are grateful to the National Science Foundation Law and Social Sciences Program (SES-1237958) for recommending financial support.
The application materials are here: NSF Workshop Application. Jim encourages all interested parties – which should include anyone who is interested in empirically examining access to justice issues – to apply.
posted by Danielle Citron
What are we really teaching our students? Those of us who complain that our students are too focused on learning rules and doctrines should read a provocative empirical study recently published on SSRN by my colleague Don Gifford, Villanova sociologist Brian Jones, and two of Don’s former students with expertise in statistical analysis, Joseph Kroart and Cheryl Cortemeglia. Donald G. Gifford, Joseph Kroart, Brian Jones & Cheryl Cortemeglia, What’s on First?: Organizing the Casebook and Molding the Mind, 44 Ariz. St. L.J. ___ (2013) (forthcoming). The article describes an empirical study suggesting that whether the Torts professor begins with intentional, negligent, or strict liability torts affects the students’ understanding of the role of the common-law judge in a statistically significant way. The authors argue that the judge’s role in deciding intentional tort cases is at least to some extent more rule-based than her role in negligence and strict liability cases. Applying the work of sociologist Erving Goffman, they posit that beginning with intentional torts frames the judicial role in this manner. Further, they hypothesize that once frequently anxious first-semester students latch onto one particular conception of the judicial role during the initial weeks of the semester, it becomes anchored and resistant to change even after the students have studied other categories of tort liability.
Gifford et al. surveyed more than 450 first-year law students at eight law schools that vary widely in terms of their reputational ranking. The students were surveyed at the beginning, middle, and end of the first semester. The survey results supported the authors’ hypothesis that students who begin their study of Torts with strict liability experience a greater shift toward understanding the judge’s role as being influenced by social, economic, and ideological factors and a sense of fairness and less as a process of rule application than do students who begin their study with either intentional torts or negligence. Even when the authors controlled for the ranking of the law school, topic sequence still generated a significant effect on students’ perceptions of the role of the common law judge. Nor did the effect of topic sequence vary by gender. The authors were surprised to find that students who began with intentional torts experience a greater attitudinal shift toward perceiving the judicial role as being policy influenced than do students who began with negligent torts.
Despite their disclaimers, the authors implicitly criticize the overwhelming majority of Torts professors who begin with intentional torts. Most Torts casebooks begin with intentional torts, at least after a brief introductory chapter. Their editors claim that these cases are “accessible,” “memorable,” and provide “a nice warm up” for studying other torts. Some of these same editors admit that intentional torts comprise a “backwater” in modern tort practice. Gifford et al. suggest that the real reason for beginning with intentional torts may be because that is the way it always has been done. They note that the first Torts casebook, edited by James Barr Ames, Dean Langdell’s colleague, began with intentional torts. They provocatively suggest that Ames may have begun with intentional torts in part precisely because these torts were most rule-like in nature and furthered Langdell’s mission to make the law appear “scientific” in order to justify its inclusion within the university curriculum. If this is true, note the authors, then most modern-day Torts professors are “unwitting conscripts” in the Langdellian mission.
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by Jonathan Baker and Carl Shapiro entitled Evaluating Merger Enforcement During the Obama Administration. Professors Baker and Shapiro take issue with Daniel Crane’s assertions in his Essay of July 18:
We recently concluded that government merger enforcement statistics “provide clear evidence that the Obama Administration reinvigorated merger enforcement, as it set out to do.” Three weeks later, in an article published in the Stanford Law Review Online, Professor Daniel A. Crane reached the opposite conclusion, claiming that “[t]he merger statistics do not evidence ‘reinvigoration’ of merger enforcement under Obama.”
Crane is simply wrong. The data regarding merger enforcement unambiguously support our conclusion and cannot reasonably be read to support Crane’s assertions. Crane’s conclusion regarding merger enforcement is inaccurate because he relies upon flawed metrics and overlooks or misinterprets other important evidence.
Our analysis of merger enforcement at the DOJ during the George W. Bush Administration—based on the enforcement statistics and more—showed that it was unusually lax and in need of reinvigoration. It is too early to reach a comparably definitive conclusion about merger enforcement at the DOJ during the Obama Administration, but nothing in Daniel Crane’s article seriously challenges our interpretation of the preliminary data as demonstrating that the necessary reinvigoration has taken place.
Read the full article, Evaluating Merger Enforcement During the Obama Administration by Jonathan Baker and Carl Shapiro, at the Stanford Law Review Online.
August 21, 2012 at 9:30 am Tags: Antitrust, bush administration, executive branch, FTC, merger enforcement, mergers, Obama administration, Politics Posted in: Antitrust, Empirical Analysis of Law, Law Rev (Stanford), Politics
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by Edward McCaffery entitled The Dirty Little Secret of (Estate) Tax Reform. Professor McCaffery argues that Congress encourages and perpetuates the cycle of special interest spending on the tax reform issue:
Spoiler alert! The dirty little secret of estate tax reform is the same as the dirty little secret about many things that transpire, or fail to transpire, inside the Beltway: it’s all about money. But no, it is not quite what you think. The secret is not that special interests give boatloads of money to politicians. Of course they do. That may well be dirty, but it is hardly secret. The dirty little secret I come to lay bare is that Congress likes it this way. Congress wants there to be special interests, small groups with high stakes in what it does or does not do. These are necessary conditions for Congress to get what it needs: money, for itself and its campaigns. Although the near certainty of getting re-elected could point to the contrary, elected officials raise more money than ever. Tax reform in general, and estate tax repeal or reform in particular, illustrate the point: Congress has shown an appetite for keeping the issue of estate tax repeal alive through a never-ending series of brinksmanship votes; it never does anything fundamental or, for that matter, principled, but rakes in cash year in and year out for just considering the matter.
On the estate tax, then, it is easy to predict what will happen: not much. We will not see a return to year 2000 levels, and we will not see repeal. The one cautionary note I must add is that, going back to the game, something has to happen sometime, or the parties paying Congress and lobbyists will wise up and stop paying to play. But that has not kicked in yet, decades into the story, and it may not kick in until more people read this Essay, and start to watch the watchdogs. Fat chance of that happening, too, I suppose. In the meantime, without a meaningful wealth-transfer tax (the gift and estate taxes raise a very minimal amount of revenue and may even lose money when the income tax savings of standard estate-planning techniques, such as charitable and life insurance trusts, are taken into account), one fundamental insight of the special interest model continues to obtain. Big groups with small stakes—that is, most of us—continue to pay through increasingly burdensome middle class taxes for most of what government does, including stringing along those “lucky” enough to be members of a special interest group. It’s a variant of a very old story, and it is time to stop keeping it secret.
August 14, 2012 at 10:00 am Tags: Congress, death tax, estate tax, Politics, special interests, tax, tax law, taxes Posted in: Current Events, Empirical Analysis of Law, Law Rev (Stanford), Politics, Tax, Uncategorized
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by Daniel Crane entitled Has the Obama Justice Department Reinvigorated Antitrust Enforcement?. Professor Crane assesses antitrust enforcement in the Obama and Bush administrations using several empirical measures:
The Justice Department’s recently filed antitrust case against Apple and several major book publishers over e-book pricing, which comes on the heels of the Justice Department’s successful challenge to the proposed merger of AT&T and T-Mobile, has contributed to the perception that the Obama Administration is reinvigorating antitrust enforcement from its recent stupor. As a candidate for President, then-Senator Obama criticized the Bush Administration as having the “weakest record of antitrust enforcement of any administration in the last half century” and vowed to step up enforcement. Early in the Obama Administration, Justice Department officials furthered this perception by withdrawing the Bush Administration’s report on monopolization offenses and suggesting that the fault for the financial crisis might lie at the feet of lax antitrust enforcement. Even before the AT&T and Apple cases, media reports frequently suggested that antitrust enforcement is significantly tougher under President Obama.
For better or worse, the Administration’s enforcement record does not bear out this impression. With only a few exceptions, current enforcement looks much like enforcement under the Bush Administration. Antitrust enforcement in the modern era is a technical and technocratic enterprise. Although there will be tweaks at the margin from administration to administration, the core of antitrust enforcement has been practiced in a relatively nonideological and nonpartisan way over the last several decades.
Two points stressed earlier should be stressed again: (1) statistical measures of antitrust enforcement are an incomplete way of understanding the overall level of enforcement; and (2) to say that the Obama Administration’s record of enforcement is not materially different than the Bush Administration’s is not to chide Obama for weak enforcement. Rather, it is to debunk the claims that antitrust enforcement is strongly dependent on politics.
This examination of the “reinvigoration” claim should not be understood as acceptance that tougher antitrust enforcement is always better. Certainly, there have been occasions when an administration would be wise to ease off the gas pedal. At present, however, there is a high degree of continuity from one administration to the next.
Read the full article, Has the Obama Justice Department Reinvigorated Antitrust Enforcement? by Daniel Crane, at the Stanford Law Review Online.
July 18, 2012 at 10:15 am Tags: Antitrust, Corporate Law, law enforcement, Obama administration Posted in: Antitrust, Empirical Analysis of Law, Law Rev (Stanford), Politics
posted by Dave Hoffman
Over at the Cultural Cognition Blog, I’ve written a bit about some new evidence on partisan division. The headline news is that partisanship is a better predictor of cultural division than it used to be. But as I read the data, the undernews is that we’re actually no more divided than we used to be on common ideological and cultural measures. Given all that’s happened in the last quarter-century – including media differentiation, the digital revolution and 24-hour news cycle, more bowling alone, sprawl – isn’t that kind of a huge deal? The fact that partisan self-identification is a better predictor of cultural views than it used to be simply means that the parties are cohering better. That might be bad for the functioning of our particular form of representative government, but it doesn’t mean that we’re drifting apart as a country.
posted by Dave Hoffman
For the last two years, Christy Boyd and I, along with some friends, have been working on a paper on how attorneys construct complaints. The project began when we were working to code some other detritus of federal litigation and decided to collect the causes of action in complaints to understand the legal issues in our cases in a better manner than NOS codes alone permitted. Soon enough, we got to thinking that our causes of action were pled in distinctively patterned ways. Obviously, this isn’t an earth-shaking insight, as most first year students have thought, at one time or another, that each of their classes’ exam fact patterns could easily substitute for any other. That is: causes of action are alternative, mutually complementary theories that channel a limited number of fact patterns into claims to legal relief. Everyone knows that contract and tort claims are pled together, and that constitutional claims come accompanied by state law torts. But we thought it’d be worthwhile to nail down this insight using an analysis very similar to the one that enables Amazon to tell you which books you might like — i.e., if you plead a particular cause of action, what other causes of action are you likely to bring in a particular case?
We gathered a set of 2,500 complaints (from a much larger sample of federal complaints derived through RECAP). The complaints were sampled to be fairly representative of all federal litigation, excluding pro se, social security, and prisoner petition cases. The sample contained 11,500 individual causes of action – around 4.6 causes of action per case. Guided by co-authors at Temple’s Center for Data Analytics, we used spectral clustering to examine the relationship between causes of action. Two years later and presto, we’ve a (draft) paper up on SSRN! The ungainly title is Building a Taxonomy of Litigation: Clusters of Causes of Action in Federal Complaints. I welcome your comments, and your suggestions for a better title. Follow me after the jump for an exploration of our findings.
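For readers curious what clustering causes of action by co-pleading looks like in practice, here is a minimal sketch of the general technique. The cause names, toy complaint matrix, and two-cluster choice are all invented for illustration; the paper's actual data set, feature construction, and tuning are far more involved.

```python
# Toy sketch: cluster causes of action by how often they are pled together
# in the same complaint, using spectral clustering on a co-occurrence matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

causes = ["breach of contract", "fraud", "negligence",
          "battery", "section 1983", "false imprisonment"]

# Rows are complaints; a 1 means that cause of action was pled in that case.
complaints = np.array([
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0, 0],
])

# Co-occurrence counts between causes serve as the affinity (similarity) matrix.
affinity = complaints.T @ complaints
np.fill_diagonal(affinity, 0)  # ignore self-similarity

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
for cause, label in zip(causes, labels):
    print(f"cluster {label}: {cause}")
```

On this toy data the method recovers the intuition from the post: the contract/fraud/negligence causes group together, and the constitutional claim clusters with its companion state-law torts, because the graph cut between the two groups severs only weak co-pleading ties.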
posted by Dave Hoffman
Over at the Cultural Cognition blog, Dan Kahan has two posts up, with a third promised, on the Trayvon Martin case. In the first, Dan argued that motivated cognition helps to explain why we disagree so vehemently about the facts of the Martin-Zimmerman incident. Indeed, he claimed that “we’ll never know what happened, because we—the members of our culturally pluralistic society—have radically different understandings of what a case like this means.”
In his second post, he connects the shooting to the history of stand-your-ground laws – and the NRA’s successful strategy to combine self-defense norms with gun rights. Arguing that turning Martin’s death into a discussion of the empirics of gun violence is exactly what the NRA would like, he urges commentators “to just back off. Not only are you needlessly sowing division; you are destroying the prospects for a meaningful conversation of the values that—despite our cultural differences—in fact unite us.”
As is so often the case, Dan offers a subtle and compelling argument for the relevance of motivated cognition in understanding public policy. I’ve actually been toying with writing a similar post – but it wouldn’t have been nearly as well-executed. So I hope you’ll go to the CCP blog and read what he’s written – it might cause you to rethink your priors on the tragedy in Florida. Then please come back for a few further thoughts.
posted by Dave Hoffman
A common criticism one reads of ELS is that “too much of the work is driven by the existence of a data set, rather than an intellectual or analytical point.” It’s ironic that this is the very critique that the realists made of traditional legal scholarship. Consider the great Llewellyn:
“I am a prey, as is every man who tries to work with law, to the apperceptive mass. I see best what I have learned to see. I am a prey, too — as are the others — to the old truth that the available limits vision, the available bulks as if it were the whole. What records have I of the work of magistrates? How shall I get them? Are there any? And if there are, must I search them out myself? But the appellate courts make access to their work convenient. They issue reports, printed, bound, to be had all gathered for me in the libraries. The convenient source of information lures. Men work with it, first, because it is there; and because they have worked with it, men build it into ideology. The ideology grows and spreads and gains acceptance, acquires a force and an existence of its own, becomes a thing to conjure with: the rules and concepts of the courts of last resort.”
Or to put it differently, all of our work – quantitative empiricists, doctrinalists, corporate finance wizards, administrative regulation parsers, legal philosophers, and derivative social psychologists alike – is driven by the materials at hand. For most lawyers and legal academics, appellate opinions are the most convenient pieces of information available; we use such opinions to create mental models of what the “law” is, and (ordinarily in legal scholarship) what it ought be. Indeed, whenever trial court opinions are cited, they are often discounted as aberrant or transitory, in part because they are known to be unrepresentative!
Why, you might wonder, is the convention of data-driven-scholarship a particular problem in quantitative empirical work? ELS’s detractors make three interrelated claims:
posted by Jeffrey Selbin
In a draft essay, Service Delivery, Resource Allocation and Access to Justice: Greiner and Pattanayak and the Research Imperative, Tony Alfieri, Jeanne Charn, Steve Wizner, and I reflect on Jim Greiner and Cassandra Pattanayak’s provocative article reporting the results of a randomized controlled trial evaluating legal assistance to low-income clients at the Harvard Legal Aid Bureau. (The Greiner and Pattanayak article was the subject of a Concurring Opinions symposium last March.) Studying the outcomes of appeals from initial denials of unemployment insurance benefit claims, Greiner and Pattanayak asked, what difference does legal representation make? Their answer is that “an offer of HLAB representation had no statistically significant effect on the probability that a claimant would prevail, but that the offer did delay the adjudicatory process.” That is, not only was an offer of legal assistance immaterial to the case outcome, it may have harmed clients’ interests.
The Greiner and Pattanayak findings challenge our intuition, experience and deeply-held professional belief that lawyer representation of indigent clients in civil matters is fundamental to the pursuit of justice. Our first reaction is that the study must have fatal conceptual or methodological flaws – the researchers studied the wrong thing in the wrong way. Even when we learn that the study is credible and well designed, we doubt that this kind of research is a worthwhile use of our time or money relative to serving needy clients. Finally, and perhaps most importantly, we worry that the published results will only serve as fodder for the decades-long political assault on legal services for the poor.
If replicated across venues, however, studies like Greiner and Pattanayak’s can tell us a great deal about individual representation, program design and systemic access to justice questions. In fact, we cannot make genuine progress in any of these areas – much less marshal the case for more robust legal aid investments and the right to counsel in some civil cases – without better evidence of when, where and for whom representation makes a difference. Fortunately, developments in law schools, the professions and a growing demand for evidence-driven policymaking provide support, infrastructure and incentive for such research. For these reasons, we urge legal services lawyers and clinical law professors to collaborate in an expansive, empirical research agenda.
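Part of what is at stake in reading a null result like Greiner and Pattanayak’s is what “no statistically significant effect” means. Their outcome question is, at bottom, a comparison of two proportions. As a minimal sketch (the counts below are invented for illustration and are not the study’s data), a two-proportion z-test shows how even a several-point gap in win rates between two groups of 100 claimants can fall well short of conventional significance:

```python
import math

def two_proportion_z(wins_a, n_a, wins_b, n_b):
    """Two-sided z-test for a difference in prevail rates between a
    treated group (offered representation) and a control group."""
    p_a, p_b = wins_a / n_a, wins_b / n_b
    p_pool = (wins_a + wins_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approximation
    return z, p_value

# Invented counts, not the study's data: 70 of 100 treated claimants
# prevail versus 66 of 100 controls.
z, p = two_proportion_z(70, 100, 66, 100)
print(round(z, 2), round(p, 3))
```

With samples of this size the p-value sits far above 0.05, a reminder that a null finding can reflect limited statistical power as well as a genuinely absent effect, which is one reason replication across venues matters.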
February 22, 2012 at 9:56 am Tags: Symposium (What Difference Representation) Posted in: Civil Rights, Empirical Analysis of Law, Law Practice, Symposium (What Difference Representation)
posted by Dave Hoffman
Alessandro Acquisti, Sasha Romanosky, and I have a new draft up on SSRN, Empirical Analysis of Data Breach Litigation. Sasha, who’s really led the charge on this paper, has presented it at many venues, but this draft is much improved (and is the first public version). From the abstract:
In recent years, a large number of data breaches have resulted in lawsuits in which individuals seek redress for alleged harm resulting from an organization losing or compromising their personal information. Currently, however, very little is known about those lawsuits. Which types of breaches are litigated, which are not? Which lawsuits settle, or are dismissed? Using a unique database of manually-collected lawsuits from PACER, we analyze the court dockets of over 230 federal data breach lawsuits from 2000 to 2010. We use binary outcome regressions to investigate two research questions: Which data breaches are being litigated in federal court? Which data breach lawsuits are settling? Our results suggest that the odds of a firm being sued in federal court are 3.5 times greater when individuals suffer financial harm, but over 6 times lower when the firm provides free credit monitoring following the breach. We also find that defendants settle 30% more often when plaintiffs allege financial loss from a data breach, or when faced with a certified class action suit. While the compromise of financial information appears to lead to more federal litigation, it does not seem to increase a plaintiff’s chance of a settlement. Instead, compromise of medical information is more strongly correlated with settlement.
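The headline numbers in the abstract are odds ratios from the binary outcome regressions. For intuition, the unadjusted version of such a ratio can be computed by hand from a 2x2 table; the counts below are hypothetical and are not drawn from the paper’s PACER data:

```python
def odds_ratio(a, b, c, d):
    """Cross-product odds ratio for a 2x2 table:
    a = financial harm & sued,    b = financial harm & not sued,
    c = no financial harm & sued, d = no financial harm & not sued."""
    return (a * d) / (b * c)

# Hypothetical: 35 of 135 breaches causing financial harm drew a federal
# suit, versus 10 of 110 breaches causing no financial harm.
print(odds_ratio(35, 100, 10, 100))  # 3.5: odds of suit 3.5x greater with harm
```

The regressions in the paper condition on other breach characteristics, so a raw cross-product ratio like this is only a first approximation of what the reported coefficients capture.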
A few thoughts follow after the jump.
February 19, 2012 at 1:33 pm Posted in: Economic Analysis of Law, Empirical Analysis of Law, Privacy, Privacy (Consumer Privacy), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical)
posted by Dave Hoffman
Larry Ribstein, who died earlier this week, was a galvanic force as a scholar and blogger. I join those who’ve expressed sadness and loss at his untimely passing. I figured I’d add two comments.
As others have commented, Larry always told you when he thought you were being an idiot. When I presented one of my early empirical papers at an otherwise warm-and-friendly Canadian Law and Economics conference, Larry provided comments from the audience that had me wanting to go back to running fire drills at Cravath. My god, how he schooled me! But he was basically right, and it was business, not personal. Some years later, he provided crucial encouragement on a new (better?) empirical paper. Praise felt twice as good coming from him. What a teacher he must have been!
Second, I’ve recently read his book (coauthored with Erin O’Hara) The Law Market. I think it’s simply amazing – provocative, and in some ways as mind-opening as Stuntz’s Collapse of American Criminal Justice. Law and economics has lost a great and unique voice.
posted by Dave Hoffman
As promised, I’m filing a report from the Sixth Annual Conference on Empirical Legal Studies (CELS VI), held 11/4-11/5 at Northwestern Law School. Several of the attendees approached me and remarked on my posts from CELS V, IV, and III. That added pressure, coupled with missing half of the conference due to an unavoidable conflict, has delayed this post substantially. Apologies! Next time, I promise to attend from the opening ceremonies until they burn the natural law figure in effigy. Next year’s conference is at Stanford. I’ll make a similar offer to the one I’ve made in the past: if the organizing committee pays my way, I promise not only to blog the whole thing, but to praise you unstintingly. Here’s an example: I didn’t observe a single technical or organizational snafu at Northwestern this year. Kudos to the organizing committee: Bernie Black, Shari Diamond, and Emerson Tiller.
What I saw
I arrived Friday night in time for the poster session. A few impressions. Yun-chien Chang’s Tenancy in ‘Anticommons’? A Theoretical and Empirical Analysis of Co-Ownership won “best poster,” but I was drawn to David Lovis-McMahon & N.J. Schweitzer’s Substantive Justice: How the Substantive Law Shapes Perceived Fairness. Overall, the trend toward professionalization in poster display continues unabated. Even Ted Eisenberg’s poster was glossy & evidenced some post-production work — Ted’s posters at past sessions were, famously, not as civilized. Gone are the days when you could throw some PowerPoint slides onto a board and talk about them over a glass of wine! That said, I’m skeptical about poster sessions generally. I would love to hear differently from folks who were there.
On Saturday, bright-eyed and caffeinated, I went to a Juries panel, where I got to see three pretty cool papers. The first, by Mercer/Kadous, was about how juries are likely to react to precise/imprecise legal standards. (For a previous version, see here.) Though the work was nominally about auditing standards, it seemed generalizable to other kinds of legal rules. The basic conclusion was that imprecise standards increase the likelihood of plaintiff verdicts, but only when the underlying conduct is conservative but deviates from industry norms. By contrast, if the underlying conduct is aggressive, jurors return fewer pro-plaintiff verdicts. Unlike most such projects, the authors permitted a large number of mock juries to deliberate, which added a degree of external validity. Similarly worth reading was Lee/Waters’ work on jury verdict reporters (bottom line: reporters aren’t systematically pro-plaintiff, as the CW suggests, but they are awfully noisy measures of what juries are actually doing). Finally, Hans/Reyna presented some very interesting work on the “gist” model of jury decisionmaking.
At 11:00, I had to skip a great paper by Daniel Klerman whose title was worth the price of admission alone – the Selection of Thirteenth-Century Disputes for Litigation. Instead, I went to Law and Psychology III. There, Kenworthey Bilz presented Crime, Tort, Anger, and Insult, a paper which studies how attribution & perceptions of dignitary loss mark a psychological boundary between crime and tort cases. Bilz presented several neat experiments in service of her thesis, among them a priming survey: people primed to think about crimes complete the word “ins-” as “insult,” while people primed to think about torts complete it as “insurance.” (I think I’ve got that right – the paper isn’t available online, and I’m drawing on two-week-old memories.)
At noon, Andrew Gelman gave a fantastic presentation on the visualization of empirical data. The bottom line: wordles are silly and convey no important information. Actually, Andrew didn’t say that. I just thought that coming in. What Andrew said was something more like “can’t people who produce visually interesting graphs and people who produce graphs that convey information get along?”
Finally, I was the discussant at an Experimental Panel, responding to Brooks/Stremitzer/Tontrup’s Framing Contracts: Why Loss Framing Increases Effort. Attendees witnessed my ill-fated attempt to reverse the order of my presentation on the fly, leading me to neglect the bread in the praise sandwich. This was a good teaching moment about academic norms. My substantive reaction to Framing Contracts is that it was hard to know how much the paper connected to real-world contracting behavior, since the kinds of decision tasks that the experimental subjects were asked to perform were stripped of the relational & reciprocal norms that characterize actual deals.
CELS: What I missed
The entire first day! One of my papers with the cultural cognition project, They Saw a Protest, apparently came off well. Of course, there was also tons of great stuff not written from within the expanding cultural cognition empire. Here’s a selection: on lawyer optimism; on public housing, enforcement and race; on probable cause and hindsight judging; and several papers on Iqbal, none of which appear to be online.
What did you see & like?
posted by Dave Hoffman
What is the meaning of an appellate court’s “reversal rate”? Opinions vary. (My view, expressed succinctly, is “basically nothing.”) However conceived, we ought to at least be measuring reversal correctly. But two lawyers at Hangley Aronchick, a Philadelphia law firm, think that scholars (and journalists) have conceptualized reversal in entirely the wrong way.
According to John Summers and Michael Newman, we’ve forgotten that every case the Supreme Court takes implicitly also considers shadow cases from other circuits ruling on the same issue — that is, the Supreme Court doesn’t just “reverse” the circuit on direct appeal, it also affirms (or reverses) coordinate circuits while resolving a split. Thus, both our numerator and our denominator have been wrong. They’ve written up the results of this pretty interesting approach to reversal in a paper you can find blurbed here. Among the highlights: (1) reversal is less common than is commonly supposed; (2) the Court doesn’t predictably follow the majority of circuits; (3) there are patterns of concordance between circuits in analyzing issues; and (4) even under the new approach, the Ninth Circuit is still the least loyal agent of the Supreme Court.
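To make the counting difference concrete, here is a toy sketch (with an invented two-case docket, not Summers and Newman’s data) contrasting the conventional reversal rate, which counts only the judgment directly under review, with a split-adjusted rate that also credits the shadow circuits implicitly affirmed or reversed when the Court resolves the split:

```python
from collections import Counter

# Invented toy docket. Each entry records the circuit whose judgment was
# directly reviewed, whether it was reversed, and the other circuits in the
# split grouped by whether they had sided with the eventual winner or loser.
docket = [
    {"below": "CA9", "reversed": True,
     "with_winner": ["CA2", "CA7"], "with_loser": ["CA6"]},
    {"below": "CA2", "reversed": False,
     "with_winner": ["CA9"], "with_loser": ["CA5"]},
]

direct = Counter()    # conventional counting: direct appeals only
expanded = Counter()  # split-adjusted: shadow circuits count too

for case in docket:
    outcome = "rev" if case["reversed"] else "aff"
    direct[outcome] += 1
    expanded[outcome] += 1
    expanded["aff"] += len(case["with_winner"])  # implicitly affirmed
    expanded["rev"] += len(case["with_loser"])   # implicitly reversed

direct_rate = direct["rev"] / sum(direct.values())
expanded_rate = expanded["rev"] / sum(expanded.values())
print(direct_rate, round(expanded_rate, 2))  # 0.5 0.43
```

In this toy docket the direct-appeal reversal rate is 50%, but once the shadow circuits enter both the numerator and the denominator it falls to 3 of 7 decisions, which illustrates how the expanded counting can make reversal look less common than conventionally supposed.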
I think that this method has real promise, and I bet that folks who are interested in judicial behavior will want to check it out.