
Author: Dave Hoffman


The Good Life and Gun Control

Like many of you, I’ve been horrified by the events in Newtown, and dismayed by the debate that has followed.  Josh Marshall (at TPM) thinks that “this is quickly veering from the merely stupid to a pretty ugly kind of victim-blaming.”  Naive realism, meet thy kettle!  Contrary to what you’ll see on various liberal outlets, the NRA didn’t cause Adam Lanza to kill innocent children and adults, nor did Alan Gura or the army of academics who helped to build the case for an individual right to gun ownership.  Reading discussions on the web, you might come to believe that we don’t all share the goal of a society where the moral order is preserved, and where our children can be put on the bus to school without a qualm.

But we do.

We just disagree about how to make it happen.

Dan Kahan’s post on the relationship between “the gun debate”, “gun deaths”, and Newtown is thus very timely.  Dan argues that if we really wanted to decrease gun deaths, we should try legalizing drugs.  (I’d argue, following Bill Stuntz, that we also/either would hire many more police while returning much more power to local control).  But decreasing gun deaths overall won’t (probably) change the likelihood of events like these:

“But here’s another thing to note: these very sad incidents “represent only a sliver of America’s overall gun violence.” Those who are appropriately interested in reducing gun homicides generally and who are (also appropriately) making this tragedy the occasion to discuss how we as a society can and must do more to make our citizens safe, and who are, in the course of making their arguments, invoking (appropriately!) the overall gun homicide rate, should be focusing on what can be done most directly and feasibly to save the most lives.

Repealing drug laws would do more —  much, much, much more — than banning assault rifles (a measure I would agree is quite appropriate); barring carrying of concealed handguns in public  (I’d vote for that in my state, if after hearing from people who felt differently from me, I could give an account of my position that fairly meets their points and doesn’t trade on tacit hostility toward or mere incomprehension of  whatever contribution owning a gun makes to their experience of a meaningful free life); closing the “gun show” loophole; extending waiting periods etc.  Or at least there is evidence for believing that, and we are entitled to make policy on the best understanding we can form of how the world works so long as we are open to new evidence and aren’t otherwise interfering with liberties that we ought, in a liberal society, to respect.”

Dan’s post is trying to productively redirect our public debate, and I wanted to use this platform to bring more attention to his point.  But, I think he’s missing something, and if you follow me after the jump, I’ll tell you what.



Unrepresentative Turkers?

Like many others, I’ve been using Amazon Mechanical Turk to recruit subjects for law & psychology experiments.  Turk is (i) cheap; (ii) fast; (iii) easy to use; and (iv) not controlled by the psychology department’s guardians.  Better yet, the literature to date has found that Turkers are more representative of the general population than you’d expect — and certainly better than college undergrads! Unfortunately, this post at the Monkey Cage provides a data point in the contrary direction:

“On Election Day, we asked 565 Amazon Mechanical Turk (MTurk) workers to take a brief survey on vote choice, ideology and demographics. . . . We compare MTurk workers on Election Day to actual election results and exit polling.  The survey paid $0.05 and had seven questions:  gender, age, education, income, state of residence, vote choice, and ideology.  Overall, 73% of these MTurk workers voted for Obama, 15% for Romney, and 12% for “Other.”  This is skewed in expected ways, matching the stereotypical image of online IT workers as liberal—or possibly libertarian, since 12% voted for a third party in 2012, compared to 1.6 percent of all voters. . . .  In sum, the MTurk sample is younger, more male, poorer, and more highly educated than Americans generally.  This matches the image of who you might think would be online doing computer tasks for a small amount of money…”

Food for thought.  What’s strange is that every sample of Turkers I’ve dealt with is older & more female than the general population.  Might it be that Turk workers who responded to a survey on election habits aren’t like the Turk population at large?  Probably so, but that doesn’t make me copacetic.
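One standard patch for a skewed Turk sample is post-stratification weighting. Here is a minimal sketch on a single variable (gender); every number below is invented for illustration and comes from neither this post nor the Monkey Cage data:

```python
# Hedged sketch: post-stratification weights to pull a skewed sample
# toward population benchmarks. All shares are made up.
sample = {"male": 0.62, "female": 0.38}       # hypothetical Turk sample shares
population = {"male": 0.49, "female": 0.51}   # rough U.S. adult benchmark

# Each respondent in group g gets weight = population share / sample share
weights = {g: population[g] / sample[g] for g in sample}

# Weighted sample shares now match the benchmark exactly
weighted_male_share = weights["male"] * sample["male"]      # 0.49
```

Real applications rake over several variables at once (age, gender, education), but the one-variable case shows the mechanics.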


The Problem With Voting About Corporate Policies

The problems of corporate democracy are well illustrated by this embarrassing showing:

“The largest experiment yet in direct voting ended with a whimper on Monday, when Facebook closed its user polls on its new proposed terms of service, with what looked to be just 668,872 of Facebook’s 1.01 billion global users having even cast a vote, or just 0.067 percent (sixty-seven thousandths of a percent) . . . Kicking off December 6, Facebook had given all of its over 1.01 billion users around the globe one full week to vote on the changes it has proposed to its key “governing documents,” the Statement of Rights and Responsibilities and Data Use Policy, which spell out what type of user data Facebook can collect and what Facebook may do with it.”

Regarding corporate democracy (and its cousin, shareholder franchise): sounds nice, too bad people don’t act like they want it.
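For what it’s worth, the turnout arithmetic in the quote checks out; the 0.067 figure appears to have been computed against an even billion users:

```python
# Check the turnout figure quoted above
votes = 668_872
users = 1_010_000_000            # "over 1.01 billion" per the quote

turnout_pct = 100 * votes / users
# About 0.066% of users voted; against an even 1 billion users the
# same count rounds to the article's 0.067%
```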


When Is It OK to Be “Descriptive”?

I presented a taxonomy of federal litigation today to a terrific audience at Rutgers-Camden. As I’ve covered in exhausting detail, the paper sets out to describe how lawyers organize causes of action together into complaints.  It uses a method called spectral clustering to illustrate the networks of legal theories that typically are pled together.  (It does some more stuff, but that’s the gist.)  As often happens when presenting this particular paper, it was pointed out to me that the project lacks a clearly defined normative “so what”.  This is basically correct. The “so what” of the paper is “this is a different, more finely grained way to see how attorneys think and produce cases. With pretty pictures. How do you like them apples?”
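For the curious, here is a toy version of the idea: build a co-pleading graph over causes of action and split it along the Fiedler vector of the graph Laplacian, the simplest form of spectral partitioning. The claim names and counts below are invented, and our paper’s actual pipeline surely differs in detail:

```python
import numpy as np

# Hypothetical co-pleading counts among five causes of action
# (how often each pair appears together in a complaint) -- made up.
claims = ["fraud", "breach of contract", "unjust enrichment",
          "negligence", "product liability"]
A = np.array([
    [0, 8, 7, 1, 0],
    [8, 0, 9, 0, 1],
    [7, 9, 0, 1, 0],
    [1, 0, 1, 0, 9],
    [0, 1, 0, 9, 0],
], dtype=float)

# Unnormalized graph Laplacian L = D - A
D = np.diag(A.sum(axis=1))
L = D - A

# The Fiedler vector (eigenvector of the second-smallest eigenvalue)
# encodes the graph's natural two-way split
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# Sign of the Fiedler vector assigns each claim to a cluster
cluster = (fiedler > 0).astype(int)
```

On this toy graph the contract-flavored claims separate cleanly from the tort-flavored ones, which is the kind of structure the “pretty pictures” display.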

As I said, I tend to get the so-what objection quite often when presenting this paper, and it’s pushed my co-authors and me to make the paper clearer about the implications of the method. At the same time, it has made me even more aware of the bias in legal writing toward papers that do more than taxonomize, or describe. This is a well-known problem with the legal academy.  True, taxonomies can be highly successful – Solove’s Taxonomy article is just one recent hit in a long parade of exceptionally good papers that basically try out different ways to organize legal concepts.  But those papers generally pitch the contribution of taxonomies as systems to harmonize doctrine, or because they illustrate something about the world that needs fixing, or because they uncover a missing category that is novel and interesting.

What’s less common is work that is no more than descriptive – this is what the world looks like; this is what happened – and doesn’t go on to fix or recommend a single thing.  Often such work is derided as mere reportage, a practitioner’s piece, or (worse) an uninteresting collection of facts, put together without a synthesis of why we should care.  (Actually, some papers are attacked on all three grounds.)  But other times, descriptive work is seen universally to be immensely important and valuable, even if it doesn’t advance any prescriptive agenda. Some of the middle-period Law and Society papers have this feel, though of course L&S generally is quite ideological.

You may be wondering: what’s the so-what of this post?  Here it comes:

- What is your sense of the appropriate criteria for deciding that purely descriptive scholarship makes a contribution?

- Relatedly, if you were advising a first-time scholar, would you advise against writing a paper that is missing a policy solution in Part IV?

My answer to the first question is that schools and faculties vary widely, and consequently I’d say the risk-averse response to the second question is very, very clear.  Discuss.


Empirical Studies Workshop

Intrigued by the goings on at CELS VII?  Join the revolution.  Andrew Martin asked me to post the following:

Title: Conducting Empirical Legal Scholarship Workshop, May 22-24, 2013

On Wednesday, May 22, 2013 through Friday, May 24, 2013, Lee Epstein and Andrew Martin will be teaching their annual Conducting Empirical Legal Scholarship workshop.  This workshop will be held in Los Angeles, and is co-sponsored by USC Gould School of Law and Washington University Law. There is more information available about the workshop here:

The Conducting Empirical Legal Scholarship workshop is for law school and social science faculty interested in learning about empirical research.  The instructors provide the formal training necessary to design, conduct, and assess empirical studies, and to use statistical software (Stata) to analyze and manage data. Participants need no background or knowledge of statistics to enroll in the workshop.  Topics to be covered include research design, sampling, measurement, descriptive statistics, inferential statistics, and linear regression.


CELS VII: Low Variance, High Significance

[CELS VII, held November 9-10, 2012 at Stanford, was a smashing success due in no small part to the work of chief organizer Dan Ho, as well as Dawn Chutkow (of SELS and Cornell) and Stanford’s organizing committee.  For previous installments in the CELS recap series, see CELS III, IV, V, and VI. For those few readers of this post who are data-skeptics and don’t want to read a play-by-play, resistance is obviously futile and you might as well give up. I hear that TV execs were at CELS scouting for a statistics-geek reality show, so think of this as a taste of what’s coming.]

Survey Research isn't just for the 1%!

Unlike last year, I got to the conference early and even went to a methods panel. Skipping the intimidating “Spatial Statistics and the GIS” and the ominous “Bureau of Justice Statistics” panels, I sat in on “Internet Surveys” with Douglas Rivers, of Stanford/Hoover and YouGov. To give you a sense of the stakes, half of the people in the room regularly use mTurk to run cheap e-surveys. The other half regularly write nasty comments in JELS reviewer forms about using mTurk.  (Oddly, I’m in both categories, which would’ve created a funny weighting problem if I were asked my views.) The panel was devoted to the proposition “Internet surveys are much, much more accurate than you thought, and if you don’t believe me, check out some algebraic proof.  And the election.”  Two contrasting data points. First, as Rivers pointed out, all survey subjects are volunteers, and thus it’s a bit tough to distinguish internet convenience samples from some oddball scooped up by Gallup’s 9% survey response rate.  Second, and less comfortingly, 10-15% of the adult population has a reading disability that makes self-administration of a survey prompt online more than a bit dicey.  I say: as long as the disability isn’t biasing with respect to contract psychology or cultural cognition, let’s survey on the cheap!

Lunch next. Good note for presenters: avoid small pieces of spinach/swiss chard if you are about to present. No one will tell you that you’ve got spinach on a front tooth.  Not even people who are otherwise willing to inform you that your slides are too brightly colored. Speaking of which, the next panel I attended was Civil Justice I. Christy and I presented Clusters are Amazing. We tag-teamed, with me taking 9 minutes to present 5 slides and her taking 9 minutes to present the remaining 16 or so.  That was just as well: no one really wanted to know how our work might apply more broadly anyway. We got through it just fine, although I still can’t figure out an intuitive way to describe spectral clustering. What about “magic black box” isn’t working for you?



A Grouchy Post About the Election

I’m on record as basically hating blogging by law professors about politics, never more so than when the election is near. Obviously, given the state of commentary on the more popular law professor blogs of late, too few agree with me about how unenlightening most political blogging by professors is.   Well, it takes all kinds!  And there’s always Orin Kerr, writing about actual cases, to read.

But here’s something we can all agree on, I would hope. Law professors have no business telling students whom to vote for.  I wonder what percentage of the academy has already violated, or will violate, this simple rule in the next two days?  My bet: over 25%, and the age distribution would be illuminating. Some additional percentage have probably told their students that as lawyers-in-training they have an extra obligation to participate in the “civic duty” of voting. This, in my mind, is nearly as bad, since it is usually motivated by some implicit sense that the targets of the message are going to vote the way you want them to.

Whew. Glad I got that off my chest!


At CELS 2012

I’m really looking forward to next week’s 7th Annual Conference on Empirical Legal Studies, to be held at Stanford.  Here’s the preliminary program.  As usual, I’ll blog the conference after the fact.  If there are particular papers you want to make sure I get to and highlight, drop me a line.  As a taste, here’s a line from an abstract that made me very curious about the presentation to follow: “Our overall estimates suggest that pornography caused between 10 and 25 percent of all divorces in the United States in the sixties and seventies.”  Caused?!  That must be some kicker of an instrumental variable.


Is Contract Law Really Pragmatic?

I’ll begin by joining the others who’ve written in already to praise Larry’s excellent Contracts in the Real World.  It is highly accessible, entertaining, and offers a ream of examples to make concrete some abstract and hard doctrinal problems. Larry has the gift of making complex problems seem simple – much more valuable and rare than the common academic approach of transforming hard questions into other hard questions! This would be an ideal present to a pre-law student, or even to an anxious 1L who wants a book that will connect the cases they are reading, like Lucy, Baby M, or Peevyhouse, to problems that their peers are chatting about on Facebook.

Larry’s typical approach is to introduce a salient modern contract dispute, and then show how the problem it raises was anticipated or resolved in a famous contract case or cases.  Larry often states that contract “law” steers a path between extremes, finding a pragmatic solution. This approach has the virtue of illustrating the immediate utility of precedent for guiding the resolution of current disputes, and comforts those who might believe that courts are always political actors in the (caricatured) Bush v. Gore or Roberts/Health Care Cases sense. It has the vice of de-emphasizing state-by-state differences in how contract law works, as well as the dynamic effects of judicial decisions on future contracts. But I think that for its intended audience, these vices can be easily swallowed.

I wanted to offer one question to provoke discussion: is it actually true that politics is as removed from contract law as Larry’s narrative appears to suggest, and how would we know?  The contracts law professor listserve is full of laments about judges’ turn away from Traynor & his perceived progressive contract doctrines – and I certainly know of colleagues who teach that there are “liberal” and “conservative” versions of the parol evidence rule, for instance. But what does this actually mean, and how does it connect with the scholarship on judicial politics generally?  As it turns out, this question has been understudied, probably because political scientists have yet to find a way to carefully operationalize what a “liberal” or a “conservative” outcome in a contracts case would be, and thus to usefully regress case outcomes against a judge’s political priors.  Many authors (Sunstein et al. 2004; Christy Boyd and I, 2010) have found ideological effects outside of the typical con law regime (particularly in “business law” areas).  But I’m aware of only a few empirical papers analyzing the political valence of how contract doctrine comes to be. (Snyder et al. n.d.)  Some have suggested that contract law is a particularly hard area to study because selection effects loom so large. I would also note that most contract law “work” occurs at the state court level, where ideological measures are either explicit or very obscure.
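To make the operationalization problem concrete: if outcomes could be coded as, say, “pro-enforcement” (1) or not (0), the standard move would be a logistic regression of outcomes on a judge-ideology score. A sketch on wholly synthetic data (every number invented; this is the generic judicial-politics design, not any particular paper’s model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: judge ideology score (negative = liberal) and a
# hypothesized ruling probability that depends on it.
n = 500
ideology = rng.normal(0, 1, n)
true_p = 1 / (1 + np.exp(-(0.2 + 0.8 * ideology)))
ruling = rng.binomial(1, true_p)          # 1 = "pro-enforcement" outcome

# Fit logistic regression by Newton's method (no external packages)
X = np.column_stack([np.ones(n), ideology])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))      # predicted probabilities
    grad = X.T @ (ruling - mu)            # score
    hess = X.T @ (X * (mu * (1 - mu))[:, None])  # Fisher information
    beta += np.linalg.solve(hess, grad)

# beta[1] recovers (approximately) the positive ideology effect
```

The hard part, as the post notes, isn’t the regression; it’s defending the 0/1 coding of “conservative outcome” in a contracts case.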

If we found good measures, my own hypothesis would be that a particular judge’s worldview matters a great deal to how he or she resolves contract disputes – with priors about how much a person should be responsible for their own choices, and their perspective on market discipline, shaping how they understand the facts and thus apply the law.  Contract cases are powerfully controlled by judges – probably more so than in other areas of private law. Contract doctrine would reflect these individual choices, and we’d thus be left not with one “pragmatic” contract law, but rather many competing strands. I’d thus close by urging readers of Larry’s book to think a bit about the cases not picked out and illuminated in the narrative – where the judges are less wise and more human.


The Increased Cost of Distance Education

For uninteresting reasons, I just read Indiana University’s Strategic Plan for Online Education.  Here’s a fact I didn’t know, and haven’t seen well-advertised in the blog discussion on the cost-transformative effects of distance learning:

IU (and the remainder of higher education) needs to educate policy makers and the public that online education generally is more, not less, expensive than on‐campus education at both undergraduate and graduate levels. The biggest reason for this is that a universal experience is that equivalent quality online education requires greater individual student attention than on‐campus education at all levels. Units deal with this either by decreasing class sizes, increasing the credit given to faculty teaching online in calculating their teaching load, or providing additional instructional assistants; all of these increase cost per student.

Additional factors that increase the cost of online instruction are the technological infrastructure needed to support it, the need to support student access 24/7, and the greater costs to develop and maintain course materials. The main factor that generally is cited for a decreased cost of online instruction relative to on‐campus is that it doesn’t require classroom space. This is valid; a careful computation by Associate Vice President Steve Keucher calculates this savings at $8.68 per credit hour, or roughly $26 per three credit course. While significant, this savings is not enough to offset the additional costs of online education, such as class sizes that often are 20‐35% smaller.

As pointed out by IU Vice President and Chief Financial Officer Neil Theobald, an important factor in pricing online education is pricing by peers in this market. As shown by the pricing summary for other universities in Appendix B, this pricing offers some guidance but is highly variable.

This seems to pose a challenge to those who would say that distance learning will drive costs out of higher education, no?
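IU’s argument is easy to see with a back-of-envelope calculation. The staffing numbers below are invented for illustration; only the $8.68-per-credit-hour figure comes from the quoted plan:

```python
# Back-of-envelope version of IU's argument. Staffing figures are
# hypothetical; only the $8.68/credit-hour savings is IU's estimate.
instructor_cost = 10_000.0     # assumed cost to staff one section
on_campus_size = 40            # assumed on-campus section size
online_size = 30               # 25% smaller, within the quoted 20-35% range

space_savings = 8.68 * 3       # IU's classroom savings, three-credit course

per_student_on_campus = instructor_cost / on_campus_size
per_student_online = instructor_cost / online_size - space_savings
# The ~$26 classroom savings is swamped by the smaller-class cost
```

Under these (assumed) numbers, online instruction costs roughly $57 more per student per course even after crediting the classroom savings, which is exactly the plan’s point.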