Author: Dave Hoffman


Affirmative Action for Law Scholarship

There are several issues in this unfolding story about Scholastica, law review submissions, and “diversity” preferences. Let’s break them out.

  1. I’m shocked!  Shocked!:  Over at Prawfs, Professor Mannheimer and various anonymous commentators think that Orin, Josh and I are naive.  Everyone, it seems, knows that law reviews routinely take race, gender and sexual orientation into account when choosing between articles.  Indeed, Josh got an email from a former editor at the California Law Review saying that the practice “is nothing new and not exactly a secret.” Well, shucks. I guess I’m the sucker here.  Even if this had crossed my mind, I would have naively thought that law faculties would never permit law student boards to make decisions about articles based on race, gender and sexual orientation without clearly thinking through whether such practices were legal, and without setting forth an explicit and public set of guidelines vetted by the university counsel’s office. Honestly, the idea that California, NYU, Boston College, and other law reviews are thinking about my sexual orientation when they go forward with a “board review” is so unbelievably offensive that I’m still having some trouble wrapping my head around it.  So, yup, I’m shocked.
  2. But everyone else is doing it: On the Prawfs thread, several anonymous commentators stated that diversity preferences (however defined) are no worse than preferences that boards already express for (or against) elite-school letterhead.  There are two points to make here in response. First, the best law journals already engage in blind review, and using letterhead as a proxy for quality is antiquated and embarrassing. It’s not a defense of a bad practice that another bad practice exists.  Second, though it’s not well thought out and should be abolished, at least the intuition behind letterhead bias is rationally related to what I thought the law review’s end was: to select the best piece of scholarship. But what’s the intuition behind picking people, not papers?  That law review placement is a “good” owned by the law review that wise and benevolent boards should redistribute in the ways that seem best to them?
  3. Scholastica’s just an enabler: I can’t quite figure these folks out.  They commented yesterday that they were just giving law reviews what they wanted. But then some editors wrote me to say that they didn’t want this widget – and that they only clicked on it because it was so easy to do. Indeed, Iowa appears to have de-clicked the widget yesterday in response to this thread.  In the best possible light, it seems to me that Scholastica’s developers are simply importing other disciplines’ norms and preferences into the law without thinking carefully about why you might want different tools for faculty editors than for unsupervised student boards. But maybe that’s not the light to see Scholastica in. As I wrote yesterday, their high price, preference for a different kind of scholarship, and exclusivity campaign might suggest that, far from being merely a “platform”, they are hoping to use digital architecture to change law review behavior. I’d love to hear more from them about what their goals for legal scholarship were, and are going forward.
  4. Until such questions are answered, my view is that of a commentator from yesterday: vote with your feet. Don’t use Scholastica unless the journal absolutely insists, as very, very few do.  Consider also sending emails to the faculty advisors of journals that are exclusive to Scholastica to ask them if they are on board with this potentially radical, and radically troubling, shift in law review standards and selection processes.

Scholastica & Law Review Selection

As several commentators noted (most in private emails, because they are afraid of negative consequences in the submission market), a very disturbing aspect of Scholastica’s new submission process is that it appears to facilitate and encourage law reviews to use sexual orientation, race, and gender in selection decisions.  Josh Blackman has investigated and written a very useful follow-up post, which I hope you all will read.

My own view is that whatever the merits of law reviews giving “plus” points to authors at less prestigious schools,* providing plus points on account of race, gender, and sexual orientation is a terrible, terrible practice, especially if the plus points are awarded in an opaque manner by a largely unsupervised student board at an instrumentality of the state. Scholastica appears to take the position that it’s just giving journals what they want here.  Would it feel the same way if journals were planning to use sexual orientation and race as negative factors?  (Which, from a certain perspective, is exactly what they may be planning on doing.)

Mike Madison, writing on this topic in the fall, suggested that Scholastica is leading the charge toward a privatization of legal scholarship, with all of the associated pathologies (lack of transparency, etc.). That sounds right.  Why, again, are faculty at schools like California (Berkeley), NYU, Iowa, and USC on board with this development?

 

*This too is a bad idea, but that’s a topic for a separate post.


Against Scholastica

Like many of you, I’ve an article out in the Spring submission season. (More on that in a separate post later.) Let the agonizing begin! Seriously, where’s the thread?

This year, in addition to ExpressO, email, website submission, Redyip, and printed copies, we’ve a new way to deliver our articles to their ultimate masters: Scholastica. You may have learned about Scholastica when your favorite law review wrote to inform you that it was exclusively taking submissions through that system, or when your associate dean told you that the institution would prefer not to pay more per submission than ExpressO charges for a substantially similar service.

Here are some key things you might not know:

  1. As far as I can tell, only two of the top fifty journals – NYU and Iowa – are exclusive to Scholastica. “Exclusive” for other journals appears to mean “we’d prefer.”
  2. Scholastica is very hostile to the current way that legal scholarship is selected — they push double-blind peer review and don’t much like student editing. This isn’t surprising, because as far as I can tell, none of the developers went to law school, served on a law review, or writes for legal audiences. They are, respectively, a sociology graduate student, a former historian, and a political scientist. There are many things one could say in defense of our current multiple-submission, student-selection system. None appear on the Scholastica page.
  3. Scholastica asks for your sexual orientation and other demographic information (including a free-form field for “additional comments that demonstrate diversity”) and then provides that information to each journal that requests it. Apparently the theory is that journals will want to take identity politics into account when making selection decisions. [For more, see Blackman’s post on this topic, which I hadn’t seen before writing this.]
  4. Did I mention that Scholastica is more expensive than ExpressO and infinitely more expensive than emailing the journal directly?

I think Scholastica might be a good deal for journals – it takes care of publishing problems, and it will significantly reduce the flow of submissions. I can also see why graduate students from other disciplines would find our tiny corner of the world to be odd.  But I don’t see why anyone would ever submit through their system unless absolutely forced to, especially when they appear determined to import some unattractive aspects of other disciplines into legal academic publishing, which is already quite ugly.

What I don’t particularly understand is why faculty of the institutions running law reviews which are now exclusive to Scholastica are permitting this radical turn, which almost certainly will result in more concentration of prestige publication in the hands of prestige authors (who have the money to pay for multiple submissions at $5.00 each).  Er.  Reading that sentence again, I guess I understand after all.

That all said, Scholastica, please don’t lose my submission to NYU! I’ve never even gotten a rejection from those folks – maybe this year you can gin one up?


Pick up the Phone!

From Redstone Federal Credit Union’s credit card agreement:

“Collection. If your Account should become past due, or otherwise in default, you will accept telephone calls from us regarding collection of your Account. You understand that the calls may be automatically dialed and a recorded message may be played. You agree that such calls shall not be “unsolicited” calls for the purpose of state or federal law.”

Translation: screening us is breach of contract!


The Good Life and Gun Control

Like many of you, I’ve been horrified by the events in Newtown, and dismayed by the debate that has followed.  Josh Marshall (at TPM) thinks that “this is quickly veering from the merely stupid to a pretty ugly kind of victim-blaming.”  Naive realism, meet thy kettle!  Contrary to what you’ll see on various liberal outlets, the NRA didn’t cause Adam Lanza to kill innocent children and adults, nor did Alan Gura or the army of academics who helped to build the case for an individual right to gun ownership.  Reading discussions on the web, you might come to believe that we don’t all share the goal of a society where the moral order is preserved, and where our children can be put on the bus to school without a qualm.

But we do.

We just disagree about how to make it happen.

Dan Kahan’s post on the relationship between “the gun debate”, “gun deaths”, and Newtown is thus very timely.  Dan argues that if we really wanted to decrease gun deaths, we should try legalizing drugs.  (I’d argue, following Bill Stuntz, that we should also, or instead, hire many more police while returning much more power to local control.)  But decreasing gun deaths overall probably won’t change the likelihood of events like these:

“But here’s another thing to note: these very sad incidents “represent only a sliver of America’s overall gun violence.” Those who are appropriately interested in reducing gun homicides generally and who are (also appropriately) making this tragedy the occasion to discuss how we as a society can and must do more to make our citizens safe, and who are, in the course of making their arguments, invoking (appropriately!) the overall gun homicide rate, should be focusing on what can be done most directly and feasibly to save the most lives.

Repealing drug laws would do more — much, much, much more — than banning assault rifles (a measure I would agree is quite appropriate); barring carrying of concealed handguns in public (I’d vote for that in my state, if after hearing from people who felt differently from me, I could give an account of my position that fairly meets their points and doesn’t trade on tacit hostility toward or mere incomprehension of whatever contribution owning a gun makes to their experience of a meaningful free life); closing the “gun show” loophole; extending waiting periods, etc.  Or at least there is evidence for believing that, and we are entitled to make policy on the best understanding we can form of how the world works, so long as we are open to new evidence and aren’t otherwise interfering with liberties that we ought, in a liberal society, to respect.”

Dan’s post is trying to productively redirect our public debate, and I wanted to use this platform to bring more attention to his point.  But, I think he’s missing something, and if you follow me after the jump, I’ll tell you what.


Unrepresentative Turkers?

Like many others, I’ve been using Amazon Mechanical Turk to recruit subjects for law & psychology experiments.  Turk is (i) cheap; (ii) fast; (iii) easy to use; and (iv) not controlled by the psychology department’s guardians.  Better yet, the literature to date has found that Turkers are more representative of the general population than you’d expect — and certainly better than college undergrads! Unfortunately, this post at the Monkey Cage provides a data point in the contrary direction:

“On Election Day, we asked 565 Amazon Mechanical Turk (MTurk) workers to take a brief survey on vote choice, ideology and demographics.  . . . We compare MTurk workers on Election Day to actual election results and exit polling.  The survey paid $0.05 and had seven questions:  gender, age, education, income, state of residence, vote choice, and ideology.  Overall, 73% of these MTurk workers voted for Obama, 15% for Romney, and 12% for “Other.”  This is skewed in expected ways, matching the stereotypical image of online IT workers as liberal—or possibly libertarian, since 12% voted for a third party in 2012, compared to 1.6 percent of all voters. . .  In sum, the MTurk sample is younger, more male, poorer, and more highly educated than Americans generally.  This matches the image of who you might think would be online doing computer tasks for a small amount of money…”

Food for thought.  What’s strange is that every sample of Turkers I’ve dealt with is older & more female than the general population.  Might it be that Turk workers who responded to a survey on election habits aren’t like the Turk population at large?  Probably so, but that doesn’t make me copacetic.
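
If you suspect your own Turk samples skew (older and more female, in my case), a quick first pass is a goodness-of-fit check of sample demographics against population benchmarks. Here is a minimal sketch in Python; the counts and benchmark shares are invented placeholders, not real census or MTurk figures:

```python
from scipy.stats import chisquare

# Hypothetical age distribution of a Turk sample (counts) and hypothetical
# population benchmark shares -- placeholders, not real census or MTurk data.
sample_counts = {"18-29": 240, "30-44": 180, "45-64": 110, "65+": 40}
benchmark_shares = {"18-29": 0.22, "30-44": 0.25, "45-64": 0.34, "65+": 0.19}

n = sum(sample_counts.values())
observed = [sample_counts[k] for k in benchmark_shares]
expected = [benchmark_shares[k] * n for k in benchmark_shares]

# Chi-square goodness-of-fit: does the sample's age mix match the benchmark?
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p:.4g}")
```

A small p-value only says the sample’s mix departs from the benchmark; whether that skew actually biases your particular experiment is the harder question.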


The Problem With Voting About Corporate Policies

The problems of corporate democracy are well illustrated by this embarrassing showing:

“The largest experiment yet in direct voting ended with a whimper on Monday, when Facebook closed its user polls on its new proposed terms of service, with what looked to be just 668,872 of Facebook’s 1.01 billion global users having even cast a vote, or just 0.067 percent (sixty-seven thousandths of a percent) . . . Kicking off December 6, Facebook had given all of its over 1.01 billion users around the globe one full week to vote on the changes it has proposed to its key “governing documents,” the Statement of Rights and Responsibilities and Data Use Policy, which spell out what type of user data Facebook can collect and what Facebook may do with it.”
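
The arithmetic checks out: 668,872 votes out of roughly 1.01 billion users is about 0.066 percent, or roughly one voter per 1,500 users. A one-line sanity check:

```python
# Votes cast as a share of all users (figures from the quoted report).
votes, users = 668_872, 1_010_000_000
print(f"{votes / users:.3%}")  # -> 0.066%
```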

Regarding corporate democracy (and its cousin, shareholder franchise): sounds nice, too bad people don’t act like they want it.


When Is It OK to Be “Descriptive”?

I presented a taxonomy of federal litigation today to a terrific audience at Rutgers-Camden. As I’ve covered in exhausting detail, the paper sets out to describe how lawyers organize causes of action together into complaints.  It uses a method called spectral clustering to illustrate the networks of legal theories that typically are pled together.  (It does some more stuff, but that’s the gist.)  As often happens when presenting this particular paper, it was pointed out to me that the project lacks a clearly defined normative “so what”.  This is basically correct. The “so what” of the paper is “this is a different, more finely grained way to see how attorneys think and produce cases. With pretty pictures. How do you like them apples?”
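
For readers who haven’t met the method, here is its flavor in a short Python sketch. This is a generic illustration of spectral clustering over a co-occurrence network of causes of action, not the paper’s actual pipeline; the complaints, cause labels, and cluster count are all invented for the example:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy complaint-by-cause matrix: rows are complaints, columns are causes of
# action; 1 means that cause was pled in that complaint. Invented data.
causes = ["1983", "Title VII", "ADA", "breach of K", "fraud", "UCC"]
X = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [1, 0, 0, 0, 1, 0],  # a hybrid complaint linking the two groups
])

# Affinity between two causes = number of complaints pleading both.
co = (X.T @ X).astype(float)
np.fill_diagonal(co, 0)

# Cluster the co-occurrence network; densely linked causes land together.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(co)
for cause, label in zip(causes, labels):
    print(f"cluster {label}: {cause}")
```

The intuition: causes of action that are frequently pled together form densely connected neighborhoods in the network, and the spectral embedding of the affinity matrix pulls those neighborhoods apart even when no single feature does.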

As I said, I tend to get the so-what objection quite often when presenting this paper, and it’s pushed my co-authors and me to make the paper clearer about the implications of the method. At the same time, it has made me even more aware of the bias in legal writing toward papers that do more than taxonomize, or describe. This is a well-known problem with the legal academy.  True, taxonomies can be highly successful – Solove’s Taxonomy article is just one recent hit in a long parade of exceptionally good papers that basically try out different ways to organize legal concepts.  But those papers generally pitch the contribution of taxonomies as systems to harmonize doctrine, or as illustrations of something about the world that needs fixing, or as uncovering a missing category that is novel and interesting.

What’s less common is work that is no more than descriptive – this is what the world looks like; this is what happened – and doesn’t go on to fix or recommend a single thing.  Often such work is derided as mere reportage, a practitioner’s piece, or (worse) an uninteresting collection of facts put together without a synthesis of why we should care.  (Actually, some papers are attacked on all three grounds.)  But other times, descriptive work is universally seen as immensely important and valuable, even if it doesn’t advance any prescriptive agenda. Some of the middle-period Law and Society papers have this feel, though of course L&S generally is quite ideological.

You may be wondering: what’s the so-what of this post?  Here it comes:

-what is your sense of the appropriate criteria for deciding that purely descriptive scholarship makes a contribution?

-relatedly, if you were advising a first-time scholar, would you advise against writing a paper that is missing a policy solution in Part IV?  

My answer to the first question is that schools and faculties vary widely, and consequently I’d say the risk-averse response to the second question is very, very clear.  Discuss.


Empirical Studies Workshop

Intrigued by the goings-on at CELS VII?  Join the revolution.  Andrew Martin asked me to post the following:

Title: Conducting Empirical Legal Scholarship Workshop, May 22-24, 2013

On Wednesday, May 22, 2013 through Friday, May 24, 2013, Lee Epstein and Andrew Martin will be teaching their annual Conducting Empirical Legal Scholarship workshop.  This workshop will be held in Los Angeles, and is co-sponsored by USC Gould School of Law and Washington University Law. There is more information available about the workshop here:

http://law.usc.edu/EmpiricalWorkshop

The Conducting Empirical Legal Scholarship workshop is for law school and social science faculty interested in learning about empirical research.  The instructors provide the formal training necessary to design, conduct, and assess empirical studies, and to use statistical software (Stata) to analyze and manage data. Participants need no background or knowledge of statistics to enroll in the workshop.  Topics to be covered include research design, sampling, measurement, descriptive statistics, inferential statistics, and linear regression.


CELS VII: Low Variance, High Significance

[CELS VII, held November 9-10, 2012 at Stanford, was a smashing success, due in no small part to the work of chief organizer Dan Ho, as well as Dawn Chutkow (of SELS and Cornell) and Stanford’s organizing committee.  For previous installments in the CELS recap series, see CELS III, IV, V, and VI. For those few readers of this post who are data-skeptics and don’t want to read a play-by-play, resistance is obviously futile and you might as well give up. I hear that TV execs were at CELS scouting for a statistics-geek reality show, so think of this as a taste of what’s coming.]

Survey Research isn't just for the 1%!

Unlike last year, I got to the conference early and even went to a methods panel. Skipping the intimidating “Spatial Statistics and the GIS” and the ominous “Bureau of Justice Statistics” panels, I sat in on “Internet Surveys” with Douglas Rivers, of Stanford/Hoover and YouGov. To give you a sense of the stakes, half of the people in the room regularly use mTurk to run cheap e-surveys. The other half regularly write nasty comments in JELS reviewer forms about using mTurk.  (Oddly, I’m in both categories, which would’ve created a funny weighting problem if I were asked my views.) The panel was devoted to the proposition “Internet surveys are much, much more accurate than you thought, and if you don’t believe me, check out some algebraic proof.  And the election.”  Two contrasting data points. First, as Rivers pointed out, all survey subjects are volunteers, so it’s a bit tough to distinguish internet convenience samples from the oddballs scooped up by Gallup’s 9% survey response rate.  Second, and less comfortingly, 10-15% of the adult population has a reading disability that makes self-administration of a survey prompt online more than a bit dicey.  I say: as long as the disability isn’t biasing with respect to contract psychology or cultural cognition, let’s survey on the cheap!

Lunch next. Good note for presenters: avoid small pieces of spinach/swiss chard if you are about to present. No one will tell you that you’ve spinach on a front tooth.  Not even people who are otherwise willing to inform you that your slides are too brightly colored. Speaking of which, the next panel I attended was Civil Justice I. Christy and I presented Clusters are Amazing. We tag-teamed, with me taking 9 minutes to present 5 slides and her taking 9 minutes to present the remaining 16 or so.  That was just as well: no one really wanted to know how our work might apply more broadly anyway. We got through it just fine, although I still can’t figure out an intuitive way to describe spectral clustering. What about “magic black box” isn’t working for you?
