

Should Empirical Legal Scholars Have Special Responsibilities?

Before delving into the substance of my first post, I wanted to thank the crew at Concurring Opinions for inviting me to guest blog this month.

Recently, I have been thinking about whether empirical legal scholars have, or should have, special ethical responsibilities. Why special responsibilities? Two basic reasons. First, nearly all law reviews lack formal peer review, which potentially permits dubious data to be reported without differentiation alongside quality data. Second, empirical legal scholarship can be extremely influential in policy debates because it provides “data” to substantiate or refute claims. Unfortunately, many consumers of empirical legal scholarship — including other legal scholars, practitioners, judges, the media, and policy makers — are not sophisticated in empirical methods. Even more importantly, subsequent citations of empirical findings by legal scholars rarely take care to explain a study’s qualifications and limitations. Instead, they often amplify the “findings” of the empirical study by over-generalizing its results.

My present concern is with weak data. By weak data, I don’t mean data that is flat-out incorrect (such as from widespread coding errors) or analysis that misuses empirical methods (such as when a model’s assumptions are not met). Others have previously discussed issues relating to incorrect data and analysis in empirical legal studies. Rather, I am referring to reporting data that encourages weak or flawed inferences, that is not statistically significant, or that is of extremely limited value and thus may be misused. The precise question I have been considering is under what circumstances one should report weak data, even with an appropriate explanation of the methodology used and its potential limitations. (A different yet related question for another discussion is whether one should report lots of data without telling the reader which data the researcher views as most relevant. This scattershot approach raises many of the same concerns as weak data.)



Does Blind Review See Race?*

In a comment to my earlier post suggesting that law review editors should seek out work from underrepresented demographic groups, my co-blogger Dave Hoffman asked an excellent question: Would blind review remedy these concerns? It seems to me that the answer here is complicated. Blind review would probably be an improvement on balance, but could still suffer from — err, blind spots. Here are a few reasons why.

The paradigmatic case for the merits of blind review comes from a well-known study of musician hiring, published about a decade ago by Claudia Goldin and Cecilia Rouse in the American Economic Review. Goldin and Rouse gathered data on symphony auditions and found that blind auditions — that is, auditions that concealed the gender of the auditioning musician — significantly increased the proportion of women who auditioned successfully. As Rouse commented,

“This country’s top symphony orchestras have long been alleged to discriminate against women, and others, in hiring. Our research suggests both that there has been differential treatment of women and that blind auditions go a long way towards resolving the problem.”

The Goldin-Rouse study shows that blind review can be a useful tool in combating bias. Would a similar review system work in the law review context?

Well, maybe.


In Defense of Law Review Affirmative Action

As you may have seen, the new Scholastica submission service allows law reviews to collect demographic information from authors. A flurry of blog posts has recently cropped up in response (including some in this space); as far as I can tell, they range from negative to negative to kinda-maybe-negative to negative to still negative. The most positive post I’ve seen comes from Michelle Meyer at the Faculty Lounge, who asks whether Scholastica’s norms are like symposium selection norms and, in the process, implies that Scholastica’s model might be okay. Michael Mannheimer at Prawfs also offers a sort of lukewarm defense, suggesting that editors were probably doing this anyway.

But is it really the case that law review affirmative action would be a bad thing?


Articles Editors Dos and Don’ts

I promised one more post before saying goodbye. I’ve spent most of my time here giving my best articles-editor advice to professors looking to submit their articles, and I hope that was helpful. But let’s be real: there are plenty of problems on the law review side that need to be addressed as well. Some of the complaints folks have about law review editors are unfair — either because they don’t take into account important information from the student side of things, or because they place on students obligations that really ought to rest with professors (most notably: if you want a peer-review system in which third-year law students aren’t reviewing your pieces, you are entirely free to start your own journals and submit to them exclusively; it is not the students’ responsibility to voluntarily cede power). But there are plenty of things — perfectly reasonable things — law reviews could do much better.

I’m assuming a “classic” law review model — student-edited, not blind, no peer review. Obviously, those are important potential areas of reform, but they’re beyond the scope of the advice I’m giving here.



The Answers to the Ultimate Questions

Thanks to everyone for posting such interesting questions. I’ll give my best stab at answering them below (I’m going to give my best truncated summary of each question); if you have follow-ups or didn’t get a chance to ask, I’ll be happy to continue the conversation in the comments (though since I am moving apartments tomorrow, I may only be able to hop onto this site sporadically).
