The Utility of the Washington and Lee Rankings

15 Responses

  1. David Bernstein says:

    I’d go with a combination of US News ranking, academic reputation of the law school that publishes the law review, my general sense of whether the law review in question is better or worse than the other two factors would suggest, and a glance at the W&L rankings. No way I’d take Virginia over Harvard, or Houston over Georgetown, but I have taken offers from law reviews at lower-ranked schools over higher ones, when my sense was that the review in question was “better.” But the younger generation seems completely fixated on US News, and that will eventually become destiny for the law reviews.

  2. k says:

    I think your instinct is right. I’m a recent graduate who just placed an article in a specialty journal and, when deciding among multiple offers, I went with the “fancier law school name.” I made that choice in order to maximize my opportunities, as it were. I imagine most hopeful academics would make the same choice. In a decade, assuming I’ve moved into academia and established myself, chances are good that I would not make the same choice—I anticipate that my concern then would be engaging in a dialogue with peers.

  3. Orin Kerr says:

    It’s a prestige game, not a citation game. In light of that, I am always puzzled by reliance on the W&L ranking. The one catch is that if junior people are following W&L, over time that may become a self-fulfilling prophecy.

  4. My sense is that the place-name-prestige heuristic is generally justified, but that it breaks down in two ways. First, among specialty journals in a given field, citation counts can matter more than they do among general law reviews. The relevant community of readers is more familiar with the actual journals. If I say “Santa Clara High Tech,” for example, people who do Internet law will nod their heads in a way they wouldn’t if I just said “Santa Clara.”

    Second, some journals really outperform their U.S. News rankings in the W&L index, and people’s sense of which journals these are can be surprisingly good. North Carolina, for example, does better in the W&L index by something like a dozen places, and when I’m flipping through the weekly list of new journal issues received by our library, the North Carolina page is definitely one I’ve learned from experience to read more closely, since it often lists articles worth a look.

  5. Edward Swaine says:

    I have not looked into the discrepancy between the W&L index and the USNWR, and hope I never will, but here’s a theory: W&L’s authors did not accept offers based on their forecast of the future USNWR rank, which they couldn’t know, but rather based on their general impression of the USNWR rank (or perhaps the academic reputation, or perhaps the average USNWR ranking across a limited preceding period) at the time they placed their articles. So we will see discrepancies emerge as schools change rankings, with the W&L ranking remaining more stable.

    I have no idea whether this is true. But it might explain why North Carolina, which I think has dipped a little over time (but has always been well esteemed among academics), would do better in the W&L index: it’s a lag effect. It might explain why Wash U, which I think has moved in the opposite direction, does less well in that index relative to its current USNWR rank. It would also imply, if true, that there is in placement decisions no independent evaluation of a law review’s quality apart from the school’s standing (again, perhaps with more emphasis on ranking among academics). And it might cause one to worry that the rate of citations to a particular piece reinforced that tendency, such that articles get used by academics at a rate dictated more by USNWR rank than by article quality — unless one thinks that the placement market indirectly tests quality, in which case peer review is sent reeling!

    The specialty journal issue is a very different one.

  6. h says:

    The specialty journal issue is of course different, though I suspect they also track the USNWR school rankings. I’d think the real difficulty lies in weighing a specialty journal at a high-ranked school against the main law review at a lower-ranked one; how would you choose between, say, Harvard Int’l and Iowa? When does the name outweigh the main/secondary journal distinction?

  7. dwk says:

    This is a great blog topic. I am wondering, if the USN&WR rankings are the measure of prestige, how much should an author care about small differences in rank? So if I have publication offers from the general-readership, non-specialty law reviews at schools that are, for example, ranked 43 and 48, should I regard those as very different or fairly similar in terms of prestige?

  8. Daniel says:

    When you look at a law review citation, how much does the age of the publication matter? In other words, if you see that an article is in volume two of a specialty journal, does that significantly decrease your respect for the journal? Or would you still rather have 3 Georgetown Random Specialty Journal 35 than 84 Tier 4 School L. Rev. 386?

  9. Edward Swaine says:

    As to the subsequent questions, I have come into possession of a water-stained but UL-certified parchment providing a decision tree for those about to go on the market, or who are untenured:

    1. Choose based on the perceived quality of the journal among the group who will hire or promote you (easiest proxy: probably the average USNWR peer rank over the preceding five-plus years, with a penalty of clear but uncertain magnitude for “mere” specialized journals that varies among fields, and divergent views about the degree of favorable bump for unaffiliated peer-reviewed journals).

    2. If #1 does not produce a pronounced difference, choose based on perceived quality of the journal among those you hope will read your work, cite it, and recommend you — and who may in some cases screen you for hiring (proxy: same as above, but much less substantial penalty and perhaps positive sign for specialty journals, again depending on field).

    3. If #2 remains close or indeterminate, consider things affecting your particular article: e.g., variety as against your previously published work, willingness to bargain on length, whether the journal is way behind, whether the slot meets any eccentric needs you may have, and the quality/expertise/zeal of the editorial team promised.

    4. If #3 doesn’t help, consider everything else, including volume number (avoid 13, favor 100), in-journal status (lead article, essay, etc.), hue and appearance of the journal, and quality of reprints (glossy, cool emblem involving tree or shield or hologram, funny title anagrams).

    If everyone else (at least) ignores this, it might make for a better world in which all articles are judged solely on their individual merit. So burn the parchment after reading.

  10. Top-20 Articles Editor says:

    Based on my experience in selecting articles, citation count is a better metric of journal quality than the “prestige” of the school.

    Why? Law reviews are student-run institutions. So the quality of a law review does not necessarily track the law faculty, the acceptance rate, the library, the size of the endowment, etc. In other words, traditional measurements of a school’s rank are largely irrelevant to the quality of the journal.

    Citation count, by contrast, is directly relevant in measuring the quality of the journal. High-cite pieces are, ceteris paribus, more influential than low-cite pieces. A high cite count suggests that AEs (and the journal as an institution) have good judgment in selecting pieces that are provocative and substantively valuable. And a good selection of articles generally suggests that the law review will work harder and do a better job for professors. Many journals just sift through resumes or play the expedite game; others actually read every piece that comes through the door.

    Now imagine a journal whose W&L rank outperforms the school’s US News rank. (Say, Ohio State). What can we say about this journal? For one, the journal’s cite-count does not piggyback off its school’s prestige. People cite to OSU pieces because they are valuable, not because OSU is a top-20 institution. Indeed, we can say OSU L. J. selects provocative and valuable pieces IN SPITE OF THE FACT that professors are biased against journals from lower-ranked US News schools.

  11. Sigh. Rankings. Of course rankings matter the most, and far more than they should, but other considerations seem like good tiebreakers when the various rankings conflict.

    Such as: whether you keep the copyright or at least whether you can have generous license rights under the contract; how soon the journal will publish the piece; who else has published there recently; whether the student editor seemed agreeable, knowledgeable, enthusiastic; deadlines & editing schedule; etc; etc; etc…

  12. end the madness! says:

    Oy vey. When will we all wake up from this rankings nightmare? Let’s remember that we are talking about student-edited law journals, and submission cycles replete with games such as trading up and exploding offers on absurd deadlines. Let’s further remember that it does not matter one bit, in any intellectual sense, which of them your article runs in, especially in the age of SSRN and Westlaw.

    To be sure, some people think placements are important. They are snobs, and ultimately idiots too. We all know that they worship the false god of prestige. Yet somehow we think that because they think that placements are important, we should worry about placements, and thus placements become important.

    End the madness. Right now. Stop doing posts like this one. Maybe we can go even further. Can we all make a pact to submit widely and then accept the first journal that makes an offer and seems semi-competent at removing typos? Imagine the fun of this conversation: “I’m sorry, Harvard, you’re too slow this year. I’ve already taken an offer from Penn State.” What do you say?

  13. Edward Swaine says:

    end the madness!,

    I am with you in spirit, but no one involved is an idiot. Briefly . . .

    1. Those asking have a legitimate interest in understanding the system. Where and how long they work is at stake, regardless of whether it should be. Of course one need not respond.

    2. It isn’t the case that student editors can only aspire to be semi-competent at removing typos. A great many exercise good judgment in declining potential articles (and accepting them), and most offer excellent substantive suggestions once they have read into the materials. But we need not assume that all are equally serious or capable; here I think rank is a poor proxy, helpful only in the absence of any other information. I encourage obtaining other information.

    3. It is hardest to defend those using rank of a publication as a tool to evaluate someone else’s work, and I won’t try here. But I certainly understand the thinking behind using rank as *one* basis for deciding what to read, given the virtues (and vices) of the expedite and shopping system.

    4. As to your proposal, you’ll naturally be creating a race that will completely gut any attempt at evaluating pieces — probably resulting in instant acceptances based on letterhead. If the whole system collapses, perhaps you will have engineered a blow against snobbery, but if not you may worsen it, and waste a lot of editorial time and paper . . . for the sake of a joke whose humor exploits and reinforces snobbery. I’d suggest submitting to one randomly selected law review at a time, or, if you truly think there’s no value added, exploring other avenues for publishing.

  14. end the madness! says:

    Thanks, Ed. I wrote a bit too hot-headedly and thus opened myself up to misinterpretation. I was not trying to suggest that anyone who cares about law review rankings is an idiot. (Indeed, a call from, well, one of your successors at the YLJ is a recurrent dream of mine; why does the alarm clock have to go off before publication?) And you are of course right that many, many law review editors work hard and add value to an article, far beyond just removing typos. Others add errors, and you can’t predict which type your law review will be solely from rankings. Moreover, you are right that we have to care about placements, because it might well matter in our careers.

    What I was trying to say is that valuing placements for hiring or promotion purposes — *that* is idiocy. For goodness’ sake, read the work, or consult experts in the field, rather than relying on 2nd- and 3rd-year law students! I’ve heard, as I’m sure we all have, praise for candidates from appointments folks solely on the basis of placements. Surely it is sad when we resign ourselves to participating in a system because many people use it as a substitute for thought and independent evaluation.

    I am open to better proposals. So let’s keep them coming.

  15. Skeptic says:

    O.k. I realize I’m very late to this discussion string and might not get any readership, but just in case it’s still alive, here goes:

    I’m using the combined General and Specialty journals search and including all available years and factors:

    The Wash. & Lee poll is sometimes valid but often plain nonsense.

    Here are examples:

    Harv. & Yale are # 1 & 1; so far so good, but wait, no it’s not. It’s much more difficult for Yale to get the number of citations that Harvard does b/c Harv. puts out a yearly Supreme Court issue. That means that practitioners and courts are far more likely to cite to Harv. than to Yale, skewing the results in favor of Harv.

    I’m going to skip over a bunch b/c who knows whether the articles in a 5 yr period are better in Virginia or Stanford or California or Cornell. I certainly don’t know which is better, and I’m going to guess that neither do you unless you’ve read them all and have truly objective criteria.

    Let’s look a bit further:

    DePaul Law Review is ranked above the following: Georgia L. Rev., University of Cincinnati Law Review, Yale Journal on Regulation, Chicago-Kent Law Review, Alabama, Hofstra, Tulane, etc.

    Does this really make sense to anybody?

    Is the Wash. Law Rev. actually worse than Villanova and Oregon?

    And by the time I get to George Washington at #91, I realize I’m wasting my time.

    All right, all right, a little more: Univ. of Miami comes in at #123, way below Lewis and Clark; Law and Social Inquiry is #125 while Akron L. Rev. is #83; and here’s a doozy: Constitutional Commentary (one of the two premier peer-reviewed con law journals, in my opinion) comes in at #174.

    The big point of this is that Wash. & Lee skews toward journals that publish case comments and more articles, giving ego-boosting, but hardly useful, information.

    As for using the status of schools, that would require first believing the US News poll is accurate. But beyond that, I’d rather publish in Hofstra, Cinn., Cardozo, Chi-Kent, and other journals than SMU, Utah, Baylor, or Tenn., even though U.S. News rates the former group lower than the latter.

    A final note: there is no comparable rating system for university presses, but somehow academics have a good sense of which presses are better; moreover, they can assess the quality of the book on its merits.

    Happy Trails!