Category: Law School (Rankings)

Boston College Moves Up in Jurist Ranks

National Jurist has recalculated its law school rankings under pressure from critics who stressed the dubious reliability of the “Rate My Professor” component. Critics had objected to many other flaws in the methodology as well, some comparing it to the widely ridiculed approach taken by the Thomas Cooley Law School, under which that school turns out to be the second-best law school in the country overall, edged out only by Harvard. Among the most vociferous critics of both systems, as well as of pretty much every other system but his own, is the ubiquitous Brian Leiter, professor at the University of Chicago Law School.

Notably, Chicago was among a group of schools where the “Rate My Professor” data had manifest and profound flaws, such as counting professors who do not teach at the school.  About eight schools moved up in the rankings, including my esteemed former employer, Boston College. In correcting itself, National Jurist’s headline beamed “Best Law Schools Updated, Corrected: U. Chicago Jumps Into Top 5.”

If that were meant as a cynical ploy to silence Prof. Leiter, however, the plan has backfired: he continues to opine that National Jurist should scrap its entire methodology and start over, and he suggests hiring consultants to help with the task.  If they do, I would encourage the editors to avoid retaining any present or former law professor, as they all naturally have tendencies akin to those behind the Cooley study. Go Eagles!

Law School Rankings

One of the most common complaints that you hear from law professors and deans is that the U.S. News and World Report rankings exert too much influence over legal education.  If given a choice between doing something to boost its ranking or doing something to help students, the incentives for a school are heavily weighted towards boosting the ranking.  This is true because rankings are widely publicized and provide a simple way for prospective students, alumni, and other interested constituencies to evaluate law school performance.

If people were confident about how the rankings were done, then that influence might be acceptable.  But most faculty do not think that the methodology used by U.S. News is sound.  I’ve noted before that it gives no weight to student or faculty diversity, and Malcolm Gladwell wrote an essay observing that the rankings do not take cost-effectiveness into account (which is especially strange in this era).  Granted, coming up with a standard that everyone would agree upon is impossible, but we can do better.

What is to be done?  The answer to monopoly is competition.  We need other organizations to conduct law school rankings. This would give people more information, especially if the alternatives explicitly take factors into account (e.g., cost) that are absent from the U.S. News rankings. It would also diminish the power of any single organization or person over law schools, and make gaming the ranking system far more difficult.

No single school can be trusted to do this for conflict-of-interest reasons, but there are plenty of other candidates.  The ABA and the AALS are two obvious ones, assuming that no other commercial outfit wants to compete with U.S. News.  Or, dare I say it, a consortium of law blogs could organize and then disseminate these rankings for free.  It’s time to stop whining about U.S. News and start doing something to give schools better incentives to improve legal education.

Our Bar . . . is . . . an asylum for the lame, and the halt, and the blind from the law schools of this country. And they are still coming.

This guy has seen the same debate so many times it broke his back

Sorry for the blogging hiatus.  I’ve been writing.  I’m sorry also to have missed the latest NYT attack on legal education – in the form of a misleading hatchet job on NYLS.  The article – one of a shoddy series by David Segal – struck an academic nerve already made sensitive by Chief Justice Roberts’ dismissal of legal scholarship.

Of course, arguments about law school’s worth and scholarship’s consequence are evergreen – they drive blogging traffic and comments, and they promise to motivate engagement between blogs by practicing lawyers and the academy.  But quite often, unfortunately, these discussions go nowhere.

On law professor blogs, there’s a tone of tetchy defensiveness: “the market tells us that we’re worthwhile – just look at the continuing number of lemmings pounding at the gate!”, or “of course our scholarship is consequential, let’s count the citations”; or, “no one ever promised that a JD was a job guarantee!”; or, “what’s their BATLS?” [The last is a truly obscure negotiation joke if there ever was one.]

For reporters, it feels like the scene in The Wire when the newsroom is deciding what to cover in the coming year. Sure, you could talk about complexity and globalization and economic markets and the changing nature of legal practice.  Or you might talk about the relationship between ABA regulation, thoughtless paternalism, and resulting distributional inequalities in education.  But that’s a set of sprawling stories – lacking an obvious villain to muckrake.  Instead, the news blames the Dickensian aspect of law schools.  Reporters write articles that stir the pot but aren’t recognizable to insiders, making them less likely to actually motivate change.

Last but not least, the practicing lawyers often articulate resentment toward ivory-tower academics who ignore the realities of “trench lawyering.” (This happens even when the “academics” in question are actually practicing lawyers.)  Basically: impractical law professors versus practical lawyers.

Why does this “debate” feel so tired?  I have a partial hypothesis: because we ignore history. I had a great research assistant, Alex Radus, collect quotes about the ferment over legal education in the 1930s and 1940s.  (That ferment is highlighted in Prosser’s famous 1948 speech to Temple’s law faculty, “Lighthouse No Good.”)  After the jump, you’ll see some fantastic quotes from that era and before, which remind us that “what has been will be again / what has been done will be done again / there is nothing new under the sun.”

Read More

Protean Rankings in the Economy of Prestige

Paul Caron brings news of the ranking system from Thomas M. Cooley School of Law, which pegs itself at #2, between Harvard and Georgetown. Caron calls it “the most extreme example of the phenomenon we observed [in 2004]: in every alternative ranking of law schools, the ranker’s school ranks higher than it does under U.S. News.” I just wanted to note a few other problems with such systems, apart from what I’ve discussed in earlier blog posts and articles on search engine rankings.

Legendary computer scientist Brian W. Kernighan (co-author of the classic textbook on the C programming language) wrote a delightful editorial on rankings last fall:

In the 1980s, statisticians at Bell Laboratories studied the data from the 1985 “Places Rated Almanac,” which ranked 329 American cities on how desirable they were as places to live. (This book is still published every couple of years.) My colleagues at Bell Labs tried to assess the data objectively. To summarize a lot of first-rate statistical analysis and exposition in a few sentences, what they showed was that if one combines flaky data with arbitrary weights, it’s possible to come up with pretty much any order you like. They were able, by juggling the weights on the nine attributes of the original data, to move any one of 134 cities to first position, and (separately) to move any one of 150 cities to the bottom. Depending on the weights, 59 cities could rank either first or last! [emphasis added]

To illustrate the problem in a local setting, suppose that US News rated universities only on alumni giving rate, which today is just one of their criteria. Princeton is miles ahead on this measure and would always rank first. If instead the single criterion were SAT score, we’d be down in the list, well behind MIT and California Institute of Technology. . . . I often ask students in COS 109: Computers in Our World to explore the malleability of rankings. With factors and weights loosely based on US News data that ranks Princeton first, their task is to adjust the weights to push Princeton down as far as possible, while simultaneously raising Harvard up as much as they can.
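Kernighan’s point is easy to reproduce.  Below is a minimal sketch, in Python, of the weight-juggling he describes; the schools, attribute scores, and weight grid are entirely invented for illustration, but even on this tiny dataset each of the four schools can be pushed into first place by an appropriate choice of weights.

```python
# Toy illustration (not the Bell Labs analysis): with made-up attribute
# scores and freely chosen weights, almost any school can be ranked #1.
import itertools

# Hypothetical scores (higher is better) for four invented schools.
scores = {
    "Alpha": {"reputation": 9, "cost": 3, "jobs": 6},
    "Beta":  {"reputation": 6, "cost": 9, "jobs": 5},
    "Gamma": {"reputation": 5, "cost": 6, "jobs": 9},
    "Delta": {"reputation": 7, "cost": 7, "jobs": 7},
}
attrs = ["reputation", "cost", "jobs"]

def rank(weights):
    """Order the schools by weighted score, best first."""
    def total(school):
        return sum(w * scores[school][a] for w, a in zip(weights, attrs))
    return sorted(scores, key=total, reverse=True)

# Sweep coarse weight combinations and record which schools can be made #1.
possible_firsts = set()
for w in itertools.product(range(11), repeat=len(attrs)):
    if sum(w) > 0:
        possible_firsts.add(rank(w)[0])

print(possible_firsts)   # every school shows up as "best" under some weights
print(rank((10, 1, 1)))  # reputation-heavy weighting
print(rank((1, 10, 1)))  # cost-heavy weighting
```

Nothing about the underlying data changes between runs; only the weights do, which is exactly the malleability Kernighan asks his students to explore.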

Read More

The Numbers are REALLY In–Plus Two Modest Proposals

For those of you who had any doubts, our friends at Kaplan have just confirmed it:  Aspiring law students care more about law school rankings than anything else, including the prospect of getting a job, the quality of the program, or geography.

Sayeth Kaplan:

1,383 aspiring lawyers who took the October LSAT . . . [were] asked “What is most important to you when picking a law school to apply to?” According to the results, 30% say that a law school’s ranking is the most critical factor, followed by geographic location at 24%; academic programming at 19%; and affordability at 12%. Only 8% of respondents consider a law school’s job placement statistics to be the most important factor. In a related question asking, “How important a factor is a law school’s ranking in determining where you will apply?” 86% say ranking is “very important” or “somewhat important” in their application decision-making.

Mystal at ATL expresses shock – shock! – that potential law students could be so naive. Surely, he fairly observes, they should care most about job prospects.

Yes, that would be true if they were rational.  Yet we all know from the behavioral literature that we apply a heavy discount rate to distant prospects.  How much can I or should I care today about what may happen 3 (or 4) years from today?
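To make the discounting point concrete, here is a back-of-the-envelope sketch; the payoff units and discount rates are invented for illustration, and the simple exponential discounting used here likely understates the behavioral story, which typically involves even steeper, hyperbolic discounting.

```python
# Back-of-the-envelope sketch: how much a payoff 3-4 years away is "worth"
# today under increasingly heavy discount rates. Figures are illustrative only.
def present_value(payoff, annual_rate, years):
    """Simple exponential discounting of a future payoff."""
    return payoff / (1 + annual_rate) ** years

job_payoff = 100.0  # arbitrary units of "how much I care about the job outcome"
for rate in (0.10, 0.30, 0.50):
    pv3 = present_value(job_payoff, rate, 3)
    pv4 = present_value(job_payoff, rate, 4)
    print(f"discount rate {rate:.0%}: 3 years out -> {pv3:.1f}, 4 years out -> {pv4:.1f}")
```

At a 50% personal discount rate, a payoff four years away is worth roughly a fifth of its face value today, while the ranking payoff – bragging rights at admission – is immediate.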

If you think about it from the perspective of any law school applicant today, the one concrete thing they can lock onto that has present value is the school’s ranking:  It is simple, quantified, and – perhaps most important – tauntable.  No one’s face burns with shame because their enemy (or friend) got into a law school with a better job placement rate.  Jealousy and envy – the daily diet of anxious first-years – are driven by much simpler signals:  Is mine bigger (higher) than yours?

This is not to defend the students who place so much faith in numbers that have repeatedly been shown to be deeply flawed.  It just means that Kaplan’s survey (and I have not seen the instrument or data) makes intuitive sense.

Which leads me to offer two modest (and probably unoriginal) proposals:

Read More

Assessment Assessment

Academics – driven by their accrediting agencies – have a new buzzword.  We are all now charged with thinking about assessment.  How well are we doing at the goals we set out for ourselves?  How do we know?  How do we know if our processes of assessment are appropriate?  As an academic (non-legal) blogger observed:

Going beyond the reasonable notion that you should periodically take a deeper look at what you’re doing, pedagogical reformers of many sorts get convert zeal and treat assessment as a moral imperative.  But, when a religion has enough zealous adherents, it might suddenly become mainstream.  And when it goes mainstream, it goes from being pure to being mass market lowest common denominator oversaturation.  The word “assessment” is no longer just confined to careful examinations of how well something is working.  It isn’t even just applied to a bureaucratic ritual of report-writing focused on the curriculum.  It’s applied to every piece of paper, every report, every bit of data, any and every piece of bureaucracy and hoop-jumping and report-generating.  The odds are good that a time sheet will soon be marked “Hours assessment” and an account statement will be marked “Fiscal assessment.”

This proselytizing ideal has obviously caught on in the ABA’s self-study process, which requires not just a strategic plan and a strategic planning process, but also that the school show that it regularly evaluates its self-assessment and thinks about whether the school’s goals are good ones.  Schools which fail to have a process, plan, and plan assessment will be disapproved until they come to their senses.

It’s no small irony – nor, I’m sure, am I the first to note it – that there is no evidence at all that schools which regularly engage in planned reflection produce better outcomes for students or for society than schools that muddle through with less formal techniques.  I’m not even sure that it is possible to design an experimental study that would make the case for assessment, given external-validity concerns.  The case against self-reflection is pretty simple: deciding what academics ought to maximize is a hard problem, and any answer arrived at by any group of people will necessarily be too vague to provide hooks for truly useful tactical choices, especially when the time spent planning uses up productive resources.  Indeed, it’s possible that designing ever-more-particular assessment metrics (and plans for achieving those metrics) encourages us to set ever-narrower goals, which are then, comfortably, met.

All in all, I’d give the current assessment trend a 23.3 on an A to ∂ point scale, where our goal is to hit a ß.

Congrats to PA Bar Passers!

Pennsylvania’s Bar results came out last week.  Congratulations to all passers, and especially to my graduating students who are now licensed lawyers.  The statewide pass rate for first-time takers was 84.68 percent.  The rates for the Pennsylvania area law schools were:

Penn                92.86%

Temple            92.34%

Villanova        89.87%

Widener (H)   87.67%

Pitt                  86.93%

Duquesne       86.47%

Penn State      83.64%

Widener (D)   82.61%

Drexel             81.32%

Rutgers (C)    77.85%

Because failing the bar can be economically devastating, bar passage is a very, very important marker of a law school’s success – certainly more so than SSRN download rankings!!   Being above the state’s average is a big deal, and worth celebrating.  Temple had a problem on this score about a decade ago, and we made serious efforts to help our students be better prepared to enter practice.  I’m glad to see that those efforts are bearing fruit.

Incidentally, the combined rates (July/Feb) are consistent – though Penn falls a bit – and I’ve posted that list after the jump:

Read More

The Top Law Reviews (Eigenfactor)

The latest way to measure scholarly influence is the Eigenfactor, a family of citation-network metrics that rank journals by how often they are cited, and by whom.  The linked web site lets users generate top lists under assorted measures, including the top law reviews ranked by article influence, as proxied by their citation histories.
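For the curious, here is a minimal sketch of the eigenvector-centrality idea that underlies Eigenfactor-style measures: a journal counts as influential when it is cited by journals that are themselves influential.  The journal names and citation matrix below are invented, and the real algorithm adds refinements (such as excluding self-citations) that this toy version omits.

```python
# Toy sketch of eigenvector centrality on an invented citation network.
# A journal's influence is the stationary share of "votes" it receives when
# each citing journal distributes its votes in proportion to its citations.
import numpy as np

journals = ["Review A", "Review B", "Review C", "Review D"]

# cites[i, j] = times journal j cites journal i (all numbers invented).
cites = np.array([
    [0, 30, 20, 10],
    [25, 0, 15, 5],
    [10, 10, 0, 20],
    [5,  5, 10, 0],
], dtype=float)

# Column-normalize so each citing journal hands out one unit of influence.
M = cites / cites.sum(axis=0, keepdims=True)

# Power iteration: redistribute influence until the scores stabilize.
influence = np.full(len(journals), 1.0 / len(journals))
for _ in range(100):
    influence = M @ influence
    influence /= influence.sum()

for name, score in sorted(zip(journals, influence), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Run on real citation data, with those refinements added back in, this kind of calculation is what produces lists like the one that follows.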

According to this measure, the following are the top 25 student-edited general-interest law reviews published in the United States.   The list looks congruent with my sense of the generally accepted understanding among law faculty of law review standings.  At first it may make one wonder whether tools like this are useful because they verify what we already know, or useless because they don’t tell us anything new.   But, on second thought, people new to this profession may neither know nor want to ask.  Read More

The Influence of Law Blogs (2006-Present)

I asked my wonderful research assistant, Robert Blumberg (TLS ’12), to update the Yospe/Best study on court citation of blogs and the Best 2006 study on law review citation of blogs.  He used as a dataset the 2009 legal educator blog census (which we are currently updating – see future posts for details), excluded some general sites that merely happen to have a law professor as a rare contributor (the Huffington Post), and ran searches in Westlaw’s JLR database.  Since 2006, under those conditions, law blogs have been cited in the journals 5,883 times (revised from an initial count of 5,460).  Here are the top twenty sites since 2006.  Total citations are in (parentheses), 2006 rank in [brackets]:

  1. FindLaw’s Writ (618)
  2. The Volokh Conspiracy (402) [2]
  3. SCOTUSBlog (305) [4]
  4. Balkinization (259) [3]
  5. Patently-O: Patent Law Blog (211) [8]
  6. Concurring Opinions (162)
  7. Sentencing Law and Policy (160) [1]
  8. JURIST – Paper Chase (130)
  9. PrawfsBlawg (122)
  10. The Becker-Posner Blog (104) [10]
  11. Conglomerate (102)
  12. White Collar Crime Prof Blog (89) [12]
  13. Election Law @ Moritz (85)
  14. Legal Theory Blog (85) [5]
  15. The University of Chicago Law School Faculty Blog (76)
  16. Technology & Marketing Law Blog (74)
  17. Lessig Blog (73) [6]
  18. The Harvard Law School Forum on Corporate Governance and Financial Regulation (72)
  19. Ideoblog (72)
  20. Election Law Blog (69)

Overall, the top 20 represented around 63% of all citations over the four-year study period.  In 2006, the top 20 represented 76% of 852 citations.  In 2007, the top 20 represented 68% of 1095 citations.  In 2008, the top 20 represented 61% of 1388 citations.  In 2009, the top 20 represented 63% of 1441 citations.  Finally, in 2010 (so far) the top 20 has represented 65% of 562 citations.  It is difficult to make out any clear trend lines in the data.  Even taking into account the lag time of publication for 2009 and 2010 volumes, the rate of citation to law blogs is not increasing. There is a very mild trend toward diffusion of influence, although the top blogs still appear to drive the conversation, even as the number of professors blogging has increased.  In the aggregate, the top few blogs would each (if considered to be individual scholars) be worthies on Leiter’s citation lists.

Read More

Author Order in Law Reviews

Other disciplines don’t kid around about the ordering of authors in publications.  In political science and economics, alphabetical or reverse-alphabetical ordering is the dominant approach, even though it distorts hiring decisions.  In the sciences, the first and last author slots matter – woe to the middle men!  Harvard is so concerned about the trend that it instructs its faculty to “specify in their manuscript a description of the contributions of each author and how they have assigned the order in which they are listed so that readers can interpret their roles correctly [and] prepare a concise, written description of how order of authorship was decided.”

Law, by contrast, lacks a tradition of co-authorship, and there appears to be at best a weak norm that the first author is the primary contributor. That results in a set of interrelated problems:

1)  To law audiences, the first author is presumed to have done the most work, and is rewarded in two ways.  The first is qualitative, and pops up at tenure, promotion, and lateral review – “he was the driver on that piece,” or “she was just the second author.”  Quantitatively, the Bluebook foolishly permits multi-author works to be et al.’d, meaning that the second through nth authors never get to see their names in the citation in print.  Given the rudimentary nature of citation-impact analysis in the legal academy, this means that people who are listed first get the citations and the people who aren’t don’t. This might be less troublesome if the “first author” norm were correct – that is, if first authors in law reviews actually did more work. But my bet is that, given letterhead bias, many co-authored pieces list as the first author the most prominent author (or at least the author at the best-ranked school).  The upshot: first authors in law reviews are rewarded for being first in both qualitative and quantitative terms, though it’s not clear they ought to be.

2)  To other disciplines, this is fundamentally screwy and is another reason not to publish in a law review.  But interdisciplinary co-authored work published outside of the law reviews also becomes that much more difficult as a result.  If a law professor and a non-law professor were to publish in an economics journal, my sense is that the norm is to alphabetize. [Correct me if I’m wrong here.]  Non-legal audiences look at this and understand that it doesn’t signify relative contribution.  Law audiences don’t have that filter on, and the result (again) is that the second author is punished, here for having a last name at the back of the alphabet.

3)  Making sense of this mess requires coordination, which is quite hard because we lack a learned society that is sufficiently respected to impose change from above.  We do have, however, a few very strong journals that have had remarkable success in changing otherwise intractable scholarly pathologies like article bloat.  If the Harvard Law Review could – almost singlehandedly – impose a 25,000-word limit, surely it could fix this problem too.  In my view, the top few journals (HYS) ought to, as a part of their blue-booking project, agree to impose something like the Harvard faculty author-order guidelines on folks who are publishing joint projects in their pages. The default ought to be reverse-alphabetical listing.  Each article should state the respective contributions of the authors and, to the extent that they have deviated from the alphabet, why.  Finally, HYS ought to reform the Bluebook to insist that the first citation of any work include the names of all contributors to the piece, rather than permitting et al. treatment.