Law School Ranking: Measurement vs. Characterization
posted by Frank Pasquale
I have long been concerned about negative externalities from ranking systems. Perhaps people and institutions will always be prone to try to distinguish themselves. If so, Brian Leiter’s expert consultation for the Maclean’s rankings of Canadian law schools may be a good thing, since, as he states, it resulted in “a ranking system that can not be gamed, that does not depend on self-reported data, and is not an indecipherable stew of a dozen different ingredients.”
However, Benjamin Alarie at Toronto has critiqued Leiter’s efforts. Here are a few issues:
One of the central problems with how Faculty quality is measured is that it doesn’t assess influence in publications aside from 33 Canadian law journals. As an initial matter, I think it is fair to say that academics seek to publish in places with the most active audiences for particular types of research—for example, the best journals to publish law and economics research in are likely to be American peer reviewed journals such as the Journal of Legal Studies, or the American Law and Economics Review, or even professional economics journals.
[I]t is unlikely that frequency of citation is a perfect proxy for quality; for example, overly provocative papers are sometimes cited for being so provocative.
It is unclear what threshold was used for including firms among the “elite firms” used by Maclean’s.
[A national reach] measure . . . misses “international reach” of the law schools that regularly place students in the excluded top New York and Boston firms, in international NGOs, and in various other attractive positions.
[By the way, I put the links into those block quotes above.]
I think these are all valid points, but the problem is even larger. The consumers of these rankings are, by and large, students looking for a good education and firms looking for well-trained lawyers. Why, then, so much focus on whether the law schools are feeders for “top” firms? Perhaps the best law teaching is that which manages to train people for diverse careers in law.
Finally, a more philosophical point.
I fear that a misguided quest for objectivity can lead us to present many judgments that ought to be characterizations as, instead, measurements or rankings. Leiter deserves great credit for not trying to mash an incommensurable array of statistics into a single figure. But I still think true accuracy in law school assessment might be better found in words, not numbers: in a long-form assessment of each school’s service to its students, to the academy, and to its community. Perhaps such an evaluation would take a few hundred pages to cover all of Canada’s law schools. But if someone about to embark on a career in law is not willing to try to make sense of such a document, perhaps they should not be a lawyer in the first place.
Obviously such long-form documents would not achieve the concision of a ranking system. As someone who has decried information overload, I see rankings’ appeal. But I also recall a favorite saying of Hilary Putnam’s on approaches to epistemology: “any theory that fits in a nutshell belongs there.” Perhaps the same can be said of rankings. There are some virtues to “knowing” less.