Archive for the ‘Law School (Rankings)’ Category
posted by Lawrence Cunningham
Good news for law professors now submitting articles seeking offers from high-status journals: status in American law schools is over-rated, and its importance is about to be reduced. At least that is the urging of an American Bar Association Task Force Working Paper released last Friday addressing contemporary challenges in U.S. legal education.
Obsession with status is a culprit in the woes of today’s American law schools and faculty, the Working Paper finds. It charges law professors with pitching in to redress those woes by working to reduce the role of status as a measure of personal and institutional success. The group’s only other specific recommendation for law faculty is to become informed about the topics the 34-page Working Paper chronicles so we might help out as needed by our schools.
Much of the rest of the Working Paper is admirable, however, making the two specific recommendations to law faculty not only patently absurd but strange in context. After all, the Working Paper urges reform of ABA/AALS and state regulations with a view toward increasing the variety of law schools. It calls for serious changes in the way legal education is funded, though it admits that the complex system of education finance in the U.S. is deeply and broadly problematic and well beyond the influence of a single professional task force.
The Task Force urges US News to stop counting expenditure levels as a positive factor in its rankings. It stresses problems arising from a cost-based rather than market-based method of setting tuition. It notes a lack of business mind-sets among many in legal education. It questions the prevailing structure of professorial tenure; degree of scholarship orientation; professors having institutional leadership roles; and, yes, faculty culture that makes status an important measure of individual and institutional success.
But amid all that, law professors have just two tasks: becoming informed and demoting status. So there must be some hidden meaning to this idea of status as a culprit and the prescription for prawfs to reduce the importance of status as a measure of success. I am not sure what it is. The Working Paper does not explain or illustrate the concept of status or how to reduce its importance.
I’ll try to be concrete about what it might mean. Given the other problems the Task Force sees with today’s law faculty culture (tenure, scholarship and leadership roles), I guess they are suggesting that faculty stop making it important whether: Read the rest of this post »
August 7, 2013 at 6:56 am Tags: ABA Task Force on the Future of Legal Education Posted in: Law School, Law School (Hiring & Laterals), Law School (Law Reviews), Law School (Rankings), Law School (Scholarship), Law School (Teaching)
posted by Dave Hoffman
Recently the ABA announced that it will no longer collect expenditures data from law schools: Leiter and Merritt offer thoughts on how that decision will influence the USWR rankings. Both posts are interesting, though somewhat impressionistic. Leiter thinks that state schools will benefit and Yale will lose its #1 spot; Merritt believes that USWR should reconfigure its method. [Update: Bodie adds his two cents.]
It’s well known that the influence of particular categories of data on the ranking can’t be determined simply by reading the charts that the magazine provides. Paul Caron notes that the rankings depend on inputs that aren’t displayed (like expenditures). But it gets worse: (1) the point accumulation of each school influences that of every other school; (2) USWR changes the raw data through manipulations that are not well explained (placement discounts for law school funded positions) or are simply obscure (CoL adjustments for expenditures); (3) many schools don’t report information and USWR doesn’t advertise their missing-data imputation method; etc. etc. Bottom line: the rankings are very, very fragile. (Many would say they are meaningless except at 10,000 feet.) Luckily, Ted Seto’s work enables everyone to give their best shot to approximating each year’s ranking. Seto argues that variance within a category turns out to influence the final scores as much as the purported weight that USWR assigns to it.
As a thought experiment, I decided to estimate what would happen if each school’s expenditure data was set to the average school’s expenditure. I then used Seto’s method on 2011-2012 historic data to estimate the rankings in the absence of expenditure variance. This basically eliminates the influence of expenditure as a category. (A perhaps better, but more time-consuming, approach would be to eliminate the expenditure categories altogether and re-jigger the equation accordingly.) My back-of-the-napkin approach produces some wacky results, particularly at the lower end of the ranking scale. To keep it simple, after the jump I’ll focus on the top ten winners and losers from the elimination of expenditure variance in the 2013 t100 and then offer some thoughts.
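To make the mechanics concrete, here is a minimal sketch of that back-of-the-napkin approach in Python. Everything in it is fabricated – the school names, the numbers, and the two-factor weighting – since USWR’s actual inputs and weights aren’t fully replicable; the point is only to show how setting every school’s expenditure to the mean removes that category’s pull on the final ordering.

```python
import statistics

# (school, expenditure per student, composite of all other factors)
# -- every number here is invented for illustration
schools = [
    ("School A", 120_000, 79.5),
    ("School B", 60_000, 80.0),
    ("School C", 45_000, 82.0),
    ("School D", 90_000, 70.0),
]

EXP_WEIGHT, OTHER_WEIGHT = 0.15, 0.85  # hypothetical weights, not USWR's

def zscores(values):
    mean, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mean) / sd if sd else 0.0 for v in values]

def rank(schools, neutralize_expenditure=False):
    exps = [e for _, e, _ in schools]
    if neutralize_expenditure:
        exps = [statistics.mean(exps)] * len(exps)  # variance gone
    ez = zscores(exps)
    oz = zscores([o for _, _, o in schools])
    scored = [(name, EXP_WEIGHT * e + OTHER_WEIGHT * o)
              for (name, _, _), e, o in zip(schools, ez, oz)]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# School A outranks School B only because of its spending edge;
# neutralize expenditure variance and the two swap places.
print(rank(schools))
print(rank(schools, neutralize_expenditure=True))
```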
posted by Dave Hoffman
Inspired by this 2007 Taxprof post, I decided to compare the 2013 US News undergrad ranking to the 2013 overall law school rank. This project was a bit more complicated than it was six years ago, due both to scandal & to the proliferation of regional rankings. But, ignoring schools that aren’t present on both lists, the results are illuminating. For figures, follow me after the jump.
posted by Dave Hoffman
While academics angst, law journal editors toil to manage the fire hose of submissions, real and fake expedites, and the uncertainty that comes with a new job. Many journal editors now seem to have the goal of “improving their ranking”. Seven years ago (!) I wrote some advice on that topic. It seems mostly right as far as it goes, but I want to revise and extend those comments below, in letter form.
posted by Lawrence Cunningham
National Jurist has recalculated its law school rankings under pressure from critics who stressed the dubious reliability of the “Rate My Professor” component. Critics had objected to many other flaws in the methodology as well, some comparing it to the widely ridiculed approach taken by the Thomas Cooley Law School, in which that school turns out to be the second best law school in the country overall, edged out only by Harvard Law School. Among the most vociferous critics of both systems, as well as of pretty much every other system but his own, is the ubiquitous Brian Leiter, professor at U. Chicago Law School.
Notably, Chicago was among a group of schools where “Rate My Professor” had manifest and profound flaws, such as counting professors who do not teach at the school. About 8 schools moved up in the rankings, including my esteemed former employer Boston College. In correcting itself, the National Jurist’s headline beamed “Best Law Schools Updated, Corrected: U. Chicago Jumps Into Top 5.”
If that were meant as a cynical ploy to silence Prof. Leiter, however, the plan has backfired, as he continues to opine that National Jurist should scrap its entire methodology and start over. He suggests hiring consultants to help with the task. If they do, I would encourage editors to avoid retaining any present or former law professor, however, as they all naturally have tendencies akin to those behind the Cooley study. Go Eagles!
posted by Gerard Magliocca
One of the most common complaints that you hear from law professors and deans is that the U.S. News and World Report rankings exert too much influence over legal education. If given a choice between doing something to boost its ranking or doing something to help students, the incentives for a school are heavily weighted towards boosting the ranking. This is true because rankings are widely publicized and provide a simple way for prospective students, alumni, and other interested constituencies to evaluate law school performance.
If people were confident about how the rankings were done, then that influence might be acceptable. But most faculty do not think that the methodology used by U.S. News is sound. I’ve noted before that they give no weight to student or faculty diversity, and Malcolm Gladwell wrote an essay observing that the rankings do not take cost-effectiveness into account (which is especially strange in this era). Granted, coming up with a standard that everyone would agree upon is impossible, but we can do better.
What is to be done? The answer to monopoly is competition. We need other organizations to conduct law school rankings. This would give people more information, especially if the alternatives explicitly take factors into account (e.g., cost) that are absent from the U.S. News rankings. It would also diminish the power of any single organization or person over law schools, and make gaming the ranking system far more difficult.
No single school can be trusted to do this for conflict-of-interest reasons, but there are plenty of other candidates. The ABA and the AALS are two obvious ones assuming that no other commercial outfit wants to compete with U.S. News. Or, dare I say it, a consortium of law blogs could organize and then disseminate these rankings for free. It’s time to stop whining about U.S. News and start doing something to give schools better incentives to improve legal education.
Our Bar . . . is . . . an asylum for the lame, and the halt, and the blind from the law schools of this country. And they are still coming.
posted by Dave Hoffman
Sorry for the blogging hiatus. I’ve been writing. I’m sorry also to have missed the latest NYT attack on legal education — in the form of a misleading hatchet job on NYLS. The article – one of a shoddy series by David Segal – struck an academic nerve already made sensitive by Chief Justice Roberts’ dismissal of legal scholarship.
Of course, arguments about law school’s worth and scholarship’s consequence are evergreen – they drive blogging traffic and comments & promise to motivate engagement between blogs by practicing lawyers and the academy. But quite often, unfortunately, these discussions go nowhere.
On law professor blogs, there’s a tone of tetchy defensiveness: “the market tells us that we’re worthwhile – just look at the continuing number of lemmings pounding at the gate!”, or “of course our scholarship is consequential, let’s count the citations”; or, “no one ever promised that a JD was a job guarantee!”; or, “what’s their BATLS?” [The last is a truly obscure negotiation joke if there ever was one.]
For reporters, it feels like the scene in The Wire when they are talking about what to cover in the coming year. Sure, you could talk about complexity and globalization and economic markets and the changing nature of legal practice. Or you might talk about the relationship between ABA regulation, thoughtless paternalism, and resulting distributional inequalities in education. But that’s a set of sprawling stories – lacking an obvious villain to muckrake. Rather, the news plays up the Dickensian aspect of law schools. Reporters write articles that stir the pot but aren’t recognizable to insiders, making them less likely to actually motivate change.
Last, not least, the practicing lawyers often articulate resentment toward ivory tower academics who ignore the realities of “trench lawyering”. (This happens even when the “academics” in question are actually practicing lawyers.) Basically: impractical law professors versus practical lawyers.
Why does this “debate” feel so tired? I have a partial hypothesis: because we ignore history. I had a great research assistant, Alex Radus, collect quotes about the ferment about legal education in the 1930s-1940s. (Which is highlighted in Prosser’s famous 1948 speech to Temple’s law faculty, “Lighthouse No Good.”) After the jump, you’ll see some fantastic quotes from that era and before, which remind us that “what has been will be again / what has been done will be done again / there is nothing new under the sun.”
posted by Frank Pasquale
Paul Caron brings news of the ranking system from Thomas M. Cooley School of Law, which pegs itself at #2, between Harvard and Georgetown. Caron calls it “the most extreme example of the phenomenon we observed [in 2004]: in every alternative ranking of law schools, the ranker’s school ranks higher than it does under U.S. News.” I just wanted to note a few other problems with such systems, apart from what I’ve discussed in earlier blog posts and articles on search engine rankings.
In the 1980s, statisticians at Bell Laboratories studied the data from the 1985 “Places Rated Almanac,” which ranked 329 American cities on how desirable they were as places to live. (This book is still published every couple of years.) My colleagues at Bell Labs tried to assess the data objectively. To summarize a lot of first-rate statistical analysis and exposition in a few sentences, what they showed was that if one combines flaky data with arbitrary weights, it’s possible to come up with pretty much any order you like. They were able, by juggling the weights on the nine attributes of the original data, to move any one of 134 cities to first position, and (separately) to move any one of 150 cities to the bottom. Depending on the weights, 59 cities could rank either first or last! [emphasis added]
To illustrate the problem in a local setting, suppose that US News rated universities only on alumni giving rate, which today is just one of their criteria. Princeton is miles ahead on this measure and would always rank first. If instead the single criterion were SAT score, we’d be down in the list, well behind MIT and California Institute of Technology. . . . I often ask students in COS 109: Computers in Our World to explore the malleability of rankings. With factors and weights loosely based on US News data that ranks Princeton first, their task is to adjust the weights to push Princeton down as far as possible, while simultaneously raising Harvard up as much as they can.
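The Bell Labs point is easy to reproduce in miniature. Below is a sketch with three made-up cities and three attributes – not the Places Rated data or the statisticians’ actual method – showing that a random search over normalized weights can find a weighting that puts any given city first, so long as no city is dominated on every attribute.

```python
import random

# city -> scores on three attributes, already normalized to [0, 1]
# (fabricated data for illustration)
cities = {
    "Allentown":   [0.9, 0.2, 0.4],
    "Boise":       [0.3, 0.8, 0.5],
    "Chattanooga": [0.5, 0.5, 0.9],
}

def winner(weights):
    score = lambda attrs: sum(w * a for w, a in zip(weights, attrs))
    return max(cities, key=lambda c: score(cities[c]))

random.seed(0)
for target in cities:
    for _ in range(10_000):
        w = [random.random() for _ in range(3)]
        total = sum(w)
        w = [x / total for x in w]  # weights sum to 1
        if winner(w) == target:
            print(f"{target} ranks first with weights "
                  f"{[round(x, 2) for x in w]}")
            break
    else:
        print(f"no weights found that put {target} first")
```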
posted by Jonathan Lipson
For those of you who had any doubts, our friends at Kaplan have just confirmed it: Aspiring law students care more about law school rankings than anything else, including the prospects of getting a job, quality of program, or geography.
1,383 aspiring lawyers who took the October LSAT . . . [were] asked “What is most important to you when picking a law school to apply to?” According to the results, 30% say that a law school’s ranking is the most critical factor, followed by geographic location at 24%; academic programming at 19%; and affordability at 12%. Only 8% of respondents consider a law school’s job placement statistics to be the most important factor. In a related question asking, “How important a factor is a law school’s ranking in determining where you will apply?” 86% say ranking is “very important” or “somewhat important” in their application decision-making.
Mystal at ATL expresses shock–shock!–that potential law students could be so naive. Surely, he fairly observes, they should care most about job prospects.
Yes, that would be true if they were rational. Yet, we all know from the behavioral literature that we apply a heavy discount rate to long-distance prospects. How much can I or should I care today about what may happen 3 (or 4) years from today?
If you think about it from the perspective of any law school applicant today, the one concrete thing they can lock onto that has present value is the school’s ranking: It is simple, quantified, and – perhaps most important – tauntable. No one’s face burns with shame because their enemy (or friend) got into a law school with a better job placement rate. Jealousy and envy – the daily diet of anxious first-years – are driven by much simpler signals: Is mine bigger (higher) than yours?
This is not to defend the students who place so much faith in numbers that have repeatedly been shown to be deeply flawed. It just means that Kaplan’s survey (and I have not seen the instrument or data) makes intuitive sense.
Which leads me to offer two modest (and probably unoriginal) proposals:
November 17, 2010 at 9:21 pm Tags: behavioral economics, Law School (Rankings), legal employment, LSAT, US News, USNWR Posted in: Behavioral Law and Economics, Law School (Rankings), Law Student Discussions
posted by Dave Hoffman
Academics – driven by their accrediting agencies – have a new buzzword. We are all now charged with thinking about assessment. How well are we doing at the goals we set out for ourselves? How do we know? How do we know if our processes of assessment are appropriate? As an academic (non-legal) blogger observed:
Going beyond the reasonable notion that you should periodically take a deeper look at what you’re doing, pedagogical reformers of many sorts get a convert’s zeal and treat assessment as a moral imperative. But, when a religion has enough zealous adherents, it might suddenly become mainstream. And when it goes mainstream, it goes from being pure to being mass market lowest common denominator oversaturation. The word “assessment” is no longer just confined to careful examinations of how well something is working. It isn’t even just applied to a bureaucratic ritual of report-writing focused on the curriculum. It’s applied to every piece of paper, every report, every bit of data, any and every piece of bureaucracy and hoop-jumping and report-generating. The odds are good that a time sheet will soon be marked “Hours assessment” and an account statement will be marked “Fiscal assessment.”
This proselytizing ideal has obviously caught on in the ABA’s self-study process, which requires not just a strategic plan and a strategic planning process, but also that the school show that it regularly evaluates its self-assessment and thinks about whether the school’s goals are good ones. Schools which fail to have a process, plan, and plan assessment will be disapproved until they come to their senses.
It’s no small irony – nor, I’m sure, am I the first to note it – that there is no evidence at all that schools which regularly engage in planned reflection produce better outcomes for students or for society than schools that muddle through with less formal techniques. I’m not even sure that it is possible to design an experimental study that would make the case for assessment, given external validity concerns. The case against self-reflection is pretty simple: deciding what academics ought to maximize is a hard problem, and any answer arrived at by any group of people will necessarily be too vague to provide hooks for truly useful tactical choices, especially when the time spent planning uses up productive resources. Indeed, it’s possible that designing ever-more-particular assessment metrics (and plans for achieving those metrics) encourages us to set ever-more-narrow goals, which are then, comfortably, met.
All in all, I’d give the current assessment trend a 23.3 on an A to ∂ point scale, where our goal is to hit a ß.
posted by Dave Hoffman
Pennsylvania’s Bar results came out last week. Congratulations to all passers, and especially to my graduating students who are now licensed lawyers. The statewide pass rate for first-time takers was 84.68 percent. The rates for the Pennsylvania area law schools were:
Widener (H) 87.67%
Penn State 83.64%
Widener (D) 82.61%
Rutgers (C) 77.85%
Because failing the bar can be economically devastating, bar passage is a very, very important marker of a law school’s success – certainly more so than SSRN download rankings!! Being above the state’s average is a big deal, and worth celebrating. Temple had a problem on this score about a decade ago, and we made serious efforts to help our students be better prepared to enter practice. I’m glad to see that our efforts are bearing fruit.
Incidentally, the combined rates (July/Feb) are consistent – though Penn falls a bit – and I’ve posted that list after the jump:
posted by Lawrence Cunningham
The latest way to measure scholarly influence is the eigenfactor, a family of algorithms that score a publication’s influence from the network of citations among journals, in the spirit of Google’s PageRank. The linked web site enables people to find top lists using assorted measures, including the top law reviews, with article influence proxied by citation histories.
According to this measure, the following are the top-25 student-edited general interest law reviews published in the United States. The list looks congruent with my sense of generally accepted understandings among law faculty of law review standings. At first it may make one wonder whether tools like this are useful because they verify knowledge or useless because they don’t tell us anything new. But, on second thought, people new to this profession may neither know nor want to ask. Read the rest of this post »
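For the curious, the intuition behind eigenfactor-style measures fits in a few lines: a journal is influential if influential journals cite it, which is a fixed-point (eigenvector) calculation on the citation matrix. Here is a toy sketch – the journal names and citation counts are invented, and the published algorithm adds normalization and damping details omitted here:

```python
# citations[i][j] = citations from journal i to journal j (invented)
journals = ["Harv. L. Rev.", "Yale L.J.", "Colum. L. Rev."]
citations = [
    [0, 30, 10],
    [25, 0, 15],
    [20, 10, 0],
]

n = len(journals)
# Normalize each row: the share of journal i's citations going to j.
totals = [sum(row) for row in citations]
P = [[citations[i][j] / totals[i] for j in range(n)] for i in range(n)]

# Power iteration: repeatedly let influence flow along citations.
score = [1.0 / n] * n
for _ in range(100):
    score = [sum(score[i] * P[i][j] for i in range(n)) for j in range(n)]

for name, s in sorted(zip(journals, score), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```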
posted by Dave Hoffman
I asked my wonderful research assistant, Robert Blumberg (TLS ’12), to update the Yospe/Best study on court citation of blogs and the Best 2006 study on law review citation of blogs. He used as a dataset the 2009 legal educator blog census (which we are currently updating – see future posts for details), excluded some general sites that merely happen to have a law professor as a rare contributor (the Huffington Post), and ran searches in WL’s JLR database. Since 2006, under those conditions, law blogs have been cited in the journals 5,883 times. Here are the top twenty sites since 2006, with total citations in (parentheses):
- FindLaw’s Writ (618)
- The Volokh Conspiracy (402) 
- SCOTUSBlog (305) 
- Balkinization (259) 
- Patently-O: Patent Law Blog (211) 
- Concurring Opinions (162)
- Sentencing Law and Policy (160) 
- JURIST – Paper Chase (130)
- PrawfsBlawg (122)
- The Becker-Posner Blog (104) 
- Conglomerate (102)
- White Collar Crime Prof Blog (89) 
- Election Law @ Moritz (85)
- Legal Theory Blog (85) 
- The University of Chicago Law School Faculty Blog (76)
- Technology & Marketing Law Blog (74)
- Lessig Blog (73) 
- The Harvard Law School Forum on Corporate Governance and Financial Regulation (72)
- Ideoblog (72)
- Election Law Blog (69)
Overall, the top 20 represented around 63% of all citations over the four-year study period. In 2006, the top 20 represented 76% of 852 citations. In 2007, the top 20 represented 68% of 1095 citations. In 2008, the top 20 represented 61% of 1388 citations. In 2009, the top 20 represented 63% of 1441 citations. Finally, in 2010 (so far) the top 20 has represented 65% of 562 citations. It is difficult to make out any clear trend lines in the data. Even taking into account the lag time of publication for 2009 and 2010 volumes, the rate of citations to law blogs is not increasing. There is a very mild trend toward diffusion in influence, although the top blogs still appear to drive the conversation, even as the number of professors blogging has increased. In the aggregate, the top few blogs would each (if considered to be individual scholars) be worthies on Leiter’s citation lists.
posted by Dave Hoffman
Other disciplines don’t kid around about the ordering of authors in publications. In political science and economics, alphabetical or reverse-alphabetical ordering is the dominant approach, even though it distorts hiring decisions. In science, the first and last slots matter – woe to the middle men! Harvard is so concerned about the trend that it instructs its faculty to “specify in their manuscript a description of the contributions of each author and how they have assigned the order in which they are listed so that readers can interpret their roles correctly [and] prepare a concise, written description of how order of authorship was decided.”
In law, lacking a tradition of co-authorship, there appears to be at best a weak norm that the first author is the primary contributor. That results in a set of interrelated problems:
1) To law audiences, the first author did the most work, and is rewarded in two ways. The first is qualitative, and pops up at tenure, promotion, and lateral review — “he was the driver on that piece,” or “she was just the second author.” Quantitatively, the bluebook foolishly permits multiple-author works to be et al.’d, meaning that the second through nth authors never get to see their names in the citation print. Given the rudimentary nature of citation impact analysis in the legal academy, this means that people who are listed first get the citations and the people who aren’t don’t. This might be less troublesome if the “first author” norm were correct — that is, if first authors in law reviews actually did more work. But my bet is that, given letterhead bias, many co-authored pieces list as the first author the most prominent author (or at least the author at the best-ranked school). The upshot: first authors in law reviews are rewarded for being first in both qualitative and quantitative terms, though it’s not clear they ought to be.
2) To other disciplines, this is fundamentally screwy and is another reason not to publish in a law review. But interdisciplinary co-authored work published outside of the law reviews becomes that much more difficult as a result. If a law professor and a non-law professor were to publish in an economics journal, my sense of the norm is to alphabetize. [Correct me if I'm wrong here.] Non-legal audiences look at this and understand that it doesn’t signify relative contribution. Law audiences don’t have that filter on, and the result (again) is that the second author is punished, here for having a last name at the back of the alphabet.
3) Making sense of this mess requires coordination, which is quite hard because we lack a learned society that is sufficiently respected to impose change from above. We do have, however, a few very strong journals that have had remarkable success in changing otherwise intractable scholarly pathologies like article bloat. If the Harvard Law Review could – almost singlehandedly – impose a 25,000 word limit, surely it could fix this problem too. In my view, the top few journals (HYS) ought to, as a part of their blue-booking project, agree to impose something like the Harvard faculty author order guidelines on folks who are publishing joint projects in their pages. The default ought to be reverse alphabetical listing. Each article should state the respective contributions of the authors and, to the extent that they have deviated from the alphabet, why. Finally, HYS ought to reform the bluebook to insist that the first citation of any work include the names of all contributors to the piece, rather than permitting et al. treatment.
posted by Dave Hoffman
Why is Bob Morse, USWR’s ranking guru, so unafraid of competition? There’s plenty of evidence that his rankings are fragile – for example, changes at the “bottom” of the scale have unexpected influences on schools at the top because the magazine engages in a statistically bizarre forced normalization – and that they measure factors that have little relevance to an underlying “quality” measure of importance in the market for legal jobs. But he persists in acting like an incumbent politician, prioritizing optics over real reform.
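(A tiny demonstration of the normalization point, with fabricated numbers rather than USWR’s data: because each factor is standardized against the whole field, changing one school at the bottom shifts the mean and spread used to score everyone, so the top school’s standardized score moves even though nothing about that school changed.)

```python
import statistics

def standardized(values):
    m, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - m) / sd for v in values]

field = [95, 90, 85, 60, 40]             # invented scores, top school first
print(round(standardized(field)[0], 3))  # top school's z-score: ~1.008

field[-1] = 20                           # only the bottom school changes...
print(round(standardized(field)[0], 3))  # ...yet the top's score: ~0.901
```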
Here’s an example. A few weeks back, Bob Morse issued a stern warning to law school administrators out to game his rankings. In response to a problem created by “openness about our ranking model,” Morse took a strong step in the direction of reform by…wait for it…threatening certain schools with punishment for gaming their employed-at-graduation statistic. For those who follow the rankings, this was a particularly galling and obnoxious post. The rankings model isn’t at all “open”: for most categories of concern, USNews engages in hidden manipulations of dubious value which make replicating the results quite difficult. See, e.g., LSAT percentile scoring, COLA adjustments, normalization, treatment of missing data, etc. Indeed, the rankings would likely fail the very low bar for openness and replication set by even a student-edited law journal, let alone a peer-reviewed publication. And the irony is that savvy administrators would find the employed-at-graduation statistic of very little interest, because its relative contribution to a school’s rank is significantly dampened by lack of variance. To put it another way, Morse seized on “gaming” of a factor that has the second least connection with overall ranking success (behind library volumes). Leiter said it best: “fiddling while Rome burns.”
Why did Morse’s team focus on this particular statistic, as opposed to working on real reform? It’s all about the perception of legitimacy: an increasing number of schools weren’t reporting their employment numbers (because the formula for imputing missing values in this input factor produced a helpful number). When Paul Caron pointed this out, it was embarrassing for Morse, especially when Caron’s post was picked up widely. Real reform might result in dramatic changes that called into question earlier rankings, as well as adding cost. This change, on the other hand, will have at most a marginal effect on scores, and costs essentially nothing. Mission accomplished!
Still, the question remains unanswered: what makes Bob Morse so convinced that incumbency renders his flawed product insensitive to normal market corrections?
posted by Gerard Magliocca
I wanted to make an observation about the rankings that were released yesterday. Duke Law School achieved something truly remarkable. All of their 2008 graduates (100%) were employed in the six months following graduation. No other school — not even Harvard or Yale — managed this. And to have that result as the stock market was imploding is even more amazing.
Or Duke’s claim is just . . . er . . . false. But I would never accuse a Top 11 school of misrepresenting the facts.
posted by Daniel Solove
Over at WSJ Blog, Ashby Jones contacted Robert Morse to get his reaction to my post about how raters should fill out the US News law school rankings forms:
We caught up with Bob Morse, the director of data services for U.S. News, who said in his estimation, the 1-5 options generally speaking matched up with the level of knowledge held by the raters. “We’ve felt that the level of judgment isn’t granular enough to provide a wider scale.”
He also said that because the survey reports the results of the reputation question out to the tenths place, “we’re actually publishing it on a scale of 50; the results average out to be more granular.”
Morse defends the granularity of the US News rankings by pointing to the fact that the average scores do have decimal points. Although this is true, it doesn’t address the problem I pointed out in my post — the individuals doing the rating can’t accurately express their sense of how a school compares to other schools.
Who gives Yale a 4 on only a 1-5 scale? Or Harvard? Either this person has a very different theory of law school reputation or is trying to game the system.
Let’s say that there are 10 reviewers, and they rate as follows:
Yale scores: 5, 5, 5, 5, 5, 5, 5, 5, 5, 4 = 4.9 average
Harvard scores: 5, 5, 5, 5, 5, 5, 5, 5, 4, 3 = 4.7 average
Morse would conclude that the difference between Yale and Harvard is meaningful. I would conclude that the difference is attributable to either (1) a fluke due to quirky beliefs of a very small number of raters; or (2) gaming by some raters. I just don’t see how, on a 1-5 scale, Yale or Harvard would get any less than a 5 on all the forms. Their averages should both be 5.0. Any differences are the result of flukes or gaming and shouldn’t be taken seriously.
These problems exist beyond Yale and Harvard — they persist for the entire US News survey because there’s not enough granularity. If there’s a granularity problem for individual raters that makes their ratings flawed, then the problem doesn’t just disappear by aggregating flawed ratings.
With all due respect to Morse, I must also disagree with his first point. As my post demonstrated, Morse is wrong in his statement that “the level of judgment isn’t granular enough to provide a wider scale.” If he’s right, then readers of my post should conclude that my hypothetical dean is way outside the norm. But I’m willing to bet that among those in the academy, the consensus on the schools I mentioned in the post is that they deserve different ranking levels: Yale, Michigan, Cornell, USC, Emory, and American. My sense is that there’s a consensus that most raters would rate them in the order listed, and would think that they are all at different reputational levels.
If this is the consensus, then Morse’s 5 point scale isn’t granular enough for individual raters. And aggregating scores that are assigned based on a system of rating that isn’t granular enough doesn’t fix the problem. It just means that the outliers control the outcome, and those outliers are either people with views way outside the norm or people who are gaming the system. I don’t think we want the ratings to turn on what the outliers are doing.
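A few lines of arithmetic make the outlier point concrete (the vote distributions below are invented, extending the Yale/Harvard example above):

```python
avg = lambda xs: sum(xs) / len(xs)

# The scenario from my example: outliers create a 0.2 "reputation gap."
print(avg([5] * 9 + [4]))     # Yale:    4.9
print(avg([5] * 8 + [4, 3]))  # Harvard: 4.7

# Swap which school drew the quirkier rater and the order flips.
print(avg([5] * 9 + [3]))     # Yale:    4.8
print(avg([5] * 9 + [4]))     # Harvard: 4.9
```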
I appreciate Morse’s response to Ashby Jones, and would be interested in his response to my points above.
Please note that I’m not an expert on statistics, so I’m open-minded about my claims. If there’s a statistics expert among our readers, I’d be very interested in your thoughts.
posted by Lawrence Cunningham
Just as law school deans heed US News rankings, for better or worse, they may heed scholarly impact rankings, including a recently released tabulation performed by U. Chicago Professor Brian Leiter. It counts high-citation scholars over the past five years, with attribution and ordering (a) by school, (b) within school, and (c) across a dozen subject areas. Hats off to Professor Leiter for taking what must have been scores of hours to assemble the data.
When studying it, deans may find it worthwhile to rearrange the data in any number of ways. In most rearrangements, the results are likely to track the data as presented, though some interesting variations may appear. As an example, Professor Leiter’s report ranks schools according to weighted citations of all scholars within the school. An alternative would rank schools according to the number of times a school’s scholars are named in the high-impact lists by subject matter.
In that rearrangement, there are small differences in the rankings of the top seven schools, but more marked differences further down the line. Following are the top seven, ranked by number of times scholars appear in high-impact rankings by subject (that number appears in parentheses, followed in separate parentheses by the ranking in Professor Leiter’s report). Read the rest of this post »
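For anyone who wants to reproduce that kind of rearrangement, it is mechanically just a count-and-sort over the subject-matter lists. A minimal sketch, with invented scholars and schools rather than Professor Leiter’s actual data:

```python
from collections import Counter

# (scholar, school) pairs drawn from per-subject high-impact lists
# -- entries invented for illustration
entries = [
    ("Scholar A", "Chicago"), ("Scholar B", "Yale"),
    ("Scholar C", "Yale"), ("Scholar D", "Harvard"),
    ("Scholar A", "Chicago"),  # a scholar can appear in several subjects
]

appearances = Counter(school for _, school in entries)
for school, n in appearances.most_common():
    print(f"{school}: {n} appearance(s)")
```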
posted by Daniel Solove
Every year, US News compiles its law school rankings by relying heavily on reputation ratings by law professors (mainly deans and associate deans) and practitioners and judges. They are asked to assign a score (from 1 to 5) to each of the roughly 200 law schools on the form. A 5 is the highest score and a 1 is the lowest. While many factors that go into the US News ranking have been criticized, the reputation ratings by and large are considered one of the best components in the ranking system. But should they be?
Let’s assume a knowledgeable dean filling out the form in good faith. How is he or she to go about it?
Here’s my hypothetical dean’s stream of consciousness:
Okay, I think Yale is the top law school, so I’ll give it a 5.
What about Michigan? Great school, but not quite as high as Yale. I’ll give it a 4.
Cornell is an excellent school too, one of the best. But it’s not Yale or Harvard, so I can’t give it a 5. It’s not as good as Michigan in my view, so I can’t give it a 4. I gave Penn and Berkeley 4s too, and I think Cornell isn’t quite at the same level. So it’s a 3.
What about USC? Another excellent school, but it’s not as high as Cornell. So it’s a 2.
Ruh-roh! I’m not even out of the top 20, and I have 160+ law schools to assign scores to, and I only have one number left. But I must go on!
How about Emory? That’s a bit lower than USC in my view, so I’ll give it a 1.
What about American? Another terrific school, but I think Emory’s better. I can’t give American a 0. What do I do? Okay, I guess I’ll give it a 1 as well.
But I’m not even out of the top 50. Yikes! I’ve run out of numbers. Maybe I’ll call Robert Morse and ask him if I can start assigning negative numbers. What do I do?
Time to try some math. To make things easy, I’ll assume there are roughly 200 law schools. And I have 5 numbers to assign — 1, 2, 3, 4, and 5. Assuming an equal number of schools assigned to each number, that’s 40 schools for each number. OMG! So I need to give schools I rank 1-40 a score of 5, schools I rank 41-80 a 4, schools I rank 81-120 a 3, schools I rank 121-160 a 2, and every other school a 1. But that’s ridiculous. The law school I think ranks #40 isn’t anywhere near the law school ranked #1. This system is impossible.
Okay, maybe I give a score of 5 to the top 10 law schools, then a score of 4 to the next 90 law schools, then 3s and 2s to schools 100-150, and 1s to the rest. But that still lumps too many schools together. If every person filling out the form did what I did, then there would be no way to distinguish the top 10, and no way to distinguish schools in the top 100. Only real outlier scores would determine the difference.
Dear Mr. Morse — what am I to do? Please help me!
Does anyone have any advice for our poor dean? How are people to fill out the US News ranking forms in good faith to reflect accurately their sense of law school reputations?
posted by Lawrence Cunningham
We are delighted that our esteemed guest blogger, Professor Alfred Yen (Boston College), with us this past month (and before), will stay another month. (You can see my post introducing Fred, my former colleague, here.)
In March, Fred contributed an amazingly insightful, thoughtful, reflective, and useful series of seven posts called Thoughts on Choosing a Law School. The 7-part series broke down as follows: (1) limited utility of popular rankings; (2) curriculum; (3) faculty staffing of instruction; (4) subject matter distinction; (5) faculty strength; (6) physical facilities; and (7) faculty publishing record.
These were formally directed to students considering which law school’s admissions offer to accept, but they also mean a great deal to us suppliers of legal education. We’re grateful for these contributions. And we’re delighted Fred will be back to contribute more wisdom, on these and the many other subjects within his capacious expertise.