Category: Law School (Rankings)


A Logic Puzzle

Several conclusions can be drawn from the following comparison. Which one do you take away?

1. In Grutter v. Bollinger, the Supreme Court described the admissions goals of the University of Michigan Law School (and law schools more generally) this way: “Our conclusion that the Law School has a compelling interest in a diverse student body is informed by our view that attaining a diverse student body is at the heart of the Law School’s educational mission . . . [T]he Law School’s admissions policy promotes ‘cross-racial understanding,’ helps to break down racial stereotypes, and ‘enables [students] to better understand persons of different races.’ These benefits are ‘important and laudable,’ because ‘classroom discussion is livelier, more spirited, and simply more enlightening and interesting’ when the students have ‘the greatest possible variety of backgrounds.’ . . . Numerous studies show that student body diversity promotes learning outcomes, and ‘better prepares students for an increasingly diverse workforce and society, and better prepares them as professionals.’”

2. Consideration of diversity in the U.S. News and World Report rankings: None.

A Foucauldian View of Law School Rankings

Sociologists Michael Sauder and Wendy Nelson Espeland (NE&S) recently published an insightful article on the disciplinary function of law school rankings. They apply both Foucauldian and organizational theory to “unpack the power and influence of rankings as a peculiar type of environmental pressure.” They conclude:

[that r]ankings simultaneously seduce and coerce, and . . . [the fact that] this complex interplay of co-optation and resistance is conducted in the bland language of numbers makes it all the more compelling. At schools with improving rankings, even critics may find it hard to avoid a flush of pride, along with relief and anxiety about next year. The allure of rankings may be subtle, but it shapes resistance while securing the engagement of critics and supporters alike.

NE&S document several responses to the culture of rankings. I found their description of a dialectical “gaming/surveillance” dynamic particularly interesting, given some recent research I’ve been doing on trade secret protection for ranking algorithms:

Read More


How Is Tom Barr Like Shane Battier? Or, Measuring Individuals’ Roles in Group Success

Michael Lewis recently published a Times Magazine story on NBA player Shane Battier. The article is largely an anecdotally driven portrait of Battier, a player who supposedly makes his teammates better and opposing players worse, while accruing few individual gains. But the Houston Rockets, who employ Battier, recognize his value, because they’ve finally cracked the nut of regressing success in group sports. According to Lewis, the Rockets use a sophisticated plus-minus measure:

One well-known statistic the Rockets’ front office pays attention to is plus-minus, which simply measures what happens to the score when any given player is on the court. In its crude form, plus-minus is hardly perfect: a player who finds himself on the same team with the world’s four best basketball players, and who plays only when they do, will have a plus-minus that looks pretty good, even if it says little about his play. Morey says that he and his staff can adjust for these potential distortions — though he is coy about how they do it — and render plus-minus a useful measure of a player’s effect on a basketball game. A good player might be a plus 3 — that is, his team averages 3 points more per game than its opponent when he is on the floor. In his best season, the superstar point guard Steve Nash was a plus 14.5. At the time of the Lakers game, Battier was a plus 10, which put him in the company of Dwight Howard and Kevin Garnett, both perennial All-Stars. For his career he’s a plus 6. “Plus 6 is enormous,” Morey says. “It’s the difference between 41 wins and 60 wins.”
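For readers who want the arithmetic rather than the anecdote, here is a minimal sketch of the crude plus-minus Lewis describes, with entirely made-up stint data; the Rockets’ proprietary adjustments are, of course, not modeled:

```python
# Crude plus-minus: the team's point differential while a given player
# is on the floor. The stints below are hypothetical.

def crude_plus_minus(stints, player):
    """stints: list of (players_on_court, points_for, points_against)."""
    return sum(pf - pa for on_court, pf, pa in stints if player in on_court)

stints = [
    ({"Battier", "Alston", "Artest", "Scola", "Yao"}, 28, 22),    # Battier on: +6
    ({"Brooks", "Landry", "Artest", "Scola", "Yao"}, 15, 20),     # Battier off
    ({"Battier", "Brooks", "Landry", "Wafer", "Hayes"}, 30, 26),  # Battier on: +4
]

print(crude_plus_minus(stints, "Battier"))  # +10 for this made-up game
```

Per Lewis, the interesting (and secret) part is the adjustment step: correcting that raw differential for the quality of the other nine players on the floor.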

The problem with the article is that it offers no perspective at all on how the Rockets tweak the statistic to make it useful and a competitive advantage. In that sense, the piece could be thought of as Moneyball III: This Time With No Data and No Human Interest. (Moneyball had data; The Blind Side had a compelling story; this piece is unripe on both fronts.)

Nevertheless, in some quarters Lewis’s work has again caught the attention of legal innovators. Jim Chen, who has already opined that Deans should use a version of plus-minus to evaluate faculty performance, suggests that Battier is a promising case study: “the single factor that makes a great team player is the mirror image of the single factor that turns even the most productive scholar into a toxic Arschloch: selfishness.” To which an astute commentator responded: “If anything, a stats-driven evaluation process will almost certainly lead to the Battiers of academia being under-rewarded, rather than the reverse. Wouldn’t it be enough to reward those who just seem to distinguish themselves by their selflessness? . . . Note that, even within the NBA — in which it is much easier to do a plus/minus assessment — Battier gets undervalued by most teams, and if he weren’t still riding a six year contract would probably get paid a lot less even by the Rockets.”

Read More


Announcing the Moss Law School Rankings: Harvard #1, Yale #2!

Congratulations to Harvard on ranking #1 in the newly minted Moss Law School Rankings! Below are the raw results; I explain the methodology afterward:

#1: Harvard (7 points)

#2: Yale (4 points)

#3: Tulane (3 points)

#4: NYU (2 points)

#5: Georgetown (2 points)

#5: Cincinnati (2 points)

#5: Rutgers (2 points)

#5: Pepperdine (2 points)

#5: Louisiana State (2 points)

#10: Fordham (1 point)

#10: Washington & Lee (1 point)

My ranking is unorthodox, I admit, but all the great statistical innovations yield unintuitive outcomes, no? Let me explain my methodology.
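A quick note on the numbering before the methodology: the list follows standard competition ranking, in which tied schools share a position and the next position skips ahead (hence five schools at #5 followed by #10; the split between NYU at #4 and the other two-point schools presumably reflects a tiebreaker not shown here). A minimal sketch of that scheme, ignoring the tiebreaker:

```python
# Standard competition ranking ("1224"): tied entries share a position
# and the next position skips ahead by the size of the tie. The
# tiebreaker separating NYU from the other two-point schools is not modeled.

def competition_rank(scores):
    """scores: iterable of (name, points); yields (rank, name, points)."""
    ranked, prev_points, prev_rank = [], None, 0
    for position, (name, points) in enumerate(
            sorted(scores, key=lambda s: -s[1]), start=1):
        rank = prev_rank if points == prev_points else position
        ranked.append((rank, name, points))
        prev_points, prev_rank = points, rank
    return ranked

scores = [("Harvard", 7), ("Yale", 4), ("Tulane", 3), ("NYU", 2),
          ("Georgetown", 2), ("Cincinnati", 2), ("Rutgers", 2),
          ("Pepperdine", 2), ("Louisiana State", 2), ("Fordham", 1),
          ("Washington & Lee", 1)]

for rank, name, points in competition_rank(scores):
    print(f"#{rank}: {name} ({points} points)")
```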

Read More


Do Mailings Lead to Better Rankings?

A haiku to celebrate the season:

Fall is in the air.

Law school mailings everywhere.

Rankings on the rise?

It’s fall, the season for tailgating, bonfires, and trick-or-treating. It’s also the time of the year when the U.S. News and World Report Magazine begins collecting data for the purpose of ranking U.S. law schools. When I check my work mailbox, I can almost always count on receiving postcards, pamphlets, magazines, and other mailings from law schools extolling the virtues of their programs and recent faculty hires. One of my colleagues – my most recently tenured colleague – even received law school swag.

Do these mailings actually improve the rankings of the schools that send them? Last year, the law school ranking methodology included a quality assessment based on two scores – a peer assessment score and an assessment score from lawyers and judges. Do these mailings lead to higher peer assessments and assessments by lawyers and judges?


A Voter Aptitude Test for U.S. News Law Rank Voters?

Professors are ablog about the U.S. News ballots that recently arrived in law school mailrooms around the country. At Moneylaw, rankings guru Tom Bell (Chapman Law) relates interesting news of possible voting irregularities in the academic reputation balloting — with individuals other than deans, associate deans, recruiting chairs, and most-recently-tenured profs receiving ballots even though they aren’t supposed to be U.S. News voters. At Prawfs, Jason Solomon (Georgia) reminds voters that they are supposed to be assessing schools’ quality, rather than their reputation.

Read More


Should the US News Ranking Include Part-Time and Evening Law Students?

Via Brian Leiter, I learned that Bob Morse, the ranking czar of the US News law school rankings, is considering including the LSAT and GPA stats of part-time and evening JD students in the calculations for law school rankings. Morse writes:

The first idea is that U.S. News should count both full-time and part-time entering student admission data for median LSAT scores and median undergraduate grade-point averages in calculating the school’s ranking. U.S. News’s current law school ranking methodology counts only full-time entering student data. Many people have told us that some law schools operate part-time J.D. programs for the purpose of enrolling students who have far lower LSAT and undergrad GPAs than the students admitted to the full-time program in order to boost their admission data reported to U.S. News and the ABA. In other words, many contend that these aren’t truly separate part-time programs but merely a vehicle to raise a law school’s LSAT and undergrad GPA for its U.S. News ranking. We have used only full-time program data because we believed that the part-time law programs were truly separate from the full-time ones. That no longer appears to be the case at many law schools. So, it can be argued that it is better analytically to compare the LSAT and undergrad GPAs of the entire entering class at all schools rather than just the full-time program data.
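The mechanics of the proposal are worth seeing in miniature. A quick sketch with entirely hypothetical LSAT numbers shows why a school that shelters lower scores in a part-time program would see its reported median drop under the combined approach:

```python
# Hypothetical LSAT scores only; this illustrates how folding a
# lower-scoring part-time cohort into the pool moves the reported median.
from statistics import median

full_time = [165, 164, 163, 163, 162, 161, 160]  # hypothetical full-time admits
part_time = [155, 153, 152, 150]                 # hypothetical part-time admits

print(median(full_time))               # 163: the figure reported today
print(median(full_time + part_time))   # 161: the figure under Morse's proposal
```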

While much in the US News rankings should be changed, this particular change would create more havoc in legal education than it would cure. It is true that schools game the system with part-time and evening students, but any change should target the gaming, not lump the LSAT scores and GPAs of part-time and evening students in with a law school’s regular LSAT/GPA stats. Leiter writes:

For many, probably most, part-time programs serve older, working students, who might not have time for fancy LSAT prep courses, but who bring levels of dedication, seriousness, and pertinent experience that enrich legal education and the legal profession. What a loss it will be if, out of fear of US News, schools start cutting back their part-time programs or rejecting these students whose numerical credentials might impede their crusade for a “higher ranking.”

I wholeheartedly agree. Unless schools are willing to take a hit in their rankings, the result will be a dramatic curtailment of evening and part-time programs. The change will penalize schools with such programs, and I bet these programs will shrink considerably if US News goes through with it.

First, since many students in these programs are older and have been working for years since college, their undergraduate GPAs matter much less. Such programs are a way to accommodate students who may not have excelled in their undergraduate studies but who have blossomed in the years afterward.

Second, these programs are also a small escape valve from the tyranny of the LSAT, which is often the be-all and end-all of law school admissions. While the LSAT is important and is correlated with law school performance, many students who didn’t do well on the LSAT nevertheless succeed in law school and in their legal careers. Furthermore, statistically, several minority groups generally have lower LSAT scores than whites. The LSAT shouldn’t dominate law school admissions so heavily, but it does (due in large part to the US News rankings). At least the evening and part-time programs could escape this problem, but if US News makes the proposed change, there will be no escape.

Read More


This Month’s SSRN Rankings

Following up on postings in February 2008 and May 2007, here’s this month’s SSRN download ranking, measured by total new downloads. (The numbers in parentheses are the rankings from February; total new downloads for these fifty institutions: 914,252.) A short sketch after the list shows one way to track the movers.

1 George Washington University – Law School (1)

2 Harvard University – Harvard Law School (2)

3 Columbia University – Columbia Law School (3)

4 University of Chicago – Law School (4)

5 Yale University – Law School (6)

6 University of Texas at Austin – School of Law (5)

7 University of California, Los Angeles – School of Law (7)

8 Georgetown University – Law Center (9)

9 Stanford Law School (8)

10 New York University – School of Law (11)

11 University of Illinois College of Law (10)

12 University of Pennsylvania Law School (12)

13 University of California, Berkeley – School of Law (13)

14 Vanderbilt University – School of Law (14)

15 University of Minnesota – Twin Cities – School of Law (16)

16 George Mason University – School of Law (18)

17 Duke University – School of Law (17)

18 University of Tennessee, Knoxville – College of Law (15)

19 University of San Diego – School of Law (19)

20 University of Michigan at Ann Arbor – Law School (20)

21 University of Southern California – Law School (21)

22 Northwestern University – School of Law (22)

23 Temple University – James E. Beasley School of Law (28)

24 Florida State University – College of Law (25)

25 Boston University – School of Law (27)

26 Fordham University – School of Law (24)

27 Yeshiva University – Benjamin N. Cardozo School of Law (26)

28 American University – Washington College of Law (31)

29 Loyola Law School – Los Angeles (23)

30 University of Virginia – School of Law (29)

31 Cornell University – School of Law (34)

32 Ohio State University – Michael E. Moritz College of Law (30)

33 Suffolk University Law School (32)

34 Emory University – School of Law (36)

35 University of Louisville – Louis D. Brandeis School of Law (37)

36 Brooklyn Law School (35)

37 Indiana University School of Law-Bloomington (33)

38 Chapman University – School of Law (38)

39 St. John’s University – School of Law (43)

40 University of Florida – Fredric G. Levin College of Law (47)

41 Case Western Reserve University – School of Law (41)

42 Notre Dame Law School (40)

43 Boston College – Law School (39)

44 Rutgers, The State University of New Jersey – School of Law-Camden (44)

45 University of Houston Law Center (Off-list)

46 Wayne State University Law School (Off-list)

47 Loyola University of Chicago – School of Law (Off-list)

48 University of Arizona – James E. Rogers College of Law (46)

49 Northern Kentucky University – Salmon P. Chase College of Law (Off-list)

50 Seton Hall University – School of Law (48)
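As promised above, a minimal sketch (not SSRN’s tooling, and with only a handful of entries transcribed) of computing each school’s movement since February:

```python
# A few entries from the list above as (school, current_rank, february_rank);
# None marks a school that was off-list in February.

moves = [
    ("Yale", 5, 6), ("Texas", 6, 5), ("Temple", 23, 28),
    ("Loyola-LA", 29, 23), ("Florida", 40, 47), ("Houston", 45, None),
]

for school, now, then in moves:
    change = "new to the list" if then is None else f"{then - now:+d}"
    print(f"#{now} {school} ({change})")  # e.g. "#40 Florida (+7)"
```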


The Contradictory Goals of Law School Rankings

As usual, a ton of blogospheric attention has been devoted to the US News law school rankings. Over at PrawfsBlawg, Geoffrey Rapp has found a way to get the numerical rankings of law schools in the Third and Fourth Tiers. At TaxProf, Paul Caron ranks the law schools by reputation score. At his Law School Reports, Brian Leiter offers suggestions for improving the rankings. At Law Librarian Blog, Joe Hodnicki tracks law school rankings from 1996 to the present. I, too, have posted about the US News rankings.

If we step back from this year’s frenzy, I believe there’s an important fact about law school rankings that accounts for much of the displeasure with them: law school ranking systems have contradictory goals. Here’s why. Law schools, like many institutions, are not especially dynamic in the short term. They change slowly, not dramatically. The result: we shouldn’t see much movement in the rankings from year to year. Most schools should stay about where they are. A few schools might move over time, but any one year’s movement is not significant in the grand scheme of things. So to be accurate, rankings shouldn’t change all that much.

But ranking systems have a contradictory goal: they need to reflect some kind of change, or else looking at the rankings each year would be like watching glaciers move. There must be some drama in the rankings from year to year. We eagerly await our rankings each year; we don’t want rankings at five- or ten-year intervals. And we don’t want stable rankings — we want changes to cheer and kvetch about.

There is another value in rankings reflecting some degree of change each year beyond our enjoyment of babbling on about them. Law schools work very hard on hiring new and lateral professors, promoting their reputations, improving their schools, increasing their admissions selectivity, and so on. We want our work to be reflected in a tangible manner. We want results for a year’s worth of hard work in improving the school. We don’t want to wait a decade or longer to see results. Unfortunately, the US News rankings often don’t reflect this work very well. But they do show that something is happening. We can then complain about the disconnect between what we’re doing and our ranking: “We did all this, and our ranking hasn’t moved. Damn that US News for their flawed system!” Or, we can justify rises in our rankings: “We’ve moved up several spots in the rankings. This is, of course, due to all the wonderful improvements we’ve been making to our school.” Either way, at least we have something to talk about.

The reality is that very little we do probably has much effect on our ranking relative to other schools over time. We might improve our faculty by hiring some great laterals, but our competitor schools will likely have done the same. True, one school might outpace another, but big shifts are the exception, not the norm.

So the rankings need to reflect a state of affairs that is largely static, with a few gradual changes over a long period. They must do so in a way that keeps people interested and excited. The rankings must display glacial change in a dramatic way. To use another metaphor, the rankings must make a turtle race seem exciting.

Read More


US News 2009

The rankings again seem to have leaked early at lawschooldiscussion.org: read ‘em and weep. Swayed by some of the arguments Brian Leiter makes here, I’m not going to reproduce the list. (And besides, it seems like the folks who excavated the information deserve the hits, not that the equities much matter or that others will feel the same way.) After satiating your curiosity, come back here and talk about ways to make the system better.