

Brian Tamanaha’s Straw Men (Part 2): Who’s Cherry Picking?

(Reposted from Brian Leiter’s Law School Reports)

BT Claim 2:  Using more years of data would reduce the earnings premium

BT Quote:  “There is no doubt that including 1992 to 1995 in their study would measurably reduce the ‘earnings premium.’”

Response:  Using more years of historical data is as likely to increase the earnings premium as to reduce it

We have doubts about the effect of more data, even if Professor Tamanaha does not.

Without seeing data that would enable us to calculate earnings premiums, we can’t know for sure if introducing more years of comparable data would increase our estimates of the earnings premium or reduce it.

The issue is not simply the state of the legal market or entry-level legal hiring—we must also consider how our control group of bachelor’s degree holders (who appear to be similar to the law degree holders but for the law degree) was doing.   To measure the value of a law degree, we must measure earnings premiums, not absolute earnings levels.

As a commenter on Tamanaha’s blog helpfully points out:

“I think you make far too much of the exclusion of the period from 1992-1995. Entry-level employment was similar to 1995-98 (as indicated by table 2 on page 9).

But this does not necessarily mean that the earnings premium was the same or lower. One cannot form conclusions about all JD holders based solely on entry-level employment numbers. As S&M’s data suggests, the earnings premium tends to be larger during recessions and their immediate aftermath and the U.S. economy only began an economic recovery in late 1992.

Lastly, even if you are right about the earnings premium from 1992-1995, what about 1987-91 when the legal economy appeared to be quite strong (as illustrated by the same chart referenced above)? Your suggestion to look at a twenty year period excludes this time frame even though it might offset the diminution in the earnings premium that would allegedly occur if S&M considered 1992-95.”

There is nothing magical about 1992.  If good quality data were available, why not go back to the 1980s or beyond?   Stephen Diamond and others make this point.

The 1980s are generally believed to be a boom time in the legal market.  Assuming for the sake of argument that law degree earnings premiums are pro-cyclical (we are not sure if they are), inclusion of more historical data going back past 1992 is just as likely to increase our earnings premium as to reduce it.  Older data might suggest an upward trend in education earnings premiums, which could mean that our assumption of flat earnings premiums may be too conservative.  Leaving aside the data quality and continuity issues we discussed before (which led us to pick 1996 as our start year), there is no objective reason to stop in the early 1990s instead of going back further to the 1980s.

Our sample from 1996 to 2011 includes both good times and bad for law graduates and for the overall economy, and in every part of the cycle, law graduates appear to earn substantially more than similar individuals with only bachelor’s degrees.




This might be as good a place as any to affirm that we certainly did not pick 1996 for any nefarious purpose.  Having worked with the SIPP before and being aware of the change in design, we chose 1996 purely because of the benefits we described here.  Once again, should Professor Tamanaha or any other group wish to use the publicly available SIPP data to extend the series farther back, we’ll be interested to see the results.


Brian Tamanaha’s Straw Men (Part 1): Why we used SIPP data from 1996 to 2011

(Reposted from Brian Leiter’s Law School Reports)


BT Claim:  We could have used more historical data without introducing continuity and other methodological problems

BT quote:  “Although SIPP was redesigned in 1996, there are surveys for 1993 and 1992, which allow continuity . . .”

Response:  Using more historical data from SIPP would likely have introduced continuity and other methodological problems

SIPP does indeed go back farther than 1996.  We chose that date because it was the beginning of an updated and revitalized SIPP that continues to this day.  SIPP was substantially redesigned in 1996 to increase sample size and improve data quality.  Combining different versions of SIPP could have introduced methodological problems.  That doesn’t mean one could not do it in the future, but it might raise as many questions as it would answer.

Had we used earlier data, it could be difficult to know to what extent changes to our earnings premiums estimates were caused by changes in the real world, and to what extent they were artifacts caused by changes to the SIPP methodology.

Because SIPP has developed and improved over time, the more recent data is more reliable than older historical data.  All else being equal, a larger sample size and more years of data are preferable.  However, data quality issues suggest focusing on more recent data.

If older data were included, it probably would have been appropriate to weight more recent and higher quality data more heavily than older and lower quality data.  We would likely also have had to make adjustments for differences that might have been caused by changes in survey methodology.  Such adjustments would inevitably have been controversial.
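One generic way to weight higher quality data more heavily is inverse-variance weighting. This is a textbook sketch with invented numbers, not a description of anything we implemented:

```python
def inverse_variance_mean(estimates, variances):
    """Weighted mean with weights proportional to 1/variance."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Hypothetical premium estimates: a noisy pre-1996 figure and a more
# precise post-1996 figure (dollars; variances in arbitrary units)
est = inverse_variance_mean([40_000, 60_000], [4.0, 1.0])
# est is 56,000 -- pulled toward the more precise 60,000 estimate
```

The controversy mentioned above would lie in choosing the variances: any assessment of how much noisier the pre-1996 design was is itself a judgment call.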

Because the sample size increased dramatically after 1996, including a few years of pre-1996 data would not provide as much new data or have the potential to change our estimates by nearly as much as Professor Tamanaha believes.  There are also gaps in SIPP data from the 1980s because of insufficient funding.

These issues and the 1996 changes are explained at length in the Survey of Income and Program Participation User’s Guide.

Changes to the new 1996 version of SIPP include:

  • Roughly doubling the sample size
    • This improves the precision of estimates and shrinks standard errors
  • Lengthening the panels from 3 years to 4 years
    • This reduces the severity of the regression to the median problem
  • Introducing computer-assisted interviewing to improve data collection and reduce errors or the need to impute for missing data
  • Introducing oversampling of low-income neighborhoods
    • This mitigates the response bias issues we previously discussed, which are most likely to affect the bottom of the distribution
  • Instituting new income topcoding procedures with the 1996 Panel
    • This affects both means and various points in the distribution
    • Topcoding is done on a monthly or quarterly basis, and can therefore undercount end-of-year bonuses, even for those who are not extremely high income year-round
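The sample-size point above has a precise statistical payoff: the standard error of a mean shrinks with the square root of the sample size. A minimal sketch with invented numbers (SIPP’s actual sample sizes differ):

```python
import math

def standard_error(sd, n):
    """Standard error of a sample mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

sd = 40_000          # hypothetical earnings standard deviation, in dollars
se_old = standard_error(sd, 20_000)   # hypothetical pre-redesign sample size
se_new = standard_error(sd, 40_000)   # sample size roughly doubled in 1996

# Doubling n shrinks the standard error by a factor of sqrt(2), about 29%
```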

Most government surveys topcode income data—that is, there is a maximum income that they will report.  This is done to protect the privacy of high-income individuals who could more easily be identified from ostensibly confidential survey data if their incomes were revealed.

Because law graduates tend to have higher incomes than bachelor’s degree holders, topcoding introduces downward bias into earnings premium estimates. Midstream changes to topcoding procedures can change this bias and create problems with respect to consistency and continuity.

Without going into more detail, the topcoding procedure that began in 1996 appears to be an improvement over the earlier topcoding procedure.
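A toy example shows the direction of the topcoding bias; all incomes and the threshold below are invented for illustration, not taken from SIPP:

```python
TOPCODE = 150_000  # hypothetical annual topcode threshold

# Hypothetical true annual incomes for the two comparison groups
law_grads = [90_000, 120_000, 180_000, 250_000]
bachelors = [50_000, 70_000, 90_000, 110_000]

def mean(xs):
    return sum(xs) / len(xs)

def topcoded(xs):
    # Incomes above the threshold are reported as the threshold itself
    return [min(x, TOPCODE) for x in xs]

true_premium = mean(law_grads) - mean(bachelors)                          # 80,000
measured_premium = mean(topcoded(law_grads)) - mean(topcoded(bachelors))  # 47,500

# Topcoding truncates only the higher-earning group here, so the
# measured premium understates the true premium
assert measured_premium < true_premium
```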

These are only a subset of the problems extending the SIPP data back past 1996 would have introduced.  For us, the costs of backfilling data appear to outweigh the benefits.  If other parties wish to pursue that course, we’ll be interested in what they find, just as we hope others were interested in our findings.


Brian Tamanaha’s Straw Men (Overview)

(Cross posted from Brian Leiter’s Law School Reports)

Brian Tamanaha previously told Inside Higher Education that our research only looked at average earnings premiums and did not consider the low end of the distribution.  Dylan Matthews at the Washington Post reported that Professor Tamanaha’s description of our research was “false”. 

In his latest post, Professor Tamanaha combines interesting critiques with some not very interesting errors and claims that are not supported by data.   Responding to his blog post is a little tricky as his ongoing edits rendered it something of a moving target.  While we’re happy with improvements, a PDF of the version to which we are responding is available here just so we all know what page we’re on.

Stephen Diamond explains why Tamanaha apparently changed his post: Ted Seto and Eric Rasmusen expressed concerns about Tamanaha’s use of ad hominem attacks.

Some of Tamanaha’s new errors are surprising, because they come after an email exchange with him in which we addressed them.  For example, Tamanaha’s description of our approach to ability sorting constitutes a gross misreading of our research.  Tamanaha also references the wrong chart for earnings premium trends and misinterprets confidence intervals.  And his description of our present value calculations is way off the mark.

Here are some quick bullet point responses, with details below in subsequent posts:

  • Forecasting and Backfilling
    • Using more historical data from SIPP would likely have introduced continuity and other methodological problems
    • Using more years of data is as likely to increase the historical earnings premium as to reduce it
    • If pre-1996 historical data finds lower earnings premiums, that may suggest a long term upward trend and could mean that our estimates of flat future earnings premiums are too conservative and the premium estimates should be higher
    • The earnings premium in the future is just as likely to be higher as it is to be lower than it was in 1996-2011
    • In the future, the earnings premium would have to be lower by 85 percent for an investment in law school to destroy economic value at the median
  • Data sufficiency
    • 16 years of data is more than is used in similar studies to establish a baseline.  This includes studies Tamanaha cited and praised in his book.
    • Our data includes both peaks and troughs in the cycle.  Across the cycle, law graduates earn substantially more than bachelor’s.
  • Tamanaha’s errors and misreading
    • We control for ability sorting and selection using extensive controls for socio-economic, academic, and demographic characteristics
    • This substantially reduces our earnings premium estimates
    • Any lingering ability sorting and selection is likely offset by response bias in SIPP, topcoding, and other problems that cut in the opposite direction
    • Tamanaha references the wrong chart for earnings premium trends and misinterprets confidence intervals
    • Tamanaha is confused about present value, opportunity cost, and discounting
    • Our in-school earnings are based on data, but, in any event, “correcting” to zero would not meaningfully change our conclusions
  • Tamanaha’s best line
    • “Let me also confirm that [Simkovic & McIntyre’s] study is far more sophisticated than my admittedly crude efforts.”

I reviewed Mark Weiner’s Rule of the Clan from a libertarian perspective.

Libertarians are impressed by order that emerges in an unplanned, decentralized way.  No one knows how to make a pencil, and yet through the decentralized process of market trading, pencils are made readily available.  If making a pencil does not require a central planner, then why do we need a strong central government?

The Hobbesian answer is that without a strong central government, we would have the “war of all against all.”    The libertarian response echoes Karl Kraus.  Kraus famously said something to the effect that “psychoanalysis is the disease which it purports to cure.” Libertarians point out that the state, which purports to be the cure for the war of all against all, is the leading cause of violent death and incarceration.

Weiner’s book contains a message for libertarians that is decidedly mixed.  He argues, on the one hand, that there is a decentralized order that is an alternative to a strong central government.  On the other hand, this order is not at all libertarian.

The decentralized order that Weiner describes is the rule of the clan.  It is a cultural system in which individuals lack what we think of as liberty.  Instead, the individual is subordinate to the extended family.

Libertarians have been known to use medieval Iceland as an example proving that a strong central government is not needed to maintain order.  Weiner describes medieval Iceland as an example of the clan-based system of order, but from his depiction it is clearly not a model of a libertarian society.

Weiner uses legal historian Henry Maine’s distinction between a Society of Status and a Society of Contract.  Rule of the clan embodies a society of status.  Libertarians want to see a society of contract.

Libertarians see the “contract theory” of existing states as a fiction.  I never signed an agreement giving authority to the people and institutions of my federal, state, and local government.  Instead, those people and institutions have decided unilaterally what authority they can exercise over me.

Is it possible to extend the society of contract, giving less asymmetric power to the people and institutions that constitute the government?   Libertarians believe that the answer is “yes.”  However, Weiner claims that wherever the people and institutions of government lack strong asymmetric power, what we observe is the rule of the clan.  Libertarians are faced with the burden of showing that while he may be correct in describing the decentralized orders that we have observed, there may yet emerge a more decentralized order that does not degenerate into the rule of the clan.

The River of Purchasing Power Dries Up at Detroit

If only Detroit were a big bank, Treasury officials would be working round the clock this weekend to save it. Alas, this city is no Citi. It lacks a “winning business model” (like lobbying and bonuses for key federal officials). So municipal bankruptcy is on the horizon.

Detroit was chronically mismanaged, and the victim of unforgiving political geography. But the decline of jobs there is also a bellwether for the rest of the country. As Juan Cole observes,

This rise of [robotized manufacturing] violates the deal that the capitalists made with American consumers after the great Depression, which is that they would provide people with well-paying jobs and the workers in turn would buy the commodities the factories produced, in a cycle of consumerism. If the goods can be produced without many workers, and if the workers then end up suffering long-term unemployment (as Detroit does), then who will buy the consumer goods? Capitalism can survive one Detroit, but what if we are heading toward having quite a few of them?



Nonrespondent law graduates and other sampling questions

The Washington Post reports one possible concern with estimates of the Economic Value of a Law Degree:

“[Paul] Campos argues that low-earning lawyers may be less likely to participate in SIPP in the first place because of the stigma involved in admitting that, even anonymously.”

By email, Jerry Organ asks related questions about the representativeness of our sample.

“SIPP” is the United States Census Bureau’s Survey of Income and Program Participation, and is one of the primary data sources used in The Economic Value of a Law Degree.  Campos worries about stigma and non-response.  Thankfully SIPP is specifically designed to deal with these problems and to include impoverished and stigmatized members of the population, including those who receive government aid.

The Census Bureau explains SIPP’s purpose as follows:

 “To collect source and amount of income, labor force information, program participation and eligibility data, and general demographic characteristics to measure the effectiveness of existing federal, state, and local programs; to estimate future costs and coverage for government programs, such as food stamps; and to provide improved statistics on the distribution of income and measures of economic well-being in the country.”

The Census Bureau elaborates on the use of SIPP to analyze participation in Food Stamps and other anti-poverty programs here.

Census explains in greater detail how SIPP handles issues related to response bias, non-response bias, and weighting here.  SIPP oversamples in poor neighborhoods, imputes when necessary, and adjusts the sample weights to approach a nationally representative sample.

It is about as good a survey as one is likely to find, conducted by people who care a great deal about nonresponse and accurate estimates.

Additionally, to the extent that any lingering nonresponse bias may cause those with low earnings to be less inclined to participate, this bias will affect both law graduates and bachelor’s degree holders.  What we measure in the Economic Value of a Law Degree is the earnings premium, or difference in earnings that is attributable to the law degree.  The biases should wash out, or more likely, bias down our estimates of the law degree earnings premium, because bachelors are far more likely than law graduates to live in poverty.

Indeed, studies that have compared earnings reported in SIPP to earnings from administrative data (tax and Social Security Administration records) find that SIPP underestimates earnings premiums, because more highly educated and higher income individuals tend to underreport earnings, while less educated and lower income individuals tend to over-report.  We make no attempt to correct for this downward bias in our earnings premium estimates to offset any lingering selection on unobservables.
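A toy example illustrates why nonresponse bias that hits both groups washes out or cuts downward; the incomes and response cutoff below are invented for illustration:

```python
CUTOFF = 50_000  # hypothetical income below which people decline to respond

# Hypothetical true annual incomes; the bachelor's group has more
# members near the bottom of the distribution
law_grads = [40_000, 100_000, 140_000, 180_000]
bachelors = [20_000, 30_000, 80_000, 110_000]

def mean(xs):
    return sum(xs) / len(xs)

def responders(xs):
    return [x for x in xs if x >= CUTOFF]

true_premium = mean(law_grads) - mean(bachelors)                              # 55,000
measured_premium = mean(responders(law_grads)) - mean(responders(bachelors))  # 45,000

# Dropping low earners raises both group means, but raises the
# bachelor's mean more, so the measured premium is understated
assert measured_premium < true_premium
```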

Individual response bias issues also won’t affect federal student loan default data, which is administrative data from the Department of Education.  As noted in the article and in previous blog posts, former law students default on their student loans much less frequently than former students of bachelor’s degree or other graduate degree programs.



Brian Tamanaha Says We Should Look at the Below Average Outcomes (And We Did)

Brian Tamanaha’s response to The Economic Value of a Law Degree, as reported by Inside Higher Education, doesn’t capture the contents of the study.  According to IHE, Tamanaha said:

 “The study blends the winners and losers, to come up with its $1,000,000, earnings figure, but that misses the point of my book: which is that getting a law degree outside of top law school – and especially at bottom law schools – is a risky proposition . . . Nothing in the article refutes this point.”

Professor Tamanaha is correct that the $1 million figure is an average, but we didn’t write a 70-page article with only one number in it.

The Economic Value of a Law Degree not only reports the mean or average—it reports percentiles, or different points in the distribution.  At the 75th percentile, the pre-tax lifetime value is $1.1 million – $100,000 more than at the mean.  At the 50th percentile, the value is $600,000.  At the 25th percentile, the value is $350,000.  These points in the earnings distribution do better than breaking out returns by school—they allow for the fact that even some people at good schools have bad outcomes (and vice versa).  Thus we capture, and at length, exactly the concern Tamanaha expresses.

Lifetime earnings distribution slide


As we discuss in the article, for technical reasons related to regression of earnings to the median, our 75th and 25th percentile values are probably too extreme. The “75th percentile” value is likely closer to the 80th or 85th percentile for lifetime earnings, and the “25th percentile” is likely closer to the 20th or 15th percentile.

In other words, roughly the top 15 to 20 percent of law school graduates obtain a lifetime earnings premium worth more than $1.1 million as of the start of law school. Roughly the next 30 to 35 percent obtain an earnings premium between $1.1 million and $600,000. In the lower half of the distribution, roughly the first 30 to 35 percent obtain an earnings premium between $350,000 and $600,000. Roughly the bottom 15 to 20 percent obtain an earnings premium below $350,000. These numbers are pre-tax and pre-tuition.
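For readers who want a feel for how a lifetime value relates to annual earnings differences, here is a minimal present value sketch; the annual premium, career length, and discount rate are illustrative assumptions, not parameters from the article:

```python
def present_value(annual_premium, years, rate):
    """Discount a level annual earnings premium back to the start of a career."""
    return sum(annual_premium / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical: a $30,000/year premium over a 40-year career at a 3% discount rate
pv = present_value(30_000, 40, 0.03)
# pv comes to roughly $693,000 -- the same order of magnitude as the
# article's $600,000 pre-tax median lifetime value
```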

Even toward the bottom of the distribution, even after taxes, and even after tuition, a law degree is a profitable investment.  And that is before income based repayment, which can substantially reduce the risk at the bottom of the distribution.

We also present student loan default rates for 25 standalone law schools, most of which are low ranked institutions, and all of which have student loan default rates that are below the average for bachelor’s and graduate degree programs.  The average law school default rate is approximately one third of the average default rate for bachelor’s and graduate programs.

Student Loan Defaults


People with law degrees are not immune from risk.  No one is.  But the law degree reduces the risk of financial hardship.  Law degree holders face significantly less risk of low earnings than those with bachelor’s degrees, and also face lower risk of unemployment.  Increased earnings and reduced risk appear to more than offset the cost of the law degree for the overwhelming majority of law students.

Frank McIntyre and I did not miss the point of Brian Tamanaha’s Failing Law Schools.   Rather, we disagree with his conclusions about the riskiness of a law degree because data on law degree holders does not support his conclusions.  We discuss Tamanaha’s analysis on pages 20 to 24 of The Economic Value of a Law Degree.

We believe that Professor Tamanaha’s views deserve more attention than we could give them in the Economic Value of a Law Degree. Because of this, last Spring, we also wrote a book review of Failing Law Schools, pointing out both the strengths and weaknesses of his analysis.  We will make the book review available on SSRN soon.

If Professor Tamanaha disagrees with our estimates of the value of a law degree at the low end, we’re happy to hear it.  But he should not say that we ignored the issue.  We look forward to a productive exchange with him, on the merits.

The Locust and the Bee

Fables have been in the politico-economic air of late. The FT’s Martin Wolf considered the locust part of a master metaphor for the future of the global economy. He concluded that “the financial crisis was the product of an unstable interaction between ants (excess savers), grasshoppers (excess borrowers) and locusts (the financial sector that intermediated between the two).”

Now Geoff Mulgan has entered the fray with the excellent book The Locust and the Bee: Predators and Creators in Capitalism’s Future. As Mulgan observes,

If you want to make money, you can choose between two fundamentally different strategies. One is to create genuinely new value by bringing resources together in ways that serve people’s wants and needs. The other is to seize value through predation, taking resources, money, or time from others, whether they like it or not.



Gently Nudging with Liability Rules?

Why have sexual harassment and anti-smoking laws been so successful in changing entrenched social norms in the U.S. over the past few decades? In a 2000 U. Chicago Law Review article, Dan Kahan observed that combatting these ills took the approach of “gentle nudges,” imposing moderate remedies that were within the range of what decisionmakers (e.g. judges and juries) thought was reasonably proportional to the violation. Because these moderate remedies were enforced, norms shifted, and lawmakers could ratchet up the remedies. By contrast, Kahan observed that “hard shoves” imposing remedies substantially exceeding social norms fail to be enforced or to change norms. For example, France tackled sexual harassment by making it a criminal offense, which French society saw as vastly disproportionate. As a result, French sexual-harassment law went unenforced against conduct that would have easily incurred liability under U.S. law, and French norms barely shifted.

There is an underexplored connection between Kahan’s “gentle nudge” vs. “hard shove” dichotomy, and Calabresi & Melamed’s “property rule” vs. “liability rule” dichotomy. Calabresi & Melamed observed that remedies are either (1) liability rules, such as compensatory damages, or (2) property rules, such as injunctions or prison, which aim to deter. Liability rules generally overlap with “gentle nudges” in that they aim for proportional compensation. Property rules largely overlap with “hard shoves.”

The debate over the relative merits of property rules and liability rules has raged in academia and the courts. Bringing Kahan’s observations into the mix weighs in favor of liability rules, which are more likely to be enforced – and to shift norms.

I explore the relationship between these two dichotomies in sections II.C.3 and IV.C of a forthcoming article looking at IRS enforcement (or lack thereof). But their interrelationship is promising for anyone interested in either the property-rule/liability-rule debate or in altering social norms.