

Gatekeeping and the Economic Value of a Law Degree (Part 1)


When I first read the commentary concerning Michael Simkovic and Frank McIntyre’s “The Economic Value of a Law Degree,” I was most surprised by the attention that the commenters paid to the paper’s passing reference to the typewriter. S&M are aware that their work arrives at a time when it is popular to believe that technology has wrought a structural change to lawyers’ earnings. For their part, S&M cite Frank Miles Finch’s obloquy against typewriters in the first volume of the Columbia Law Review to show that worries of technological ruin are nothing new in our line of work. After listing several other examples (such as word processing and Westlaw), S&M maintain that “lawyers have prospered while adapting to once threatening new technologies and modes of work.”

Taken out of context, this last statement might sound as if S&M are engaging in bold fortunetelling based on a scant historical record, but a few paragraphs later, S&M concede that “past performance does not guarantee future returns” and “[t]he return to a law degree in 2020 can only be known for certain in 2020.” When read in conjunction with the rest of the paper, the typewriter reference serves as a brief and lighthearted reminder that we, like others before us, can fall victim to nostalgic gloom and doom.

Despite its minor role in the article, commenters have been eager to mention the typewriter observation, with references ranging from the favorable (here), to the neutral (here and here), to the mildly dismissive (here and here), to the critical (here). Having given some thought to the last entry on this list, Deborah Merritt’s wonderful blog entry on Law School Cafe, I now realize that I shouldn’t have been surprised by the attention paid to the typewriter; it turns out to be an important point for S&M to make.

Merritt argues contra S&M that (1) Finch was not engaging in sky-is-falling melodrama and (2) the typewriter “may have contributed” to a structural change in lawyers’ earnings: specifically, the creation of three-year law schools and formal schooling requirements for bar admission. As to the first point, Merritt explains that Finch mentioned the typewriter to bolster his argument that apprenticeships had ceased to be a viable training environment for lawyers. His main point was that the typewriter limited apprentices’ exposure to the study of important legal texts and created a difficult learning environment, making law school the far better educational option. He was not predicting that the typewriter would lead to the demise of his profession; rather, he was arguing for an adequate training substitute. As to the second point, Merritt points out that the New York bar adopted Finch’s recommendations in part because it was persuaded by his Columbia article. I would add that the ABA eventually adopted similar requirements as well, also citing Finch’s article in the process.

Merritt’s post is thoughtful, well-researched, and concise. Moreover, she is largely right. Finch was not engaging in nostalgic sky-is-falling reasoning. In S&M’s defense, however, the notion of a Typewriter Doomsday was not altogether uncommon in the early twentieth century. To take but one example, Arkansas law titan George B. Rose offered the following in a 1920 speech to the Tennessee Bar Association:

A great menace to the wellbeing of the bar is the disproportionate increase of its numbers. With the invention of the typewriter, the simplification of pleadings and the improved methods of travel, one lawyer can now do the work of two in the olden time; yet the proportion of lawyers to the remainder of the community has enormously increased.

Rose’s remarks were met with great applause, and he was made an honorary member of the Tennessee bar.

More importantly, Merritt stands on solid ground when she argues that technological change contributed to a shift in the business practices of legal professionals and, in turn, the shape of American legal education. There can be little doubt that this shift can be described as “structural.”

But I disagree with Merritt insofar as she believes that a structural shift in schooling requirements weakens S&M’s paper. To the contrary, it helps the paper by providing a prima facie explanation for relative stability in the law degree’s value.

We must be mindful of the distinction between structural shifts in lawyers’ earnings and structural shifts in other aspects of the legal profession, such as educational requirements. Clearly, Merritt’s focus is the latter, and S&M’s focus is the former. And just because S&M have chosen to focus on one kind of structural shift does not mean that they have “dismissed” other structural shifts, as Merritt says. S&M readily acknowledge that structural shifts can occur in law school enrollment:

These distinctions and widespread publicity may enable critics to influence college graduates’ career plans, the judiciary, and perhaps the future of legal education. They may have already contributed to a steep three-year decline in law school applications and enrollments.

The more critical point is that breaking structural shifts into various types can be a useful analytic tool. Distinguishing between structural shifts in the value of a law degree and structural shifts in access to the practice of law permits an important observation: it is possible for the latter to prevent the former. Critics of S&M doubt that the past performance of law degree holders is a reliable predictor of future performance. We can hypothesize that, to the extent law degree holders can insulate themselves from exogenous forces that threaten the value of their services, they will increase the stability of the degree’s value and, therefore, the reliability of predictions based on past performance.

The reasoning runs as follows. All else held constant, service industries with the power and willingness to manipulate the supply of available providers will likely weather exogenous shocks better than those without such power. When such measures protect those who already possess the credentials necessary to perform the service, the value of those credentials will tend to be relatively stable. Whether these measures have been, or will be, effective enough to stabilize the value of the law degree is a question worth considering.
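To make that intuition concrete, here is a stylized simulation (every parameter below is invented for illustration; this is a toy model, not a claim about actual market structure). An inverse demand curve for legal services is hit by random shocks, and we compare a market in which the number of providers is fixed with one in which hypothetical gatekeepers offset part of each shock by expanding or restricting entry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse demand for legal services: P = a + shock - b * Q
# (all parameter values are invented for illustration).
a, b, q0 = 100.0, 1.0, 40.0
shocks = rng.normal(0, 10, size=10_000)  # exogenous demand shocks

# No supply response: the number of providers stays at q0, so every
# demand shock passes straight through to the price of services.
p_unmanaged = a + shocks - b * q0

# Gatekeeping: entry expands in booms and contracts in busts,
# offsetting a fraction k of each shock through the supply channel.
k = 0.7
q_managed = q0 + k * shocks / b
p_managed = a + shocks - b * q_managed  # = (a - b * q0) + (1 - k) * shocks

print(f"price volatility, fixed supply:   {p_unmanaged.std():.1f}")
print(f"price volatility, managed supply: {p_managed.std():.1f}")
```

The managed market’s prices, and hence the value of the credential required to sell in it, fluctuate far less, even though the underlying shocks are identical.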

There are several important gatekeepers to the practice of law: law schools, the American Bar Association, state bar associations, state supreme courts, etc. These gatekeepers possess, and sometimes use, tools that have the potential to protect the economic value of the law degree. They can change the qualifications for entry, expand or contract the domain of permissible services, raise or lower rate maximums, or regulate advertising practices, among other things.  And while a considerable minority of law degree holders do not practice law (about 40% according to the SIPP data that S&M consider), there are enough practicing lawyers to give protectionist measures a fighting chance to stabilize the overall value of the degree.

Merritt deserves much credit for bringing this observation to the fore in connection with the S&M paper, although she did not expand upon it (an excusable omission in a single blog post).

Having the luxury of multiple posts, I will use Part 2 to discuss a few of the protectionist measures that gatekeepers have taken over the last century. I will focus in particular on the measure that Merritt discusses: the advent of a law school prerequisite for admission to the bar.


Brian Tamanaha’s Straw Men (Part 2): Who’s Cherry Picking?

(Reposted from Brian Leiter’s Law School Reports)

BT Claim 2:  Using more years of data would reduce the earnings premium

BT Quote: “There is no doubt that including 1992 to 1995 in their study would measurably reduce the ‘earnings premium.’”

Response:  Using more years of historical data is as likely to increase the earnings premium as to reduce it

We have doubts about the effect of more data, even if Professor Tamanaha does not.

Without seeing data that would enable us to calculate earnings premiums, we can’t know for sure if introducing more years of comparable data would increase our estimates of the earnings premium or reduce it.

The issue is not simply the state of the legal market or entry-level legal hiring; we must also consider how our control group of bachelor’s degree holders (who appear to be similar to the law degree holders but for the law degree) was doing. To measure the value of a law degree, we must measure earnings premiums, not absolute earnings levels.
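A minimal simulation may make the distinction concrete (every figure below is invented; the point is only that absolute law graduate earnings can fall while the premium rises, if the control group falls harder):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical earnings draws (all figures invented for illustration).
ba_boom = rng.normal(60_000, 15_000, n)   # bachelor's only, good year
jd_boom = rng.normal(110_000, 30_000, n)  # law graduates, good year

# In the downturn both groups earn less, but bachelor's fall harder.
ba_bust = ba_boom * 0.80
jd_bust = jd_boom * 0.92

for label, ba, jd in [("boom", ba_boom, jd_boom), ("bust", ba_bust, jd_bust)]:
    print(f"{label}: JD mean {jd.mean():>9,.0f} | BA mean {ba.mean():>8,.0f}"
          f" | premium {jd.mean() - ba.mean():>8,.0f}")
```

Judged by absolute earnings, the downturn looks like bad news for the law degree; judged by the premium, the degree is worth slightly more than before.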

As a commenter on Tamanaha’s blog helpfully points out:

“I think you make far too much of the exclusion of the period from 1992-1995. Entry-level employment was similar to 1995-98 (as indicated by table 2 on page 9).

But this does not necessarily mean that the earnings premium was the same or lower. One cannot form conclusions about all JD holders based solely on entry-level employment numbers. As S&M’s data suggests, the earnings premium tends to be larger during recessions and their immediate aftermath and the U.S. economy only began an economic recovery in late 1992.

Lastly, even if you are right about the earnings premium from 1992-1995, what about 1987-91 when the legal economy appeared to be quite strong (as illustrated by the same chart referenced above)? Your suggestion to look at a twenty year period excludes this time frame even though it might offset the diminution in the earnings premium that would allegedly occur if S&M considered 1992-95.”

There is nothing magical about 1992. If good-quality data were available, why not go back to the 1980s or beyond? Stephen Diamond and others make this point.

The 1980s are generally believed to have been a boom time in the legal market. Assuming for the sake of argument that law degree earnings premiums are pro-cyclical (we are not sure they are), inclusion of more historical data going back past 1992 is just as likely to increase our earnings premium as to reduce it. Older data might suggest an upward trend in education earnings premiums, which could mean that our assumption of flat earnings premiums is too conservative. Leaving aside the data quality and continuity issues we discussed before (which led us to pick 1996 as our start year), there is no objective reason to stop in the early 1990s instead of going back further, to the 1980s.

Our sample from 1996 to 2011 includes both good times and bad for law graduates and for the overall economy, and in every part of the cycle, law graduates appear to earn substantially more than similar individuals with only bachelor’s degrees.

 

[Figure: Cycles]

 

This might be as good a place as any to affirm that we certainly did not pick 1996 for any nefarious purpose.  Having worked with the SIPP before and being aware of the change in design, we chose 1996 purely because of the benefits we described here.  Once again, should Professor Tamanaha or any other group wish to use the publicly available SIPP data to extend the series farther back, we’ll be interested to see the results.


Brian Tamanaha’s Straw Men (Part 1): Why we used SIPP data from 1996 to 2011

(Reposted from Brian Leiter’s Law School Reports)

 

BT Claim:  We could have used more historical data without introducing continuity and other methodological problems

BT Quote:  “Although SIPP was redesigned in 1996, there are surveys for 1993 and 1992, which allow continuity . . .”

Response:  Using more historical data from SIPP would likely have introduced continuity and other methodological problems

SIPP does indeed go back farther than 1996.  We chose that date because it was the beginning of an updated and revitalized SIPP that continues to this day.  SIPP was substantially redesigned in 1996 to increase sample size and improve data quality.  Combining different versions of SIPP could have introduced methodological problems.  That doesn’t mean one could not do it in the future, but it might raise as many questions as it would answer.

Had we used earlier data, it would have been difficult to know to what extent changes in our earnings premium estimates were caused by changes in the real world, and to what extent they were artifacts of changes in SIPP methodology.

Because SIPP has developed and improved over time, the more recent data is more reliable than older historical data.  All else being equal, a larger sample size and more years of data are preferable.  However, data quality issues suggest focusing on more recent data.

If older data were included, it probably would have been appropriate to weight more recent, higher-quality data more heavily than older, lower-quality data. We would likely also have had to make adjustments for differences that might have been caused by changes in survey methodology. Such adjustments would inevitably have been controversial.
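To sketch what such weighting could look like (this is one textbook approach, inverse-variance weighting, not necessarily the one we would have chosen): if $\hat{\pi}_1, \dots, \hat{\pi}_T$ are period-specific premium estimates with standard errors $s_1, \dots, s_T$, the weighted estimate is

$$\hat{\pi} \;=\; \frac{\sum_{t=1}^{T} \hat{\pi}_t / s_t^{2}}{\sum_{t=1}^{T} 1 / s_t^{2}},$$

so noisier estimates from older, smaller-sample panels automatically receive less weight. The controversy would lie in justifying the weights and the methodology adjustments, not in the arithmetic.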

Because the sample size increased dramatically after 1996, including a few years of pre-1996 data would not add nearly as much new data, or have nearly as much potential to change our estimates, as Professor Tamanaha believes. There are also gaps in the SIPP data from the 1980s because of insufficient funding.

These issues and the 1996 changes are explained at length in the Survey of Income and Program Participation User’s Guide.

Changes to the new 1996 version of SIPP include:

  • Roughly doubling the sample size
    • This improves the precision of estimates and shrinks standard errors (see the note after this list)
  • Lengthening the panels from 3 years to 4 years
    • This reduces the severity of the regression-to-the-median problem
  • Introducing computer-assisted interviewing to improve data collection and reduce errors or the need to impute missing data
  • Introducing oversampling of low-income neighborhoods
    • This mitigates the response bias issues we previously discussed, which are most likely to affect the bottom of the distribution
  • Instituting new income topcoding procedures
    • These affect both the mean and various points in the distribution
    • Topcoding is done on a monthly or quarterly basis and can therefore undercount end-of-year bonuses, even for those who are not extremely high income year-round
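A note on the first of these changes: the precision gain from doubling the sample is easy to quantify. For a sample mean, the standard error scales with the inverse square root of the sample size:

$$\mathrm{SE}(\bar{x}) \;=\; \frac{\sigma}{\sqrt{n}}, \qquad \mathrm{SE}_{2n} \;=\; \frac{\sigma}{\sqrt{2n}} \;=\; \frac{\mathrm{SE}_{n}}{\sqrt{2}} \;\approx\; 0.71 \times \mathrm{SE}_{n},$$

or roughly a 29 percent reduction in standard errors from the larger sample alone.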

Most government surveys topcode income data—that is, there is a maximum income that they will report.  This is done to protect the privacy of high-income individuals who could more easily be identified from ostensibly confidential survey data if their incomes were revealed.

Because law graduates tend to have higher incomes than bachelor’s degree holders, topcoding introduces downward bias into earnings premium estimates. Midstream changes to topcoding procedures can change this bias and create problems of consistency and continuity.

Without going into more detail, the topcoding procedure that began in 1996 appears to be an improvement over the earlier topcoding procedure.
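A small simulation illustrates why topcoding biases measured premiums downward (the earnings distributions and the cap below are invented; SIPP’s actual topcoding procedure is more involved):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical log-normal earnings (parameters invented for illustration).
ba = rng.lognormal(mean=11.0, sigma=0.6, size=n)  # bachelor's degree holders
jd = rng.lognormal(mean=11.5, sigma=0.7, size=n)  # law degree holders

cap = 150_000  # illustrative topcode threshold
ba_top = np.minimum(ba, cap)
jd_top = np.minimum(jd, cap)

# Law graduates have more mass above the cap, so topcoding trims their
# mean by more, shrinking the measured JD-over-BA premium.
print(f"true premium:     {jd.mean() - ba.mean():>8,.0f}")
print(f"topcoded premium: {jd_top.mean() - ba_top.mean():>8,.0f}")
```

Because the cap binds far more often for the higher-earning group, the measured premium understates the true one.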

These are only a subset of the problems that extending the SIPP data back past 1996 would have introduced. For us, the costs of backfilling data appear to outweigh the benefits. If other parties wish to pursue that course, we’ll be interested in what they find, just as we hope others were interested in our findings.


Brian Tamanaha’s Straw Men (Overview)

(Cross posted from Brian Leiter’s Law School Reports)

Brian Tamanaha previously told Inside Higher Education that our research only looked at average earnings premiums and did not consider the low end of the distribution.  Dylan Matthews at the Washington Post reported that Professor Tamanaha’s description of our research was “false”. 

In his latest post, Professor Tamanaha combines interesting critiques with some not very interesting errors and claims that are not supported by data. Responding to his blog post is a little tricky, as his ongoing edits have rendered it something of a moving target. While we’re happy to see improvements, a PDF of the version to which we are responding is available here, just so we all know what page we’re on.

Stephen Diamond explains why Tamanaha apparently changed his post: Ted Seto and Eric Rasmusen expressed concerns about Tamanaha’s use of ad hominem attacks.

Some of Tamanaha’s new errors are surprising, because they come after an email exchange with him in which we addressed them.  For example, Tamanaha’s description of our approach to ability sorting constitutes a gross misreading of our research.  Tamanaha also references the wrong chart for earnings premium trends and misinterprets confidence intervals.  And his description of our present value calculations is way off the mark.

Here are some quick bullet point responses, with details below in subsequent posts:

  • Forecasting and Backfilling
    • Using more historical data from SIPP would likely have introduced continuity and other methodological problems
    • Using more years of data is as likely to increase the historical earnings premium as to reduce it
    • If pre-1996 historical data finds lower earnings premiums, that may suggest a long term upward trend and could mean that our estimates of flat future earnings premiums are too conservative and the premium estimates should be higher
    • The earnings premium in the future is just as likely to be higher than it was in 1996-2011 as it is to be lower
    • In the future, the earnings premium would have to be lower by 85 percent for an investment in law school to destroy economic value at the median (see the arithmetic after this list)
  • Data sufficiency
    • 16 years of data is more than is used in similar studies to establish a baseline.  This includes studies Tamanaha cited and praised in his book.
    • Our data includes both peaks and troughs in the cycle.  Across the cycle, law graduates earn substantially more than bachelor’s degree holders.
  • Tamanaha’s errors and misreading
    • We control for ability sorting and selection using extensive controls for socio-economic, academic, and demographic characteristics
    • This substantially reduces our earnings premium estimates
    • Any lingering ability sorting and selection is likely offset by response bias in SIPP, topcoding, and other problems that cut in the opposite direction
    • Tamanaha references the wrong chart for earnings premium trends and misinterprets confidence intervals
    • Tamanaha is confused about present value, opportunity cost, and discounting
    • Our in-school earnings are based on data, but, in any event, “correcting” to zero would not meaningfully change our conclusions
  • Tamanaha’s best line
    • “Let me also confirm that [Simkovic & McIntyre’s] study is far more sophisticated than my admittedly crude efforts.”
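On the present-value point in the first group of bullets: the 85 percent figure follows from simple arithmetic. An investment destroys value at the median when the discounted premium no longer covers the discounted costs; if the premium falls by a fraction $r$, value is destroyed when

$$(1 - r)\,\mathrm{PV}(\text{premium}) < \mathrm{PV}(\text{costs}), \qquad \text{i.e.,} \qquad r > 1 - \frac{\mathrm{PV}(\text{costs})}{\mathrm{PV}(\text{premium})}.$$

As a back-of-the-envelope check (illustrative only; the actual calculation uses after-tax figures and net tuition), a median pre-tax lifetime premium of \$600,000 and an 85 percent threshold together imply costs on the order of $0.15 \times \$600{,}000 = \$90{,}000$.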

Nonrespondent law graduates and other sampling questions

The Washington Post reports one possible concern with estimates of the Economic Value of a Law Degree:

“[Paul] Campos argues that low-earning lawyers may be less likely to participate in SIPP in the first place because of the stigma involved in admitting that, even anonymously.”

By email, Jerry Organ asks related questions about the representativeness of our sample.

“SIPP” is the United States Census Bureau’s Survey of Income and Program Participation, one of the primary data sources used in The Economic Value of a Law Degree.  Campos worries about stigma and non-response.  Thankfully, SIPP is specifically designed to deal with these problems and to include impoverished and stigmatized members of the population, including those who receive government aid.

The Census Bureau explains SIPP’s purpose as follows:

 “To collect source and amount of income, labor force information, program participation and eligibility data, and general demographic characteristics to measure the effectiveness of existing federal, state, and local programs; to estimate future costs and coverage for government programs, such as food stamps; and to provide improved statistics on the distribution of income and measures of economic well-being in the country.”

The Census Bureau elaborates on the use of SIPP to analyze participation in Food Stamps and other anti-poverty programs here.

Census explains in greater detail how SIPP handles issues related to response bias, non-response bias, and weighting here.  SIPP oversamples in poor neighborhoods, imputes when necessary, and adjusts the sample weights to approach a nationally representative sample.

It is about as good a survey as one is likely to find, conducted by people who care a great deal about nonresponse and accurate estimates.

Additionally, to the extent that any lingering nonresponse bias causes those with low earnings to be less inclined to participate, this bias will affect both law graduates and bachelor’s degree holders. What we measure in The Economic Value of a Law Degree is the earnings premium: the difference in earnings attributable to the law degree. The biases should wash out or, more likely, bias down our estimates of the law degree earnings premium, because bachelor’s degree holders are far more likely than law graduates to live in poverty.
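A quick simulation shows the wash-out-or-bias-down logic (the earnings distributions and response rates are invented for illustration). Suppose anyone earning under $30,000, in either group, responds to the survey only half the time:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Hypothetical log-normal earnings (parameters invented for illustration).
ba = rng.lognormal(mean=11.0, sigma=0.6, size=n)  # bachelor's degree holders
jd = rng.lognormal(mean=11.5, sigma=0.7, size=n)  # law degree holders

def observed(earnings):
    # Stigma-driven nonresponse: people earning under $30,000 respond
    # only half the time; everyone else always responds.
    keep = (earnings >= 30_000) | (rng.random(earnings.size) < 0.5)
    return earnings[keep]

ba_obs, jd_obs = observed(ba), observed(jd)

# Low earners are more common among bachelor's degree holders, so
# nonresponse inflates the BA mean more than the JD mean, and the
# measured premium comes out below the true premium.
print(f"true premium:     {jd.mean() - ba.mean():>8,.0f}")
print(f"observed premium: {jd_obs.mean() - ba_obs.mean():>8,.0f}")
```

The same nonresponse mechanism applied to both groups leaves the measured premium slightly below the true one, because the control group has more low earners to lose.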

Indeed, studies that have compared earnings reported in SIPP to earnings from administrative data (tax and Social Security Administration records) find that SIPP underestimates earnings premiums, because more highly educated and higher-income individuals tend to underreport earnings, while less educated and lower-income individuals tend to overreport. We make no attempt to correct for this downward bias in our earnings premium estimates to offset any lingering selection on unobservables.

Individual response bias issues also won’t affect federal student loan default data, which is administrative data from the Department of Education.  As noted in the article and in previous blog posts, former law students default on their student loans much less frequently than former students of bachelor’s degree or other graduate degree programs.

 


Brian Tamanaha Says We Should Look at the Below Average Outcomes (And We Did)

Brian Tamanaha’s response to The Economic Value of a Law Degree, as reported by Inside Higher Education, doesn’t capture the contents of the study.  According to IHE, Tamanaha said:

 “The study blends the winners and losers, to come up with its $1,000,000 earnings figure, but that misses the point of my book: which is that getting a law degree outside of top law school – and especially at bottom law schools – is a risky proposition . . . Nothing in the article refutes this point.”

Professor Tamanaha is correct that the $1 million figure is an average, but we didn’t write a 70-page article with only one number in it.

The Economic Value of a Law Degree reports not only the mean or average; it also reports percentiles, or different points in the distribution. At the 75th percentile, the pre-tax lifetime value is $1.1 million, or $100,000 more than at the mean. At the 50th percentile, the value is $600,000. At the 25th percentile, the value is $350,000. These points in the earnings distribution do better than breaking out returns by school: they account for the fact that even some people at good schools have bad outcomes, and vice versa. Thus we capture, at length, exactly the concern Tamanaha expresses.

[Figure: Lifetime earnings distribution]

 

As we discuss in the article, for technical reasons related to regression of earnings to the median, our 75th and 25th percentile values are probably too extreme. The “75th percentile” value is likely closer to the 80th or 85th percentile for lifetime earnings, and the “25th percentile” is likely closer to the 20th or 15th percentile.

In other words, roughly the top 15 to 20 percent of law school graduates obtain a lifetime earnings premium worth more than $1.1 million as of the start of law school. Roughly the next 30 to 35 percent obtain an earnings premium between $1.1 million and $600,000. In the lower half of the distribution, roughly the first 30 to 35 percent obtain an earnings premium between $350,000 and $600,000. Roughly the bottom 15 to 20 percent obtain an earnings premium below $350,000. These numbers are pre-tax and pre-tuition.
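The regression-to-the-median point can be illustrated with a simple simulation (all parameters invented): give each person a permanent earnings level plus transitory year-to-year noise, and compare percentiles of a single year’s earnings with percentiles of the lifetime average.

```python
import numpy as np

rng = np.random.default_rng(4)
n_people, n_years = 20_000, 30

# Permanent earnings level per person, plus transitory yearly noise
# (all parameter values invented for illustration).
permanent = rng.normal(80_000, 20_000, size=(n_people, 1))
transitory = rng.normal(0, 25_000, size=(n_people, n_years))
annual = permanent + transitory

lifetime_avg = annual.mean(axis=1)  # transitory noise mostly averages out
single_year = annual[:, 0]          # a one-year snapshot, noise included

for q in (25, 75):
    print(f"{q}th percentile: single year {np.percentile(single_year, q):>8,.0f}"
          f" | lifetime average {np.percentile(lifetime_avg, q):>8,.0f}")
```

Single-year percentiles are more spread out than lifetime percentiles: a person at the annual 25th percentile is often just having a bad year, and their lifetime position is closer to the middle. That is why snapshot-based 75th and 25th percentile values overstate the dispersion of lifetime outcomes.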

Even toward the bottom of the distribution, even after taxes, and even after tuition, a law degree is a profitable investment. And that is before income-based repayment, which can substantially reduce the risk at the bottom of the distribution.

We also present student loan default rates for 25 standalone law schools, most of which are low ranked institutions, and all of which have student loan default rates that are below the average for bachelor’s and graduate degree programs.  The average law school default rate is approximately one third of the average default rate for bachelor’s and graduate programs.

[Figure: Student loan default rates]

 

People with law degrees are not immune from risk.  No one is.  But the law degree reduces the risk of financial hardship.  Law degree holders face significantly less risk of low earnings than those with bachelor’s degrees, and also face lower risk of unemployment.  Increased earnings and reduced risk appear to more than offset the cost of the law degree for the overwhelming majority of law students.

Frank McIntyre and I did not miss the point of Brian Tamanaha’s Failing Law Schools.   Rather, we disagree with his conclusions about the riskiness of a law degree because data on law degree holders does not support his conclusions.  We discuss Tamanaha’s analysis on pages 20 to 24 of The Economic Value of a Law Degree.

We believe that Professor Tamanaha’s views deserve more attention than we could give them in The Economic Value of a Law Degree. Because of this, last spring we also wrote a book review of Failing Law Schools, pointing out both the strengths and weaknesses of his analysis. We will make the book review available on SSRN soon.

If Professor Tamanaha disagrees with our estimates of the value of a law degree at the low end, we’re happy to hear it.  But he should not say that we ignored the issue.  We look forward to a productive exchange with him, on the merits.