Category: Economic Analysis of Law


A Nobel for Shiller

When I read Robert Shiller’s Finance and the Good Society last year, I had a sense the author treated the work as the penultimate step in a scholarly cursus honorum, to culminate in the Nobel. Thus my cautionary note in this review:

[Shiller] has eloquently analyzed the role of human psychology in markets, and he predicted both the tech and housing bubbles. He has been a methodological trailblazer, introducing behavioral science to the ossified academic discipline of finance. Time’s Michael Grunwald has called him a “must-read” among wonks in the Obama Administration. Shiller’s past books command respect and repay close reading. Given his sterling career, it is deeply disappointing to see Shiller divert the “behavioral turn” in economics into the apologetics of Finance and the Good Society.

As I explain in the review, in Finance and the Good Society Shiller engages in the cardinal sin of celebrity economists: he presumes to comment authoritatively on legal, political, and moral matters far from his real domain of expertise. As for co-winner Eugene Fama’s contributions, Justin Fox’s work is useful (as summarized in this 2009 review):

Eugene Fama . . . promulgated the efficient markets hypothesis in its most widely recognised form by combining it with the capital asset pricing model that portrays investing as a trade-off between risk and return. . . . [I]n the early 1990s, Fama and Kenneth French published a large empirical survey of stock market returns since 1940 and found several ways in which returns were not random and which could not be explained by [Fama's theory]. In aggregate, smaller companies did better than larger ones, while “value” stocks, which are cheap compared with the book value on their balance sheet, also outperformed. There was even a “momentum” effect – stocks that had been doing well for a while tended to continue to do so. . . . . Fox makes clear that this was tantamount to the founder of efficient markets admitting his theory was wrong and quotes the judgment of one critic: “The Pope said God was dead.” He is also scathing about Fama’s attempt to rescue the theory by categorising all these effects as “risk factors”. . . . All of this came more than a decade before last year’s implosion. So why did regulators continue to enshrine assumptions of efficiency in the rules they set?

The person who can answer that last question truly deserves a Nobel.


The Credit Card Merchant Fee Litigation Settlement

I’d like to thank Concurring Opinions for inviting me to blog about In re: Payment Card Interchange Fee and Merchant Discount Antitrust Litigation.  This eight-year-old multi-district litigation has produced the largest proposed cash settlement in litigation history ($7.25 billion), along with what is perhaps the most extraordinary release from liability ever concocted.  It may also be the most contentious.  Over half the named plaintiffs and over 25% of the class, including most large merchants (think Walmart, Target) and most merchant organizations, have objected.  On September 12, Eastern District of New York Judge John Gleeson held a fairness hearing to consider the settlement, and the parties are awaiting his decision.  An appeal is a virtual certainty.

This post will provide background on the credit card industry pricing mechanisms that led to this litigation, the legal issues in the case, and the structure of the settlement.  (You can read more about the history of the credit card industry’s relationship to the antitrust laws here.)  In subsequent posts, I’ll separately analyze the damages and relief provisions in the settlement.  (If you can’t wait 8-) my working paper analyzing the settlement is here.)  If there are particular issues that you’d like to read more about, let me know in the comments and I will respond in subsequent posts.

The credit card industry is atypical, but not unique, in that it competes in a two-sided market, i.e., one that serves two distinct customer bases.  A card system like Visa provides both a purchasing device (credit cards) to consumers and a payment acceptance service to merchants.  (By way of comparison, the legal blogging market is also two-sided.  Concurring Opinions provides both an information forum to its readers and a platform to its advertisers.)


The Economics of the Baby Shortage: A Horrifying Counter-example

In Landes and Posner’s famous The Economics of the Baby Shortage, the authors consider the possibility that baby buyers are likely to be self-selecting monsters.  Not so, they argue, as

“Moreover, concern for child abuse should not be allowed to obscure the fact that abuse is not the normal motive for adopting a child.  And once we put abuse aside, willingness to pay money for a baby would seem on the whole a reassuring factor from the standpoint of child welfare. Few people buy a car or television set to smash it.  In general, the more costly a purchase, the more care the purchaser will lavish on it.”

I’ve always found these lines to be particularly bizarre (even in the context of an otherwise famously provocative, probably misleading, essay). In any event, they came to mind when a student in my L&E class forwarded this chilling story.

“KIEL, Wisconsin, Sept 9 (Reuters) – Todd and Melissa Puchalla struggled more than two years to raise Quita, the troubled teenager they’d adopted from Liberia. When they decided to give up the 16-year-old, they found new parents to take her in less than two days – by posting an ad on the Internet…”


No Margin for Error

Suzanne Kim’s post below on the economic and social pressures for “smile surgery” reminds me of Jonathan Crary’s excellent book, 24/7: Late Capitalism and the Ends of Sleep. Reviewing developments ranging from military use of modafinil to the rise of energy drinks, Crary concludes that “Time for human rest and regeneration is now simply too expensive to be structurally possible within contemporary capitalism.” Might the same be said for unsmiling faces in hypercompetitive service industries?

The key questions here are: who’s in charge, and what are their values? A recent story on gender dynamics at Harvard Business School offers some clues:

The men at the top of the heap worked in finance, drove luxury cars and advertised lavish weekend getaways on Instagram, many students observed in interviews. Some belonged to the so-called Section X, an on-again-off-again secret society of ultrawealthy, mostly male, mostly international students known for decadent parties and travel. Women were more likely to be sized up on how they looked. . . .

As a recent discussion on the problem of “Second Generation” gender bias showed, emphasis on appearance may be a key “unseen barrier” to equity.

Image Credit: book by Robin Leidner on the commodification of affect.

What Drives Innovation? The State

Magazines like The Economist mock industrial policy while piling praise on the private sector. But the more one knows about the intertwining of state and market in health care, defense, telecommunications, energy, and banking, the less realistic any strict divide between “public” and “private” appears. Moreover, even the internet sector, that last bastion of venture capital and risk-taking, is more a creature of state intervention than market forces. As Mariana Mazzucato argues:

Whether an innovation will be a success is uncertain, and it can take longer than traditional banks or venture capitalists are willing to wait. In countries such as the United States, China, Singapore, and Denmark, the state has provided the kind of patient and long-term finance new technologies need to get off the ground.

Apple is a perfect example. In its early stages, the company received government cash support via a $500,000 small-business investment company grant. And every technology that makes the iPhone a smartphone owes its vision and funding to the state: the Internet, GPS, touch-screen displays, and even the voice-activated smartphone assistant Siri all received state cash. The U.S. Defense Advanced Research Projects Agency bankrolled the Internet, and the CIA and the military funded GPS. So, although the United States is sold to us as the model example of progress through private enterprise, innovation there has benefited from a very interventionist state.

VCs and other financiers exaggerated their role in promoting innovation in order to get capital gains tax breaks. And while they retreat ever further from taking risks on game-changing advances in productivity, the tax breaks endure, starving the state of the revenues it needs to continue subsidizing innovation. The California Ideology gradually undoes its own material foundations, but its adherents are unfazed. They are content to reap the benefits of past decades of government investment. From Silicon Valley to Wall Street, seed corn is the tax-cutters’ favorite meal.

X-Posted: Madisonian.

King’s Economic Legacy

Joey Fishkin highlights a very important part of Martin Luther King’s march on Washington:

Threaded through the demands of the March on Washington for Jobs and Freedom were calls for economic justice. The marchers demanded a nationwide minimum wage of “at least” $2.00 (it was then $1.25, so a 60% raise), in order to “give all Americans a decent standard of living.” They demanded a “massive federal program to train and place all unemployed workers — Negro and white — on meaningful and dignified jobs at decent wages.”

The legacy lives on. As David Dayen observes, “fast food and retail worker” strikes reflect the original marchers’ demands. An entity like “McDonald’s is so vast and lucrative that it could easily survive a major wage increase.” Such increases are desperately needed. As worker Willietta Dukes puts it:

I make $7.85 at Burger King as a guest ambassador and team leader, where I train new employees on restaurant regulations and perform the manager’s duties in their absence. . . . I’ve worked in fast-food for 15 years, and I can’t even afford my own rent payments. . . . My hours, like many of my coworkers, were cut this year, and I now work only 25 to 28 hours each week. I can’t afford to pay my bills working part time and making $7.85, and last month, I lost my house.

Dukes is one of the millions of faces behind aggregate statistics that suggest grotesque unfairness at the heart of the American economy. They won’t get much of a hearing in a mainstream media obsessed with the problems of the fortunate. But there is hope that a critical mass of actions by them, like the Washington civil rights march of 1963, will eventually force those at the top to take notice.


Brian Tamanaha’s Straw Men (Part 2): Who’s Cherry Picking?

(Reposted from Brian Leiter’s Law School Reports)

BT Claim 2:  Using more years of data would reduce the earnings premium

BT Quote: “There is no doubt that including 1992 to 1995 in their study would measurably reduce the ‘earnings premium.’”

Response:  Using more years of historical data is as likely to increase the earnings premium as to reduce it

We have doubts about the effect of more data, even if Professor Tamanaha does not.

Without seeing data that would enable us to calculate earnings premiums, we can’t know for sure if introducing more years of comparable data would increase our estimates of the earnings premium or reduce it.

The issue is not simply the state of the legal market or entry-level legal hiring—we must also consider how our control group of bachelor’s degree holders (who appear to be similar to the law degree holders but for the law degree) was doing.  To measure the value of a law degree, we must measure earnings premiums, not absolute earnings levels.

As a commenter on Tamanaha’s blog helpfully points out:

“I think you make far too much of the exclusion of the period from 1992-1995. Entry-level employment was similar to 1995-98 (as indicated by table 2 on page 9).

But this does not necessarily mean that the earnings premium was the same or lower. One cannot form conclusions about all JD holders based solely on entry-level employment numbers. As S&M’s data suggests, the earnings premium tends to be larger during recessions and their immediate aftermath and the U.S. economy only began an economic recovery in late 1992.

Lastly, even if you are right about the earnings premium from 1992-1995, what about 1987-91 when the legal economy appeared to be quite strong (as illustrated by the same chart referenced above)? Your suggestion to look at a twenty year period excludes this time frame even though it might offset the diminution in the earnings premium that would allegedly occur if S&M considered 1992-95.”

There is nothing magical about 1992.  If good quality data were available, why not go back to the 1980s or beyond?   Stephen Diamond and others make this point.

The 1980s are generally believed to be a boom time in the legal market.  Assuming for the sake of argument that law degree earnings premiums are pro-cyclical (we are not sure if they are), inclusion of more historical data going back past 1992 is just as likely to increase our earnings premium as to reduce it.  Older data might suggest an upward trend in education earnings premiums, which could mean that our assumption of flat earnings premiums may be too conservative. Leaving aside the data quality and continuity issues we discussed before (which led us to pick 1996 as our start year), there is no objective reason to stop in the early 1990s instead of going back further to the 1980s.

Our sample from 1996 to 2011 includes both good times and bad for law graduates and for the overall economy, and in every part of the cycle, law graduates appear to earn substantially more than similar individuals with only bachelor’s degrees.

[Chart: Cycles]
This might be as good a place as any to affirm that we certainly did not pick 1996 for any nefarious purpose.  Having worked with the SIPP before and being aware of the change in design, we chose 1996 purely because of the benefits we described here.  Once again, should Professor Tamanaha or any other group wish to use the publicly available SIPP data to extend the series farther back, we’ll be interested to see the results.


Brian Tamanaha’s Straw Men (Part 1): Why we used SIPP data from 1996 to 2011

(Reposted from Brian Leiter’s Law School Reports)

 

BT Claim:  We could have used more historical data without introducing continuity and other methodological problems

BT quote:  “Although SIPP was redesigned in 1996, there are surveys for 1993 and 1992, which allow continuity . . .”

Response:  Using more historical data from SIPP would likely have introduced continuity and other methodological problems

SIPP does indeed go back farther than 1996.  We chose that date because it was the beginning of an updated and revitalized SIPP that continues to this day.  SIPP was substantially redesigned in 1996 to increase sample size and improve data quality.  Combining different versions of SIPP could have introduced methodological problems.  That doesn’t mean one could not do it in the future, but it might raise as many questions as it would answer.

Had we used earlier data, it could be difficult to know to what extent changes to our earnings premiums estimates were caused by changes in the real world, and to what extent they were artifacts caused by changes to the SIPP methodology.

Because SIPP has developed and improved over time, the more recent data is more reliable than older historical data.  All else being equal, a larger sample size and more years of data are preferable.  However, data quality issues suggest focusing on more recent data.

If older data were included, it probably would have been appropriate to weight more recent and higher quality data more heavily than older and lower quality data.  We would likely also have had to make adjustments for differences that might have been caused by changes in survey methodology.  Such adjustments would inevitably have been controversial.

Because the sample size increased dramatically after 1996, including a few years of pre-1996 data would not provide as much new data, or have as much potential to change our estimates, as Professor Tamanaha believes.  There are also gaps in SIPP data from the 1980s because of insufficient funding.

These issues and the 1996 changes are explained at length in the Survey of Income and Program Participation User’s Guide.

Changes to the new 1996 version of SIPP include:

  • Roughly doubling the sample size, which improves the precision of estimates and shrinks standard errors
  • Lengthening the panels from 3 years to 4 years, which reduces the severity of the regression-to-the-median problem
  • Introducing computer-assisted interviewing to improve data collection and reduce errors or the need to impute missing data
  • Introducing oversampling of low-income neighborhoods, which mitigates the response-bias issues we previously discussed (these are most likely to affect the bottom of the distribution)
  • Instituting new income topcoding procedures with the 1996 Panel, which affect both means and various points in the distribution; because topcoding is done on a monthly or quarterly basis, it can undercount end-of-year bonuses, even for those who are not extremely high income year-round
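The sample-size point is simple arithmetic: the standard error of a mean scales as 1/√n, so roughly doubling the sample shrinks standard errors by about 29 percent. A quick sketch with purely hypothetical numbers (these are illustrative, not SIPP’s actual sample sizes or earnings variances):

```python
import math

def standard_error(sd, n):
    """Standard error of a sample mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

sd = 40_000                     # hypothetical earnings standard deviation
n_old, n_new = 20_000, 40_000   # sample size roughly doubled, as in 1996

se_old = standard_error(sd, n_old)
se_new = standard_error(sd, n_new)

print(f"SE before redesign: {se_old:.1f}")               # ≈ 282.8
print(f"SE after redesign:  {se_new:.1f}")               # ≈ 200.0
print(f"shrinkage: {1 - se_new / se_old:.1%}")           # ≈ 29.3%
```

Whatever the true variance, doubling n always cuts the standard error by the same factor, 1 − 1/√2.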

Most government surveys topcode income data—that is, there is a maximum income that they will report.  This is done to protect the privacy of high-income individuals who could more easily be identified from ostensibly confidential survey data if their incomes were revealed.

Because law graduates tend to have higher incomes than bachelor’s degree holders, topcoding introduces downward bias to earnings premium estimates. Midstream changes to topcoding procedures can change this bias and create problems with respect to consistency and continuity.

Without going into more detail, the topcoding procedure that began in 1996 appears to be an improvement over the earlier topcoding procedure.
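The direction of the topcoding bias is easy to demonstrate with a small simulation. The cap, sample sizes, and earnings distributions below are invented for illustration; they are not SIPP’s actual topcode thresholds, nor our estimates:

```python
import random

random.seed(0)

TOPCODE = 150_000  # hypothetical annual cap; real SIPP thresholds differ

def topcode(incomes, cap=TOPCODE):
    """Censor incomes at the cap, as a topcoding procedure would."""
    return [min(x, cap) for x in incomes]

def mean(xs):
    return sum(xs) / len(xs)

# Simulated annual earnings: JD holders earn more on average,
# so more of them hit the cap and lose part of their right tail.
bachelors = [random.lognormvariate(11.0, 0.6) for _ in range(50_000)]
jds       = [random.lognormvariate(11.4, 0.6) for _ in range(50_000)]

true_premium   = mean(jds) - mean(bachelors)
capped_premium = mean(topcode(jds)) - mean(topcode(bachelors))

print(f"true premium:     {true_premium:,.0f}")
print(f"topcoded premium: {capped_premium:,.0f}")
# The topcoded premium is smaller: censoring clips more of the
# JD distribution than the bachelor's distribution, biasing the
# measured premium downward.
```

Note the bias runs in one direction only: because the higher-earning group is censored more, topcoding can only understate the premium, never overstate it.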

These are only a subset of the problems extending the SIPP data back past 1996 would have introduced.  For us, the costs of backfilling data appear to outweigh the benefits.  If other parties wish to pursue that course, we’ll be interested in what they find, just as we hope others were interested in our findings.


Brian Tamanaha’s Straw Men (Overview)

(Cross posted from Brian Leiter’s Law School Reports)

Brian Tamanaha previously told Inside Higher Education that our research only looked at average earnings premiums and did not consider the low end of the distribution.  Dylan Matthews at the Washington Post reported that Professor Tamanaha’s description of our research was “false”. 

In his latest post, Professor Tamanaha combines interesting critiques with some not very interesting errors and claims that are not supported by data.   Responding to his blog post is a little tricky as his ongoing edits rendered it something of a moving target.  While we’re happy with improvements, a PDF of the version to which we are responding is available here just so we all know what page we’re on.

Stephen Diamond explains why Tamanaha apparently changed his post: Ted Seto and Eric Rasmusen expressed concerns about Tamanaha’s use of ad hominem attacks.

Some of Tamanaha’s new errors are surprising, because they come after an email exchange with him in which we addressed them.  For example, Tamanaha’s description of our approach to ability sorting constitutes a gross misreading of our research.  Tamanaha also references the wrong chart for earnings premium trends and misinterprets confidence intervals.  And his description of our present value calculations is way off the mark.

Here are some quick bullet point responses, with details below in subsequent posts:

  • Forecasting and Backfilling
    • Using more historical data from SIPP would likely have introduced continuity and other methodological problems
    • Using more years of data is as likely to increase the historical earnings premium as to reduce it
    • If pre-1996 historical data finds lower earnings premiums, that may suggest a long term upward trend and could mean that our estimates of flat future earnings premiums are too conservative and the premium estimates should be higher
    • The earnings premium in the future is just as likely to be higher as it is to be lower than it was in 1996-2011
    • In the future, the earnings premium would have to be lower by 85 percent for an investment in law school to destroy economic value at the median
  • Data sufficiency
    • 16 years of data is more than is used in similar studies to establish a baseline.  This includes studies Tamanaha cited and praised in his book.
    • Our data includes both peaks and troughs in the cycle.  Across the cycle, law graduates earn substantially more than bachelor’s degree holders.
  • Tamanaha’s errors and misreading
    • We control for ability sorting and selection using extensive controls for socio-economic, academic, and demographic characteristics
    • This substantially reduces our earnings premium estimates
    • Any lingering ability sorting and selection is likely offset by response bias in SIPP, topcoding, and other problems that cut in the opposite direction
    • Tamanaha references the wrong chart for earnings premium trends and misinterprets confidence intervals
    • Tamanaha is confused about present value, opportunity cost, and discounting
    • Our in-school earnings are based on data, but, in any event, “correcting” to zero would not meaningfully change our conclusions
  • Tamanaha’s best line
    • “Let me also confirm that [Simkovic & McIntyre’s] study is far more sophisticated than my admittedly crude efforts.”
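The break-even point in the bullets above rests on present-value arithmetic: discount the stream of annual earnings premiums, compare it to direct costs plus forgone earnings, and ask how far the premium could fall before the comparison flips. The mechanics can be sketched with purely hypothetical inputs (these are not our actual figures and will not reproduce the 85 percent result):

```python
# All numbers below are invented for illustration only.
annual_premium   = 30_000     # assumed pre-tax earnings premium per year
years_working    = 40         # assumed career length after graduation
discount_rate    = 0.03       # assumed real discount rate
direct_costs     = 120_000    # assumed tuition and fees over 3 years
opportunity_cost = 3 * 50_000 # assumed forgone earnings while in school

def pv_annuity(payment, rate, years):
    """Present value of a level annual payment stream."""
    return payment * (1 - (1 + rate) ** -years) / rate

pv_benefit = pv_annuity(annual_premium, discount_rate, years_working)
total_cost = direct_costs + opportunity_cost

print(f"PV of premium stream: {pv_benefit:,.0f}")
print(f"Total cost:           {total_cost:,.0f}")

# Break-even: the fraction by which the premium could fall before
# the investment destroys value under these assumptions.
breakeven_drop = 1 - total_cost / pv_benefit
print(f"premium could fall {breakeven_drop:.0%} before NPV turns negative")
```

The exercise shows why confusing opportunity cost with discounting, or present values with undiscounted sums, changes the answer dramatically: both sides of the comparison are sensitive to the rate and horizon assumed.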