Archive for the ‘Economic Analysis of Law’ Category
posted by UCLA Law Review
Volume 61, Issue 1 (August 2013)
Against Endowment Theory: Experimental Economics and Legal Scholarship, by Gregory Klass & Kathryn Zeiler (p. 2)
Why Broccoli? Limiting Principles and Popular Constitutionalism in the Health Care Case, by Mark D. Rosen & Christopher W. Schmidt (p. 66)
December 6, 2013 at 6:59 pm Tags: 4th amendment, article iii, broccoli, Constitutional Law, critical race theory, Current Events, endowment effect, endowment theory, Fourth Amendment, health care, portable electronic devices, search, united states v. cotterman Posted in: Civil Rights, Constitutional Law, Economic Analysis of Law, Health Law, Law Rev (UCLA), Legal Theory, Race, Supreme Court
posted by Orly Lobel
What a rollercoaster week of incredibly thoughtful reviews of Talent Wants to Be Free! I am deeply grateful to all the participants of the symposium. In The Age of Mass Mobility: Freedom and Insecurity, Anupam Chander, continuing Frank Pasquale’s and Matt Bodie’s questions about worker freedom and market power, asks whether Talent Wants to Be Free overly celebrates individualism, perhaps at the expense of a shared commitment to collective production, innovation, and equality. Deven Desai, in What Sort of Innovation?, asks about the kinds of investments and knowledge that are likely to be encouraged through private markets. And in Free Labor, Free Organizations, Competition and a Sports Analogy, Shubha Ghosh reminds us that to create true freedom in markets we need to look closely at competition policy and antitrust law. These questions about freedom versus control, individualism versus collectivity, and private versus public are coming from left and right. And rightly so. These are fundamental tensions in the greater project of human progress, and Talent Wants to Be Free strives to show how certain dualities are pervasive and unresolvable. As Brett suggested, that’s where we need to be in the real world. From an innovation perspective, I describe in the book how “each of us holds competing ideas about the essence of innovation and conflicting views about the drive behind artistic and inventive work. The classic (no doubt romantic) image of invention is that of exogenous shocks, radical breakthroughs, and sweeping discoveries that revolutionize all that was before. The lone inventor is understood to be driven by a thirst for knowledge and a unique capacity to find what no one has seen before. But the solitude in the romantic image of the lone inventor or artist also leads to an image of the insignificance of place, environment, and ties…”. Chapter 6 ends with the following visual:
Dualities of Innovation:
Individual / Collaborative
Passion / Profit
And yet, the book takes on the contrarian title Talent Wants to Be Free! We are at a moment in history in which the pendulum has swung too far. We have too much, not too little, control over information, mobility, and knowledge. We uncover this imbalance through a combination of a broad range of methodologies: historical, empirical, experimental, comparative, theoretical, and normative. These are exciting times for innovation research, and, as I hope to convince the readers of Talent, insights from all disciplines are contributing to these debates.
November 16, 2013 at 12:56 pm Posted in: Antitrust, Articles and Books, Behavioral Law and Economics, Book Reviews, Bright Ideas, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Law and Psychology, Symposium (Talent Wants to be Free), Technology
posted by Orly Lobel
I promised Victor Fleischer to return to his reflections on team production. Vic raised the issue of team production and the challenge of monitoring individual performance. In Talent Wants to Be Free I discuss some of these challenges in connection with my argument that much of what firms try to achieve through restrictive covenants could be achieved through positive incentives:
“Stock options, bonuses, and profit-sharing programs induce loyalty and identification with the company without the negative effects of over-surveillance or over-restriction. Performance-based rewards increase employees’ stake in the company and increase their commitment to the success of the firm. These rewards (and the employee’s personal investment in the firm that is generated by them) can also motivate workers to monitor their co-workers. We now have evidence that companies that use such bonus structures and pay employees stock options outperform comparable companies.”
But I also warn:
“[W]hile stock options and bonuses reward hard work, these pay structures also present challenges. Measuring employee performance in innovative settings is a difficult task. One of the risks is that compensation schemes may inadvertently emphasize observable over unobservable outputs. Another risk is that when collaborative efforts are crucial, differential pay based on individual contribution will be counterproductive and impede teamwork, as workers will want to shine individually. Individual compensation incentives might lead employees to hoard information, divert their efforts from the team, and reduce team output. In other words, performance-based pay in some settings risks creating perverse incentives, driving individuals to spend too much time on solo inventions and not enough time collaborating. Even more worrisome is the fear that employees competing for bonus awards will have incentives to actively sabotage one another’s efforts.
A related potential pitfall of providing bonuses for performance and innovative activities is the creation of jealousy and a perception of unfairness among employees. Employees, as all of us do in most aspects of our lives, tend to overestimate their own abilities and efforts. When a select few employees are rewarded unevenly in a large workplace setting, employers risk demoralizing others. Such unintended consequences will vary in corporate and industry cultures across time and place, but they may explain why many companies decide to operate under wage compression structures with relatively narrow variance between their employees’ paychecks. For all of these concerns, the highly innovative software company Atlassian recently replaced individual performance bonuses with higher salaries, an organizational bonus, and stock options, believing that too much of a focus on immediate individual rewards depleted team effort.
Still, despite these risks, for many businesses the carrots of performance-based pay and profit-sharing schemes have effectively replaced the sticks of controls. But there is a catch! Cleverly, sticks can be disguised as carrots. The infamous “golden handcuffs” (stock options and deferred compensation with punitive early-exit triggers) can operate as de facto restrictive contracts….”
All this is in line with what Vic is saying about the advantages of organizational forms that encourage longer term attachment. But the fundamental point is that stickiness (or what Vic refers to as soft control) is already quite strong through the firm form itself, along with status quo biases, risk aversion, and search lags. The stickiness has benefits but it also has heavy costs when it is compounded and infused with legal threats.
November 15, 2013 at 12:05 am Posted in: Behavioral Law and Economics, Bright Ideas, Contract Law & Beyond, Corporate Finance, Corporate Law, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Symposium (Talent Wants to be Free), Technology
posted by Orly Lobel
As Catherine Fisk and Danielle Citron point out in their thoughtful reviews here and here, the wisdom of freeing talent must go beyond private firm-level decisions; beyond the message to corporations about the benefits of talent mobility; beyond what Frank Pasquale smartly spun as “reversing Machiavelli’s famous prescription, Lobel advises the Princes of modern business that it is better to be loved than feared.” To get to an optimal equilibrium of knowledge exchanges and mobility, smart policy is needed, and policymakers must pay attention to research. Both Fisk and Citron raise questions about the likelihood that we will see reforms anytime soon. As Fisk points out — and as her important historical work has skillfully shown, and as we witness more recently in developments in several states, including Michigan, Texas, and Georgia, as well as (again, as Fisk and Citron point out) in certain aspects of the pending Restatement of Employment Law — the movement of law and policy has actually been toward more human capital controls rather than less. This is perhaps unsurprising to many of us. As with the copyright term extension act, which was the product of heavyweight lobbying, these shifts were supported by strong interest groups. What is perhaps different about the talent wars is the robust evidence suggesting that everyone, corporations large and small, new and old, can gain from loosening controls. Citron points to an irony that I too have been quite troubled by: the current buzz is about the intense need for talent, the talent drought, the shortage of STEM graduates. As Citron describes, the art and science of recruitment is all the rage. But while we debate reforms in schooling and immigration policy, we largely neglect to consider the reality of much deadweight loss through talent controls.
The good news is that not only in Massachusetts, where the governor has just expressed his support for reforming state law to narrow the use of non-competes, but also in other state legislatures, courts, and agencies, we see a greater willingness to think seriously about positive reforms. At the state level, the jurisdictional variation points to the double gain of regions that void, or at least strongly narrow, the use of non-competes. California, for example, gains twice: first by encouraging more human capital flow intra-regionally, and second by its willingness to give refuge to employees who have signed non-competes elsewhere. In other words, the positive effects stem not only from having the right policies of setting talent free but also from the comparative advantage those policies confer vis-à-vis more controlling states. This brain gain effect has been shown empirically: areas that enforce strong post-employment controls have higher rates of departure of inventors to other regions, while states that weakly enforce non-competes are on the receiving side of the cream of the crop. One can only hope that legislators and business leaders will take these findings very seriously.
At the federal level, in a novel approach to antitrust, the federal government recently took up the investigation of anti-competitive practices among high-tech giants that had agreed not to poach one another’s employees. This in fact relates to Shubha Ghosh’s questions about defining competition and the meaning of free and open labor markets. And it is a good moment to pause over the extent to which we encourage secrecy in both private and public organizations. It is a moment in which spiraling scandals of economic espionage by governments, coupled with leaks and demands for more transparency, require us to think hard. In this context, Citron is right to raise the question of government 2.0: for individuals to be committed and motivated to contribute to innovation, they need some assurance that their contributions will not be entirely appropriated by concentrated interests.
November 14, 2013 at 1:36 am Posted in: Antitrust, Articles and Books, Behavioral Law and Economics, Corporate Law, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Government Secrecy, Intellectual Property, Law and Psychology, Symposium (Talent Wants to be Free), Technology
posted by Orly Lobel
Both Vic Fleischer and Shubha Ghosh, in their thoughtful commentary on Talent Wants to Be Free, invoke the theory of the firm to raise questions about the extent of desirable freedom in talent and knowledge flows. In its basic iteration, the theory of the firm suggests that arm’s-length contracting will not be optimal when one party has the ability to renegotiate and hold the other party up, which is the conventional rationale for the desirability of talent controls. This is what I describe in the book as the Orthodox Model of employment intellectual property: firms fear making relational investments in employees and then having the employees renegotiate the contract under a threat of exit. Firms respond with mobility restrictions aimed at eliminating the transaction costs of this kind of opportunism. In the book I accept, at least for some situations, this account of the benefits and confidence that firms gain by internalizing production and ensuring ongoing loyalty by all players. The orthodox model thus explains post-employment controls as necessary to encourage optimal investment within the corporation: more company controls = more internal R&D and human capital investment. The new model developed in the book doesn’t deny these benefits but argues that the orthodox model is incomplete. The Dynamic-Dyadic Model asks about the costs as well as the benefits when controls are employed. It suggests that, yes, protecting human capital and trade secret investments is often in the immediate interest of a company, but that too much control becomes a double-edged sword. This is both because of the demotivating effects on employee performance when lateral markets are reduced and because, over time, although information leakage and job-hopping by talented workers may provide competitors with undue know-how, expertise, and technologies, constraining mobility reduces knowledge spillovers and information sharing whose value outweighs the occasional losses.
The enriched model is supported by a growing body of empirical evidence finding that regions with fewer controls and more talent freedom, such as California, in fact show more R&D investment, faster economic growth, and greater innovation.
Vic is of course right that one solution to this problem is to recreate high-powered (market-like) incentives for performance within the firm. This is an aspect I am greatly interested in, and I analyze it in Talent Wants to Be Free as the question of whether controls and restrictions can effectively be replaced with the carrots of performance-based compensation, vesting interests, loyalty-inducing work environments, employee stock options, and so forth. Like Shubha, I am a fan of Hirschman’s Exit, Voice, and Loyalty and have found it useful in analyzing employment relations. I view the behavioral research as shedding light on what these intra-firm incentives need to look like in order to preserve the incentive to innovate. In a later post I will elaborate on the monitoring and motivational tradeoffs that exist in individual and group performance.
More generally, though, the research suggests that at least in certain industries, most paradigmatically fast-paced, high-tech fields, innovation is most likely when the contracting environment has thick networks of mobile innovators (i.e., Silicon Valley) and firms themselves are horizontally networked. The flow of talent and ideas is important to innovation, and rigid firm boundaries can stifle that interaction even with the right intra-firm incentives. In these structures of denser inter-firm connections, the innovation benefits rise, but the costs of opportunism that drive the conventional wisdom are in fact lower than the traditional theory of the firm would predict. This is because talent mobility is a repeated game, and at any given moment a firm can be on either side of the raiding and poaching. Policies against talent controls reduce the costs of opportunistic renegotiation by ensuring that a firm can hire replacement innovators when it loses its people. To push back on Vic’s phrasing, talent wants to be appreciated and free. MIT economist Daron Acemoglu’s analysis of investments and re-investments in workers as a key ingredient of production and growth is helpful in understanding some of this dynamic. People invest in their own human capital without knowing the exact work they will eventually do, just as companies must make investment decisions in technology and capital funds without always knowing whom they will end up hiring. Acemoglu describes the positive upward trajectory under these conditions of uncertainty: when workers invest more in their human capital, businesses will invest more because of the prospect of acquiring good talent. In turn, workers will invest more in their human capital because they may end up at one of these companies. The likelihood of finding good employers creates incentives for overall investments in human capital.
November 13, 2013 at 1:12 am Posted in: Behavioral Law and Economics, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Symposium (Talent Wants to be Free), Technology
posted by Orly Lobel
This is a thrilling week for Talent Wants to Be Free. I am incredibly honored and grateful to all the participants of the symposium and especially to Deven Desai for putting it all together. It’s only Monday morning, the first official day of the symposium, and there are already a half a dozen fantastic posts up, all of which offer so much food for thought and so much to respond to. Wow! Before posting responses to the various themes and comments raised in the reviews, I wanted to write a more general introductory post to describe the path, motivation, and goals of writing the book.
Talent Wants to Be Free: Why We Should Learn to Love Leaks, Raids and Free Riding comes at a moment in which important developments in markets and research have coincided, pushing us to rethink innovation policy and our approaches to human capital. First, the talent wars are fiercer than ever, and the mindset of talent control is on the rise. The statistics about the growth of restrictions over human capital across industries and professions are dramatic. Talent poaching is global; acquisition marathons increasingly focus on people, their skills, and their potential for innovation as much as on the existing intellectual property of the company; and corporate espionage is the subject of heated international debates. Second, as a result of a critical mass of new empirical studies coming out of business schools, law, psychology, economics, and geography, we know so much more today than we did just a few years ago about what supports and what hinders innovation. The theories and insights I develop in the book attempt to bring together my behavioral research and economic analysis of employment law, including my experimental studies on the effects of non-competes on motivation, my theoretical and collaborative experimental studies on employee loyalty and institutional incentives, and my scholarship on the changing world of work, along with theories of endogenous growth and agglomeration economies by leading economists, such as Paul Romer and Michael Porter, and new empirical field studies by management scholars such as Mark Garmaise, Olav Sorenson, Sampsa Samila, Matt Marx, and Lee Fleming. Third, as several of the posts point out, these are exciting times because legislatures and courts are actually interested in thinking seriously about innovation policy and have become more receptive to new evidence about the potential for better reforms.
As someone who teaches and writes in the field of employment law, I wrote the book in the hope that we can move beyond what I viewed as a stale conversation, one that framed issues of non-competes, worker mobility, trade secrets, and ownership over ideas as labor versus business, protectionism versus free markets (as is often the case with other key areas of my research, such as whistleblowing and discrimination). A primary goal was to shift the debate to include questions about how human capital law affects competitiveness and growth more generally. In writing about work policy, my first and foremost goal is to understand the nature of work in its many evolving iterations. Often in these debates we get sidetracked: while we have an active ongoing debate about the right scope of intellectual property, human capital controls have been expanding under the radar, largely without serious public conversation. My hope has been to encourage broad and sophisticated exchanges among legal scholars, policymakers, business leaders, investors, and innovators.
And still, there is so much more to do! The participants of the symposium are pushing me forward with next steps. The exchanges this week will certainly help crystallize many of the questions that were beyond the scope of a single book, and several new projects are already underway. I will mention in closing a couple of other colleagues who have written about the book elsewhere and hope they too will join in the conversation. These include a thoughtful review by Raizel Liebler on The Learned FanGirl, a Q&A with CO’s Dan Solove, and other advance reviews here. Once again, let me say how grateful and appreciative I am to all the participants. Nothing is more rewarding.
November 11, 2013 at 5:25 pm Posted in: Behavioral Law and Economics, Book Reviews, Corporate Law, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Symposium (Talent Wants to be Free), Technology
posted by Victor Fleischer
Orly’s book is terrific: a model for pulling together theory, stories, and data to argue for a dynamic system of free-flowing employees, resources, and ideas. I am persuaded that non-competes and other human capital controls often cause more harm than good.
But amidst the many stories and studies, I would have welcomed more theory. Okay, employee mobility is good. But how good? How far should we push this idea?
Consider a society where every worker is an at-will contractor, working alone on a single project or task. Such a purely contractual world would be dynamic, as workers jump from project to project like insects chasing nectar. But would it maximize the value of production?
To have a theory of employee mobility, one must have a theory of the firm. Why do we have firms and employees in the first place, instead of a web of independent contractor relationships? The idea of team production (Alchian & Demsetz) may be useful here: production may be maximized by working in teams. This seems to be as likely to be true for innovative production as any other form of production.
Through this team production lens, the critical information cost is the difficulty and expense of monitoring and metering individual performance within a group activity. While Orly notes that “[t]oo much supervision can smother creative sparks” (p. 133), too little supervision means that high performers are not appreciated and rewarded. Organizing team production within a firm gives entrepreneurs and managers an incentive to monitor effectively. It is impossible to get performance incentives precisely right at the individual level, and unhappy employees have a tendency to jump ship. But human capital controls, hard or soft, may induce employees to stay put and allow managers to evaluate them over a longer period of time.
(Soft controls, falling under the umbrella of what Orly calls “stickiness,” include health insurance, deferred comp, workplace perks (e.g., free food, working on a beautiful campus), and the transaction costs of moving.)
Firms are themselves a soft form of human capital control. When we agree to work for someone else, we give up some freedom. But the existence of firms is, perhaps, a better way to maximize team production and, at least under some conditions, to promote innovation.
My point is that talent doesn’t want to be free; it wants to be appreciated. Firms that find the most efficient way to appreciate and reward talent (financially and otherwise) without devolving into an eat-what-you-kill culture have a competitive advantage.
posted by Dave Hoffman
Where were we? I know: throwing stink-bombs at a civil procedure panel!
At the crack of dawn Saturday I stumbled into the Contracts II panel. Up first was Ian Ayres, presenting Remedies for the No Read Problem in Consumer Contracting, co-authored with Alan Schwartz; Florencia Marotta-Wurgler provided comments. The gist of Ayres’ paper is that consumers are optimistic about only a few hidden terms in standard-form contracts; for most terms, they guess the content right. Ayres argued that we should be concerned only when consumers believe that terms are better than they actually are. The paper proposes that firms make such terms more salient with a disclosure box, after requiring firms to learn about consumers’ knowledge on a regular basis. Basically: Schumer’s box, psychologically calibrated, for everyone. Florencia M-W commented that since standard-form contracts evolve rapidly, such a calibrated disclosure duty might be much more administratively complex than Ayres/Schwartz anticipated. A commentator in the crowd pointed out that since the proposal relies on individuals’ perceptions of what terms are standard, it in effect creates a one-way ratchet: the more people learn about terms through the Ayres/Schwartz box, the weaker the need for disclosure. I liked this point, though it appears to assume that contract terms react fairly predictably to market forces. Is that true? Here are some reasons to doubt it.
Zev Eigen then presented An Experimental Test of the Effectiveness of Terms & Conditions. Ridiculously fun experiment: the subjects were recruited to take a presidential poll. The setup technically permitted them to take the poll multiple times, getting paid each time. Some subjects were exhorted not to cheat in this way; others were told that the experimenters trusted them not to cheat; still others were given terms and conditions forbidding cheating. Subjects exhorted not to cheat and subjects trusted not to cheat both took the opportunity to game the system significantly less often than those presented with terms and conditions. Assuming external validity, this raises a bit of a puzzle: why do firms attempt to control user behavior through T&Cs? Maybe T&Cs aren’t actually intended to control behavior at all! I wondered, but didn’t ask, whether T&Cs wrapped up with different formalities (a scan of your fingerprint; a blank box requiring you to actually try to sign with your mouse) would yield a different result. Maybe T&Cs now signal “bad terms that I don’t care to read” instead of “contract-promise.” That is, is it possible to turn online T&Cs back into real contracts?
Next, I went to Law and Psych to see “It All Happened So Slow!”: The Impact of Action Speed on Assessments of Intentionality by Zachary C. Burns and Eugene M. Caruso. Bottom line: prosecutors should use slow motion if they want to prove intent. Second bottom line: I need to find a way to do cultural cognition experiments that involve filming friends jousting on bikes. I then hopped on over to International Law, where Adam Chilton presented an experimental paper on the effect of international law rules on public opinion. He used an mTurk sample. I was a concern troll and said something like “Dan Kahan would be very sad were he here.” Adam had a good set of responses, which boiled down to “mTurk is a good value proposition!” Which it is.
After lunch it was off to a blockbuster session on Legal Education. There was a small little paper on the value of law degrees. And then Ghazala Azmat and Rosa Ferrer presented Gender Gaps in Performance: Evidence from Young Lawyers. They found that, holding all else equal, young women lawyers tend to bill somewhat fewer hours than men, a difference attributable to their being less likely to report high interest in becoming partner and to spending more time on child care. What was noteworthy was the way they were able to mine the After the JD dataset. What seemed somewhat more troubling was the use of hours billed as a measure of performance, since completely controlling for selection in assignments appeared to me to be impossible given the IVs available. Next, Dan Ho and Mark Kelman presented Does Class Size Reduce the Gender Gap? A Natural Experiment in Law. Ho and Kelman found that switching to small classes significantly increases the GPA of female law students (eliminating the gap between men and women). This is a powerful finding; obviously, it would be worth seeing whether it is replicable at other schools.
The papers I regret having missed include How to Lie with Rape Statistics by Corey Yung (cities are lying with rape statistics); Employment Conditions and Judge Performance: Evidence from State Supreme Courts by Elliott Ash and W. Bentley MacLeod (judges respond to job incentives); and Judging the Goring Ox: Retribution Directed Towards Animals by Geoffrey Goodwin and Adam Benforado. I also feel terrible having missed Bill James, who I hear was inspirational, in his own way.
Overall, it was a tightly organized conference – kudos to Dave Abrams, Ted Ruger, and Tess Wilkinson-Ryan. There could’ve been more law & psych, but that seems to be an evergreen complaint. Basically, it was a great two days. I just wish there were more Twiqbal papers.
October 29, 2013 at 8:37 pm Posted in: Capital Punishment, Civil Procedure, Civil Rights, Conferences, Constitutional Law, Contract Law & Beyond, Courts, Economic Analysis of Law, Empirical Analysis of Law
posted by Dave Hoffman
Barry Schwartz might’ve designed the choice set facing me at the opening of CELS. Should I go to Civil Procedure I (highlighted by a Dan Klerman paper discussing the limits of Priest-Klein selection), Contracts I (where Yuval Feldman et al. would present on the relationship between contract clause specificity and compliance), or Judicial Decisionmaking and Settlement (another amazing Kuo-Chang Huang paper)? [I am aware, incidentally, that for some people this choice would be Morton's. But those people probably weren't the audience for this post, were they?] I bit the bullet and went to Civ Pro, on the theory that it’d be a highly contentious slugfest between heavyweights in the field, throwing around words like “naive” and “embarrassing.” Or, actually, I went hoping to learn something from Klerman, which I did. The slugfest happened after he finished.
In response to a new FJC paper on pleading practices, a discussant and a subsequent presenter criticized the FJC’s work on Twiqbal. The discussant argued that the FJC’s focus on the realities of lawyers’ practice was irrelevant to the Court’s power-grab in Twombly, and that pleading standards mattered infinitely more than pleading practice. The presenter argued that the FJC committed methodological error in their important 2011 survey, and that their result (little effect) was misleading. The ensuing commentary was not restrained. Indeed, it felt a great deal like the infamous CELS death penalty debate from 2008. One constructive thing did come out of the fire-fight: the FJC’s estimable Joe Cecil announced that he would be making the FJC’s Twombly dataset available to all researchers through Vandy’s Branstetter program. We’ll all then be able to replicate the work done, and compare it to competing coding enterprises. Way to go, Joe!
But still, it was a tense session. As it was wrapping up, an economically trained empiricist in the room commented on how much fun he had found it and how he hoped to see more papers on the topic of Twombly in the future. I’d been silent to that point, but it was time to say something. Last year in this space I tried being nice: “My own view would go further: is Twiqbal’s effect as important a problem as the distribution of CELS papers would imply?” This year I was, perhaps impolitically, more direct.
I conceded that analyzing the effect of Twombly/Iqbal wasn’t a trivial problem. But if you had to make a list of the top five most important issues in civil procedure that data can shed light on, it wouldn’t rank.* I’m not sure it would crack the top ten. Why then have Twiqbal papers eaten market share at CELS and elsewhere since 2011? Some hypotheses (testable!) include: (1) civil procedure’s federal court bias; (2) giant-killing causes publication, and the colossi generally write normative articles praising transsubstantive procedure and consequently hate Twombly; (3) network effects; and (4) it’s where the data are. But these are bad reasons. Everyone knows that there is too much work on Twombly. We should stop spending so much energy on this question. It is quickly becoming a dead end.
So I said much of that and got several responses. One person seemed to suggest that a good defense of Twiqbal fixation was that it provided a focal point to organize our research and thus build an empirical community. Another suggested that even if law professors were Twiqbal focused, the larger empirical community was not (yet) aware of the importance of pleadings, so more attention was beneficent. And the rest of folks seemed to give me the kind of dirty look you give the person who blocks your view at a concert. Sit down! Don’t you see the show is just getting started?
Anyway, after that bit of theatre, I was off to a panel on Disclosure. I commented (PPT deck) on Sah/Loewenstein, Nothing to Declare: Mandatory and Voluntary Disclosure Leads Advisors to Avoid Conflicts of Interest. This was a very, very good paper, in the line of disclosure papers I’ve previously blogged here. The innovation was that advisors were permitted to walk away from conflicts instead of being assigned to them immutably. This one small change cured disclosure’s perverse effect. Rather than being morally licensed by disclosure to lie, cheat, and steal, advisors free to avoid conflicts were chastened by disclosure just as plain-vanilla Brandeisian theory would’ve predicted. In my comments, I encouraged Prof. Sah to think about what would happen if advisors’ rewards from the conflict of interest accrued to a third party instead of to them personally, since I think that’s the more legally relevant policy problem. Anyway, definitely worth your time to read the paper.
Then it was off to the reception. Now, as our regular readers know, the cocktail party/poster session is a source of no small amount of stress. On the one hand, it’s a concern for the organizers. Will the food be as good as the legendary CELS@Yale? The answer, surprisingly, was “close to it”, headlined by some grapes at a cheese board which were the size of small apples and tasted great. Also, very little messy finger food, which is good because the room is full of the maladroit. But generally, poster sessions are terribly scary for those socially awkward introverts in the crowd. Which is to say, the crowd. In any event, I couldn’t socialize because I had to circle the crowd for you. Thanks for the excuse!
How about those posters? I’ll highlight two. The first was a product of Ryan Copus and Cait Unkovic of Boalt’s JSP program. They automated text processing of appellate opinions and found significant judge-level effects on whether the panel reverses the district court’s opinion, as well as strong effects on the decision to designate an opinion for publication in the first instance. That was neat. But what was neater was the set of judicial baseball cards, complete with bubble-gum and a judge-specific stat pack, that they handed out. My pack included Andrew Kleinfeld, a Ninth Circuit judge who inspired me to go to law school. The second was a poster on the state appellate courts by Thomas Cohen of the AO. The noteworthy findings were: (1) a very low appeal-to-merits rate; and (2) higher reversal rates for plaintiff wins than for defendant wins at trial. Overall, the only complaint I’d make about the posters was that they weren’t clearly organized in the room by topic area, which would have made it easier to know where to spend time. Also, the average age of poster presenters was younger than that of paper presenters, while the average quality appeared as high or higher. What hypotheses might we formulate to explain that distribution?
That was all for Day 1. I’ll write about Day 2, which included contracts, international law, and legal education sessions, in a second post.
*At some point, I’ll provide a top ten list. I’m taking nominations. If it has federal court in the title, you are going to have to convince me.
posted by Frank Pasquale
The challenge to the US Airways/American merger led Justin Fox to reconsider the much-vaunted “success” of passenger airline deregulation:
Before deregulation, airlines in the U.S. were pretty reliable moneymakers. [After deregulation they] lost $41.6 billion (in 2011 dollars). And it’s not just shareholders who have come off terribly. The past few decades have been, if anything, an even bigger disaster for airline employees, many of whom have seen their pensions mostly evaporate and their pay and status diminish. Taxpayers haven’t come off untouched, either — getting stuck with partial pension bailouts and big loan guarantees to aid the ailing industry in recent years along with ongoing subsidies for airport construction and improvement.
Between 1963 (when the figures begin) and 1979, the airfare subindex of the CPI grew 25% more slowly than the overall CPI. Since 1979, it’s grown 2.4 times as fast as overall inflation. A major reason for this is that there are many fewer nonstop flights than in the regulated days, and far tighter advance purchase restrictions. To the Bureau of Labor Statistics, which computes the CPI, such quality decreases are the same as price increases. (This is the opposite of the logic prevailing in computers, where rapidly increasing power is the same as a price decline.) And then there’s ridership. Between 1948 and 1978, annual passenger miles flown grew 12% a year; since then, they’ve grown less than 4%.
Perhaps we can thank the deregulators for one thing: cutting the climate impact of a carbon-intensive industry.
posted by UCLA Law Review
Volume 61, Discourse
October 15, 2013 at 7:05 pm Posted in: Civil Rights, Constitutional Law, Current Events, Economic Analysis of Law, Financial Institutions, International & Comparative Law, Law Rev (UCLA), Politics
posted by Frank Pasquale
When I read Robert Shiller’s Finance and the Good Society last year, I had a sense the author treated the work as the penultimate step in a scholarly cursus honorum, to culminate in the Nobel. Thus my cautionary note in this review:
[Shiller] has eloquently analyzed the role of human psychology in markets, and he predicted both the tech and housing bubbles. He has been a methodological trailblazer, introducing behavioral science to the ossified academic discipline of finance. Time’s Michael Grunwald has called him a “must-read” among wonks in the Obama Administration. Shiller’s past books command respect and repay close reading. Given his sterling career, it is deeply disappointing to see Shiller divert the “behavioral turn” in economics into the apologetics of Finance and the Good Society.
As I explain in the review, in Finance and the Good Society Shiller engages in the cardinal sin of celebrity economists: he presumes to comment authoritatively on legal, political, and moral matters far from his real domain of expertise. As for co-winner Eugene Fama’s contributions, Justin Fox’s work is useful (as summarized in this 2009 review):
Eugene Fama . . . promulgated the efficient markets hypothesis in its most widely recognised form by combining it with the capital asset pricing model that portrays investing as a trade-off between risk and return. . . . [I]n the early 1990s, Fama and Kenneth French published a large empirical survey of stock market returns since 1940 and found several ways in which returns were not random and which could not be explained by [Fama's theory]. In aggregate, smaller companies did better than larger ones, while “value” stocks, which are cheap compared with the book value on their balance sheet, also outperformed. There was even a “momentum” effect – stocks that had been doing well for a while tended to continue to do so. . . . . Fox makes clear that this was tantamount to the founder of efficient markets admitting his theory was wrong and quotes the judgment of one critic: “The Pope said God was dead.” He is also scathing about Fama’s attempt to rescue the theory by categorising all these effects as “risk factors”. . . . All of this came more than a decade before last year’s implosion. So why did regulators continue to enshrine assumptions of efficiency in the rules they set?
The person who can answer that last question truly deserves a Nobel.
posted by Steve Semeraro
I’d like to thank Concurring Opinions for inviting me to blog about In re: Payment Card Interchange Fee and Merchant Discount Antitrust Litigation. This eight-year-old multi-district litigation has produced the largest proposed cash settlement in litigation history ($7.25 billion) along with what is perhaps the most extraordinary release from liability ever concocted. It may also be the most contentious. Over half the named plaintiffs and over 25% of the class, including most large merchants (think Walmart, Target) and most merchant organizations, have objected. On September 12, Eastern District of New York Judge John Gleeson held a fairness hearing to consider the settlement, and the parties are awaiting his decision. An appeal is a virtual certainty.
This post will provide background on the credit card industry pricing mechanisms that led to this litigation, the legal issues in the case, and the structure of the settlement. (You can read more about the history of the credit card industry’s relationship to the antitrust laws here.) In subsequent posts, I’ll separately analyze the damages and relief provisions in the settlement. (If you can’t wait, my working paper analyzing the settlement is here.) If there are particular issues that you’d like to read more about, let me know in the comments and I will respond in subsequent posts.
The credit card industry is atypical, but not unique, in that it competes in a two-sided market, i.e., one that serves two distinct customer bases. A card system like Visa provides both a purchasing device (credit cards) to consumers and a payment acceptance service to merchants. (By way of comparison, the legal blogging market is also two-sided. Concurring Opinions provides both an information forum to its readers and a platform to its advertisers.)
posted by Dave Hoffman
In Landes and Posner’s famous article The Economics of the Baby Shortage, the authors consider the possibility that baby buyers are likely to be self-selecting monsters. Not so, they argue:
“Moreover, concern for child abuse should not be allowed to obscure the fact that abuse is not the normal motive for adopting a child. And once we put abuse aside, willingness to pay money for a baby would seem on the whole a reassuring factor from the standpoint of child welfare. Few people buy a car or television set to smash it. In general, the more costly a purchase, the more care the purchaser will lavish on it.”
I’ve always found these lines to be particularly bizarre (even in the context of an otherwise famously provocative, probably misleading, essay). In any event, they came to mind when a student in my L&E class forwarded on this chilling story.
“KIEL, Wisconsin, Sept 9 (Reuters) – Todd and Melissa Puchalla struggled more than two years to raise Quita, the troubled teenager they’d adopted from Liberia. When they decided to give up the 16-year-old, they found new parents to take her in less than two days – by posting an ad on the Internet…”
posted by Frank Pasquale
Suzanne Kim’s post below on the economic and social pressures for “smile surgery” reminds me of Jonathan Crary’s excellent book, 24/7: Late Capitalism and the Ends of Sleep. Reviewing developments ranging from military use of modafinil to the rise of energy drinks, Crary concludes that “Time for human rest and regeneration is now simply too expensive to be structurally possible within contemporary capitalism.” Might the same be said for unsmiling faces in hypercompetitive service industries?
The key questions here are: who’s in charge, and what are their values? A recent story on gender dynamics at Harvard Business School offers some clues:
The men at the top of the heap worked in finance, drove luxury cars and advertised lavish weekend getaways on Instagram, many students observed in interviews. Some belonged to the so-called Section X, an on-again-off-again secret society of ultrawealthy, mostly male, mostly international students known for decadent parties and travel. Women were more likely to be sized up on how they looked. . . .
Image Credit: book by Robin Leidner on the commodification of affect.
posted by Frank Pasquale
Magazines like The Economist mock industrial policy while piling praise on the private sector. But the more one knows about the intertwining of state and market in health care, defense, telecommunications, energy, and banking, the less realistic any strict divide between “public” and “private” appears. Moreover, even the internet sector, that last bastion of venture capital and risk-taking, is more a creature of state intervention than market forces. As Mariana Mazzucato argues:
Whether an innovation will be a success is uncertain, and it can take longer than traditional banks or venture capitalists are willing to wait. In countries such as the United States, China, Singapore, and Denmark, the state has provided the kind of patient and long-term finance new technologies need to get off the ground.
Apple is a perfect example. In its early stages, the company received government cash support via a $500,000 small-business investment company grant. And every technology that makes the iPhone a smartphone owes its vision and funding to the state: the Internet, GPS, touch-screen displays, and even the voice-activated smartphone assistant Siri all received state cash. The U.S. Defense Advanced Research Projects Agency bankrolled the Internet, and the CIA and the military funded GPS. So, although the United States is sold to us as the model example of progress through private enterprise, innovation there has benefited from a very interventionist state.
VCs and other financiers exaggerated their role in promoting innovation in order to get capital gains tax breaks. And while they retreat ever further from taking risks on game-changing advances in productivity, the tax breaks endure, starving the state of the revenues it needs to continue subsidizing innovation. The California Ideology gradually undoes its own material foundations, but its adherents are unfazed. They are content to reap the benefits of past decades of government investment. From Silicon Valley to Wall Street, seed corn is the tax-cutters’ favorite meal.
posted by Frank Pasquale
Joey Fishkin highlights a very important part of Martin Luther King’s march on Washington:
Threaded through the demands of the March on Washington for Jobs and Freedom were calls for economic justice. The marchers demanded a nationwide minimum wage of “at least” $2.00 (it was then $1.25, so a 60% raise), in order to “give all Americans a decent standard of living.” They demanded a “massive federal program to train and place all unemployed workers — Negro and white — on meaningful and dignified jobs at decent wages.”
The legacy lives on. As David Dayen observes, “fast food and retail worker” strikes reflect the original marchers’ demands. An entity like “McDonald’s is so vast and lucrative that it could easily survive a major wage increase.” Such increases are desperately needed. As worker Willietta Dukes puts it:
I make $7.85 at Burger King as a guest ambassador and team leader, where I train new employees on restaurant regulations and perform the manager’s duties in their absence. . . . I’ve worked in fast-food for 15 years, and I can’t even afford my own rent payments. . . .My hours, like many of my coworkers, were cut this year, and I now work only 25 to 28 hours each week. I can’t afford to pay my bills working part time and making $7.85, and last month, I lost my house.
Dukes is one of the millions of faces behind aggregate statistics that suggest grotesque unfairness at the heart of the American economy. They won’t get much of a hearing in a mainstream media obsessed with the problems of the fortunate. But there is hope that a critical mass of actions by them, like the Washington civil rights march of 1963, will eventually force those at the top to take notice.
posted by Michael Simkovic
(Reposted from Brian Leiter’s Law School Reports)
BT Claim 2: Using more years of data would reduce the earnings premium
Response: Using more years of historical data is as likely to increase the earnings premium as to reduce it
We have doubts about the effect of more data, even if Professor Tamanaha does not.
Without seeing data that would enable us to calculate earnings premiums, we can’t know for sure if introducing more years of comparable data would increase our estimates of the earnings premium or reduce it.
The issue is not simply the state of the legal market or entry level legal hiring—we must also consider how our control group of bachelor’s degree holders (who appear to be similar to the law degree holders but for the law degree) were doing. To measure the value of a law degree, we must measure earnings premiums, not absolute earnings levels.
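A toy numerical sketch may help here (all figures are invented for illustration, not Simkovic & McIntyre's data): both groups' earnings can fall in a recession, yet the premium — the quantity that measures the value of the degree — can hold steady or even widen.

```python
# Hypothetical earnings for two matched groups across the business cycle.
# All numbers invented; the point is that the *premium* (the difference),
# not the absolute level, measures the value of the law degree.
law_earnings = {"boom": 100_000, "recession": 90_000}
ba_earnings = {"boom": 50_000, "recession": 38_000}

# Premium = law-graduate earnings minus comparable bachelor's-holder earnings.
premium = {year: law_earnings[year] - ba_earnings[year] for year in law_earnings}

print(premium)  # {'boom': 50000, 'recession': 52000}
```

Here absolute earnings for both groups fall in the recession, but the premium actually rises — consistent with the countercyclical pattern the commenter describes below.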
As a commenter on Tamanaha’s blog helpfully points out:
“I think you make far too much of the exclusion of the period from 1992-1995. Entry-level employment was similar to 1995-98 (as indicated by table 2 on page 9).
But this does not necessarily mean that the earnings premium was the same or lower. One cannot form conclusions about all JD holders based solely on entry-level employment numbers. As S&M’s data suggests, the earnings premium tends to be larger during recessions and their immediate aftermath and the U.S. economy only began an economic recovery in late 1992.
Lastly, even if you are right about the earnings premium from 1992-1995, what about 1987-91 when the legal economy appeared to be quite strong (as illustrated by the same chart referenced above)? Your suggestion to look at a twenty year period excludes this time frame even though it might offset the diminution in the earnings premium that would allegedly occur if S&M considered 1992-95.”
There is nothing magical about 1992. If good quality data were available, why not go back to the 1980s or beyond? Stephen Diamond and others make this point.
The 1980s are generally believed to be a boom time in the legal market. Assuming for the sake of the argument that law degree earnings premiums are pro-cyclical (we are not sure if they are), inclusion of more historical data going back past 1992 is just as likely to increase our earnings premium as to reduce it. Older data might suggest an upward trend in education earnings premiums, which could mean that our assumption of flat earnings premiums may be too conservative. Leaving aside the data quality and continuity issues we discussed before (which led us to pick 1996 as our start year), there is no objective reason to stop in the early 1990s instead of going back further to the 1980s.
Our sample from 1996 to 2011 includes both good times and bad for law graduates and for the overall economy, and in every part of the cycle, law graduates appear to earn substantially more than similar individuals with only bachelor’s degrees.
This might be as good a place as any to affirm that we certainly did not pick 1996 for any nefarious purpose. Having worked with the SIPP before and being aware of the change in design, we chose 1996 purely because of the benefits we described here. Once again, should Professor Tamanaha or any other group wish to use the publicly available SIPP data to extend the series farther back, we’ll be interested to see the results.
July 29, 2013 at 11:38 am Tags: Economic Value of a Law Degree, economics, law and economics Posted in: Accounting, Economic Analysis of Law, Education, Empirical Analysis of Law, Law Practice, Law School, Philosophy of Social Science
posted by Michael Simkovic
(Reposted from Brian Leiter’s Law School Reports)
BT Claim: We could have used more historical data without introducing continuity and other methodological problems
BT quote: “Although SIPP was redesigned in 1996, there are surveys for 1993 and 1992, which allow continuity . . .”
Response: Using more historical data from SIPP would likely have introduced continuity and other methodological problems
SIPP does indeed go back farther than 1996. We chose that date because it was the beginning of an updated and revitalized SIPP that continues to this day. SIPP was substantially redesigned in 1996 to increase sample size and improve data quality. Combining different versions of SIPP could have introduced methodological problems. That doesn’t mean one could not do it in the future, but it might raise as many questions as it would answer.
Had we used earlier data, it could be difficult to know to what extent changes to our earnings premiums estimates were caused by changes in the real world, and to what extent they were artifacts caused by changes to the SIPP methodology.
Because SIPP has developed and improved over time, the more recent data is more reliable than older historical data. All else being equal, a larger sample size and more years of data are preferable. However, data quality issues suggest focusing on more recent data.
If older data were included, it probably would have been appropriate to weight more recent and higher quality data more heavily than older and lower quality data. We would likely also have had to make adjustments for differences that might have been caused by changes in survey methodology. Such adjustments would inevitably have been controversial.
Because the sample size increased dramatically after 1996, including a few years of pre-1996 data would not provide as much new data or have the potential to change our estimates by nearly as much as Professor Tamanaha believes. There are also gaps in SIPP data from the 1980s because of insufficient funding.
These issues and the 1996 changes are explained at length in the Survey of Income and Program Participation User’s Guide.
Changes to the new 1996 version of SIPP include:
- Roughly doubling the sample size, which improves the precision of estimates and shrinks standard errors
- Lengthening the panels from 3 years to 4 years, which reduces the severity of the regression to the median problem
- Introducing computer-assisted interviewing to improve data collection and reduce errors or the need to impute missing data
Most government surveys topcode income data—that is, there is a maximum income that they will report. This is done to protect the privacy of high-income individuals who could more easily be identified from ostensibly confidential survey data if their incomes were revealed.
Because law graduates tend to have higher incomes than bachelor’s degree holders, topcoding introduces downward bias into earnings premium estimates. Midstream changes to topcoding procedures can change this bias and create problems with respect to consistency and continuity.
Without going into more detail, the topcoding procedure that began in 1996 appears to be an improvement over the earlier topcoding procedure.
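A minimal sketch of the downward bias (all incomes invented for illustration): because topcoding censors more of the higher-earning group's distribution, the measured gap between the groups shrinks relative to the true gap.

```python
# Hypothetical illustration of topcoding bias. All numbers are invented.
def topcode(incomes, cap):
    """Censor each income at the cap, as a topcoded survey would."""
    return [min(x, cap) for x in incomes]

def mean(xs):
    return sum(xs) / len(xs)

law = [60_000, 90_000, 150_000, 400_000]  # law graduates (more high earners)
ba = [40_000, 55_000, 70_000, 120_000]    # bachelor's-only comparison group

true_premium = mean(law) - mean(ba)

cap = 150_000
capped_premium = mean(topcode(law, cap)) - mean(topcode(ba, cap))

print(true_premium, capped_premium)  # 103750.0 41250.0
```

The cap bites only on the law-graduate side in this toy example, so the measured premium falls well below the true premium — which is why a midstream change in topcoding procedure would change the size of the bias and break continuity.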
These are only a subset of the problems extending the SIPP data back past 1996 would have introduced. For us, the costs of backfilling data appear to outweigh the benefits. If other parties wish to pursue that course, we’ll be interested in what they find, just as we hope others were interested in our findings.
July 28, 2013 at 5:01 pm Tags: economic rec, Economic Value of a Law Degree, economics Posted in: Accounting, Blogging, Corporate Finance, Economic Analysis of Law, Education, Empirical Analysis of Law, Law Practice, Law School, Philosophy of Social Science, Sociology of Law
posted by Michael Simkovic
(Cross posted from Brian Leiter’s Law School Reports)
Brian Tamanaha previously told Inside Higher Education that our research only looked at average earnings premiums and did not consider the low end of the distribution. Dylan Matthews at the Washington Post reported that Professor Tamanaha’s description of our research was “false”.
In his latest post, Professor Tamanaha combines interesting critiques with some not very interesting errors and claims that are not supported by data. Responding to his blog post is a little tricky as his ongoing edits rendered it something of a moving target. While we’re happy with improvements, a PDF of the version to which we are responding is available here just so we all know what page we’re on.
Some of Tamanaha’s new errors are surprising, because they come after an email exchange with him in which we addressed them. For example, Tamanaha’s description of our approach to ability sorting constitutes a gross misreading of our research. Tamanaha also references the wrong chart for earnings premium trends and misinterprets confidence intervals. And his description of our present value calculations is way off the mark.
Here are some quick bullet point responses, with details below in subsequent posts:
- Forecasting and Backfilling
- Using more historical data from SIPP would likely have introduced continuity and other methodological problems
- Using more years of data is as likely to increase the historical earnings premium as to reduce it
- If pre-1996 historical data finds lower earnings premiums, that may suggest a long term upward trend and could mean that our estimates of flat future earnings premiums are too conservative and the premium estimates should be higher
- The earnings premium in the future is just as likely to be higher as it is to be lower than it was in 1996-2011
- In the future, the earnings premium would have to be lower by 85 percent for an investment in law school to destroy economic value at the median
- Data sufficiency
- 16 years of data is more than is used in similar studies to establish a baseline. This includes studies Tamanaha cited and praised in his book.
- Our data includes both peaks and troughs in the cycle. Across the cycle, law graduates earn substantially more than bachelor’s degree holders.
- Tamanaha’s errors and misreading
- We control for ability sorting and selection using extensive controls for socio-economic, academic, and demographic characteristics
- This substantially reduces our earnings premium estimates
- Any lingering ability sorting and selection is likely offset by response bias in SIPP, topcoding, and other problems that cut in the opposite direction
- Tamanaha references the wrong chart for earnings premium trends and misinterprets confidence intervals
- Tamanaha is confused about present value, opportunity cost, and discounting
- Our in-school earnings are based on data, but, in any event, “correcting” to zero would not meaningfully change our conclusions
- Tamanaha’s best line
- “Let me also confirm that [Simkovic & McIntyre’s] study is far more sophisticated than my admittedly crude efforts.”
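The present-value point in the bullets above can be made concrete with a toy calculation. Every input below (cost, annual premium, horizon, discount rate) is an invented assumption, not a figure from the study; the same discounting logic, applied to the actual data, is what underlies the 85 percent figure above.

```python
# Toy net-present-value sketch. All inputs are hypothetical assumptions.
def pv_of_premium(annual_premium, years=40, rate=0.03):
    """Present value of a constant annual earnings premium over a career."""
    return sum(annual_premium / (1 + rate) ** t for t in range(1, years + 1))

cost = 200_000      # assumed tuition plus forgone earnings
premium = 30_000    # assumed annual median earnings premium

pv = pv_of_premium(premium)
# Share by which the premium could fall before NPV turns negative.
headroom = 1 - cost / pv
```

With these made-up inputs, the premium could fall roughly 70 percent before the investment destroyed value at the median; the direction of the exercise, not the exact number, is the point.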
July 26, 2013 at 1:26 pm Tags: Economic Value of a Law Degree, economics Posted in: Blogging, Corporate Finance, Economic Analysis of Law, Education, Empirical Analysis of Law, Law Practice, Law School, Philosophy of Social Science