Archive for the ‘Empirical Analysis of Law’ Category
posted by Orly Lobel
What a rollercoaster week of incredibly thoughtful reviews of Talent Wants to Be Free! I am deeply grateful to all the participants of the symposium. In The Age of Mass Mobility: Freedom and Insecurity, Anupam Chander, continuing Frank Pasquale’s and Matt Bodie’s questions about worker freedom and market power, asks whether Talent Wants to Be Free overly celebrates individualism, perhaps at the expense of a shared commitment to collective production, innovation, and equality. Deven Desai in What Sort of Innovation? asks about the kinds of investments and knowledge that are likely to be encouraged through private markets versus public institutions. And in Free Labor, Free Organizations, Competition and a Sports Analogy, Shubha Ghosh reminds us that to create true freedom in markets we need to look closely at competition policy and antitrust law. These questions about freedom/controls, individualism/collectivity, and private/public are coming from left and right. And rightly so. These are fundamental tensions in the greater project of human progress, and Talent Wants to Be Free strives to show how certain dualities are pervasive and unresolvable. As Brett suggested, that’s where we need to be in the real world. From an innovation perspective, I describe in the book how “each of us holds competing ideas about the essence of innovation and conflicting views about the drive behind artistic and inventive work. The classic (no doubt romantic) image of invention is that of exogenous shocks, radical breakthroughs, and sweeping discoveries that revolutionize all that was before. The lone inventor is understood to be driven by a thirst for knowledge and a unique capacity to find what no one has seen before. But the solitude in the romantic image of the lone inventor or artist also leads to an image of the insignificance of place, environment, and ties…”. Chapter 6 ends with the following visual:
Dualities of Innovation:
Individual / Collaborative
Passion / Profit
And yet, the book takes on the contrarian title Talent Wants to Be Free! We are at a moment in history in which the pendulum has swung too far. We have too much control, not too little, over information, mobility, and knowledge. We uncover this imbalance through a combination of a broad range of methodologies: historical, empirical, experimental, comparative, theoretical, and normative. These are exciting times for innovation research, and, as I hope to convince the readers of Talent, insights from all disciplines are contributing to these debates.
November 16, 2013 at 12:56 pm Posted in: Antitrust, Articles and Books, Behavioral Law and Economics, Book Reviews, Bright Ideas, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Law and Psychology, Symposium (Talent Wants to be Free), Technology
posted by Orly Lobel
I promised Victor Fleisher to return to his reflections on team production. Vic raised the issue of team production and the challenge of monitoring individual performance. In Talent Wants to Be Free I discuss some of these challenges in connection with my argument that much of what firms try to achieve through restrictive covenants could be achieved through positive incentives:
“Stock options, bonuses, and profit-sharing programs induce loyalty and identification with the company without the negative effects of over-surveillance or over-restriction. Performance-based rewards increase employees’ stake in the company and increase their commitment to the success of the firm. These rewards (and the employee’s personal investment in the firm that is generated by them) can also motivate workers to monitor their co-workers. We now have evidence that companies that use such bonus structures and pay employees stock options outperform comparable companies.”
But I also warn:
“[W]hile stock options and bonuses reward hard work, these pay structures also present challenges. Measuring employee performance in innovative settings is a difficult task. One of the risks is that compensation schemes may inadvertently emphasize observable over unobservable outputs. Another risk is that when collaborative efforts are crucial, differential pay based on individual contribution will be counterproductive and impede teamwork, as workers will want to shine individually. Individual compensation incentives might lead employees to hoard information, divert their efforts from the team, and reduce team output. In other words, performance-based pay in some settings risks creating perverse incentives, driving individuals to spend too much time on solo inventions and not enough time collaborating. Even more worrisome is the fear that employees competing for bonus awards will have incentives to actively sabotage one another’s efforts.
A related potential pitfall of providing bonuses for performance and innovative activities is the creation of jealousy and a perception of unfairness among employees. Employees, as all of us do in most aspects of our lives, tend to overestimate their own abilities and efforts. When a select few employees are rewarded unevenly in a large workplace setting, employers risk demoralizing others. Such unintended consequences will vary in corporate and industry cultures across time and place, but they may explain why many companies decide to operate under wage compression structures with relatively narrow variance between their employees’ paychecks. For all of these concerns, the highly innovative software company Atlassian recently replaced individual performance bonuses with higher salaries, an organizational bonus, and stock options, believing that too much of a focus on immediate individual rewards depleted team effort.
Still, despite these risks, for many businesses the carrots of performance-based pay and profit-sharing schemes have effectively replaced the sticks of controls. But there is a catch! Cleverly, sticks can be disguised as carrots. The infamous “golden handcuffs” – stock options and deferred compensation with punitive early exit triggers – can operate as de facto restrictive contracts….”
All this is in line with what Vic is saying about the advantages of organizational forms that encourage longer term attachment. But the fundamental point is that stickiness (or what Vic refers to as soft control) is already quite strong through the firm form itself, along with status quo biases, risk aversion, and search lags. The stickiness has benefits but it also has heavy costs when it is compounded and infused with legal threats.
November 15, 2013 at 12:05 am Posted in: Behavioral Law and Economics, Bright Ideas, Contract Law & Beyond, Corporate Finance, Corporate Law, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Symposium (Talent Wants to be Free), Technology, Uncategorized
posted by Orly Lobel
Each in his own sharp and perceptive way, Brett Frischmann, Frank Pasquale and Matthew Bodie present what are probably the hardest questions that the field of human capital law must contemplate. Brett asks about a fuller alternative vision for line drawing between freedom and control. He further asks how we should strike the balance between regulatory responses and private efforts in encouraging more openness. Finally, he raises the inevitable question about the tradeoffs between nuanced, contextual standards (what, as Brett points out, I discuss as the Goldilocks problem) and rigid absolute rules (a challenge that runs throughout IP debates and more broadly throughout law). Frank and Matt push me on the hardest problems for any politically charged debate: the distributive, including inadvertent and co-optive, effects of my vision. I am incredibly grateful to receive these hard questions even though I am sure I have yet to uncover fully satisfying responses. Brett writes that he wanted more when the book ended, and yes, there will be more. For one, Brett wanted to hear more about the commons and talent pools. I have been invited to present a new paper, The New Cognitive Property, this spring at a conference called Innovation Beyond IP at Yale, and my plan is to write more about the many forms of knowledge that need to be nurtured, nourished, and set free in our markets.
Matt describes his forthcoming paper, where he demonstrates that “employment” is reliant on our theory and idea of the firm: we have firms to facilitate joint production, but we need to complicate our vision of what that joint production, including from a governance perspective, looks like. “Employers are people too,” Matt reminds us, as he asks, “Do some of the restrictions we are talking about look less onerous if we think of employers as groups of people?” And my answer is yes, of course there is a lot of room for policy and contractual arrangements that prevent opportunism and protect investment: my arguments have never been of the anarchic flavor “let’s do away with all IP, duties of loyalty, and contractual restrictions.” Rather, as section 2 (chapters 3-8) of Talent Wants to Be Free is entitled, we need to Choose Our Battles. The argument is nicely aligned with the way Peter Lee frames it: we have lots of forms of control, and we have many tools, including positive tools, to create the right incentives; let us now understand how we’ve gotten out of balance, how we’ve developed an over-control mentality that uses legitimate concerns over initial investment and risks of opportunism and hold-up to allow almost any form of information and exchange to be restricted. So yes: we need certain forms of IP – we have patents, we have copyright, we have trademark. Each of these bodies of law also needs to be examined in its scope, and there is certainly some excess out there, but in general we know where we stand. But what about human capital beyond IP? And what about ownership over IP between employees and employers?
So yes, we certainly need joint inventorship doctrines when two inventors work together. But what about firm-employee doctrines? Do we need work-for-hire and hired-to-invent doctrines? Here we arrive at core questions about the differences between employment and joint ventures or partnerships between people. And even here, the argument is that, during employment, we continue to need certain firm protections over ownership. But the reality is that many highly inventive and developed countries, as diverse as Finland, Sweden, Korea, Japan, Germany, and China, have drawn more careful lines about what can fall under “service inventions,” or inventions produced within a corporation. These countries have some requirement for fair compensation of the employee, some stake in inventions, rather than a carte blanche to everything produced within the contours of the firm. The key is a continuous notion of sharing, fairness, and boundaries that we’ve lost sight of: intense line-drawing, as Brett would have it, based on context and evidence, not on an outdated version of the meaning of free markets.
What about non-competes and trade secrets? Again, my argument is that these protections are alternatives: they should be discussed in relation to one another, and we need to understand their logic, goals, and the costs and benefits of each, given that they exist on a spectrum. Non-competes are the harshest restriction: an absolute post-employment prohibition on continuing in one’s professional path outside the corporation. This is unnecessary. The empirics are there to support an absolute ban rather than the fine dance of balancing that is needed with some of the other protections. Sure, a non-compete makes life momentarily easier for those who want to use it, but over time, not only can we all live without that harsh tool, we will actually benefit from ceding that chemical weapon in the battle over brains and instead employing more conventional arms. And yet, even in California, this insight doesn’t and shouldn’t extend to partnerships. The California policy against non-competes is limited to the employment context. If two people, as in Matt’s hypo, are together forming a business, their joint property rights in that business suggest to us that allowing some form of a covenant not to compete will be justified. There will still be a cost to positive externalities, but the difference between the two forms of relationships allows for an absolute ban in one and a standard of reasonableness for the other. And yes, as Brett alludes to, the world is not black and white and we will have to tread carefully in our distinctions between employees and partners.
I completely agree with Matt and Frank that there are fundamental injustices created by our entire regime of work law. Talent Wants to Be Free takes those deep structures into account in developing the more immediate and positive vision for better innovation regimes and richer talent pools. Matt writes that a more radical alternative lies within Talent but “deserves more exegesis: namely, whether we should eliminate the concept of employment entirely.” What if people were always independent contractors, he asks? The reforms promoted in Talent Wants to Be Free, allowing more employees more control over their human capital, indeed bring these two categories – employees and independent contractors – closer together in some respects. But far more would be needed to shift our work relations to be more “democratic and egalitarian: a post-industrial Jeffersonian economy.” As both Frank and Matt show, in their own scholarship and in their provocative comments here, this will require us to rethink so much of the world we live in.
Frank Pasquale’s review is so rich that I hope he extends and publishes it as a full article. Frank says that “for every normative term that animates [Orly’s] analysis (labor mobility, freedom of contract, innovation, creative or constructive destruction) there is a shadow term (precarity, exploitation, disruption, waste) that goes unexplored.” I would agree that the background rules that define our labor market – at-will employment, inequality, class and power relations – are not themselves the target of the book. They do, however, deeply inform my analysis. To me, the symmetry I draw between job insecurity and the need for job opportunity is not what Frank describes as a “comforting symmetry”. It is a call for the partial correction of an outrageous asymmetry. And yes, as I mentioned at the very beginning of the symposium, I hoped in writing the book to shift some of the debates about human capital away from the stagnating repetition of arguments framed as business-versus-labor, a framing I view not only as paralyzing and strategically unwise but also as simply incorrect and distorting. There is so much more room for win-win than both businesses and labor seem to believe. On that level, I think Frank and I actually disagree about what we would define as abuse. I do in fact believe that many of us can passionately decide to give up monetary gains in return for a job that provides intangible benefits of doing something we love to do. Is that always buying into the corporate fantasy? Is that always exploitation? Don’t all of us do that when we become scholars? Still, of course I agree with many of the concrete examples that Frank raises as exploitation and precarious work – he points to domestic workers, a subject I have written about in a few articles (which I just realized I should probably put on ssrn): Family Geographies: Global Care Chains, Transnational Parenthood, and New Legal Challenges in an Era of Labor Globalization, 5 CURRENT LEGAL ISSUES 383 (2002) and Class and Care, 24 HARVARD WOMEN’S LAW JOURNAL 89 (2001). Frank describes a range of discontent in such celebrated workplaces as the Silicon Valley giants, which I too am concerned with, having thought about how new hyped-up forms of employment can become highly coercive. Freeing up more of our human capital is huge, but yes, I agree, it doesn’t solve all the problems of our world, and by no means should my arguments about the California advantage in the region’s approach to human capital and knowledge flow be read as picturing everything and anything Californian as part of a romantic ideal.
November 14, 2013 at 4:21 pm Posted in: Behavioral Law and Economics, Book Reviews, Bright Ideas, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Law and Inequality, Law and Psychology, Symposium (Talent Wants to be Free), Technology, Uncategorized
posted by Orly Lobel
As Catherine Fisk and Danielle Citron point out in their thoughtful reviews here and here, the wisdom of freeing talent must go beyond private firm-level decisions; beyond the message to corporations about the benefits of talent mobility; beyond what Frank Pasquale smartly spun as “reversing Machiavelli’s famous prescription, Lobel advises the Princes of modern business that it is better to be loved than feared.” To get to an optimal equilibrium of knowledge exchanges and mobility, smart policy is needed, and policymakers must pay attention to research. Both Fisk and Citron raise questions about the likelihood that we will see reforms anytime soon. As Fisk points out — and as her important historical work has skillfully shown, and, more recently, as we witness developments in several states including Michigan, Texas and Georgia as well as (again as Fisk and Citron point out) in certain aspects of the pending Restatement of Employment — the movement of law and policy has actually been toward more human capital controls rather than less. This is perhaps unsurprising to many of us. As with the copyright term extension act, which was the product of heavyweight lobbying, these shifts were supported by strong interest groups. What is perhaps different with the talent wars is the robust evidence suggesting that everyone – corporations large and small, new and old – can gain from loosening controls. Citron points to an irony that I too have been quite troubled by: the current buzz is about the intense need for talent, the talent drought, the shortage in STEM graduates. As Citron describes, the art and science of recruitment is all the rage. But while we debate reforms in schooling and reforms in immigration policies, we largely neglect to consider the reality of much deadweight loss through talent controls.
The good news is that not only in Massachusetts, where the governor has just expressed his support for reforming state law to narrow the use of non-competes, but also in other state legislatures, courts, and agencies, we see a greater willingness to think seriously about positive reforms. At the state level, the jurisdictional variation points to the double gain of regions that void, or at least strongly narrow, the use of non-competes. California, for example, gains twice: first by encouraging more human capital flow intra-regionally, and second by its willingness to give refuge to employees who have signed non-competes elsewhere. In other words, the positive effects stem not only from having the right policies of setting talent free but also from the state’s comparative advantage vis-à-vis more controlling states. This brain gain effect has been shown empirically: areas that enforce strong post-employment controls have higher rates of departure of inventors to other regions, while states that weakly enforce non-competes are on the receiving side of the cream of the crop. One can only hope that legislatures and business leaders will take these findings very seriously.
At the federal level, in a novel approach to antitrust, the federal government recently took up the investigation of anti-competitive practices among high-tech giants that had agreed not to poach one another’s employees. This in fact relates to Shubha Ghosh’s questions about defining competition and the meaning of free and open labor markets. And it is a good moment to pause over the extent to which we encourage secrecy in both private and public organizations. It is a moment in which spiraling scandals of economic espionage by governments, coupled with leaks and demands for more transparency, require us to think hard. In this context, Citron is right to raise the question of government 2.0 – for individuals to be committed and motivated to contribute to innovation, they need some assurances that their contributions will not be entirely appropriated by concentrated interests.
November 14, 2013 at 1:36 am Posted in: Antitrust, Articles and Books, Behavioral Law and Economics, Corporate Law, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Government Secrecy, Intellectual Property, Law and Psychology, Symposium (Talent Wants to be Free), Technology
posted by Orly Lobel
Both Vic Fleisher and Shubha Ghosh in their thoughtful commentary about Talent Wants to Be Free invoke the theory of the firm to raise questions about the extent of desirable freedom in talent and knowledge flows. In its basic iteration, the theory of the firm suggests that arm’s-length contracting will not be optimal when one party has the ability to renegotiate and hold the other party up, which is the conventional rationale for the desirability of talent controls. This is what I describe in the book as the Orthodox Model of employment intellectual property: firms fear making relational investments in employees and then having the employees renegotiate the contract under a threat of exit. Firms respond through mobility restrictions aimed at eliminating the transaction costs of this kind of opportunism. In the book, I accept, at least for some situations, this account of the benefits and confidence that internalizing production and ensuring ongoing loyalty by all players create for firms. The orthodox model thus explains post-employment controls as necessary to encourage optimal investment within the corporation: more company controls = more internal R&D and human capital investment. The new model developed in the book doesn’t deny these benefits but argues that the orthodox model is incomplete. The Dynamic-Dyadic Model asks about the costs as well as the benefits when controls are employed. It suggests that yes, protecting human capital and trade secret investments is often in the immediate interest of a company, but that too much control becomes a double-edged sword. This is both because of the demotivating effects on employee performance when lateral markets are reduced and because, over time, although information leakage and job-hopping by talented workers may provide competitors with undue know-how, expertise, and technologies, constraining mobility reduces knowledge spillovers and information sharing whose benefits outweigh those occasional losses. The enriched model is supported by a growing body of empirical evidence finding that regions with fewer controls and more talent freedom, such as California, in fact have more R&D investment, quicker economic growth, and greater innovation.
Vic is of course right that one solution to this problem is to recreate high-powered (market-like) incentives for performance within the firm. This is an aspect I am greatly interested in, and I analyze it in Talent Wants to Be Free as the question of whether controls and restrictions can effectively be replaced by the carrots of performance-based compensation, vesting interests, loyalty-inducing work environments, employee stock options, and so forth. I, too, like Shubha, am a fan of Hirschman’s Exit, Voice, and Loyalty and have found it useful in analyzing employment relations. I view the behavioral research as shedding light on what these intra-firm incentives need to look like in order to preserve the incentive to innovate. In a later post I will elaborate on the monitoring and motivational tradeoffs that exist in individual and group performance.
More generally, though, the research suggests that at least in certain industries, most paradigmatically fast-paced, high-tech fields, innovation is most likely when the contracting environment has thick networks of innovators who are mobile (e.g., Silicon Valley) and firms themselves are horizontally networked. The flow of talent and ideas is important to innovation, and rigid boundaries of the firm can stifle that interaction even with the right intra-firm incentives. The innovation benefits rise in these structures of denser inter-firm connections, but also, the costs of opportunism that drive the conventional wisdom are in fact lower than the traditional theory of the firm would predict. This is because talent mobility is a repeated game, and at any given moment a firm can be on either side of the raiding and poaching. Policies against talent controls reduce the costs of opportunistic renegotiation by ensuring the firm can hire replacement innovators when it loses its people. To push back on Vic’s phrasing, talent wants to be appreciated and free. MIT economist Daron Acemoglu’s analysis of investments and re-investments in workers as a key ingredient of production and growth is helpful in understanding some of this dynamic. People invest in their own human capital without knowing the exact work they will eventually do, just as companies must make investment decisions in technology and capital funds without always knowing whom they will end up hiring. Acemoglu describes the positive upward trajectory under these conditions of uncertainty: when workers invest more in their human capital, businesses will invest more because of the prospect of acquiring good talent. In turn, workers will invest more in their human capital because they may end up in one of these companies. The likelihood of finding good employers creates incentives for overall investments in human capital.
November 13, 2013 at 1:12 am Posted in: Behavioral Law and Economics, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Symposium (Talent Wants to be Free), Technology, Uncategorized
posted by Orly Lobel
This is a thrilling week for Talent Wants to Be Free. I am incredibly honored and grateful to all the participants of the symposium, and especially to Deven Desai for putting it all together. It’s only Monday morning, the first official day of the symposium, and there are already half a dozen fantastic posts up, all of which offer so much food for thought and so much to respond to. Wow! Before posting responses to the various themes and comments raised in the reviews, I wanted to write a more general introductory post to describe the path, motivation, and goals of writing the book.
Talent Wants to Be Free: Why We Should Learn to Love Leaks, Raids and Free Riding comes at a moment in time in which important developments in markets and research have coincided, pushing us to rethink innovation policy and our approaches to human capital. First, the talent wars are fiercer than ever and the mindset of talent control is rising. The stats about the rise of restrictions over human capital across industries and professions are dramatic. Talent poaching is global; acquisition marathons increasingly focus on the people, their skills, and their potential for innovation as much as they look at the existing intellectual property of the company; and corporate espionage is the subject of heated international debates. Second, as a result of a critical mass of new empirical studies coming out of business schools, law, psychology, economics, and geography, we know so much more today than we did just a few years ago about what supports and what hinders innovation. The theories and insights I develop in the book attempt to bring together my behavioral research and economic analysis of employment law, including my experimental studies about the effects of non-competes on motivation, my theoretical and collaborative experimental studies about employee loyalty and institutional incentives, and my scholarship about the changing world of work, along with theories about endogenous growth and agglomeration economies by leading economists, such as Paul Romer and Michael Porter, and new empirical field studies by management scholars such as Mark Garmaise, Olav Sorenson, Sampsa Samila, Matt Marx, and Lee Fleming. Third, as several of the posts point out, these are exciting times because legislatures and courts are actually interested in thinking seriously about innovation policy and have become more receptive to new evidence about the potential for better reforms.
As someone who teaches and writes in the field of employment law, I wrote the book in the hope that we can move beyond what I viewed as a stale conversation that framed issues of non-competes, worker mobility, trade secrets, and ownership over ideas as labor versus business, protectionism versus free markets (as is often the case with other key areas of my research, such as whistleblowing and discrimination). A primary goal was to shift the debate to include questions about how human capital law affects competitiveness and growth more generally. Writing about work policy, my first and foremost goal is to understand the nature of work in its many evolving iterations. Often in these debates we get sidetracked. While we have an active, ongoing debate about the right scope of intellectual property, under the radar human capital controls have been expanding, largely without serious public conversation. My hope has been to encourage broad and sophisticated exchanges between legal scholars, policymakers, business leaders, investors, and innovators.
And still, there is so much more to do! The participants of the symposium are pushing me forward with next steps. The exchanges this week will certainly help crystallize a lot of the questions that were beyond the scope of a single book, and several new projects are already underway. I will mention in closing a couple of other colleagues who have written about the book elsewhere and hope they too will join in the conversation. These include a thoughtful review by Raizel Liebler on The Learned FanGirl, a Q&A with CO’s Dan Solove, and other advance reviews here. Once again, let me say how grateful and appreciative I am to all the participants. Nothing is more rewarding.
November 11, 2013 at 5:25 pm Posted in: Behavioral Law and Economics, Book Reviews, Corporate Law, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Symposium (Talent Wants to be Free), Technology, Uncategorized
posted by Dave Hoffman
Earlier this week, I argued that civil procedure empiricists are spending too much time on the Twiqbal problem. That’s not the same as saying that Twiqbal is an unimportant set of cases. It probably signals an important shift in federal pleading doctrine, and, arguably, some litigants we care about are being shut out of federal court. I mean to say merely this: the amount of attention paid to Twiqbal exceeds its importance to litigants (across state and federal courts). Our focus is being driven largely by data availability and law professor incentives. We can do better.
I’m starting to make a genre of these “people should be writing about X not Y” posts. Boy, that could get tiresome fast! Luckily, no one actually has to listen to me except for the poor 1Ls. In any event, it seemed useful to start a conversation about what topics are more worth writing about than Twiqbal. Use the comment thread below to generate a list and if there’s enough interest I’ll create a poll. To qualify, the topic has to be real-data-driven (i.e., not merely doctrinal analysis, not experimental, etc.); and there must be a way, in theory, to get the data. For example,
- Does law influence outcomes in small claims court?
- How well do choice of law clauses work in state court?
- When do attorneys matter?
- What are the determinants of summary judgment grant rates in state courts?
- Is there a way to get a handle on which cases are being “diverted” to arbitration or “carved-[back]-in”?
posted by Dave Hoffman
Where were we? I know: throwing stink-bombs at a civil procedure panel!
At the crack of dawn Saturday I stumbled into the Contracts II panel. Up first was Ian Ayres, presenting Remedies for the No Read Problem in Consumer Contracting, co-authored with Alan Schwartz. Florencia Marotta-Wurgler provided comments. The gist of Ayres’ paper is that consumers are optimistic about only a few hidden terms in standard-form contracts. For most terms, they guess the content right. Ayres argued that we should be concerned only when consumers believe that terms are better than they actually are. The paper proposes that firms make such terms more salient with a disclosure box, after requiring firms to learn about consumers’ knowledge on a regular basis. Basically: Schumer’s box, psychologically calibrated, for everyone. Florencia M-W commented that since standard-form contracts evolve rapidly, such a calibrated disclosure duty might be much more administratively complex than Ayres/Schwartz would’ve thought. A commentator in the crowd pointed out that since the proposal relies on individuals’ perceptions of what terms are standard, in effect it creates a one-way ratchet: the more people learn about terms through the Ayres/Schwartz box, the weaker the need for disclosure. I liked this point, though it appears to assume that contract terms react fairly predictably to market forces. Is that true? Here are some reasons to doubt it.
Zev Eigen then presented An Experimental Test of the Effectiveness of Terms & Conditions. Ridiculously fun experiment — the subjects were recruited to do a presidential poll. The setup technically permitted them to take the poll multiple times, getting paid each time. Some subjects were exhorted not to cheat in this way; others were told that the experimenters trusted them not to cheat; others were given terms and conditions forbidding cheating. Subjects exhorted not to cheat and subjects trusted not to cheat both took the opportunity to game the system significantly less often than those presented with terms and conditions. Assuming external validity, this raises a bit of a puzzle: why do firms attempt to control user behavior through T&Cs? Maybe T&Cs aren’t actually intended to control behavior at all! I wondered, but didn’t ask, whether T&Cs wrapped up with different formalities (a scan of your fingerprint; a blank box requiring you to actually try to sign with your mouse) would lead to a different result. Maybe T&Cs now signal “bad terms that I don’t care to read” instead of “contract-promise.” That is, is it possible to turn online T&Cs back into real contracts?
Next, I went to Law and Psych to see “It All Happened So Slow!”: The Impact of Action Speed on Assessments of Intentionality by Zachary C. Burns and Eugene M. Caruso. Bottom line: prosecutors should use slow motion if they want to prove intent. Second bottom line: I need to find a way to do cultural cognition experiments that involve filming friends jousting on bikes. I then hopped on over to International Law, where Adam Chilton presented an experimental paper on the effect of international law rules on public opinion. He used an mTurk sample. I was a concern troll, and said something like “Dan Kahan would be very sad were he here.” Adam had a good set of responses, which boiled down to “mTurk is a good value proposition!” Which it is.
After lunch it was off to a blockbuster session on Legal Education. There was a small little paper on the value of law degrees. And then Ghazala Azmat and Rosa Ferrer presented Gender Gaps in Performance: Evidence from Young Lawyers. They found that, holding all else equal, young women lawyers tend to bill somewhat fewer hours than men, a difference attributable to their being less likely to report being highly interested in becoming partners while spending more time on child care. What was noteworthy was the way they were able to mine the After the JD dataset. What seemed somewhat more troubling was the use of hours billed as a measure of performance, since completely controlling for selection in assignments appeared to me to be impossible given the IVs available. Next, Dan Ho and Mark Kelman presented Does Class Size Reduce the Gender Gap? A Natural Experiment in Law. Ho and Kelman found that switching to small classes significantly increases the GPA of female law students (eliminating the gap between men and women). This is a powerful finding – obviously, it would be worth seeing whether it replicates at other schools.
The papers I regret having missed include How to Lie with Rape Statistics by Corey Yung (cities are lying with rape statistics); Employment Conditions and Judge Performance: Evidence from State Supreme Courts by Elliott Ash and W. Bentley MacLeod (judges respond to job incentives); and Judging the Goring Ox: Retribution Directed Towards Animals by Geoffrey Goodwin and Adam Benforado. I also feel terrible having missed Bill James, who I hear was inspirational, in his own way.
Overall, it was a tightly organized conference – kudos to Dave Abrams, Ted Ruger, and Tess Wilkinson-Ryan. There could’ve been more law & psych, but that seems to be an evergreen complaint. Basically, it was a great two days. I just wish there were more Twiqbal papers.
October 29, 2013 at 8:37 pm Posted in: Capital Punishment, Civil Procedure, Civil Rights, Conferences, Constitutional Law, Contract Law & Beyond, Courts, Economic Analysis of Law, Empirical Analysis of Law
posted by Dave Hoffman
Barry Schwartz might’ve designed the choice set facing me at the opening of CELS. Should I go to Civil Procedure I (highlighted by a Dan Klerman paper discussing the limits of Priest-Klein selection), Contracts I (where Yuval Feldman et al. would present on the relationship between contract clause specificity and compliance), or Judicial Decisionmaking and Settlement (another amazing Kuo-Chang Huang paper)? [I am aware, incidentally, that for some people this choice would be Morton's. But those people probably weren't the audience for this post, were they.] I bit the bullet and went to Civ Pro, on the theory that it’d be a highly contentious slugfest between heavyweights in the field, throwing around words like “naive” and “embarrassing.” Or, actually, I went hoping to learn something from Klerman, which I did. The slugfest happened after he finished.
In response to a new FJC paper on pleading practices, a discussant and a subsequent presenter criticized the FJC’s work on Twiqbal. The discussant argued that the FJC’s focus on the realities of lawyers’ practice was irrelevant to the Court’s power-grab in Twombly, and that pleading standards mattered infinitely more than pleading practice. The presenter argued that the FJC committed methodological error in their important 2011 survey, and that their result (little effect) was misleading. The ensuing commentary was not restrained. Indeed, it felt a great deal like the infamous CELS death penalty debate from 2008. One constructive thing did come out of the fire-fight: the FJC’s estimable Joe Cecil announced that he would be making the FJC’s Twombly dataset available to all researchers through Vandy’s Branstetter program. We’ll all then be able to replicate the work done, and compare it to competing coding enterprises. Way to go, Joe!
But still, it was a tense session. As it was wrapping up, an economically-trained empiricist in the room commented how fun he had found it & how he hoped to see more papers on the topic of Twombly in the future. I’d been silent to that point, but it was time to say something. Last year in this space I tried being nice: “My own view would go further: is Twiqbal’s effect as important a problem as the distribution of CELS papers would imply?” This year I was, perhaps impolitically, more direct.
I conceded that analyzing the effect of Twombly/Iqbal wasn’t a trivial problem. But if you had to make a list of the top five most important issues in civil procedure that data can shed light on, it wouldn’t rank.* I’m not sure it would crack the top ten. Why then have Twiqbal papers eaten market share at CELS and elsewhere since 2011? Some hypotheses (testable!) include: (1) civil procedure’s federal court bias; (2) giant-killing causes publication, and the colossi generally write normative articles praising transsubstantive procedure and consequently hate Twombly; (3) network effects; and (4) it’s where the data are. But these are bad reasons. Everyone knows that there is too much work on Twombly. We should stop spending so much energy on this question. It is quickly becoming a dead end.
So I said much of that and got several responses. One person seemed to suggest that a good defense of the Twiqbal fixation was that it provided a focal point to organize our research and thus build an empirical community. Another suggested that even if law professors were Twiqbal-focused, the larger empirical community was not (yet) aware of the importance of pleadings, so more attention was beneficial. And the rest of the folks seemed to give me the kind of dirty look you give the person who blocks your view at a concert. Sit down! Don’t you see the show is just getting started?
Anyway, after that bit of theatre, I was off to a panel on Disclosure. I commented (PPT deck) on Sah/Loewenstein, Nothing to Declare: Mandatory and Voluntary Disclosure Leads Advisors to Avoid Conflicts of Interest. This was a very, very good paper, in the line of disclosure papers I’ve previously blogged about here. The innovation was that advisors were permitted to walk away from conflicts instead of being assigned to them immutably. This one small change cured disclosure’s perverse effect. Rather than being morally licensed by disclosure to lie, cheat, and steal, advisors free to avoid conflicts were chastened by disclosure, just as plain-vanilla Brandeisian theory would’ve predicted. In my comments, I encouraged Prof. Sah to think about what happens if advisors’ rewards in the COI were returned to a third party instead of to them personally, since I think that’s the more legally relevant policy problem. Anyway, definitely worth your time to read the paper.
Then it was off to the reception. Now, as our regular readers know, the cocktail party/poster session is a source of no small amount of stress. On the one hand, it’s a concern for the organizers. Will the food be as good as the legendary CELS@Yale? The answer, surprisingly, was “close to it”, headlined by some grapes at a cheese board which were the size of small apples and tasted great. Also, very little messy finger food, which is good because the room is full of the maladroit. But generally, poster sessions are terribly scary for those socially awkward introverts in the crowd. Which is to say, the crowd. In any event, I couldn’t socialize because I had to circle the crowd for you. Thanks for the excuse!
How about those posters? I’ll highlight two. The first was a product of Ryan Copus and Cait Unkovic of Boalt’s JSP program. They automated text processing of appellate opinions and found significant judge-level effects on whether a panel reverses the district court’s opinion, as well as strong effects on the decision to designate an opinion for publication in the first instance. That was neat. But what was neater was the set of judicial baseball cards, complete with bubble-gum and a judge-specific stat pack, that they handed out. My pack included Andrew Kleinfeld, a 9th Circuit judge who inspired me to go to law school. The second was a poster on the state appellate courts by Thomas Cohen of the AO. The noteworthy findings were: (1) a very low appeal-to-merits rate; and (2) higher reversal rates for plaintiff wins than for defendant wins at trial. Overall, the only complaint I’d make about the posters is that they weren’t clearly organized in the room by topic area, which would have made it easier to know where to spend time. Also, the average age of the poster presenters was younger than the average age of the paper presenters, while the average quality appeared as high or higher. What hypotheses might we formulate to explain that distribution?
That was all for Day 1. I’ll write about Day 2, which included contracts, international law, and legal education sessions, in a second post.
*At some point, I’ll provide a top ten list. I’m taking nominations. If it has federal court in the title, you are going to have to convince me.
posted by Stanford Law Review
Although the solutions to many modern economic and societal challenges may be found in better understanding data, the dramatic increase in the amount and variety of data collection poses serious concerns about infringements on privacy. In our 2013 Symposium Issue, experts weigh in on these important questions at the intersection of big data and privacy.
September 3, 2013 at 7:47 am Posted in: Behavioral Law and Economics, Constitutional Law, Criminal Law, Cyber Civil Rights, Cyberlaw, Empirical Analysis of Law, Intellectual Property, Law Rev (Stanford)
posted by Dave Hoffman
[A brief note of apology: it's been a terrible blogging summer for me, though great on other fronts. I promise I'll do better in the coming academic year. In particular, I'd like to get back to my dark fantasy/law blogging series. If you have nominations for interviewees, email me.]
One of the major lessons of the cultural cognition project is that empirical arguments are a terrible way to resolve value conflicts. On issues as diverse as the relationship between gun ownership and homicide rates, the child-welfare effects of gay parenting, global warming, and consent in rape cases, participants in empirically infused politics behave as if they are spectators at sporting events. New information is polarized through identity-protective lenses; we highlight those facts that are congenial to our way of life and discount those that are not; we are subject to naive realism. It’s sort of dispiriting, really. Data can inflame our culture wars.
One example of this phenomenon is the empirical debate over minimum wage laws. As is well known, there is an evergreen debate in economics journals about the policy consequences that flow from a wage floor. Many (most) economists argue that the minimum wage retards growth and ironically hurts the very low-wage workers it is supposed to help. Others argue that the minimum wage has the opposite effect. What’s interesting about this debate (to me, anyway) is that it seems to bear such an orthogonal relationship to how the politics of the minimum wage play out, and to the kinds of arguments that persuade partisans on one side or the other. Or to put it differently, academic liberals in favor of the minimum wage have relied on regression analyses, but I don’t think they’ve persuaded many folks who weren’t otherwise disposed to agree with them. Academic critics of the minimum wage have likewise failed to move the needle on public opinion, which (generally) is supportive of a much higher minimum wage than is currently the law.
How to explain this puzzle? My colleague Brishen Rogers has a terrific draft article out on ssrn, Justice at Work: Minimum Wage Laws and Social Equality. The paper urges a new kind of defense of minimum wages, which elides the empirical debate about minimum wages’ effect on labor markets altogether. From the abstract:
“Accepting for the sake of argument that minimum wage laws cause inefficiency and unemployment, this article nevertheless defends them. It draws upon philosophical arguments that a just state will not simply redistribute resources, but will also enable citizens to relate to one another as equals. Minimum wage laws advance this ideal of “social equality” in two ways: they symbolize the society’s commitment to low-wage workers, and they help reduce work-based class and status distinctions. Comparable tax-and-transfer programs are less effective on both fronts. Indeed, the fact that minimum wage laws increase unemployment can be a good thing, as the jobs lost will not always be worth saving. The article thus stands to enrich current increasingly urgent debates over whether to increase the minimum wage. It also recasts some longstanding questions of minimum wage doctrine, including exclusions from coverage and ambiguities regarding which parties are liable for violations.”
I’m a huge fan of Brishen’s work, having been provoked and a bit convinced by his earlier work (here) on a productive way forward for the union movement. What seems valuable in this latest paper is that minimum wage laws are explicitly defended with reference to a widely shared set of values (dignity, equality). Foregrounding such values, I think, would increase support for the minimum wage among members of the populace. The lack of such dignitary discussions in the academic debate to date has left the minimum wage’s liberal defenders without a satisfying and coherent ground on which to stand. Worth thinking about in the waning hours of Labor’s day.
September 2, 2013 at 9:02 pm Posted in: Behavioral Law and Economics, Civil Rights, Consumer Protection Law, Contract Law & Beyond, Culture, Current Events, Empirical Analysis of Law, Employment Law
posted by Michael Simkovic
(Reposted from Brian Leiter’s Law School Reports)
BT Claim 2: Using more years of data would reduce the earnings premium
Response: Using more years of historical data is as likely to increase the earnings premium as to reduce it
We have doubts about the effect of more data, even if Professor Tamanaha does not.
Without seeing data that would enable us to calculate earnings premiums, we can’t know for sure if introducing more years of comparable data would increase our estimates of the earnings premium or reduce it.
The issue is not simply the state of the legal market or entry-level legal hiring; we must also consider how our control group of bachelor’s degree holders (who appear to be similar to the law degree holders but for the law degree) was doing. To measure the value of a law degree, we must measure earnings premiums, not absolute earnings levels.
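To make the distinction concrete, here is a minimal sketch of the premium logic. The two-period setup and every figure in it are invented for illustration; these are not numbers from the study:

```python
# Toy illustration: the premium is the gap between law degree holders and
# similar bachelor's-only workers, so it can grow even when everyone's
# absolute earnings fall. All numbers here are hypothetical.

earnings = {
    # period: (JD earnings, similar-BA earnings)
    "boom":      (120_000, 80_000),
    "recession": (100_000, 60_000),  # both fall, but the control group falls more
}

for period, (jd, ba) in earnings.items():
    premium = jd - ba
    print(f"{period:>9}: JD ${jd:,}, BA ${ba:,} -> premium ${premium:,}")
```

In this made-up example, absolute earnings fall in the recession while the premium rises, which is why entry-level salary levels alone cannot settle the question.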
As a commenter on Tamanaha’s blog helpfully points out:
“I think you make far too much of the exclusion of the period from 1992-1995. Entry-level employment was similar to 1995-98 (as indicated by table 2 on page 9).
But this does not necessarily mean that the earnings premium was the same or lower. One cannot form conclusions about all JD holders based solely on entry-level employment numbers. As S&M’s data suggests, the earnings premium tends to be larger during recessions and their immediate aftermath and the U.S. economy only began an economic recovery in late 1992.
Lastly, even if you are right about the earnings premium from 1992-1995, what about 1987-91 when the legal economy appeared to be quite strong (as illustrated by the same chart referenced above)? Your suggestion to look at a twenty year period excludes this time frame even though it might offset the diminution in the earnings premium that would allegedly occur if S&M considered 1992-95.”
There is nothing magical about 1992. If good quality data were available, why not go back to the 1980s or beyond? Stephen Diamond and others make this point.
The 1980s are generally believed to have been a boom time in the legal market. Assuming for the sake of argument that law degree earnings premiums are pro-cyclical (we are not sure if they are), the inclusion of more historical data going back past 1992 is just as likely to increase our earnings premium as to reduce it. Older data might suggest an upward trend in education earnings premiums, which could mean that our assumption of flat earnings premiums is too conservative. Leaving aside the data quality and continuity issues we discussed before (which led us to pick 1996 as our start year), there is no objective reason to stop in the early 1990s instead of going back further to the 1980s.
Our sample from 1996 to 2011 includes both good times and bad for law graduates and for the overall economy, and in every part of the cycle, law graduates appear to earn substantially more than similar individuals with only bachelor’s degrees.
This might be as good a place as any to affirm that we certainly did not pick 1996 for any nefarious purpose. Having worked with the SIPP before and being aware of the change in design, we chose 1996 purely because of the benefits we described here. Once again, should Professor Tamanaha or any other group wish to use the publicly available SIPP data to extend the series farther back, we’ll be interested to see the results.
July 29, 2013 at 11:38 am Tags: Economic Value of a Law Degree, economics, law and economics Posted in: Accounting, Economic Analysis of Law, Education, Empirical Analysis of Law, Law Practice, Law School, Philosophy of Social Science
posted by Michael Simkovic
(Reposted from Brian Leiter’s Law School Reports)
BT Claim: We could have used more historical data without introducing continuity and other methodological problems
BT quote: “Although SIPP was redesigned in 1996, there are surveys for 1993 and 1992, which allow continuity . . .”
Response: Using more historical data from SIPP would likely have introduced continuity and other methodological problems
SIPP does indeed go back farther than 1996. We chose that date because it was the beginning of an updated and revitalized SIPP that continues to this day. SIPP was substantially redesigned in 1996 to increase sample size and improve data quality. Combining different versions of SIPP could have introduced methodological problems. That doesn’t mean one could not do it in the future, but it might raise as many questions as it would answer.
Had we used earlier data, it could be difficult to know to what extent changes to our earnings premiums estimates were caused by changes in the real world, and to what extent they were artifacts caused by changes to the SIPP methodology.
Because SIPP has developed and improved over time, the more recent data is more reliable than older historical data. All else being equal, a larger sample size and more years of data are preferable. However, data quality issues suggest focusing on more recent data.
If older data were included, it probably would have been appropriate to weight more recent and higher quality data more heavily than older and lower quality data. We would likely also have had to make adjustments for differences that might have been caused by changes in survey methodology. Such adjustments would inevitably have been controversial.
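One standard way to "weight higher quality data more heavily" is inverse-variance weighting. The sketch below is only a stylized illustration of that idea; the estimates and standard errors are invented, and this is not a method the study actually applied:

```python
# Stylized inverse-variance weighting: noisier (older) estimates get less
# weight in a pooled figure. All numbers are invented for illustration.

estimates = [
    (45_000, 6_000),  # hypothetical pre-1996 premium estimate, larger SE
    (50_000, 3_000),  # hypothetical post-1996 estimate, smaller SE
]

weights = [1 / se**2 for _, se in estimates]
pooled = sum(w * est for (est, _), w in zip(estimates, weights)) / sum(weights)
print(f"pooled premium estimate: ${pooled:,.0f}")  # lands closer to 50,000
```

Even this simple scheme shows why such adjustments invite controversy: the pooled answer depends directly on how much weight one assigns to the older, noisier data.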
Because the sample size increased dramatically after 1996, including a few years of pre-1996 data would not provide as much new data or have the potential to change our estimates by nearly as much as Professor Tamanaha believes. There are also gaps in SIPP data from the 1980s because of insufficient funding.
These issues and the 1996 changes are explained at length in the Survey of Income and Program Participation User’s Guide.
Changes to the new 1996 version of SIPP include:
- Roughly doubling the sample size
  - This improves the precision of estimates and shrinks standard errors (see the sketch after this list)
- Lengthening the panels from 3 years to 4 years
  - This reduces the severity of the regression-to-the-median problem
- Introducing computer-assisted interviewing to improve data collection and reduce errors or the need to impute missing data
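As a quick numerical illustration of the sample-size point (with invented figures; SIPP's actual survey design effects are more involved than this):

```python
import math

# The standard error of a mean scales as sigma / sqrt(n), so doubling the
# sample shrinks standard errors by a factor of 1/sqrt(2), about 0.71.
# Both numbers below are made up purely for illustration.

sigma = 40_000                 # hypothetical earnings standard deviation
for n in (15_000, 30_000):     # hypothetical pre- and post-redesign sample sizes
    se = sigma / math.sqrt(n)
    print(f"n = {n:>6,}: standard error of mean earnings ~ ${se:,.0f}")
```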
Most government surveys topcode income data—that is, there is a maximum income that they will report. This is done to protect the privacy of high-income individuals who could more easily be identified from ostensibly confidential survey data if their incomes were revealed.
Because law graduates tend to have higher incomes than bachelor’s degree holders, topcoding introduces a downward bias to earnings premium estimates. Midstream changes to topcoding procedures can change this bias and create problems with respect to consistency and continuity.
Without going into more detail, the topcoding procedure that began in 1996 appears to be an improvement over the earlier topcoding procedure.
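A minimal simulation can show the direction of the bias. The distributions and the top code below are invented and far cruder than anything in SIPP; the point is only that capping incomes trims more from the higher-earning group:

```python
import random

# Capping reported incomes at a top code trims more from the higher-earning
# JD group than from the BA control group, shrinking the measured premium.
random.seed(0)
TOPCODE = 150_000

jd = [random.lognormvariate(11.6, 0.6) for _ in range(50_000)]  # hypothetical JD earnings
ba = [random.lognormvariate(11.2, 0.6) for _ in range(50_000)]  # hypothetical BA earnings

def mean(xs):
    return sum(xs) / len(xs)

true_premium = mean(jd) - mean(ba)
coded_premium = mean([min(x, TOPCODE) for x in jd]) - mean([min(x, TOPCODE) for x in ba])
print(f"true premium:     ${true_premium:,.0f}")
print(f"topcoded premium: ${coded_premium:,.0f}  (biased downward)")
```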
These are only a subset of the problems extending the SIPP data back past 1996 would have introduced. For us, the costs of backfilling data appear to outweigh the benefits. If other parties wish to pursue that course, we’ll be interested in what they find, just as we hope others were interested in our findings.
July 28, 2013 at 5:01 pm Tags: economic rec, Economic Value of a Law Degree, economics Posted in: Accounting, Blogging, Corporate Finance, Economic Analysis of Law, Education, Empirical Analysis of Law, Law Practice, Law School, Philosophy of Social Science, Sociology of Law
posted by Michael Simkovic
(Cross posted from Brian Leiter’s Law School Reports)
Brian Tamanaha previously told Inside Higher Education that our research only looked at average earnings premiums and did not consider the low end of the distribution. Dylan Matthews at the Washington Post reported that Professor Tamanaha’s description of our research was “false”.
In his latest post, Professor Tamanaha combines interesting critiques with some not very interesting errors and claims that are not supported by data. Responding to his blog post is a little tricky because his ongoing edits have rendered it something of a moving target. While we’re happy to see improvements, a PDF of the version to which we are responding is available here, just so we all know what page we’re on.
Some of Tamanaha’s new errors are surprising, because they come after an email exchange with him in which we addressed them. For example, Tamanaha’s description of our approach to ability sorting constitutes a gross misreading of our research. Tamanaha also references the wrong chart for earnings premium trends and misinterprets confidence intervals. And his description of our present value calculations is way off the mark.
Here are some quick bullet point responses, with details below in subsequent posts:
- Forecasting and Backfilling
  - Using more historical data from SIPP would likely have introduced continuity and other methodological problems
  - Using more years of data is as likely to increase the historical earnings premium as to reduce it
  - If pre-1996 historical data shows lower earnings premiums, that may suggest a long-term upward trend, which could mean that our assumption of flat future earnings premiums is too conservative and our estimates should be higher
  - The earnings premium in the future is as likely to be higher than it was in 1996-2011 as it is to be lower
  - In the future, the earnings premium would have to be lower by 85 percent for an investment in law school to destroy economic value at the median (a hedged numerical sketch of this break-even logic follows the list)
- Data sufficiency
  - 16 years of data is more than similar studies use to establish a baseline, including studies Tamanaha cited and praised in his book
  - Our data includes both peaks and troughs in the business cycle; across the cycle, law graduates earn substantially more than bachelor’s degree holders
- Tamanaha’s errors and misreading
  - We control for ability sorting and selection using extensive controls for socio-economic, academic, and demographic characteristics
  - This substantially reduces our earnings premium estimates
  - Any lingering ability sorting and selection is likely offset by response bias in SIPP, topcoding, and other problems that cut in the opposite direction
  - Tamanaha references the wrong chart for earnings premium trends and misinterprets confidence intervals
  - Tamanaha is confused about present value, opportunity cost, and discounting
  - Our in-school earnings estimates are based on data, but, in any event, “correcting” them to zero would not meaningfully change our conclusions
- Tamanaha’s best line
  - “Let me also confirm that [Simkovic & McIntyre’s] study is far more sophisticated than my admittedly crude efforts.”
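For readers who want to see the break-even logic behind the 85 percent bullet in miniature, here is a hedged sketch that discounts a hypothetical stream of annual earnings premiums and compares it to an up-front cost. Every number below is invented; the actual figures and methodology are in the paper.

```python
# Hedged sketch of the break-even logic: discount a stream of annual
# earnings premiums and compare it with the up-front cost of law school.
# All inputs are hypothetical placeholders, not figures from the paper.

annual_premium = 20_000  # hypothetical median annual earnings premium
years = 40               # hypothetical working life after graduation
discount_rate = 0.03     # hypothetical real discount rate
cost = 90_000            # hypothetical tuition plus forgone earnings

pv_premium = sum(annual_premium / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
print(f"PV of premium stream: ${pv_premium:,.0f}")
print(f"Net present value:    ${pv_premium - cost:,.0f}")

# Break-even: how far could the premium fall before NPV hits zero?
breakeven_fraction = 1 - cost / pv_premium
print(f"Premium could fall by {breakeven_fraction:.0%} before NPV turns negative")
```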
July 26, 2013 at 1:26 pm Tags: Economic Value of a Law Degree, economics Posted in: Blogging, Corporate Finance, Economic Analysis of Law, Education, Empirical Analysis of Law, Law Practice, Law School, Philosophy of Social Science Print This Post No Comments
posted by Michael Simkovic
Here is the overview.
Here is the first part.
July 24, 2013 at 8:52 am Tags: Economic Value of a Law Degree, economics Posted in: Economic Analysis of Law, Education, Empirical Analysis of Law, Law Practice, Law School, Law Talk Print This Post No Comments
posted by Christine Chabot
Do Justices vote independently of all political forces surrounding their appointments? My earlier post discusses how, even in recent decades, Justices’ votes have been surprisingly independent of the ideologies of the Senates to which they were nominated. Even so, it may be that presidents fared better than the Senate and have recently enhanced their ability to appoint ideologically compatible Justices.
History is rife with examples of Justices who disappointed their appointing presidents. As recounted by Henry Abraham, Teddy Roosevelt complained vociferously about Justice Holmes’ ruling in Northern Securities, Truman called Justice Clark his “biggest mistake,” and Eisenhower also referred to Justices Warren and Brennan as “mistakes.” My earlier study finds frequent grounds for presidential disappointment, based on voting records for eighty-nine Justices over a 172-year period. Just under half of these Justices voted with appointees of the other party most of the time. Still, of the last twelve Justices, only two, Stevens and Souter, aligned most often with appointees of the other party. This low number raises the question whether the frequency of presidential disappointments has diminished in recent decades.
My recent paper identifies change over time using regression analysis and more nuanced measures of presidential ideology. The analysis shows that the ideologies of appointing presidents did not significantly predict Justices’ votes before the 1970s but gained significant predictive power thereafter. This enhanced success coincides with Presidents Nixon’s and Reagan’s efforts to prioritize ideology in appointments to the bench. While earlier presidents did not uniformly ignore nominees’ ideology, they lacked modern technological resources. By the Reagan administration, computerized databases allowed presidential aides to quickly assemble and analyze virtually all of a nominee’s past writings. This improved information may have enabled presidents to better anticipate nominees’ future rulings.
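For readers curious about the mechanics, here is a purely illustrative sketch of the kind of interaction test involved, run on synthetic data. The variable names (president_ideology, post_1970, conservative_vote_share) are my own shorthand, and neither the data nor the specification below reproduces the paper’s actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative only: synthetic data standing in for the real dataset.
rng = np.random.default_rng(1)
n = 89  # matches the number of Justices in the earlier study; here just for scale

df = pd.DataFrame({
    "president_ideology": rng.uniform(-1, 1, n),  # invented scale: -1 liberal, +1 conservative
    "post_1970": rng.integers(0, 2, n),           # era indicator
})
# Simulate votes that track presidential ideology only in the later era.
df["conservative_vote_share"] = (
    0.5
    + 0.15 * df["president_ideology"] * df["post_1970"]
    + rng.normal(0, 0.05, n)
)

# The interaction term asks: did presidential ideology gain predictive
# power after 1970?
model = smf.ols("conservative_vote_share ~ president_ideology * post_1970",
                data=df).fit()
print(model.summary().tables[1])
```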
July 10, 2013 at 11:22 am Tags: appointments, presidents, Supreme Court Posted in: Constitutional Law, Courts, Empirical Analysis of Law, Law Rev (Hastings), Politics, Supreme Court, Uncategorized Print This Post 5 Comments
posted by Christine Chabot
Thanks, Sarah, for the warm welcome. It is a pleasure to guest blog this month.
With pundits already speculating about President Obama’s next Supreme Court nominee, it seems a good time to discuss relationships between political forces surrounding Supreme Court appointments and Justices’ decisions. Justices sometimes disappoint their appointing presidents, and ideologically distant Senates are often blamed for presidents’ “mistakes.” For example, David Souter and John Paul Stevens turned out to be far more liberal than the Republican presidents who appointed them (Bush I and Ford, respectively). Both presidents faced very liberal Senates when they selected Souter and Stevens.
Are nominees like Souter and Stevens anomalies or part of a larger pattern of senatorial constraint? My recent article in the Hastings Law Journal offers the first empirical analysis of the Senate’s role in constraining presidents’ choices of Supreme Court nominees over an extended period. It considers ideologies of Senates faced by nominating presidents and measures whether the ideologies of these Senates predict Justices’ voting behavior. The analysis substantially qualifies earlier understandings of senatorial constraint.
Earlier empirical studies consider only limited numbers of recent nominees (see article pp. 1235-39). They suggest that the Senate has constrained presidents’ choices, and many scholars theorize that the Senate has enhanced its role in the appointments process since the 1950s. Analysis of a larger group of nominees shows the Senate’s ideology has had significant predictive power over Justices’ votes in only two isolated historical periods. Senatorial ideology was last significant in the 1970s, shortly after the filibuster of Abe Fortas’s nomination to be Chief Justice, but then it actually lost significance after the Senate rejected Bork in 1987.
July 3, 2013 at 12:19 pm Tags: appointments, judicial selection, Senate confirmation process, Supreme Court Posted in: Constitutional Law, Courts, Current Events, Empirical Analysis of Law, Law Rev (Hastings), Politics, Supreme Court Print This Post 5 Comments
posted by David Schwartz
While patent law is my core area of scholarly interest, I have also studied the use of legal scholarship by the courts. My co-author Lee Petherbridge from Loyola-LA and I have conducted several comprehensive empirical studies on the issue using large datasets. More precisely, we have analyzed how often federal courts cite to law review articles in their decisions, approaching the issue from a variety of angles. We have studied the use of legal scholarship by the U.S. Supreme Court (available here), by the regional U.S. Courts of Appeals (study available here), and by the Federal Circuit (available here). I won’t recount the findings of those studies here. Instead, I will report some new information and ask readers for potential explanations of the data.
posted by David Schwartz
Before delving into the substance of my first post, I wanted to thank the crew at Concurring Opinions for inviting me to guest blog this month.
Recently, I have been thinking about whether empirical legal scholars have or should have special ethical responsibilities. Why special responsibilities? Two basic reasons. First, nearly all law reviews lack formal peer review. The lack of peer review potentially permits dubious data to be reported without differentiation alongside quality data. Second, empirical legal scholarship has the potential to be extremely influential on policy debates because it provides “data” to substantiate or refute claims. Unfortunately, many consumers of empirical legal scholarship — including other legal scholars, practitioners, judges, the media, and policy makers — are not sophisticated in empirical methods. Even more importantly, subsequent citations of empirical findings by legal scholars rarely take care to explain the study’s qualifications and limitations. Instead, subsequent citations often amplify the “findings” of the empirical study by over-generalizing the results.
My present concern is with weak data. By weak data, I don’t mean data that is flat-out incorrect (such as from widespread coding errors) or that misuses empirical methods (such as when a model’s assumptions are not met). Others have previously discussed issues relating to incorrect data and analysis in empirical legal studies. Rather, I am referring to reporting data that encourages weak or flawed inferences, that is not statistically significant, or that is of extremely limited value and thus may be misused. The precise question I have been considering is under what circumstances one should report weak data, even with an appropriate explanation of the methodology used and its potential limitations. (A different yet related question, for another discussion, is whether one should report lots of data without informing the reader which data the researcher views as most relevant. This scattershot approach raises many of the same concerns as weak data.)
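To make the worry concrete, consider a toy example (all numbers invented): a difference between two groups whose 95 percent confidence interval straddles zero. Reporting the point estimate alone would invite a stronger inference than the data support.

```python
import math

# Illustration of "weak data": a small-sample difference whose 95%
# confidence interval straddles zero. All numbers are invented.
n1, mean1, sd1 = 12, 3.4, 2.1   # e.g., citations per article, group A
n2, mean2, sd2 = 11, 2.6, 2.3   # group B

diff = mean1 - mean2
se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se  # normal approximation

print(f"Difference: {diff:.2f}, 95% CI: ({lo:.2f}, {hi:.2f})")
# The interval includes zero: the point estimate alone overstates
# what the data can show.
```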