Archive for the ‘Law and Psychology’ Category
posted by Orly Lobel
What a rollercoaster week of incredibly thoughtful reviews of Talent Wants to Be Free! I am deeply grateful to all the participants of the symposium. In The Age of Mass Mobility: Freedom and Insecurity, Anupam Chander, continuing Frank Pasquale’s and Matt Bodie’s questions about worker freedom and market power, asks whether Talent Wants to Be Free overly celebrates individualism, perhaps at the expense of a shared commitment to collective production, innovation, and equality. Deven Desai, in What Sort of Innovation?, asks about the kinds of investments and knowledge that are likely to be encouraged through private markets versus public institutions. And in Free Labor, Free Organizations, Competition and a Sports Analogy, Shubha Ghosh reminds us that to create true freedom in markets we need to look closely at competition policy and antitrust law. These questions about freedom versus control, individualism versus collectivity, and private versus public come from both left and right. And rightly so. These are fundamental tensions in the greater project of human progress, and Talent Wants to Be Free strives to show how certain dualities are pervasive and unresolvable. As Brett suggested, that’s where we need to be in the real world. From an innovation perspective, I describe in the book how “each of us holds competing ideas about the essence of innovation and conflicting views about the drive behind artistic and inventive work. The classic (no doubt romantic) image of invention is that of exogenous shocks, radical breakthroughs, and sweeping discoveries that revolutionize all that was before. The lone inventor is understood to be driven by a thirst for knowledge and a unique capacity to find what no one has seen before. But the solitude in the romantic image of the lone inventor or artist also leads to an image of the insignificance of place, environment, and ties…”. Chapter 6 ends with the following visual:
Dualities of Innovation:
Individual / Collaborative
Passion / Profit
And yet, the book takes on the contrarian title Talent Wants to Be Free! We are at a moment in history in which the pendulum has swung too far. We have too much, not too little, control over information, mobility, and knowledge. We uncover this imbalance through a combination of a broad range of methodologies: historical, empirical, experimental, comparative, theoretical, and normative. These are exciting times for innovation research, and, as I hope to convince the readers of Talent, insights from all disciplines are contributing to these debates.
November 16, 2013 at 12:56 pm. Posted in: Antitrust, Articles and Books, Behavioral Law and Economics, Book Reviews, Bright Ideas, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Law and Psychology, Symposium (Talent Wants to be Free), Technology
posted by Orly Lobel
Each in his own sharp and perceptive way, Brett Frischmann, Frank Pasquale, and Matthew Bodie present what are probably the hardest questions that the field of human capital law must contemplate. Brett asks about a fuller alternative vision for line drawing between freedom and control. He further asks how we should strike the balance between regulatory responses and private efforts in encouraging more openness. Finally, he raises the inevitable question about the tradeoffs between nuanced, contextual standards (what, as Brett points out, I discuss as the Goldilocks problem) and rigid absolute rules (a challenge that runs throughout IP debates and, more broadly, throughout law). Frank and Matt push me on the hardest problems for any politically charged debate: the distributive, including inadvertent and co-optive, effects of my vision. I am incredibly grateful to receive these hard questions even though I am sure I have yet to uncover fully satisfying responses. Brett writes that he wanted more when the book ended, and yes, there will be more. For one, Brett wanted to hear more about the commons and talent pools. I have been invited to present a new paper, The New Cognitive Property, this spring at a Yale conference called Innovation Beyond IP, and my plan is to write more about the many forms of knowledge that need to be nurtured, nourished, and set free in our markets.
Matt describes his forthcoming paper, in which he demonstrates that “employment” is reliant on our theory and idea of the firm: we have firms to facilitate joint production, but we need to complicate our vision of what that joint production, including from a governance perspective, looks like. “Employers are people too,” Matt reminds us, as he asks, “Do some of the restrictions we are talking about look less onerous if we think of employers as groups of people?” And my answer is yes, of course there is a lot of room for policy and contractual arrangements that prevent opportunism and protect investment: my arguments have never been of the anarchic flavor “let’s do away with all IP, duties of loyalty, and contractual restrictions.” Rather, as section 2 (chapters 3–8) of Talent Wants to Be Free is entitled, we need to Choose Our Battles. The argument is nicely aligned with the way Peter Lee frames it: we have many forms of control and many tools, including positive tools, to create the right incentives; let us now understand how we’ve gotten out of balance, how we’ve developed an over-control mentality that uses legitimate concerns over initial investment and risks of opportunism and hold-up to allow almost any form of information and exchange to be restricted. So yes, we need certain forms of IP: we have patents, we have copyright, we have trademark. Each of these bodies of law also needs to be examined in its scope, and there is certainly some excess out there, but in general we know where we stand. But what about human capital beyond IP? And what about ownership over IP as between employees and employers?
So yes, we need joint inventorship doctrines for sure when two inventors work together. But what about firm–employee doctrines? Do we need work-for-hire and hired-to-invent doctrines? Here we arrive at core questions about the differences between employment and joint ventures or partnerships between people. And even here, the argument is that, during employment, firms continue to need certain protections over ownership. But the reality is that many highly inventive and developed countries, as diverse as Finland, Sweden, Korea, Japan, Germany, and China, have all drawn more careful lines about what can fall under “service inventions,” or inventions produced within a corporation. These countries have some requirement of fair compensation for the employee, some stake in inventions, rather than a carte blanche to everything produced within the contours of the firm. The key is a continuous notion of sharing, fairness, and boundaries that we’ve lost sight of: intense line-drawing, as Brett would have it, based on context and evidence, not on an outdated version of the meaning of free markets.
What about non-competes and trade secrets? Again, my argument is that these protections are alternatives: they should be discussed in relation to one another, and we need to understand the logic, goals, and costs and benefits of each, given that they exist on a spectrum. Non-competes are the harshest restriction: an absolute post-employment prohibition on continuing one’s professional path outside the corporation. This is unnecessary. The empirics support an absolute ban rather than the fine dance of balancing that is needed with some of the other protections. Sure, non-competes make life momentarily easier for those who want to use them, but over time, not only can we all live without that harsh tool, we will actually benefit from ceding that chemical weapon in the battle over brains and instead employing more conventional arms. And yet, even in California, this insight doesn’t and shouldn’t extend to partnerships. The California policy against non-competes is limited to the employment context. If two people, as in Matt’s hypo, are together forming a business, their joint property rights in that business suggest to us that allowing some form of a covenant not to compete will be justified. There will still be a cost to positive externalities, but the difference between the two forms of relationships allows for an absolute ban in one and a standard of reasonableness for the other. And yes, as Brett alludes to, the world is not black and white, and we will have to tread carefully in our distinctions between employees and partners.
I completely agree with Matt and Frank that there are fundamental injustices created by our entire regime of work law. Talent Wants to Be Free takes those deep structures into account in developing its more immediate and positive vision for better innovation regimes and richer talent pools. Matt writes that a more radical alternative lies within Talent but “deserves more exegesis: namely, whether we should eliminate the concept of employment entirely.” What if people were always independent contractors, he asks? The reforms promoted in Talent Wants to Be Free, allowing more employees more control over their human capital, indeed bring these two categories – employees and independent contractors – closer together in some respects. But far more would be needed to shift our work relations to be more “democratic and egalitarian: a post-industrial Jeffersonian economy.” As both Frank and Matt show, in their own scholarship and in their provocative comments here, this will require us to rethink so much of the world we live in.
Frank Pasquale’s review is so rich that I hope he extends and publishes it as a full article. Frank says that “for every normative term that animates [Orly’s] analysis (labor mobility, freedom of contract, innovation, creative or constructive destruction) there is a shadow term (precarity, exploitation, disruption, waste) that goes unexplored.” I would agree that the background rules that define our labor market – at-will employment, inequality, class and power relations – are not themselves the target of the book. They do, however, deeply inform my analysis. To me, the symmetry I draw between job insecurity and the need for job opportunity is not what Frank describes as a “comforting symmetry”; it is a call for the partial correction of an outrageous asymmetry. And yes, as I mentioned at the very beginning of the symposium, I hoped in writing the book to shift some of the debates about human capital away from the stagnating repetition of arguments framed as business versus labor, a framing I view not only as paralyzing and strategically unwise but also as simply incorrect and distorting. There is far more room for win-win than both businesses and labor seem to believe. On that level, I think Frank and I actually disagree about what we would define as abuse. I do in fact believe that many of us can passionately decide to give up monetary gains in return for a job that provides the intangible benefits of doing something we love to do. Is that always buying into the corporate fantasy? Is that always exploitation? Don’t all of us do that when we become scholars?
Still, of course I agree with many of the concrete examples that Frank raises as exploitation and precarious work – he points to domestic workers, a subject I have written about in a few articles (which I just realized I should probably put on SSRN): Family Geographies: Global Care Chains, Transnational Parenthood, and New Legal Challenges in an Era of Labor Globalization, 5 CURRENT LEGAL ISSUES 383 (2002), and Class and Care, 24 HARVARD WOMEN’S LAW JOURNAL 89 (2001). Frank describes a range of discontent in such celebrated workplaces as the Silicon Valley giants, which I too am concerned with, and I have thought about how new, hyped-up forms of employment can become highly coercive. Freeing up more of our human capital is huge, but yes, I agree, it doesn’t solve all the problems of our world, and by no means should my arguments about the California advantage in the region’s approach to human capital and knowledge flow be read as picturing everything and anything Californian as part of a romantic ideal.
November 14, 2013 at 4:21 pm. Posted in: Behavioral Law and Economics, Book Reviews, Bright Ideas, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Law and Inequality, Law and Psychology, Symposium (Talent Wants to be Free), Technology
posted by Orly Lobel
As Catherine Fisk and Danielle Citron point out in their thoughtful reviews here and here, the wisdom of freeing talent must go beyond private firm-level decisions; beyond the message to corporations about the benefits of talent mobility; beyond what Frank Pasquale smartly spun as “reversing Machiavelli’s famous prescription, Lobel advises the Princes of modern business that it is better to be loved than feared.” To get to an optimal equilibrium of knowledge exchanges and mobility, smart policy is needed, and policymakers must pay attention to research. Both Fisk and Citron raise questions about the likelihood that we will see reforms anytime soon. As Fisk points out – and as her important historical work has skillfully shown, and as we witness more recently in developments in several states, including Michigan, Texas, and Georgia, as well as (again, as Fisk and Citron point out) in certain aspects of the pending Restatement of Employment – the movement of law and policy has actually been toward more human capital controls, not fewer. This is perhaps unsurprising to many of us. As with the copyright term extension act, which was the product of heavyweight lobbying, these shifts were supported by strong interest groups. What is perhaps different with the talent wars is the robust evidence suggesting that everyone – corporations large and small, new and old – can gain from loosening controls. Citron points to an irony that I too have been quite troubled by: the current buzz is about the intense need for talent, the talent drought, the shortage of STEM graduates. As Citron describes, the art and science of recruitment is all the rage. But while we debate reforms in schooling and in immigration policy, we largely neglect to consider the reality of much deadweight loss through talent controls.
The good news is that not only in Massachusetts, where the governor has just expressed his support for reforming state law to narrow the use of non-competes, but also in other state legislatures, courts, and agencies, we see a greater willingness to think seriously about positive reforms. At the state level, the jurisdictional variation points to the double gain of regions that void or at least strongly narrow the use of non-competes. California, for example, gains twice: first, by encouraging more human capital flow intra-regionally, and second, by its willingness to give refuge to employees who have signed non-competes elsewhere. In other words, the positive effects stem not only from having the right policies for setting talent free but also from the comparative advantage those policies create vis-à-vis more controlling states. This brain-gain effect has been shown empirically: areas that enforce strong post-employment controls have higher rates of departure of inventors to other regions, while states that weakly enforce non-competes are on the receiving side of the cream of the crop. One can only hope that legislators and business leaders will take these findings very seriously.
At the federal level, in a novel approach to antitrust, the federal government recently took up the investigation of anti-competitive practices among high-tech giants that had agreed not to poach one another’s employees. This in fact relates to Shubha Ghosh’s questions about defining competition and the meaning of free and open labor markets. And it is a good moment to pause over the extent to which we encourage secrecy in both private and public organizations. It is a moment in which spiraling scandals of economic espionage by governments, coupled with leaks and demands for more transparency, require us to think hard. In this context, Citron is right to raise the question of government 2.0: for individuals to be committed and motivated to contribute to innovation, they need some assurance that their contributions will not be entirely appropriated by concentrated interests.
November 14, 2013 at 1:36 am. Posted in: Antitrust, Articles and Books, Behavioral Law and Economics, Corporate Law, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Government Secrecy, Intellectual Property, Law and Psychology, Symposium (Talent Wants to be Free), Technology
posted by Orly Lobel
Peter Lee’s thoughtful review of Talent Wants to Be Free goes straight to the heart of the issues. Peter describes a “central irony about information”: so many aspects of our knowledge cannot lend themselves to traditional monopolization through patents and copyright that their appropriation is done under the radar, through the more dispersed and covert regimes of the talent wars rather than the more visible IP wars. We’ve always understood intellectual property law as a bargain: through patents and copyright, we allow monopolization of information for a limited time as a means to the end of encouraging progress in science and art. We understand the costs, however, and we strive as a society to draw the scope of these exclusive rights very carefully and deliberately. We have heated public debates about the optimal delineation of patents, and we are witnessing new legislative reforms and a significant number of recent SCOTUS cases addressing these tradeoffs. But patents are only a sliver of all the information that is needed to sustain innovative industries and creative ventures. Without much debate, the monopolization of knowledge has expanded far beyond the bargain struck in Article I, Section 8 of the Constitution. Through contract and regulatory law, human capital – people themselves: their skills and tacit knowledge, their social connections and professional ties, and their creative capacities and inventive potential – is subject to market attempts, aided by public enforcement, at monopolization. Peter refers to these as tacit versus codified knowledge; I think about inputs versus outputs – human inventive powers versus the more tangible iterations of intangible assets, the traditional core of IP, which limits patentability to items reduced to practice (rather than abstraction) and copyrightable art to expressions (rather than ideas). Cognitive property versus intellectual property, if you will.
Lee is absolutely correct that university tech transfer, with its challenges and frequent discontents, is highly revealing in this context of drawing fences around ideas and knowledge. Lee writes that “in subtle ways, Orly’s work thus offers a cogent exposition of the limits of patent law and formal technology transfer.” Lee’s recent work on tech transfer, Transcending the Tacit Dimension: Patents, Relationships, and Organizational Integration in Technology Transfer, California Law Review (2012), is a must-read. Lee shows that “effective technology transfer often involves long-term personal relationships rather than discrete market exchanges. In particular, it explores the significant role of tacit, uncodified knowledge in effectively exploiting patented academic inventions. Markets, patents, and licenses are ill-suited to transferring such tacit knowledge, leading licensees to seek direct relationships with academic inventors themselves.” Lee’s article also uses the lens of the theory of the firm, the subject of the exchanges here, to illuminate the role of organizational integration in transferring university technologies to the private sector. I think that in both of our works, trade secrets are an elephant in the room. And I hope we continue to think about how trade secrets, which have been called the stepchild of intellectual property, can be better analyzed and defined.
November 13, 2013 at 12:30 pm. Posted in: Behavioral Law and Economics, Bioethics, Contract Law & Beyond, Corporate Law, Intellectual Property, Law and Psychology, Symposium (Talent Wants to be Free), Technology
posted by Rachel Godsil
Since last night I have been writing and re-writing this blog about race and the fiscal crisis. My link to the New York Times page keeps changing – though the content remains essentially the same. As the Senate moves toward a deal to reopen the government and avert a default, the Times reports that the House balks.
What explains the continued opposition to a deal, despite the seemingly obvious catastrophic consequences of a government default? Racial anxiety may be playing a role, suggests Shutdown Power Play: Stoking Racism, Fear of Culture Change to Push Anti-Government Agenda. The article describes an analysis by Democracy Corps (a research group led by Stan Greenberg and James Carville) of three focus groups comprised of Evangelicals, Tea Party Republicans, and moderates. Democracy Corps concludes that “base supporters” of the Republican Party fear that they are losing to a Democratic Party of big government that is creating programs that “mainly benefit minorities. Race remains very much alive in the Republican Party.”
So here we are again. Encouraging mistaken beliefs that only a particular few benefit from government programs – and perpetuating the division of “us” and “them” on racial grounds – has long been a political strategy. Ronald Reagan’s “welfare queen” is a stereotype that continues to live on in some corners, even though welfare, like most government programs, including the Affordable Care Act, stands to benefit large numbers of whites. Indeed, according to 2011 census figures, 46.3% of all uninsured people are non-Hispanic white while 16% are black.
Why has the blog taken so long? Because the counter-strategy is challenging.
The instinctive response is to call out those distorting the facts as racist. This tactic has the benefit of moral clarity, and is emotionally satisfying. But calling out those who oppose the implementation of the Affordable Care Act as “racist” will not move people in the political middle. This group is likely to consider someone “racist” only if they publicly disclose old-school-George-Wallace-like animus toward people of color. The political debate about the role of government in people’s lives—particularly the less fortunate—is much murkier territory, filled with subterranean, unspoken dynamics and assumptions. It does not resemble the image of ardent segregationists proudly flaunting their bigotry.
But simply ignoring the role race is still playing and pretending that we are all “color-blind” is also inadequate. Social science research has shown that most people carry a set of stereotypical assumptions about race – and that these stereotypes are most likely to influence decision making when race is right below the surface but not expressly mentioned. A set of juror studies by Sam Sommers and Phoebe Ellsworth provides powerful evidence of this phenomenon (for a short description of these studies, see this recent piece by Sommers).
The juror studies suggest that when mock jurors confront inter-racial incidents in which racially charged language is used, white jurors are no more likely to convict a black defendant than a white defendant. When an incident involves a white victim and a black defendant but is otherwise not racially charged, white jurors are more likely to convict the black defendant. Why? Because only in the incident in which racial language was used were white jurors conscious that race might come into play – which triggered them to work to be fair. Donald Bucolo and Ellen Cohn, in their study Playing the Race Card: Making Race Salient in Defense Opening and Closing Statements, found similar effects in inter-racial trials: when defense attorneys explicitly mention race, white juror bias against black defendants is reduced.
The findings in the juror studies are heartening – they provide an empirical foundation for the idea that most white people want to be racially egalitarian. And they suggest a way forward in policy discussions even if they do not provide play-by-play instructions. The goal, as john powell aptly states, is to allow people to maintain a self-concept as egalitarian while drawing attention to behaviors that are inconsistent with those values.
I have found listeners of all races to be extremely receptive to this social science in talks at public libraries as well as law schools. White listeners express relief that they are not being accused of racism – and once this anxiety is alleviated, the defensiveness melts away. Listeners of all races seem very interested in the facts about who benefits from government programs and how race operates in the unconscious.
This material is harder to translate into a sound-bite. But it seems to be the best way forward to an honest conversation about race.
posted by Rachel Godsil
Today’s New York Times lead story, “Millions of Poor Are Left Uncovered by Health Law,” reports on the devastating effect that states’ decisions not to expand Medicaid is having on poor people. This article is accompanied by an image – on the jump page 18 in print and featured online – of two poor families, one in Mississippi and one in Texas. Neither family is white.
The imagery leads the reader to presume that white people are unaffected by the failure to expand Medicaid and also perpetuates the general stereotype that most poor people are Black or Latino. The census figures released in 2013 tell a different story: 18.9 million non-Hispanic whites live in poverty and 8.4 million live in deep poverty. The next largest demographic group living in poverty is Latino, with 13.6 million living in poverty and 5.4 million living in deep poverty. The smallest group of people living in poverty – by over 8 million – are Black people, with 10.9 million living in poverty and 5.1 million living in deep poverty. These numbers are staggering and shameful. And it is true that a significantly larger proportion of African Americans and Latinos live in poverty than whites. However, the decision to depict only Black and Latino families in an article about poverty is itself problematic on a number of fronts.
Living in poverty should not be seen as an individual or group failure. Most of us have lived in poverty at some point in our own lives or in our families’ history. And undoubtedly the authors of the article and the editors who chose the picture have sympathy for poor people and hope that their news story and the image will elicit concern and moral outrage. This result is unlikely. Instead, research in social psychology suggests that news stories and images of this sort generally have exactly the opposite effect.
In an article entitled Justifying Inequality: A Social Psychological Analysis of Beliefs about Poverty and the Poor, Heather Bullock of the University of California, Santa Cruz explains that “single mothers and ethnic minorities, most notably African Americans, are the public face of poverty. Consequently, poverty is viewed not only as a ‘minority’ problem (Gilens 1999; Quadagno 1994) but a reflection of weak sexual mores and the decline of the nuclear family (Lind 2004; Orloff 2002). Stereotypes about the poor and ethnic minorities mirror each other with intersecting characterizations including laziness, sexual promiscuity, irresponsible parenting, disinterest in education, and disregard for the law.” So the imagery in the NYT article, and the discussion of the particular effects on single mothers and “poor blacks,” simply confirms negative stereotypes. And the stereotypes are not rooted in fact: the vast majority of African Americans and Latinos in the United States – over 70% – are not poor.
This article and many others in the media contribute to a set of negative stereotypes about people of color and render invisible the enormous numbers of whites who are poor. Sadly, the combined effect, as Bullock explains, appears to be a growing tolerance for economic inequality and a willingness to support decisions that harm the poor (such as the rejection of Medicaid expansion).
The negative stereotypes, as I will discuss in future posts, feed a set of psychological phenomena, such as implicit bias, that underlie discriminatory behavior even among those with egalitarian values and create significant obstacles to progress toward racial equality. As an academic, and as a civil rights litigator in my previous life, I have focused on legal and policy change as a means toward racial equality. More recently, I have been part of a consortium, the American Values Institute, linking social scientists with lawyers, legal academics, and the media to recognize the significance of culture. Law, as we all know, is in part a creature of culture. So long as our culture is infused with distorted facts and images about race, law reform is a vastly more difficult task.
posted by Dave Hoffman
Like many of you, I’ve been horrified by the events in Newtown, and dismayed by the debate that has followed. Josh Marshall (at TPM) thinks that “this is quickly veering from the merely stupid to a pretty ugly kind of victim-blaming.” Naive realism, meet thy kettle! Contrary to what you’ll see on various liberal outlets, the NRA didn’t cause Adam Lanza to kill innocent children and adults, nor did Alan Gura or the army of academics who helped to build the case for an individual right to gun ownership. Reading discussions on the web, you might come to believe that we don’t all share the goal of a society where the moral order is preserved, and where our children can be put on the bus to school without a qualm.
But we do.
We just disagree about how to make it happen.
Dan Kahan’s post on the relationship between “the gun debate,” “gun deaths,” and Newtown is thus very timely. Dan argues that if we really wanted to decrease gun deaths, we should try legalizing drugs. (I’d argue, following Bill Stuntz, that we also, or instead, should hire many more police while returning much more power to local control.) But decreasing gun deaths overall won’t (probably) change the likelihood of events like these:
“But here’s another thing to note: these very sad incidents ‘represent only a sliver of America’s overall gun violence.’ Those who are appropriately interested in reducing gun homicides generally, who are (also appropriately) making this tragedy the occasion to discuss how we as a society can and must do more to make our citizens safe, and who are, in the course of making their arguments, invoking (appropriately!) the overall gun homicide rate, should be focusing on what can be done most directly and feasibly to save the most lives.
Repealing drug laws would do more — much, much, much more — than banning assault rifles (a measure I would agree is quite appropriate); barring carrying of concealed handguns in public (I’d vote for that in my state, if after hearing from people who felt differently from me, I could give an account of my position that fairly meets their points and doesn’t trade on tacit hostility toward or mere incomprehension of whatever contribution owning a gun makes to their experience of a meaningful free life); closing the “gun show” loophole; extending waiting periods etc. Or at least there is evidence for believing that, and we are entitled to make policy on the best understanding we can form of how the world works so long as we are open to new evidence and aren’t otherwise interfering with liberties that we ought, in a liberal society, to respect.”
Dan’s post is trying to productively redirect our public debate, and I wanted to use this platform to bring more attention to his point. But, I think he’s missing something, and if you follow me after the jump, I’ll tell you what.
posted by Dave Hoffman
Like many others, I’ve been using Amazon Mechanical Turk to recruit subjects for law & psychology experiments. Turk is (i) cheap; (ii) fast; (iii) easy to use; and (iv) not controlled by the psychology department’s guardians. Better yet, the literature to date has found that Turkers are more representative of the general population than you’d expect — and certainly better than college undergrads! Unfortunately, this post at the Monkey Cage provides a data point in the contrary direction:
“On Election Day, we asked 565 Amazon Mechanical Turk (MTurk) workers to take a brief survey on vote choice, ideology and demographics. . . . We compare MTurk workers on Election Day to actual election results and exit polling. The survey paid $0.05 and had seven questions: gender, age, education, income, state of residence, vote choice, and ideology. Overall, 73% of these MTurk workers voted for Obama, 15% for Romney, and 12% for “Other.” This is skewed in expected ways, matching the stereotypical image of online IT workers as liberal—or possibly libertarian since 12% voted for a third party in 2012, compared to 1.6 percent of all voters. . . In sum, the MTurk sample is younger, more male, poorer, and more highly educated than Americans generally. This matches the image of who you might think would be online doing computer tasks for a small amount of money…”
Food for thought. What’s strange is that every sample of Turkers I’ve dealt with is older & more female than the general population. Might it be that Turk workers who responded to a survey on election habits aren’t like the Turk population at large? Probably so, but that doesn’t make me copacetic.
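For what it’s worth, the kind of benchmark check the Monkey Cage authors ran is easy to replicate on any Turk sample. Here is a minimal sketch in Python, using the vote shares from the quoted post; the “actual” 2012 figures are approximations I supply for illustration, and the function name and five-point threshold are my own choices:

```python
# Compare sample proportions against population benchmarks and flag large gaps.
# MTurk numbers come from the quoted Monkey Cage post; the "actual" 2012 vote
# shares are approximate and included purely for illustration.

mturk = {"Obama": 0.73, "Romney": 0.15, "Other": 0.12}
actual = {"Obama": 0.51, "Romney": 0.47, "Other": 0.016}  # approximate 2012 results

def skew_report(sample, population, threshold=0.05):
    """Return choices where the sample deviates from the population
    by more than `threshold` (in proportion terms)."""
    return {
        choice: round(sample[choice] - population[choice], 3)
        for choice in sample
        if abs(sample[choice] - population[choice]) > threshold
    }

print(skew_report(mturk, actual))
```

On these numbers, all three categories get flagged, which is exactly the post’s point: the sample is far from the electorate on every dimension measured.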
posted by Karen Newirth
I also thank Danielle and Brandon for including me in this symposium, and am very happy to join the discussion of four very important works on the state of the criminal justice system in America today.
The reference to the Central Park Five in Danielle’s original post highlights one of the most important qualities of Convicting the Innocent: it uses the powerfully told stories of the exonerated to bring to life the new and important detail about the causes of wrongful convictions that Garrett’s research has uncovered. The result is the fullest picture to date of the scope of the “nightmarish reality” that has led to 301 DNA-based exonerations in this country. Convicting the Innocent is not only a great read for lawyers and lay people alike, it is also a powerful tool for bringing about much-needed systemic change. Dan Medwed’s post appropriately asks whether the works being discussed here urge change that is gradual and specific or change that is revolutionary, going to the heart of the adversary system. In the context of eyewitness misidentification – the leading contributing cause of wrongful convictions, occurring in (as Garrett found) 75 percent of the first 250 exonerations – we see great success in effecting change in both courts and police precincts alike. Brandon Garrett’s research has been critical to these successful reform efforts.
As the attorney responsible for the Innocence Project‘s work in the area of eyewitness identification, I have relied on Convicting the Innocent in my efforts to educate attorneys, judges and policy makers about the perils of misidentification and the flaws in the current legal framework for evaluating identification evidence at trial that is applied in nearly all jurisdictions in the United States. That legal framework, set forth by the Supreme Court in Manson v. Brathwaite, directs courts to balance the effects of improper police suggestion in identification procedures with certain “reliability factors” – the witness’s opportunity to view the perpetrator, the attention paid by the witness, the witness’s certainty in the identification, the time between the crime and confrontation and the accuracy of the witness’s description. (These factors are not exclusive, but most courts treat them as if they are.)
Psychological research in the area of perception and memory has offered conclusive evidence that the identified reliability factors are not well correlated with accuracy; that, because they are self-reported, they do not objectively reflect reality; and – most critically – that they are inflated by suggestion, leading to the perverse result that the more suggestive the identification procedure, the higher the measures of reliability under the Manson test.
Garrett’s work in Convicting the Innocent adds an important dimension to the psychological research – and makes even more urgent the call to reform the Manson test – by demonstrating that the Manson test failed in the cases of the 190 exonerees who were convicted based, at least in part, on identification evidence that was either not challenged or admitted as reliable under Manson. Garrett’s work shows just how the Manson reliability factors fail to ensure reliability: in most cases reviewed by Garrett, the witnesses had poor viewing opportunities; had only a few seconds to see the perpetrator’s face, which was often disguised or otherwise obscured; made identifications weeks or months after the crime; and provided descriptions that were substantially different from the wrongly accused’s appearance. In addition, almost all of the witnesses in the cases reviewed by Garrett expressed complete confidence at trial – stating for example that “there is absolutely no question in my mind” (Steven Avery’s case); that “[t]his is the man or it is his twin brother” (Thomas Doswell’s case) – although DNA later proved that these witnesses were entirely wrong. Perhaps most striking of all of Garrett’s research findings in the area of eyewitness misidentification is that in 57 percent of the trials with certain eyewitnesses, the witnesses had expressed earlier uncertainty (strongly suggesting that the identification was unreliable), but only 21 percent of these witnesses admitted their earlier uncertainty.
The Innocence Project has relied on Garrett’s research in advocating for the reform of the legal framework for evaluating identification evidence in courts around the country, from the U.S. Supreme Court (Perry v. New Hampshire) to state supreme courts from Oregon (State v. Lawson) and Washington (State v. Allen) to New Jersey (State v. Henderson) and Pennsylvania (State v. Walker). In two of these cases – Henderson and Lawson – high courts found that Manson fails to ensure reliability and implemented new legal tests that better reflect the scientific research and, we hope, will better prevent wrongful convictions based on eyewitness misidentification. Both the Henderson and Lawson courts cited Convicting the Innocent in rendering their decisions, demonstrating just how powerful a force for change Garrett’s work is.
posted by Brandon Garrett
That image is from the false confession of Ronald Jones, a man whose tragic story begins my book, Convicting the Innocent: Where Criminal Prosecutions Go Wrong. In fact, it is an image of his entire false confession, at least the statement that the detectives had typed at the end of eight grueling hours of interrogation in Chicago in the mid-1980s. I turned the statement into a word cloud to illustrate the words that Jones had repeated the most. In his statement, Jones was unfailingly polite, and according to the police stenographer, at least, he responded “Yes, Sir,” as the detectives asked him questions. In reality, he alleged at trial, detectives had brutally threatened him, beat him, and told him what to say about a crime he did not commit. The jury readily sentenced Jones to death for a brutal rape and murder on Chicago’s South Side.
The word cloud shows why the jury put Jones on death row. Some of the most prominent words, after “Yes, Sir,” are key details about the crime scene: that there was a knife, that the murder occurred in the abandoned Crest hotel, that the killer left through a window. Jones protested his innocence at trial, but those facts were powerfully damning. The lead detective testified at trial that Jones had told them in the interrogation room exactly how the victim was assaulted and killed, and that Jones finally signed that confession statement. The detectives said they brought Jones to the crime scene, where Jones supposedly showed them where and how the murder occurred. After his trial, Jones lost all of his appeals. Once DNA testing was possible in the mid-1990s, he was denied DNA testing by a judge who was so convinced by his confession statement that he remarked, “What issue could possibly be resolved by DNA testing?”
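A side note on method: a word cloud is, at bottom, a word-frequency count rendered with font size proportional to count. A minimal sketch of the counting step follows; the snippet of text and the stopword list are invented for illustration and are not Jones’s actual statement:

```python
# Count word frequencies, the core computation behind a word cloud.
# The sample text below is an invented stand-in, not the real confession.
import re
from collections import Counter

def word_frequencies(text, stopwords=frozenset({"the", "a", "and", "to", "of"})):
    """Count words case-insensitively, skipping common stopwords."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords)

sample = "Yes, sir. Yes, sir. The knife was by the window. Yes, sir."
print(word_frequencies(sample).most_common(3))
```

Feed the full statement through a counter like this and the repeated “Yes, sir” and the crime-scene nouns dominate, which is precisely what the cloud visualizes.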
In my book, I examined what went wrong in the first 250 DNA exonerations in the U.S. Jones was exonerated by a post-conviction DNA test. Now we know that his confession, like 40 other DNA exoneree confessions, was not just false, but likely contaminated during a botched interrogation. Now we know that 190 people had eyewitnesses misidentify them, typically due to unsound lineup procedures. Now we know that flawed forensics, in about half of the cases, contributed to a wrongful conviction. Now we know that informants, in over 50 of the cases, lied at trial. Resource pages with data from the book about each of these problems, and with material from these remarkable trials of exonerees, are available online.
Returning to Ronald Jones’ false confession, the Supreme Court has not intervened to regulate the reliability of confessions, such as by asking courts to inquire whether there was contamination, or simply requiring videotaping so that we know who said what and whether the suspect actually knew the actual facts of the crime. Typical of its rulings on the reliability of evidence in criminal cases, the Court held in Colorado v. Connelly that though a confession statement “might be proved to be quite unreliable . . . this is a matter to be governed by the evidentiary laws of the forum . . . not by the Due Process Clause of the Fourteenth Amendment.” Preventing wrongful convictions has largely fallen on the states. I end the book with optimism that we are starting to see stirrings of a criminal justice reform movement.
posted by Dave Hoffman
I’ve been working for some time on an article about how policymakers could and should reduce the law’s transmission costs by developing rules that stick, get re-transmitted, and thus spread among citizens without heavy-handed enforcement campaigns. This is different from saying that policymakers should make rules which are merely memorable: the goal is to increase the influence of the rule by making it likely that individuals will spread knowledge of it widely with less government effort. Recently, one of my students, Bill Scarpato, worked on this problem in a particular context: off-road vehicle use on public lands. His draft paper, Don’t Tread on Me: Increasing Compliance with Off-Road Vehicle Use at Least Cost is up on ssrn. From the abstract:
In a world of diminished enforcement resources, how can environmental regulators get the most bang for their buck? Off-road vehicle use is the fastest growing and most contentious form of recreation on America’s public lands. Motorized recreationists have enjoyed access to National Forests and BLM land for almost a century, but regulators, property owners, and environmental groups have voiced opposition to unconstrained off-road vehicle use. Law enforcement on these lands is underfunded and ineffective, and the individualist culture of off-road vehicle users is said to foster an attitude of non-compliance — trailblazing in the literal sense. Endorsing and building upon work in law and social norms and cognate disciplines, this Article draws principally on the social psychology of effective messaging outlined in Chip and Dan Heath’s 2007 work, Made to Stick, to propose a partnership-based campaign based on the exhortatory theme, “Don’t Tread on Me.”
I think Bill did a nice job of laying out the research and applying it in a creative way to a very hard problem. Check it out.
posted by Amanda Pustilnik
By unanimous reader demand – all one out of one readers voting, as of last week – this post will explore the small topic of the biological basis of “irrationality,” and its implications for law. Specifically, Ben Daniels of Collective Conscious asked the fascinating question: “What neuro-mechanisms enforce irrational behavior in a rational animal?”
Ben’s question suggests that ostensibly rational human beings often act in irrational ways. To prove his point, I’m actually going to address his enormous question within a blog post. I hope you judge the effort valiant, if not complete.
The post will offer two perspectives on whether, as the question asks, we could be more rational than we are if certain “neuro-mechanisms” did not function to impair rationality. The first view is that greater rationality might be possible – but might not confer greater benefits. I call this the “anti-Vulcan hypothesis”: While our affective capacities might suppress some of our computational power, they are precisely what make us both less than perfectly rational and gloriously human – Captain Kirk, rather than Mr. Spock. A second, related perspective offered by the field of cultural cognition suggests that developmentally-acquired, neurally-ingrained cultural schemas cause people to evaluate new information not abstractly on its merits but in ways that conform to the norms of their social group. In what I call the “sheep hypothesis,” cultural cognition theory suggests that our rational faculties often serve merely to rationalize facts in ways that fit our group-typical biases. Yet, whether we are Kirk or Flossie, the implication for law may be the same: Understanding how affect and rationality interact can allow legal decision-makers to modify legal institutions to favor the relevant ability, modify legal regimes to account for predictable limitations on rationality, and communicate in ways that privilege social affiliations and affective cues as much as factual information.
First, a slight cavil with the question: The question suggests that people are “rational animal[s]” but that certain neurological mechanisms suppress rationality – as if our powerful rational engines were somehow constrained by neural cruise-control. Latent in that question are a factual assumption about how the brain works (more on that later) and a normative inclination to see irrationality as a problem to which rationality is the solution. Yet, much recent work on the central role of affect in decision-making suggests that, often, the converse may be true. (Among many others, see Jonathan Haidt and Josh Greene; these links will open PDF articles in a new window.) Rationality divorced from affect arguably may not even be possible for humans, much less desirable. Indeed, the whole idea of “pure reason” as either a fact or a goal is taking a beating at the hands of researchers in behavioral economics, cognitive neuroscience, and experimental philosophy – and perhaps other fields as well.
Also, since “rational” can mean a lot of things, I’m going to define it as the ability to calculate which behavior under particular circumstances will yield the greatest short-term utility to the actor. By this measure, people do irrational things all the time: we discount the future unduly, preferring a dollar today to ten dollars next month; we comically misjudge risk, shying away from the safest form of transportation (flying) in favor of the most dangerous (driving); we punish excessively; and the list goes on.
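That first example, preferring a dollar today to ten next month, is often modeled with hyperbolic discounting, under which a reward’s present value is V = A / (1 + kD) for delay D. A minimal sketch; the discount rate k here is invented for illustration, not an empirical estimate:

```python
# Hyperbolic discounting: subjective present value of a delayed reward.
# The discount rate k = 0.5 is an invented illustration, not an empirical value.

def hyperbolic_value(amount, delay_days, k=0.5):
    """Present subjective value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

today = hyperbolic_value(1, 0)     # $1 right now
later = hyperbolic_value(10, 30)   # $10 in a month
print(today, later, today > later)
```

With a steep enough k, the dollar today subjectively outweighs ten dollars next month, which is the “irrational” preference the text describes.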
Despite these persistent and universal defects in rationality, experimental data indicates that our brains have the capacity to be more rational than our behaviors would suggest. Apparently, certain strong affective responses interfere with activity in particular regions of the prefrontal cortex (pfc); these areas of the pfc are associated with rationality tasks like sequencing, comparing, and computing. In experiments in which researchers used powerful magnets to temporarily “knock out” activity in limbic (affective) brain regions, otherwise typical subjects displayed savant-like abilities in spatial, visual, and computational skills. This experimental result mimics what anecdotally has been reported in people who display savant abilities following brain injury or disease, and in people with autism spectrum disorders, who may have severe social and affective impairments yet also be savants.
So: Some evidence suggests the human brain may have massively more computing power than we can put to use because of general (and sometimes acute) affective interference. It may be that social and emotional processing suck up all the bandwidth; or, prosocial faculties may suppress activity in computational regions. Further, the rational cognition we can access can be totally swamped out by sudden and strong affect. With a nod to Martha Nussbaum, we might call this the “fragility of rationality.”
This fragility may be more boon than bane: Rationality may be fragile because, in many situations, leading with affect might confer a survival advantage. Simple heuristics and “gut” feelings, which are “fast and cheap,” let us respond quickly to complex and potentially dangerous situations. Another evolutionary argument is that all-important social relationships can be disrupted by rational utility-maximizing behaviors – whether you call them free-riders or defectors. To prevent humans from mucking up the enormous survival-enhancing benefits of community, selection would favor prosocial neuroendocrine mechanisms that suppress an individual’s desire to maximize short-term utility. What’s appealing about this argument is that – if true – it means that that which enables us to be human is precisely that which makes us not purely rational. This “anti-Vulcan” hypothesis is very much the thrust of the work by Antonio Damasio (and here), Dan Ariely, and Paul Zak, among many other notable scholars.
An arguably darker view of the relationship between prosociality and rationality comes from cultural cognition theory. While evolutionary psychology and behavioral economics suggest that people have cognitive quirks as to certain kinds of mental tasks, cultural cognition suggests that people’s major beliefs about the state of the world – the issues that self-governance and democracy depend upon – are largely impervious to rationality. In place of rationality, people quite unconsciously “conform their beliefs about disputed matters of fact … to values that define their cultural identities.”
On this view, people aren’t just bad at understanding risk and temporal discounting, among other things, because our prosocial adaptations suppress it. Rather, from global warming to gun control, people unconsciously align their assessments of issues to conform to the beliefs and values of their social group. Rationality operates, if at all, post hoc: It allows people to construct rationalizations for relatively fact-independent but socially conforming conclusions. (Note that different cultural groups assign different values to rational forms of thought and inquiry. In a group that highly prizes those activities, pursuing rationally-informed questioning might itself be culturally conforming. Children of academics and knowledge-workers: I’m looking at you.)
This reflexive conformity is not a deliberate choice; it’s quite automatic, feels wholly natural, and is resilient against narrowly rational challenges based in facts and data. And that this cognitive mode inheres in us makes a certain kind of sense: Most people face far greater immediate danger from defying their social group than from global warming or gun control policy. The person who strongly or regularly conflicts with their group becomes a sort of socially stateless person, the exiled persona non grata.
To descend from Olympus to the village: What could this mean for law? Whether we take the heuristics and biases approach emerging from behavioral economics and evolutionary psychology or the cultural cognition approach emerging from that field, the social and emotional nature of situated cognition cannot be ignored. I’ll briefly highlight two strategies for “rationalizing” aspects of the legal system to account for our affectively-influenced rationality – one addressing the design of legal institutions and the other addressing how legal and political decisions are communicated to the public.
Oliver Goodenough suggests that research on rational-affective mutual interference should inform how legal institutions are designed. Legal institutions may be anywhere on a continuum from physical to metaphorical, from court buildings to court systems, to the structure and concept of the jury, to professional norms and conventions. The structures of legal institutions influence how people within them engage in decision-making; certain institutional features may prompt people to bring to bear their more emotive (empathic), social-cognitive (“sheep”), or purely rational (“Vulcan”) capacities.
Goodenough does not claim that more rationality is always better; in some legal contexts, we might collectively value affect – empathy, mercy. In another, we might value cultural cognition – as when, for example, a jury in a criminal case must determine whether a defendant’s response to alleged provocation falls within the norms of the community. And in still other contexts, we might value narrow rationality above all. Understanding the triggers for our various cognitive modes could help address long-standing legal dilemmas. Jon Hanson’s work on the highly situated and situational nature of decision-making suggests that the physical and social contexts in which deliberation takes place may be crucial to the answers at which we arrive.
Cultural cognition may offer strategies for communicating with the public about important issues. The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it. If the primary question a person asks herself is, “What would my community think of this thing?” instead of “What is this thing?”, then very different communication strategies follow: Facts and information about the thing itself only become meaningful when embedded in information about the thing’s relevance to peoples’ communities. The cultural cognition project has developed specific recommendations for communication around lawmaking involving gun rights, the death penalty, climate change, and other ostensibly fact-bound but intensely polarizing topics.
To wrap this up by going back to the question: Ben, short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions, we are not going to unleash our latent (narrowly) rational powers. But it’s worth recalling that the historical, and now unpalatable, term for natural savants was “idiot-savant”: This phrase itself suggests that, without robust affective and social intelligence – which may make us “irrational” – we’re not very smart at all.
October 16, 2011 at 2:25 am Tags: cultural cognition, emotion & cognition, irrationality, law & neuroscience, rationality Posted in: Behavioral Law and Economics, Law and Psychology, Legal Theory, Philosophy of Social Science
posted by Daniel Solove
Lior Strahilevitz, Deputy Dean and Sidley Austin Professor of Law at the University of Chicago Law School, recently published a brilliant new book, Information and Exclusion (Yale University Press 2011). Like all of Lior’s work, the book is creative, thought-provoking, and compelling. There are books that make strong and convincing arguments, and these are good, but then there are the rare books that not only do this, but make you think in a different way. That’s what Lior achieves in his book, and that’s quite an achievement.
I recently had the opportunity to chat with Lior about the book.
Daniel J. Solove (DJS): What drew you to the topic of exclusion?
Lior Jacob Strahilevitz (LJS): It was an observation I had as a college sophomore. I lived in the student housing cooperatives at Berkeley. Some of my friends who lived in the cooperatives told me they felt morally superior to people in the fraternities and sororities because the Greek system had an elaborate, exclusionary rush and pledge process. The cooperatives, by contrast, were open to any student. But as I visited friends who lived in the various cooperative houses, the individual houses often seemed no more heterogeneous than the fraternities and sororities. That made me curious. It was obvious that the pledging and rushing process – formal exclusion – created homogeneity in the Greek system. But what was it that was creating all this apparent homogeneity in a cooperative system that was open to everyone? That question was one I kept wondering about as a law student, lawyer, and professor.
That’s why page 1 of the book begins with a discussion of exclusion in the Greek system. I start with accounts of the rush process by sociologists who studied the proxies that fraternity members used to evaluate pledges in the 1950s (attire, diction, grooming, firm handshakes, etc.). The book then brings us to the modern era, when fraternity members peruse Facebook profiles that provide far more granular information about the characteristics of each pledge. Proxies still matter, but the proxies are different, and those differences alter the ways in which rushing students behave and fraternities exclude.
DJS: What is the central idea in your book?
LJS: The core idea is that asymmetric information largely determines which mechanisms are used to exclude people from particular groups, collective resources, and services. When the person who controls a resource knows a lot about the people who wish to use it, she will make decisions about who gets to access it. Where she lacks that information, she’ll develop a strategy that forces particular groups to exclude themselves from the resource, based on some criteria. There’s a historical ebb and flow between these two sorts of strategies for exclusion, but we seem to be in a critical transition period right now thanks to the decline of practical obscurity in the information age.
posted by Frank Pasquale
Marcia Angell has kicked off another set of controversies for the pharmaceutical sector in two recent review essays in the New York Review of Books. She favorably reviews meta-research that calls into question the effectiveness of many antidepressant drugs:
Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999—Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. . . .Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.
Angell discusses other research that indicates that placebos can often be nearly as effective as drugs for conditions like depression. Psychiatrist Peter Kramer, a long-time advocate of anti-depressant therapy, responded to her last Sunday. He admits that “placebo responses . . . have been steadily on the rise” in FDA data; “in some studies, 40 percent of subjects not receiving medication get better.” But he believes that is only because the studies focus on the mildly depressed:
The problem is so big that entrepreneurs have founded businesses promising to identify genuinely ill research subjects. The companies use video links to screen patients at central locations where (contrary to the practice at centers where trials are run) reviewers have no incentives for enrolling subjects. In early comparisons, off-site raters rejected about 40 percent of subjects who had been accepted locally — on the ground that those subjects did not have severe enough symptoms to qualify for treatment. If this result is typical, many subjects labeled mildly depressed in the F.D.A. data don’t have depression and might well respond to placebos as readily as to antidepressants.
Yves Smith finds Kramer’s response unconvincing:
The research is clear: the efficacy of antidepressants is (contrary to what [Kramer's] article suggests) lower than most drugs (70% is a typical efficacy rate; for antidepressants, it’s about 50%. The placebo rate is 20% to 30% for antidepressants). And since most antidepressants produce side effects, patients in trials can often guess successfully as to whether they are getting real drugs. If a placebo is chosen that produces a symptom, say dry mouth, the efficacy of antidepressants v. placebos is almost indistinguishable. The argument made in [Kramer's] article to try to deal with this inconvenient fact, that many of the people chosen for clinical trials really weren’t depressed (thus contending that the placebo effect was simply bad sampling) is utter[ly wrong]. You’d see the mildly/short-term depressed people getting both placebos and real drugs. You would therefore expect to see the efficacy rate of both the placebo and the real drug boosted by the inclusion of people who just happened to get better anyhow.
Felix Salmon also challenges Kramer’s logic:
[Kramer's view is that] lots of people were diagnosed with depression and put onto a trial of antidepressant drugs, even when they were perfectly healthy. Which sounds very much like the kind of thing that Angell is complaining about: the way in which, for instance, the number of children so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) was 35 times higher in 2007 than it was in 1987. And it’s getting worse: the editors of DSM-V, to be published in 2013, have written that “in primary care settings, approximately 30 percent to 50 percent of patients have prominent mental health symptoms or identifiable mental disorders, which have significant adverse consequences if left untreated.”
Those who would defend psychopharmacology, then, seem to want to have their cake and eat it: on the one hand it seems that serious mental health disorders have reached pandemic proportions, but on the other hand we’re told that a lot of people diagnosed with those disorders never really had them in the first place.
That is a very challenging point for the industry to consider as it responds to concerns like Angell’s. The diagnosis of mental illness will always have ineradicably economic dimensions and politically contestable aims. But doctors and researchers should insulate professional expertise and the interpretation of maladies as much as possible from inappropriate pressures.
How can they maintain that kind of independent clinical judgment? I think one key is to ensure that data from all trials are open to all researchers. Consider, for instance, these findings from a NEJM study on “selective publication”:
We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. . . . Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall. (emphasis added).
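The headline percentages can be reconstructed directly from the counts in that passage. Here is a quick sketch; the mapping of the quoted counts onto the two rates is my own arithmetic, not the study’s code:

```python
# Reconstructing the NEJM "selective publication" percentages from the
# study counts the quoted passage reports (my own arithmetic).

fda_positive = 37 + 1                 # 37 published as positive + 1 positive but unpublished
fda_total = 74                        # all FDA-registered studies
published_positive_looking = 37 + 11  # positives + negatives "conveyed" as positive
published_total = 37 + 3 + 11         # + the 3 negatives published as such

apparent_rate = published_positive_looking / published_total  # what the literature shows
true_rate = fda_positive / fda_total                          # what the FDA data show

print(f"published literature: {apparent_rate:.0%} positive")  # ~94%
print(f"FDA analysis: {true_rate:.0%} positive")              # ~51%
```

The gap between the two rates is the entire problem: a clinician reading only the journals would see a nearly uniform success story that the regulator’s files do not support.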
Melander, et al. also worried (in 2003) that, since “The degree of multiple publication, selective publication, and selective reporting differed between products,” “any attempt to recommend a specific selective serotonin reuptake inhibitor from the publicly available data only is likely to be based on biased evidence.” Without clearer “best practices” for data publication, clinical judgment may be impaired.
Full disclosure of study funding should also be mandatory and conspicuous, wherever results are published. Ernest R. House has reported that, “In a study of 370 ‘randomized’ drug trials, studies recommended the experimental drug as the ‘treatment of choice’ in 51% of trials sponsored by for-profit organizations compared to 16% sponsored by nonprofits.” The commodification of research has made it too easy to manipulate results, as Bartlett & Steele have argued:
One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval. There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called “rescue countries.” Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections. Ketek was developed in the 1990s by Aventis Pharmaceuticals, now Sanofi-Aventis. In 2004 . . . the F.D.A. certified Ketek as safe and effective. The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey.
The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data. . . . As the months ticked by, and the number of people taking the drug climbed steadily, the F.D.A. began to get reports of adverse reactions, including serious liver damage that sometimes led to death. . . . [C]ritics were especially concerned about an ongoing trial in which 4,000 infants and children, some as young as six months, were recruited in more than a dozen countries for an experiment to assess Ketek’s effectiveness in treating ear infections and tonsillitis. The trial had been sanctioned over the objections of the F.D.A.’s own reviewers. . . . In 2006, after inquiries from Congress, the F.D.A. asked Sanofi-Aventis to halt the trial. Less than a year later, one day before the start of a congressional hearing on the F.D.A.’s approval of the drug, the agency suddenly slapped a so-called black-box warning on the label of Ketek, restricting its use. (A black-box warning is the most serious step the F.D.A. can take short of removing a drug from the market.) By then the F.D.A. had received 93 reports of severe adverse reactions to Ketek, resulting in 12 deaths.
The great anti-depressant debate is part of a much larger “re-think” of the validity of data. Medical claims can spread virally without much evidence. According to a notable meta-researcher, “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.” The “decline effect” dogs science generally. Statisticians are also debunking ballyhooed efforts to target cancer treatments.
Max Weber once said that “radical doubt is the father of knowledge.” Perhaps DSM-VI will include a diagnosis for such debilitating skepticism. But I think there’s much to be learned from an insistence that true science is open, inspectable, and replicable. Harvard’s program on “Digital Scholarship” and the Yale Roundtable on Data and Code Sharing* have taken up this cause, as has the work of Victoria Stodden.
We often hear that the academic sector has to become more “corporate” if it is to survive and thrive. At least when it comes to health data, the reverse is true: corporations must become much more open about the sources and limits of the studies they conduct. We can’t resolve the “great anti-depressant debate,” or prevent future questioning of pharma’s bona fides, without such commitments.
*In the spirit of full disclosure: I did participate in this roundtable.
X-Posted: Health Law Profs Blog.
posted by Dave Hoffman
There’s a flurry of coverage about proposed anti-circumcision initiatives in California. (Sullivan, Volokh.) The posts I’ve been reading – and, granted, I’ve not read the field – have taken this issue oddly seriously. After all, these are merely (actual or proposed) ballot initiatives that haven’t been approved by the voters. If they were approved, their constitutionality wouldn’t (contra Volokh) be determined by existing precedent. In my view, this is a slam-dunk example of an overdetermined constitutional issue.
But there’s another aspect of this fight that is, I think, worth some extended comment. As Sarah has pointed out on this blog, anti- and pro- circumcision advocates generally fight about circumcision’s health effects, and resist attacking (or defending) it as a cultural practice. To me, this looks quite like other contests in our society in which nominally empirical debates predominate — the fight over the HPV vaccine, gay and lesbian parenting, nanotechnology, global warming, etc. The Cultural Cognition project illustrates that these fights very often appear to be about facts, but that expressed conclusions about the “facts” and “risks” involved follow our less-conscious values. Moreover, though we can perceive this tendency in others, we deny it in ourselves. This is the phenomenon of naive realism. What results? We come to believe that people who we disagree with about these value-laden fights (i.e., people who deny the health benefits of circumcision) are arguing in bad faith. They think the same of us. Winning, in the world of policy, becomes an exercise of defeating not just our opponent’s values, but denying that their values are even at play. I am pretty sure that if we tested this hypothesis in the circumcision debate, we’d see a very strong set of cultural priors influencing how partisans interpret and process the medical-risk-facts about circumcision, whether the American Academy of Pediatrics is vouching for those facts or not.
This leads to a concrete piece of advice for Andrew Sullivan and other hot-tempered advocates on either side of this fight. Cool it. Stop inciting fights with question-begging terms like “male genital mutilation.” Instead, affirm the values of those you disagree with by making clear that this isn’t – at root – a debate that can be resolved with reference to empirical facts. It’s (as Sarah has insightfully pointed out) a discussion about cultural practices, and the degree to which the greater society has the right to change them.
For what it’s worth, my view is that the government has about as much of a moral right to prohibit circumcision as it does to tell me that I must eat broccoli.
posted by UCLA Law Review
Volume 58, Issue 3 (February 2011)
Good Faith and Law Evasion, by Samuel W. Buell (p. 611)
Making Sovereigns Indispensable: Pimentel and the Evolution of Rule 19, by Katherine Florey (p. 667)
The Need for a Research Culture in the Forensic Sciences, by Jennifer L. Mnookin et al. (p. 725)
Commentary on The Need for a Research Culture in the Forensic Sciences, by Joseph P. Bono (p. 781)
Commentary on The Need for a Research Culture in the Forensic Sciences, by Judge Nancy Gertner (p. 789)
Commentary on The Need for a Research Culture in the Forensic Sciences, by Pierre Margot (p. 795)
What’s Your Position? Amending the Bankruptcy Disclosure Rules to Keep Pace With Financial Innovation, by Samuel M. Kidder (p. 803)
Defendant Class Actions and Patent Infringement Litigation, by Matthew K. K. Sumida (p. 843)
February 25, 2011 at 1:19 pm Posted in: Bankruptcy, Civil Procedure, Constitutional Law, Courts, Criminal Law, Criminal Procedure, Current Events, Economic Analysis of Law, Empirical Analysis of Law, Evidence Law, History of Law, Indian Law, Intellectual Property, International & Comparative Law, Jurisprudence, Law and Humanities, Law and Inequality, Law and Psychology, Law Practice, Law Rev (UCLA), Psychology and Behavior, Race, Sociology of Law, Supreme Court
posted by Dave Hoffman
The partisanship and bad faith of judges who disagree with us has never been more obvious, or more pernicious. For many, the most irritating personality flaw of judicial politicos (and their fellow-travelers) isn’t the bottom-line results of the opinions themselves, it is that judges refuse to acknowledge their own biases, though it’s evident that they aren’t neutral umpires, but rather players in the game. Indeed, almost every decision you read about these days comes accompanied by a reference to the political party of the appointing President – as if you needed the help! As Orin Kerr has brilliantly pointed out, “people who disagree with me are just arguing in bad faith.”
For the Cultural Cognition Project, the way that we talk about legal decisions – and decisionmakers – is a subject of study and concern. We decided to take a careful look at this topic — which we’ve previously touched on in work like Whose Eyes Are You Going to Believe? Our motivation was to investigate how constitutional norms requiring neutrality in fact finding interact with individuals’ tendencies to perceive facts and risks in ways congenial to their group identities. Building on Hastorf/Cantril’s social psychology classic, They Saw a Game: A Case Study, we’ve written a new piece about how motivated cognition can de-stabilize constitutional doctrine, render legal fact-finders blind to their own biases, and inflame the culture wars. Our resulting paper, “They Saw a Protest”: Cognitive Illiberalism and the Speech-Conduct Distinction, results from my collaboration with Dan Kahan, Don Braman, Danieli Evans, and Jeff Rachlinski. The paper is just up on SSRN, and I figured I’d jump-start the conversation by using this post to talk about our experimental approach and findings. (I think that Kahan is blogging on Balkinization later in the week about the normative upshot of Protest.)
February 7, 2011 at 6:00 pm Posted in: Articles and Books, Behavioral Law and Economics, Civil Procedure, Civil Rights, Law and Psychology, Law School (Scholarship), Psychology and Behavior, Sociology of Law
posted by John Jacobi
Thanks to Frank for inviting me to review Barak Richman, Daniel Grossman, and Frank Sloan’s chapter, Fragmentation in Mental Health Benefits and Services, in Our Fragmented Health Care System: Causes and Solutions (Einer Elhauge, ed. 2010). The book is important and provocative. The chapter on the fragmentation of mental health care couldn’t address a more timely issue.
People with serious mental illness, more than most other patients, struggle with health system fragmentation. As the Institute of Medicine described it,
Mental and substance-use (M/SU) problems and illnesses seldom occur in isolation. They frequently accompany each other, as well as a substantial number of general medical illnesses such as heart disease, cancers, diabetes, and neurological illnesses. *** Improving the quality of M/SU health care—and general health care—depends upon the effective collaboration of all mental, substance-use, general health care, and other human service providers in coordinating the care of their patients. *** However, these diverse providers often fail to detect and treat (or refer to other providers to treat) these co-occurring problems and also fail to collaborate in the care of these multiple health conditions—placing their patients’ health and recovery in jeopardy.
By some estimates, formerly institutionalized people with serious mental illness experience about 25 fewer years of life, mostly due to the effects of treatable physical illnesses such as cardiovascular, pulmonary and infectious diseases. The effects of this health system fragmentation are experienced notwithstanding parity legislation, and they are felt also by people in the community with less serious mental illness, often because their primary care providers can’t find mental health providers to whom they can refer.
In Fragmentation in Mental Health Benefits and Services, the authors approach mental health system fragmentation by telling a story of the relationship between health insurance structure and income redistribution. The authors address the interrelationship between insurance “carve-outs” for mental health care and the growth of mental health parity laws. They assert that the carve out of behavioral health coverage from medical insurance provokes states to pass mental health parity laws. According to the authors, these parity laws fail to help their “intended” beneficiaries, and instead serve to redistribute resources away from low income and non-White employees.
To make their case, they mine a database of claims data for privately insured North Carolina patients. These claims data allow them to track employees’ (and, presumably, their dependents’) use of mental health services. Along the way, they raise several important issues. For example, they suggest that care provided by mental health providers may not be particularly efficacious. (299) Few would disagree that in most areas of health care – including mental health care – comparative effectiveness research is essential. In addition, they suggest that access to and benefit from covered services varies by income and race. (298-99) It is undoubtedly true that there are class-based and race-based disparities in access to health care; this is so much discussed, in fact, that it is somewhat puzzling that the authors would characterize as a “regularly overlooked question” the fact that “equal insurance and access does not translate into equitable consumption.” (279)
On some points, the authors seem to go a bit beyond their data. First, the authors assert (without citation) that mental health parity is “often” pursued “to benefit low-income and traditionally vulnerable populations.” (284) Many advocates (myself included) have argued for parity as a civil rights matter: as people with physical illness have access to insurance coverage, so should people with mental illness. Certainly, insurance coverage is most valuable for those without the means to pay for care out of pocket, but that is as true for cardiac care as for mental health care. From this perspective, parity legislation seems no more a redistributive move than any other form of health insurance.
posted by Alicia Kelly
Married life is characterized by a sharing norm. As I described in an earlier post, spouses commit to and in fact engage deeply in sharing behavior, including a shared family economy. Overwhelmingly, spouses pool economic resources, including labor, and decide together how to allocate them to benefit the family as a whole.
In addition to its effects in the paid labor market (see my last post), sharing money matters inside a functioning marriage. It shapes the couple relationship as well as each partner individually. Research shows that in an ongoing marriage, money is a relational tool. For example, making money a communal asset is a way to demonstrate intimacy and commitment, and that can nurture a couple’s bond. Yet, in some circumstances, an assignment of resources to just one spouse can also be understood (by both partners) to be appropriate and deserved—a recognition of the individual within a sharing framework. Conversely, it is also possible that spouses’ monetary dealings can undermine individual autonomy and the relationship as well. For example, one person might exercise authority over money in a way that disregards the other. Accordingly, power to influence financial resource allocation within the family is important for individual spouses and for togetherness.
It becomes a special concern then, that sharing patterns in marriage are gendered. As highlighted in my previous post, role specialization remains a part of modern intimate partner relations. Particularly true for married couples, men continue to perform more as breadwinners, and women more as caregivers. As a result, women tend to have reduced earning power in the market. How does this market asymmetry translate into economic power at home? Happily, in a significant departure from the past, a majority of couples report that they share financial decisionmaking power roughly equally. Indeed, most married couples today endorse gender equality as an important value in their relationship. However, in a significant minority of marriages, spouses agree that husbands have more economic power. For some couples then, a husband’s breadwinning role, and/or perhaps his gender, confers authority in contentious money matters.
How should law governing an ongoing marriage respond to these sharing dynamics? Consider this hypothetical fact situation. A husband has a stock account from which he plans to make a gift to his sister who he feels really needs the money. The husband suspects that his wife would not approve of the gift. Even though the wife too loves the sister, she believes the sister is irresponsible with money. Let’s assume that the money in that stock account was acquired while the parties were married, and that it came from the market wages of one or both of the spouses earned during marriage. It was a product of the couple’s shared life. Does contemporary law allow the husband to give his sister the gift without his wife’s consent? Without even telling her? How should legal power over the money be allocated?
October 1, 2010 at 1:04 pm Posted in: Family Law, Feminism and Gender, Law and Inequality, Law and Psychology, Legal Theory, Property Law, Psychology and Behavior, Uncategorized
posted by Glenn Cohen
Hypotheticals are a ubiquitous pedagogical tool in both the law and philosophy classrooms. I have recently been thinking about the different functions they serve and whether they are well-suited for the weight we give them. These reflections were prompted by a conference on “Moral Biology,” hosted by the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School (which I co-direct), in cooperation with The Project on Law and Mind Sciences at Harvard Law School, the Gruter Institute, the Harvard Program on Ethics and Health, and the MacArthur Law and Neuroscience Project.
I may blog a little bit later about some other of the marvelous things I learned over these two days, but for now I wanted to concentrate on some thoughts that stemmed from a public portion of the conference that can be seen here, involving Josh Greene from Harvard’s Psychology Department, William Fitzpatrick from the University of Rochester’s Philosophy Department, Adina Roskies from Dartmouth’s Philosophy Department, Walter Sinnott-Armstrong from Duke’s Philosophy Department, and Tim Scanlon from Harvard’s Philosophy Department.
At around the 43 to 50 minute mark in the video, Josh discusses Trolley Problems (a thought experiment asking participants whether to divert a trolley from one track to another, with many versions of the hypothetical) and an experiment done on them by Fiery Cushman (and a collaborator, Switzgable I believe; I could not find the actual paper) in Josh’s lab. In the experiment, before being asked whether they would endorse the principle of double effect, ethicists with PhDs were asked to reason about variants of the Trolley problem (switch vs. footbridge) presented in different orders. The experiment found that if one varied the order in which the versions were presented (but always presented all of them), ethicists reached different conclusions about whether they would endorse the principle. [This is Josh’s description in the video; again, if anyone can find the paper he is discussing I will try to link to it.] The result is surprising in that it appears even those with PhD training in ethics are susceptible to order effects in reasoning about a very fundamental issue.
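For readers curious what "finding an order effect" amounts to statistically, here is a minimal sketch. It assumes a simple two-condition design (switch-first vs. footbridge-first) and uses entirely made-up counts; the actual study's design, numbers, and analysis may well differ.

```python
# Hypothetical illustration of testing an order effect on endorsement of
# the principle of double effect. The counts are invented for the sketch:
# rows are presentation orders, columns are (endorsed, rejected).
from scipy.stats import chi2_contingency

switch_first = [30, 10]      # 40 ethicists saw the switch case first
footbridge_first = [18, 22]  # 40 ethicists saw the footbridge case first

# A standard 2x2 chi-square test of independence asks whether endorsement
# rates differ by presentation order more than chance would predict.
chi2, p, dof, expected = chi2_contingency([switch_first, footbridge_first])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a small p suggests order mattered
```

The unsettling part of the result, of course, is not the statistics but the population: these are trained ethicists, whose endorsement of a foundational moral principle is exactly what we would hope is insensitive to the order of the examples.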
As Josh concedes, and as others (in the panel and in written pieces discussing his work) emphasize, the fact that these ordering effects occur is not itself fatal to the enterprise of philosophical analysis using intuitions. It depends on further views about how one uses these kinds of intuitions in the analysis. For present purposes, though, I want to partially side-step that question in favor of thinking about the law classroom, and how this experiment might make us a little more careful about the way we use hypotheticals.
August 13, 2010 at 8:22 am Posted in: Bright Ideas, Empirical Analysis of Law, Jurisprudence, Law and Humanities, Law and Psychology, Law School, Law School (Teaching), Legal Theory, Teaching, Uncategorized