Category: Law and Psychology


The Good Life and Gun Control

Like many of you, I’ve been horrified by the events in Newtown, and dismayed by the debate that has followed.  Josh Marshall (at TPM) thinks that “this is quickly veering from the merely stupid to a pretty ugly kind of victim-blaming.”  Naive realism, meet thy kettle!  Contrary to what you’ll see on various liberal outlets, the NRA didn’t cause Adam Lanza to kill innocent children and adults, nor did Alan Gura or the army of academics who helped to build the case for an individual right to gun ownership.  Reading discussions on the web, you might come to believe that we don’t all share the goal of a society where the moral order is preserved, and where our children can be put on the bus to school without a qualm.

But we do.

We just disagree about how to make it happen.

Dan Kahan’s post on the relationship between “the gun debate”, “gun deaths”, and Newtown is thus very timely. Dan argues that if we really wanted to decrease gun deaths, we should try legalizing drugs. (I’d argue, following Bill Stuntz, that we should also, or instead, hire many more police while returning much more power to local control.) But decreasing gun deaths overall probably won’t change the likelihood of events like these:

“But here’s another thing to note: these very sad incidents “represent only a sliver of America’s overall gun violence.” Those who are appropriately interested in reducing gun homicides generally, and who are (also appropriately) making this tragedy the occasion to discuss how we as a society can and must do more to make our citizens safe, and who are, in the course of making their arguments, invoking (appropriately!) the overall gun homicide rate, should be focusing on what can be done most directly and feasibly to save the most lives.

Repealing drug laws would do more —  much, much, much more — than banning assault rifles (a measure I would agree is quite appropriate); barring carrying of concealed handguns in public  (I’d vote for that in my state, if after hearing from people who felt differently from me, I could give an account of my position that fairly meets their points and doesn’t trade on tacit hostility toward or mere incomprehension of  whatever contribution owning a gun makes to their experience of a meaningful free life); closing the “gun show” loophole; extending waiting periods etc.  Or at least there is evidence for believing that, and we are entitled to make policy on the best understanding we can form of how the world works so long as we are open to new evidence and aren’t otherwise interfering with liberties that we ought, in a liberal society, to respect.”

Dan’s post is trying to productively redirect our public debate, and I wanted to use this platform to bring more attention to his point. But I think he’s missing something, and if you follow me after the jump, I’ll tell you what.



Unrepresentative Turkers?

Like many others, I’ve been using Amazon Mechanical Turk to recruit subjects for law & psychology experiments.  Turk is (i) cheap; (ii) fast; (iii) easy to use; and (iv) not controlled by the psychology department’s guardians.  Better yet, the literature to date has found that Turkers are more representative of the general population than you’d expect — and certainly better than college undergrads! Unfortunately, this post at the Monkey Cage provides a data point in the contrary direction:

“On Election Day, we asked 565 Amazon Mechanical Turk (MTurk) workers to take a brief survey on vote choice, ideology and demographics.  . . . We compare MTurk workers on Election Day to actual election results and exit polling.  The survey paid $0.05 and had seven questions:  gender, age, education, income, state of residence, vote choice, and ideology.  Overall, 73% of these MTurk workers voted for Obama, 15% for Romney, and 12% for “Other.”  This is skewed in expected ways, matching the stereotypical image of online IT workers as liberal—or possibly libertarian since 12% voted for a third party in 2012, compared to 1.6 percent of all voters. . .  In sum, the MTurk sample is younger, more male, poorer, and more highly educated than Americans generally.  This matches the image of who you might think would be online doing computer tasks for a small amount of money…”

Food for thought.  What’s strange is that every sample of Turkers I’ve dealt with is older & more female than the general population.  Might it be that Turk workers who responded to a survey on election habits aren’t like the Turk population at large?  Probably so, but that doesn’t make me copacetic.
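For readers who want to put a number on that skew, here is a minimal sketch (my own, not from the Monkey Cage post) of how one might test whether the MTurk respondents’ vote split could plausibly have come from the national electorate. The sample shares come from the quoted post; the national 2012 vote shares used below are approximate and assumed for illustration.

```python
# Goodness-of-fit check: does the MTurk sample's vote split match the 2012 electorate?
# Sample shares (73/15/12) are from the Monkey Cage post quoted above.
# National shares (~51% Obama, ~47% Romney, ~1.7% other) are approximate assumptions.
from scipy.stats import chisquare

n = 565                                      # MTurk respondents on Election Day
observed = [0.73 * n, 0.15 * n, 0.12 * n]    # Obama, Romney, Other in the sample
national = [0.511, 0.472, 0.017]             # assumed national vote shares
expected = [p * n for p in national]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.3g}")
# The statistic is enormous and p is effectively zero: a formal way of saying
# what the post already says -- Turkers who took this survey are nowhere near
# a random sample of 2012 voters.
```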


Convicting the Innocent: A powerful force for change

I also thank Danielle and Brandon for including me in this symposium, and am very happy to join the discussion of four very important works on the state of the criminal justice system in America today.

The reference to the Central Park Five in Danielle’s original post highlights one of the most important qualities of Convicting the Innocent: it uses the powerfully told stories of the exonerated to bring to life the new and important details about the causes of wrongful convictions that Garrett’s research has uncovered. The result is the fullest picture to date of the scope of the “nightmarish reality” that has led to 301 DNA-based exonerations in this country. Convicting the Innocent is not only a great read for lawyers and lay people alike, it is also a powerful tool for bringing about much-needed systemic change. Dan Medwed’s post appropriately asks whether the works being discussed here urge change that is gradual and specific or change that is revolutionary, going to the heart of the adversary system. In the context of eyewitness misidentification – the leading contributing cause of wrongful convictions, occurring in (as Garrett found) 75 percent of the first 250 exonerations – we see great success in effecting change in courts and police precincts alike. Brandon Garrett’s research has been critical to these successful reform efforts.

As the attorney responsible for the Innocence Project‘s work in the area of eyewitness identification, I have relied on Convicting the Innocent in my efforts to educate attorneys, judges and policy makers about the perils of misidentification and the flaws in the current legal framework for evaluating identification evidence at trial that is applied in nearly all jurisdictions in the United States. That legal framework, set forth by the Supreme Court in Manson v. Brathwaite, directs courts to balance the effects of improper police suggestion in identification procedures against certain “reliability factors” – the witness’s opportunity to view the perpetrator, the attention paid by the witness, the witness’s certainty in the identification, the time between the crime and the confrontation, and the accuracy of the witness’s description. (These factors are not exclusive, but most courts treat them as if they are.)

Psychological research in the area of perception and memory has offered conclusive evidence that the identified reliability factors are not well-correlated with accuracy; do not objectively reflect reality to the extent that they are self-reported; and – most critically – are inflated by suggestion, leading to the perverse result that the more suggestive the identification procedure, the higher the measures of reliability under the Manson test.

Garrett’s work in Convicting the Innocent adds an important dimension to the psychological research – and makes even more urgent the call to reform the Manson test – by demonstrating that the Manson test failed in the cases of the 190 exonerees who were convicted based, at least in part, on identification evidence that was either not challenged or admitted as reliable under Manson. Garrett’s work shows just how the Manson reliability factors fail to ensure reliability: in most cases reviewed by Garrett, the witnesses had poor viewing opportunities; had only a few seconds to see the perpetrator’s face, which was often disguised or otherwise obscured; made identifications weeks or months after the crime; and provided descriptions that were substantially different from the wrongly accused’s appearance. In addition, almost all of the witnesses in the cases reviewed by Garrett expressed complete confidence at trial – stating, for example, that “there is absolutely no question in my mind” (Steven Avery’s case) or that “[t]his is the man or it is his twin brother” (Thomas Doswell’s case) – although DNA later proved that these witnesses were entirely wrong. Perhaps most striking of all of Garrett’s research findings in the area of eyewitness misidentification is that in 57 percent of the trials featuring eyewitnesses who were certain at trial, the witnesses had expressed earlier uncertainty (strongly suggesting that the identification was unreliable), yet only 21 percent of them admitted that earlier uncertainty.

The Innocence Project has relied on Garrett’s research in advocating for the reform of the legal framework for evaluating identification evidence in courts around the country, from the U.S. Supreme Court (Perry v. New Hampshire) to state supreme courts from Oregon (State v. Lawson) and Washington (State v. Allen) to New Jersey (State v. Henderson) and Pennsylvania (State v. Walker). In two of these cases – Henderson and Lawson – high courts found that Manson fails to ensure reliability and implemented new legal tests that better reflect the scientific research and, we hope, will better prevent wrongful convictions based on eyewitness misidentification. Both the Henderson and Lawson courts cited Convicting the Innocent in rendering their decisions, demonstrating just how powerful a force for change Garrett’s work is.

 


Convicting the Innocent

[Image: word cloud generated from Ronald Jones’s typed confession statement]

That image is from the false confession of Ronald Jones, a man whose tragic story begins my book, Convicting the Innocent: Where Criminal Prosecutions Go Wrong. In fact, it is an image of his entire false confession, at least the statement that the detectives had typed at the end of eight grueling hours of interrogation in Chicago in the mid-1980s. I turned the statement into a word cloud to illustrate the words that Jones had repeated the most. In his statement, Jones was unfailingly polite, and according to the police stenographer, at least, he responded “Yes, Sir,” as the detectives asked him questions. In reality, he alleged at trial, detectives had brutally threatened him, beaten him, and told him what to say about a crime he did not commit. The jury readily sentenced Jones to death for a brutal rape and murder on Chicago’s South Side.
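For readers curious how such an image is made, here is a rough sketch in Python. The wordcloud library and the transcript file name are assumptions for illustration only, not necessarily how the image above was produced.

```python
# Sketch: build a word cloud from a confession transcript, sizing each word
# by how often it appears in the typed statement.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

with open("jones_confession.txt") as f:   # hypothetical transcript file
    text = f.read()

# Keep common stopwords (stopwords=set()) so that repeated phrases like
# "Yes, Sir" remain visible in the cloud.
cloud = WordCloud(width=800, height=400, background_color="white",
                  stopwords=set()).generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```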

The word cloud shows why the jury put Jones on death row. Some of the most prominent words, after “Yes, Sir,” are key details about the crime scene: that there was a knife, that the murder occurred in the abandoned Crest hotel, that the killer left through a window. Jones protested his innocence at trial, but those facts were powerfully damning. The lead detective testified at trial that Jones had told them in the interrogation room exactly how the victim was assaulted and killed, and had finally signed that confession statement. The detectives said they brought Jones to the crime scene, where Jones supposedly showed them where and how the murder occurred. After his trial, Jones lost all of his appeals. Once DNA testing became possible in the mid-1990s, he was denied testing by a judge so convinced by his confession statement that he remarked, “What issue could possibly be resolved by DNA testing?”

In my book, I examined what went wrong in the first 250 DNA exonerations in the U.S. Jones was exonerated by a post-conviction DNA test. Now we know that his confession, like 40 other DNA exoneree confessions, was not just false, but likely contaminated during a botched interrogation. Now we know that 190 people had eyewitnesses misidentify them, typically due to unsound lineup procedures. Now we know that flawed forensics, in about half of the cases, contributed to a wrongful conviction. Now we know that informants, in over 50 of the cases, lied at trial. Resource pages with data from the book about each of these problems, and with material from these remarkable trials of exonerees, are available online.

Returning to Ronald Jones’s false confession: the Supreme Court has not intervened to regulate the reliability of confessions, such as by asking courts to inquire whether there was contamination, or simply by requiring videotaping so that we know who said what and whether the suspect actually knew the facts of the crime. Typical of its rulings on the reliability of evidence in criminal cases, the Court held in Colorado v. Connelly that though a confession statement “might be proved to be quite unreliable . . . this is a matter to be governed by the evidentiary laws of the forum . . . not by the Due Process Clause of the Fourteenth Amendment.” Preventing wrongful convictions has largely fallen to the states. I end the book with optimism that we are starting to see the stirrings of a criminal justice reform movement.

 



Sticky Law & ORV Use

I’ve been working for some time on an article about how policymakers could and should reduce the law’s transmission costs by developing rules that stick and that are then re-transmitted, passing among citizens without heavy-handed enforcement campaigns. This is different from saying that policymakers should make rules that are merely memorable: the goal is to increase the influence of a rule by making it likely that individuals will spread knowledge of it widely with less government effort. Recently, one of my students, Bill Scarpato, worked on this problem in a particular context: off-road vehicle use on public lands. His draft paper, Don’t Tread on Me: Increasing Compliance with Off-Road Vehicle Use at Least Cost, is up on SSRN. From the abstract:

In a world of diminished enforcement resources, how can environmental regulators get the most bang for their buck? Off-road vehicle use is the fastest growing and most contentious form of recreation on America’s public lands. Motorized recreationists have enjoyed access to National Forests and BLM land for almost a century, but regulators, property owners, and environmental groups have voiced opposition to unconstrained off-road vehicle use. Law enforcement on these lands is underfunded and ineffective, and the individualist culture of off-road vehicle users is said to foster an attitude of non-compliance — trailblazing in the literal sense. Endorsing and building upon work in law and social norms and cognate disciplines, this Article draws principally on the social psychology of effective messaging outlined in Chip and Dan Heath’s 2007 work, Made to Stick, to propose a partnership-based campaign based on the exhortatory theme, “Don’t Tread on Me.”

I think Bill did a nice job of laying out the research and applying it in a creative way to a very hard problem.  Check it out.


An Irrational Undertaking: Why Aren’t We More Rational?

By unanimous reader demand – all one out of one readers voting, as of last week – this post will explore the small topic of the biological basis of “irrationality,” and its implications for law.  Specifically, Ben Daniels of Collective Conscious asked the fascinating question: “What neuro-mechanisms enforce irrational behavior in a rational animal?”

Ben’s question suggests that ostensibly rational human beings often act in irrational ways.  To prove his point, I’m actually going to address his enormous question within a blog post.  I hope you judge the effort valiant, if not complete.

The post will offer two perspectives on whether, as the question asks, we could be more rational than we are if certain “neuro-mechanisms” did not function to impair rationality.  The first view is that greater rationality might be possible – but might not confer greater benefits.  I call this the “anti-Vulcan hypothesis”:  While our affective capacities might suppress some of our computational power, they are precisely what make us both less than perfectly rational and gloriously human – Captain Kirk, rather than Mr. Spock.  A second, related perspective offered by the field of cultural cognition suggests that developmentally-acquired, neurally-ingrained cultural schemas cause people to evaluate new information not abstractly on its merits but in ways that conform to the norms of their social group.  In what I call the “sheep hypothesis,” cultural cognition theory suggests that our rational faculties often serve merely to rationalize facts in ways that fit our group-typical biases.  Yet, whether we are Kirk or Flossie, the implication for law may be the same:  Understanding how affect and rationality interact can allow legal decision-makers to modify legal institutions to favor the relevant ability, modify legal regimes to account for predictable limitations on rationality, and communicate in ways that privilege social affiliations and affective cues as much as factual information.

First, a slight cavil with the question: The question suggests that people are “rational animal[s]” but that certain neurological mechanisms suppress rationality – as if our powerful rational engines were somehow constrained by neural cruise-control. Latent in that question are a factual assumption about how the brain works (more on that later) and a normative inclination to see irrationality as a problem to which rationality is the solution. Yet much recent work on the central role of affect in decision-making suggests that, often, the converse may be true. (Among many others, see Jonathan Haidt and Josh Greene.) Rationality divorced from affect arguably may not even be possible for humans, much less desirable. Indeed, the whole idea of “pure reason” as either a fact or a goal is taking a beating at the hands of researchers in behavioral economics, cognitive neuroscience, and experimental philosophy – and perhaps other fields as well.

Also, since “rational” can mean a lot of things, I’m going to define it as the ability to calculate which behavior under particular circumstances will yield the greatest short-term utility to the actor.  By this measure, people do irrational things all the time: we discount the future unduly, preferring a dollar today to ten dollars next month; we comically misjudge risk, shying away from the safest form of transportation (flying) in favor of the most dangerous (driving); we punish excessively; and the list goes on.

Despite these persistent and universal defects in rationality, experimental data indicates that our brains have the capacity to be more rational than our behaviors would suggest. Apparently, certain strong affective responses interfere with activity in particular regions of the prefrontal cortex (PFC); these areas of the PFC are associated with rationality tasks like sequencing, comparing, and computing. In experiments in which researchers used powerful magnets to temporarily “knock out” activity in limbic (affective) brain regions, otherwise typical subjects displayed savant-like abilities in spatial, visual, and computational skills. This experimental result mimics what anecdotally has been reported in people who display savant abilities following brain injury or disease, and in people with autism spectrum disorders, who may have severe social and affective impairments yet also be savants.

So: Some evidence suggests the human brain may have massively more computing power than we can put to use, because of general (and sometimes acute) affective interference. It may be that social and emotional processing suck up all the bandwidth; or prosocial faculties may suppress activity in computational regions. Further, what rational cognition we can access can be totally swamped by sudden and strong affect. With a nod to Martha Nussbaum, we might call this the “fragility of rationality.”

This fragility may be more boon than bane: Rationality may be fragile because, in many situations, leading with affect might confer a survival advantage. Simple heuristics and “gut” feelings, which are “fast and cheap,” let us respond quickly to complex and potentially dangerous situations. Another evolutionary argument is that all-important social relationships can be disrupted by rational utility-maximizing behaviors – whether you call them free-riders or defectors. To prevent humans from mucking up the enormous survival-enhancing benefits of community, selection would favor prosocial neuroendocrine mechanisms that suppress an individual’s desire to maximize short-term utility. What’s appealing about this argument is that – if true – it means that that which enables us to be human is precisely that which makes us not purely rational. This “anti-Vulcan” hypothesis is very much the thrust of the work by Antonio Damasio, Dan Ariely, and Paul Zak, among many other notable scholars.

An arguably darker view of the relationship between prosociality and rationality comes from cultural cognition theory.  While evolutionary psychology and behavioral economics suggest that people have cognitive quirks as to certain kinds of mental tasks, cultural cognition suggests that people’s major beliefs about the state of the world – the issues that self-governance and democracy depend upon – are largely impervious to rationality.  In place of rationality, people quite unconsciously “conform their beliefs about disputed matters of fact … to values that define their cultural identities.”

On this view, people aren’t just bad at understanding risk and temporal discounting, among other things, because our prosocial adaptations suppress that capacity. Rather, from global warming to gun control, people unconsciously align their assessments of issues to conform to the beliefs and values of their social group. Rationality operates, if at all, post hoc: It allows people to construct rationalizations for relatively fact-independent but socially conforming conclusions. (Note that different cultural groups assign different values to rational forms of thought and inquiry. In a group that highly prizes those activities, pursuing rationally-informed questioning might itself be culturally conforming. Children of academics and knowledge-workers: I’m looking at you.)

This reflexive conformity is not a deliberate choice; it’s quite automatic, feels wholly natural, and is resilient against narrowly rational challenges based in facts and data.  And that this cognitive mode inheres in us makes a certain kind of sense:  Most people face far greater immediate danger from defying their social group than from global warming or gun control policy.  The person who strongly or regularly conflicts with their group becomes a sort of socially stateless person, the exiled persona non grata.

To descend from Olympus to the village: What could this mean for law? Whether we take the heuristics-and-biases approach emerging from behavioral economics and evolutionary psychology or the approach emerging from the cultural cognition project, the social and emotional nature of situated cognition cannot be ignored. I’ll briefly highlight two strategies for “rationalizing” aspects of the legal system to account for our affectively-influenced rationality – one addressing the design of legal institutions and the other addressing how legal and political decisions are communicated to the public.

Oliver Goodenough suggests that research on rational-affective mutual interference should inform how legal institutions are designed. Legal institutions may fall anywhere on a continuum from physical to metaphorical, from court buildings to court systems, to the structure and concept of the jury, to professional norms and conventions. The structures of legal institutions influence how people within them engage in decision-making; certain institutional features may prompt people to bring to bear their more emotive (empathic), social-cognitive (“sheep”), or purely rational (“Vulcan”) capacities.

Goodenough does not claim that more rationality is always better; in some legal contexts, we might collectively value affect – empathy, mercy.  In another, we might value cultural cognition – as when, for example, a jury in a criminal case must determine whether a defendant’s response to alleged provocation falls within the norms of the community.  And in still other contexts, we might value narrow rationality above all.  Understanding the triggers for our various cognitive modes could help address long-standing legal dilemmas.  Jon Hanson’s work on the highly situated and situational nature of decision-making suggests that the physical and social contexts in which deliberation takes place may be crucial to the answers at which we arrive.

Cultural cognition may offer strategies for communicating with the public about important issues.  The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it.  If the primary question a person asks herself is, “What would my community think of this thing?” instead of “What is this thing?”, then very different communication strategies follow:  Facts and information about the thing itself only become meaningful when embedded in information about the thing’s relevance to peoples’ communities.  The cultural cognition project has developed specific recommendations for communication around lawmaking involving gun rights, the death penalty, climate change, and other ostensibly fact-bound but intensely polarizing topics.

To wrap this up by going back to the question: Ben, short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions, we are not going to unleash our latent (narrowly) rational powers. But it’s worth recalling that the historical, and now unpalatable, term for natural savants was “idiot savant”: the phrase itself suggests that, without robust affective and social intelligence – which may make us “irrational” – we’re not very smart at all.


Q&A with Lior Strahilevitz about Information and Exclusion

Lior Strahilevitz, Deputy Dean and Sidley Austin Professor of Law at the University of Chicago Law School, recently published a brilliant new book, Information and Exclusion (Yale University Press 2011). Like all of Lior’s work, the book is creative, thought-provoking, and compelling. There are books that make strong and convincing arguments, and these are good, but then there are the rare books that not only do this but make you think in a different way. That’s what Lior achieves in his book, and that’s quite an achievement.

I recently had the opportunity to chat with Lior about the book. 

Daniel J. Solove (DJS): What drew you to the topic of exclusion?

Lior Jacob Strahilevitz (LJS):  It was an observation I had as a college sophomore.  I lived in the student housing cooperatives at Berkeley.  Some of my friends who lived in the cooperatives told me they felt morally superior to people in the fraternities and sororities because the Greek system had an elaborate, exclusionary rush and pledge process.  The cooperatives, by contrast, were open to any student.  But as I visited friends who lived in the various cooperative houses, the individual houses often seemed no more heterogeneous than the fraternities and sororities.  That made me curious.  It was obvious that the pledging and rushing process – formal exclusion – created homogeneity in the Greek system.  But what was it that was creating all this apparent homogeneity in a cooperative system that was open to everyone?  That question was one I kept wondering about as a law student, lawyer, and professor.

That’s why page 1 of the book begins with a discussion of exclusion in the Greek system. I start with accounts of the rush process by sociologists who studied the proxies that fraternity members used to evaluate pledges in the 1950s (attire, diction, grooming, firm handshakes, etc.). The book then brings us to the modern era, when fraternity members peruse Facebook profiles that provide far more granular information about the characteristics of each pledge. Proxies still matter, but the proxies are different, and those differences alter the ways in which rushing students behave and fraternities exclude.

DJS: What is the central idea in your book?

LJS: The core idea is that asymmetric information largely determines which mechanisms are used to exclude people from particular groups, collective resources, and services.  When the person who controls a resource knows a lot about the people who wish to use it, she will make decisions about who gets to access it.  Where she lacks that information, she’ll develop a strategy that forces particular groups to exclude themselves from the resource, based on some criteria.  There’s a historical ebb and flow between these two sorts of strategies for exclusion, but we seem to be in a critical transition period right now thanks to the decline of practical obscurity in the information age.


Auditing Studies of Anti-Depressants

Marcia Angell has kicked off another set of controversies for the pharmaceutical sector in two recent review essays in the New York Review of Books. She favorably reviews meta-research that calls into question the effectiveness of many antidepressant drugs:

Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999—Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. . . .Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.

Angell discusses other research that indicates that placebos can often be nearly as effective as drugs for conditions like depression. Psychiatrist Peter Kramer, a long-time advocate of anti-depressant therapy, responded to her last Sunday. He admits that “placebo responses . . . have been steadily on the rise” in FDA data; “in some studies, 40 percent of subjects not receiving medication get better.” But he believes that is only because the studies focus on the mildly depressed:

The problem is so big that entrepreneurs have founded businesses promising to identify genuinely ill research subjects. The companies use video links to screen patients at central locations where (contrary to the practice at centers where trials are run) reviewers have no incentives for enrolling subjects. In early comparisons, off-site raters rejected about 40 percent of subjects who had been accepted locally — on the ground that those subjects did not have severe enough symptoms to qualify for treatment. If this result is typical, many subjects labeled mildly depressed in the F.D.A. data don’t have depression and might well respond to placebos as readily as to antidepressants.

Yves Smith finds Kramer’s response unconvincing:

The research is clear: the efficacy of antidepressants is (contrary to what [Kramer’s] article suggests) lower than most drugs (70% is a typical efficacy rate; for antidepressants, it’s about 50%. The placebo rate is 20% to 30% for antidepressants). And since most antidepressants produce side effects, patients in trials can often guess successfully as to whether they are getting real drugs. If a placebo is chosen that produces a symptom, say dry mouth, the efficacy of antidepressants v. placebos is almost indistinguishable. The argument made in [Kramer’s] article to try to deal with this inconvenient fact, that many of the people chosen for clinical trials really weren’t depressed (thus contending that the placebo effect was simply bad sampling) is utter[ly wrong]. You’d see the mildly/short-term depressed people getting both placebos and real drugs. You would therefore expect to see the efficacy rate of both the placebo and the real drug boosted by the inclusion of people who just happened to get better anyhow.

Felix Salmon also challenges Kramer’s logic:

[Kramer’s view is that] lots of people were diagnosed with depression and put onto a trial of antidepressant drugs, even when they were perfectly healthy. Which sounds very much like the kind of thing that Angell is complaining about: the way in which, for instance, the number of children so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) was 35 times higher in 2007 than it was in 1987. And it’s getting worse: the editors of DSM-V, to be published in 2013, have written that “in primary care settings, approximately 30 percent to 50 percent of patients have prominent mental health symptoms or identifiable mental disorders, which have significant adverse consequences if left untreated.”

Those who would defend psychopharmacology, then, seem to want to have their cake and eat it: on the one hand it seems that serious mental health disorders have reached pandemic proportions, but on the other hand we’re told that a lot of people diagnosed with those disorders never really had them in the first place.

That is a very challenging point for the industry to consider as it responds to concerns like Angell’s. The diagnosis of mental illness will always have ineradicably economic dimensions and politically contestable aims. But doctors and researchers should insulate professional expertise and the interpretation of maladies as much as possible from inappropriate pressures.

How can they maintain that kind of independent clinical judgment? I think one key is to ensure that data from all trials are open to all researchers. Consider, for instance, these findings from a NEJM study on “selective publication”:

We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. . . . Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall. (emphasis added).
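The arithmetic behind those two headline rates (94 percent versus 51 percent) can be reproduced directly from the counts in the quoted passage; here is a quick sketch:

```python
# Back-of-the-envelope check of the NEJM selective-publication figures,
# using only the counts quoted above (74 FDA-registered antidepressant trials).
fda_positive      = 37 + 1   # positive per FDA: 37 published, 1 not published
neg_unpublished   = 22       # negative/questionable, never published
neg_spun_positive = 11       # negative/questionable, published as if positive
neg_published     = 3        # negative/questionable, published as such

total = fda_positive + neg_unpublished + neg_spun_positive + neg_published
published = 37 + neg_spun_positive + neg_published
appear_positive = 37 + neg_spun_positive

print(f"FDA view:       {fda_positive / total:.0%} of {total} trials positive")
print(f"Published view: {appear_positive / published:.0%} of {published} papers positive")
# Prints roughly 51% of 74 trials vs. 94% of 51 papers, matching the study's figures.
```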

Melander et al. also worried (in 2003) that, since “The degree of multiple publication, selective publication, and selective reporting differed between products,” “any attempt to recommend a specific selective serotonin reuptake inhibitor from the publicly available data only is likely to be based on biased evidence.” Without clearer “best practices” for data publication, clinical judgment may be impaired.

Full disclosure of study funding should also be mandatory and conspicuous, wherever results are published. Ernest R. House has reported that, “In a study of 370 ‘randomized’ drug trials, studies recommended the experimental drug as the ‘treatment of choice’ in 51% of trials sponsored by for-profit organizations compared to 16% sponsored by nonprofits.” The commodification of research has made it too easy to manipulate results, as Bartlett & Steele have argued:

One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval. There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called “rescue countries.” Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections. Ketek was developed in the 1990s by Aventis Pharmaceuticals, now Sanofi-Aventis. In 2004 . . . the F.D.A. certified Ketek as safe and effective. The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey.

The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data. . . . As the months ticked by, and the number of people taking the drug climbed steadily, the F.D.A. began to get reports of adverse reactions, including serious liver damage that sometimes led to death. . . . [C]ritics were especially concerned about an ongoing trial in which 4,000 infants and children, some as young as six months, were recruited in more than a dozen countries for an experiment to assess Ketek’s effectiveness in treating ear infections and tonsillitis. The trial had been sanctioned over the objections of the F.D.A.’s own reviewers. . . . In 2006, after inquiries from Congress, the F.D.A. asked Sanofi-Aventis to halt the trial. Less than a year later, one day before the start of a congressional hearing on the F.D.A.’s approval of the drug, the agency suddenly slapped a so-called black-box warning on the label of Ketek, restricting its use. (A black-box warning is the most serious step the F.D.A. can take short of removing a drug from the market.) By then the F.D.A. had received 93 reports of severe adverse reactions to Ketek, resulting in 12 deaths.

The great anti-depressant debate is part of a much larger “re-think” of the validity of data. Medical claims can spread virally without much evidence. According to a notable meta-researcher, “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.” The “decline effect” dogs science generally. Statisticians are also debunking ballyhooed efforts to target cancer treatments.

Max Weber once said that “radical doubt is the father of knowledge.” Perhaps DSM-VI will include a diagnosis for such debilitating skepticism. But I think there’s much to be learned from an insistence that true science is open, inspectable, and replicable. Harvard’s program on “Digital Scholarship” and the Yale Roundtable on Data and Code Sharing* have taken up this cause, as has the work of Victoria Stodden.

We often hear that the academic sector has to become more “corporate” if it is to survive and thrive. At least when it comes to health data, the reverse is true: corporations must become much more open about the sources and limits of the studies they conduct. We can’t resolve the “great anti-depressant debate,” or prevent future questioning of pharma’s bona fides, without such commitments.

*In the spirit of full disclosure: I did participate in this roundtable.

X-Posted: Health Law Profs Blog.


Facts, Values and Circumcision

There’s a flurry of coverage about proposed anti-circumcision initiatives in California. (Sullivan, Volokh.) The posts I’ve been reading – and, granted, I’ve not read the whole field – have taken this issue oddly seriously. After all, these are merely (actual or proposed) ballot initiatives that haven’t been approved by the voters. Even if they were approved, their constitutionality wouldn’t (contra Volokh) be determined by existing precedent. In my view, this is a slam-dunk example of an overdetermined constitutional issue.

But there’s another aspect of this fight that is, I think, worth some extended comment. As Sarah has pointed out on this blog, anti- and pro-circumcision advocates generally fight about circumcision’s health effects, and resist attacking (or defending) it as a cultural practice. To me, this looks quite like other contests in our society in which nominally empirical debates predominate — the fight over the HPV vaccine, gay and lesbian parenting, nanotechnology, global warming, etc. The Cultural Cognition project illustrates that these fights very often appear to be about facts, but that expressed conclusions about the “facts” and “risks” involved follow our less-conscious values. Moreover, though we can perceive this tendency in others, we deny it in ourselves. This is the phenomenon of naive realism. What results? We come to believe that the people we disagree with in these value-laden fights (i.e., people who deny the health benefits of circumcision) are arguing in bad faith. They think the same of us. Winning, in the world of policy, becomes an exercise not just in defeating our opponents’ values but in denying that those values are even at play. I am pretty sure that if we tested this hypothesis in the circumcision debate, we’d see a very strong set of cultural priors influencing how partisans interpret and process the medical risk facts about circumcision, whether the American Academy of Pediatrics is vouching for those facts or not.

This leads to a concrete piece of advice for Andrew Sullivan and other hot-tempered advocates on either side of this fight: Cool it. Stop inciting fights with question-begging terms like “male genital mutilation.” Instead, affirm the values of those you disagree with by making clear that this isn’t – at root – a debate that can be resolved by reference to empirical facts. It’s (as Sarah has insightfully pointed out) a discussion about cultural practices, and the degree to which the greater society has the right to change them.

For what it’s worth, my view is that the government has about as much of a moral right to prohibit circumcision as it does to tell me that I must eat broccoli.


UCLA Law Review Vol. 58, Issue 3 (February 2011)

Volume 58, Issue 3 (February 2011)


Articles

Good Faith and Law Evasion – Samuel W. Buell – 611
Making Sovereigns Indispensable: Pimentel and the Evolution of Rule 19 – Katherine Florey – 667
The Need for a Research Culture in the Forensic Sciences – Jennifer L. Mnookin et al. – 725
Commentary on The Need for a Research Culture in the Forensic Sciences – Joseph P. Bono – 781
Commentary on The Need for a Research Culture in the Forensic Sciences – Judge Nancy Gertner – 789
Commentary on The Need for a Research Culture in the Forensic Sciences – Pierre Margot – 795


Comments

What’s Your Position? Amending the Bankruptcy Disclosure Rules to Keep Pace With Financial Innovation – Samuel M. Kidder – 803
Defendant Class Actions and Patent Infringement Litigation – Matthew K. K. Sumida – 843