Archive for the ‘Law and Psychology’ Category
posted by Dave Hoffman
Like many of you, I’ve been horrified by the events in Newtown, and dismayed by the debate that has followed. Josh Marshall (at TPM) thinks that “this is quickly veering from the merely stupid to a pretty ugly kind of victim-blaming.” Naive realism, meet thy kettle! Contrary to what you’ll see on various liberal outlets, the NRA didn’t cause Adam Lanza to kill innocent children and adults, nor did Alan Gura or the army of academics who helped to build the case for an individual right to gun ownership. Reading discussions on the web, you might come to believe that we don’t all share the goal of a society where the moral order is preserved, and where our children can be put on the bus to school without a qualm.
But we do.
We just disagree about how to make it happen.
Dan Kahan’s post on the relationship between “the gun debate”, “gun deaths”, and Newtown is thus very timely. Dan argues that if we really wanted to decrease gun deaths, we should try legalizing drugs. (I’d argue, following Bill Stuntz, that we also/either would hire many more police while returning much more power to local control). But decreasing gun deaths overall won’t (probably) change the likelihood of events like these:
“But here’s another thing to note: these very sad incidents “represent only a sliver of America’s overall gun violence.” Those who are appropriately interested in reducing gun homicides generally and who are (also appropriately) making this tragedy the occasion to discuss how we as a society can and must do more to make our citizens safe, and who are, in the course of making their arguments, invoking (appropriately!) the overall gun homicide rate, should be focusing on what can be done most directly and feasibly to save the most lives.
Repealing drug laws would do more — much, much, much more — than banning assault rifles (a measure I would agree is quite appropriate); barring carrying of concealed handguns in public (I’d vote for that in my state, if after hearing from people who felt differently from me, I could give an account of my position that fairly meets their points and doesn’t trade on tacit hostility toward or mere incomprehension of whatever contribution owning a gun makes to their experience of a meaningful free life); closing the “gun show” loophole; extending waiting periods etc. Or at least there is evidence for believing that, and we are entitled to make policy on the best understanding we can form of how the world works so long as we are open to new evidence and aren’t otherwise interfering with liberties that we ought, in a liberal society, to respect.”
Dan’s post is trying to productively redirect our public debate, and I wanted to use this platform to bring more attention to his point. But, I think he’s missing something, and if you follow me after the jump, I’ll tell you what.
posted by Dave Hoffman
Like many others, I’ve been using Amazon Mechanical Turk to recruit subjects for law & psychology experiments. Turk is (i) cheap; (ii) fast; (iii) easy to use; and (iv) not controlled by the psychology department’s guardians. Better yet, the literature to date has found that Turkers are more representative of the general population than you’d expect — and certainly better than college undergrads! Unfortunately, this post at the Monkey Cage provides a data point in the contrary direction:
“On Election Day, we asked 565 Amazon Mechanical Turk (MTurk) workers to take a brief survey on vote choice, ideology and demographics. . . . We compare MTurk workers on Election Day to actual election results and exit polling. The survey paid $0.05 and had seven questions: gender, age, education, income, state of residence, vote choice, and ideology. Overall, 73% of these MTurk workers voted for Obama, 15% for Romney, and 12% for “Other.” This is skewed in expected ways, matching the stereotypical image of online IT workers as liberal—or possibly libertarian since 12% voted for a third party in 2012, compared to 1.6 percent of all voters. . . In sum, the MTurk sample is younger, more male, poorer, and more highly educated than Americans generally. This matches the image of who you might think would be online doing computer tasks for a small amount of money…”
Food for thought. What’s strange is that every sample of Turkers I’ve dealt with is older & more female than the general population. Might it be that Turk workers who responded to a survey on election habits aren’t like the Turk population at large? Probably so, but that doesn’t make me copacetic.
posted by Karen Newirth
I also thank Danielle and Brandon for including me in this symposium, and am very happy to join the discussion of four very important works on the state of the criminal justice system in America today.
The reference to the Central Park Five in Danielle’s original post highlights one of the most important qualities of Convicting the Innocent: it uses the powerfully told stories of the exonerated to bring to life the new and important detail about the causes of wrongful convictions that Garrett’s research has uncovered. The result is the fullest picture to date of the scope of the “nightmarish reality” that has led to 301 DNA-based exonerations in this country. Convicting the Innocent is not only a great read for lawyers and lay people alike, it is also a powerful tool for bringing about much-needed systemic change. Dan Medwed’s post appropriately asks whether the works being discussed here urge change that is gradual and specific or change that is revolutionary, going to the heart of the adversary system. In the context of eyewitness misidentification – the leading contributing cause of wrongful convictions, occurring in (as Garrett found) 75 percent of the first 250 exonerations – we see great success in effecting change in both courts and police precincts alike. Brandon Garrett’s research has been critical to these successful reform efforts.
As the attorney responsible for the Innocence Project‘s work in the area of eyewitness identification, I have relied on Convicting the Innocent in my efforts to educate attorneys, judges and policy makers about the perils of misidentification and the flaws in the current legal framework for evaluating identification evidence at trial that is applied in nearly all jurisdictions in the United States. That legal framework, set forth by the Supreme Court in Manson v. Brathwaite, directs courts to balance the effects of improper police suggestion in identification procedures with certain “reliability factors” – the witness’s opportunity to view the perpetrator, the attention paid by the witness, the witness’s certainty in the identification, the time between the crime and confrontation and the accuracy of the witness’s description. (These factors are not exclusive, but most courts treat them as if they are.)
Psychological research in the area of perception and memory has offered conclusive evidence that the identified reliability factors are not well-correlated with accuracy; do not objectively reflect reality to the extent that they are self-reported; and – most critically – are inflated by suggestion, leading to the perverse result that the more suggestive the identification procedure, the higher the measures of reliability under the Manson test.
Garrett’s work in Convicting the Innocent adds an important dimension to the psychological research – and makes even more urgent the call to reform the Manson test – by demonstrating that the Manson test failed in the cases of the 190 exonerees who were convicted based, at least in part, on identification evidence that was either not challenged or admitted as reliable under Manson. Garrett’s work shows just how the Manson reliability factors fail to ensure reliability: in most cases reviewed by Garrett, the witnesses had poor viewing opportunities; had only a few seconds to see the perpetrator’s face, which was often disguised or otherwise obscured; made identifications weeks or months after the crime; and provided descriptions that were substantially different from the wrongly accused’s appearance. In addition, almost all of the witnesses in the cases reviewed by Garrett expressed complete confidence at trial – stating for example that “there is absolutely no question in my mind” (Steven Avery’s case); that “[t]his is the man or it is his twin brother” (Thomas Doswell’s case) – although DNA later proved that these witnesses were entirely wrong. Perhaps most striking of all of Garrett’s research findings in the area of eyewitness misidentification is that in 57 percent of the trials with certain eyewitnesses, the witnesses had expressed earlier uncertainty (strongly suggesting that the identification was unreliable), but only 21 percent of these witnesses admitted their earlier uncertainty.
The Innocence Project has relied on Garrett’s research in advocating for the reform of the legal framework for evaluating identification evidence in courts around the country, from the U.S. Supreme Court (Perry v. New Hampshire) to state supreme courts from Oregon (State v. Lawson) and Washington (State v. Allen) to New Jersey (State v. Henderson) and Pennsylvania (State v. Walker). In two of these cases – Henderson and Lawson – high courts found that Manson fails to ensure reliability and implemented new legal tests that better reflect the scientific research and, we hope, will better prevent wrongful convictions based on eyewitness misidentification. Both the Henderson and Lawson courts cited Convicting the Innocent in rendering their decisions, demonstrating just how powerful a force for change Garrett’s work is.
posted by Brandon Garrett
That image is from the false confession of Ronald Jones, a man whose tragic story begins my book, Convicting the Innocent: Where Criminal Prosecutions Go Wrong. In fact, it is an image of his entire false confession, at least the statement that the detectives had typed at the end of eight grueling hours of interrogation in Chicago in the mid-1980s. I turned the statement into a word cloud to illustrate the words that Jones had repeated the most. In his statement, Jones was unfailingly polite, and according to the police stenographer, at least, he responded “Yes, Sir,” as the detectives asked him questions. In reality, he alleged at trial, detectives had brutally threatened him, beat him, and told him what to say about a crime he did not commit. The jury readily sentenced Jones to death for a brutal rape and murder on Chicago’s South Side.
The word cloud shows why the jury put Jones on death row. Some of the most prominent words, after “Yes, Sir,” are key details about the crime scene: that there was a knife, that the murder occurred in the abandoned Crest hotel, that the killer left through a window. Jones protested his innocence at trial, but those facts were powerfully damning. The lead detective testified at trial that Jones told them in the interrogation room exactly how the victim was assaulted and killed, and finally signed that confession statement. The detectives said they brought Jones to the crime scene, where Jones supposedly showed them where and how the murder occurred. After his trial, Jones lost all of his appeals. Once DNA testing was possible in the mid-1990s, he was denied DNA testing by a judge who was so convinced by his confession statement that he remarked, “What issue could possibly be resolved by DNA testing?”
In my book, I examined what went wrong in the first 250 DNA exonerations in the U.S. Jones was exonerated by a post-conviction DNA test. Now we know that his confession, like 40 other DNA exoneree confessions, was not just false, but likely contaminated during a botched interrogation. Now we know that 190 people had eyewitnesses misidentify them, typically due to unsound lineup procedures. Now we know that flawed forensics, in about half of the cases, contributed to a wrongful conviction. Now we know that informants, in over 50 of the cases, lied at trial. Resource pages with data from the book about each of these problems, and with material from these remarkable trials of exonerees, are available online.
Returning to Ronald Jones’ false confession, the Supreme Court has not intervened to regulate the reliability of confessions, such as by asking courts to inquire whether there was contamination, or simply requiring videotaping so that we know who said what and whether the suspect actually knew the actual facts of the crime. Typical of its rulings on the reliability of evidence in criminal cases, the Court held in Colorado v. Connelly that though a confession statement “might be proved to be quite unreliable . . . this is a matter to be governed by the evidentiary laws of the forum . . . not by the Due Process Clause of the Fourteenth Amendment.” Preventing wrongful convictions has largely fallen on the states. I end the book with optimism that we are starting to see stirrings of a criminal justice reform movement.
posted by Dave Hoffman
I’ve been working for some time on an article about how policymakers could and should reduce the law’s transmission costs by developing rules which stick and which are then re-transmitted and thus are passed among citizens without heavy-handed enforcement campaigns. This is different from saying that policymakers should make rules which are merely memorable: the goal is to increase the influence of the rule by making it likely that individuals will spread knowledge of it widely with less government effort. Recently, one of my students, Bill Scarpato, worked on this problem in a particular context: off-road vehicle use on public lands. His draft paper, Don’t Tread on Me: Increasing Compliance with Off-Road Vehicle Use at Least Cost is up on ssrn. From the abstract:
In a world of diminished enforcement resources, how can environmental regulators get the most bang for their buck? Off-road vehicle use is the fastest growing and most contentious form of recreation on America’s public lands. Motorized recreationists have enjoyed access to National Forests and BLM land for almost a century, but regulators, property owners, and environmental groups have voiced opposition to unconstrained off-road vehicle use. Law enforcement on these lands is underfunded and ineffective, and the individualist culture of off-road vehicle users is said to foster an attitude of non-compliance — trailblazing in the literal sense. Endorsing and building upon work in law and social norms and cognate disciplines, this Article draws principally on the social psychology of effective messaging outlined in Chip and Dan Heath’s 2007 work, Made to Stick, to propose a partnership-based campaign based on the exhortatory theme, “Don’t Tread on Me.”
I think Bill did a nice job of laying out the research and applying it in a creative way to a very hard problem. Check it out.
posted by Amanda Pustilnik
By unanimous reader demand – all one out of one readers voting, as of last week – this post will explore the small topic of the biological basis of “irrationality,” and its implications for law. Specifically, Ben Daniels of Collective Conscious asked the fascinating question: “What neuro-mechanisms enforce irrational behavior in a rational animal?”
Ben’s question suggests that ostensibly rational human beings often act in irrational ways. To prove his point, I’m actually going to address his enormous question within a blog post. I hope you judge the effort valiant, if not complete.
The post will offer two perspectives on whether, as the question asks, we could be more rational than we are if certain “neuro-mechanisms” did not function to impair rationality. The first view is that greater rationality might be possible – but might not confer greater benefits. I call this the “anti-Vulcan hypothesis”: While our affective capacities might suppress some of our computational power, they are precisely what make us both less than perfectly rational and gloriously human – Captain Kirk, rather than Mr. Spock. A second, related perspective offered by the field of cultural cognition suggests that developmentally-acquired, neurally-ingrained cultural schemas cause people to evaluate new information not abstractly on its merits but in ways that conform to the norms of their social group. In what I call the “sheep hypothesis,” cultural cognition theory suggests that our rational faculties often serve merely to rationalize facts in ways that fit our group-typical biases. Yet, whether we are Kirk or Flossie, the implication for law may be the same: Understanding how affect and rationality interact can allow legal decision-makers to modify legal institutions to favor the relevant ability, modify legal regimes to account for predictable limitations on rationality, and communicate in ways that privilege social affiliations and affective cues as much as factual information.
First, a slight cavil with the question: The question suggests that people are “rational animal[s]” but that certain neurological mechanisms suppress rationality – as if our powerful rational engines were somehow constrained by neural cruise-control. Latent in that question are a factual assumption about how the brain works (more on that later) and a normative inclination to see irrationality as a problem to which rationality is the solution. Yet, much recent work on the central role of affect in decision-making suggests that, often, the converse may be true. (Among many others, see Jonathan Haidt and Josh Greene; these links will open PDF articles in a new window.) Rationality divorced from affect arguably may not even be possible for humans, much less desirable. Indeed, the whole idea of “pure reason” as either a fact or a goal is taking a beating at the hands of researchers in behavioral economics, cognitive neuroscience, and experimental philosophy – and perhaps other fields as well.
Also, since “rational” can mean a lot of things, I’m going to define it as the ability to calculate which behavior under particular circumstances will yield the greatest short-term utility to the actor. By this measure, people do irrational things all the time: we discount the future unduly, preferring a dollar today to ten dollars next month; we comically misjudge risk, shying away from the safest form of transportation (flying) in favor of the most dangerous (driving); we punish excessively; and the list goes on.
Despite these persistent and universal defects in rationality, experimental data indicates that our brains have the capacity to be more rational than our behaviors would suggest. Apparently, certain strong affective responses interfere with activity in particular regions of the prefrontal cortex (pfc); these areas of the pfc are associated with rationality tasks like sequencing, comparing, and computing. In experiments in which researchers used powerful magnets to temporarily “knock out” activity in limbic (affective) brain regions, otherwise typical subjects displayed savant-like abilities in spatial, visual, and computational skills. This experimental result mimics what anecdotally has been reported in people who display savant abilities following brain injury or disease, and in people with autism spectrum disorders, who may have severe social and affective impairments yet also be savants.
So: Some evidence suggests the human brain may have massively more computing power than we can put to use because of general (and sometimes acute) affective interference. It may be that social and emotional processing suck up all the bandwidth; or, prosocial faculties may suppress activity in computational regions. Further, the rational cognition we can access can be totally swamped by sudden and strong affect. With a nod to Martha Nussbaum, we might call this the “fragility of rationality.”
This fragility may be more boon than bane: Rationality may be fragile because, in many situations, leading with affect might confer a survival advantage. Simple heuristics and “gut” feelings, which are “fast and cheap,” let us respond quickly to complex and potentially dangerous situations. Another evolutionary argument is that all-important social relationships can be disrupted by rational utility-maximizing behaviors – whether you call them free-riders or defectors. To prevent humans from mucking up the enormous survival-enhancing benefits of community, selection would favor prosocial neuroendocrine mechanisms that suppress an individual’s desire to maximize short-term utility. What’s appealing about this argument is that – if true – it means that that which enables us to be human is precisely that which makes us not purely rational. This “anti-Vulcan” hypothesis is very much the thrust of the work by Antonio Damasio (and here), Dan Ariely, and Paul Zak, among many other notable scholars.
An arguably darker view of the relationship between prosociality and rationality comes from cultural cognition theory. While evolutionary psychology and behavioral economics suggest that people have cognitive quirks as to certain kinds of mental tasks, cultural cognition suggests that people’s major beliefs about the state of the world – the issues that self-governance and democracy depend upon – are largely impervious to rationality. In place of rationality, people quite unconsciously “conform their beliefs about disputed matters of fact … to values that define their cultural identities.”
On this view, people aren’t just bad at understanding risk and temporal discounting, among other things, because our prosocial adaptations suppress it. Rather, from global warming to gun control, people unconsciously align their assessments of issues to conform to the beliefs and values of their social group. Rationality operates, if at all, post hoc: It allows people to construct rationalizations for relatively fact-independent but socially conforming conclusions. (Note that different cultural groups assign different values to rational forms of thought and inquiry. In a group that highly prizes those activities, pursuing rationally-informed questioning might itself be culturally conforming. Children of academics and knowledge-workers: I’m looking at you.)
This reflexive conformity is not a deliberate choice; it’s quite automatic, feels wholly natural, and is resilient against narrowly rational challenges based in facts and data. And that this cognitive mode inheres in us makes a certain kind of sense: Most people face far greater immediate danger from defying their social group than from global warming or gun control policy. The person who strongly or regularly conflicts with their group becomes a sort of socially stateless person, the exiled persona non grata.
To descend from Olympus to the village: What could this mean for law? Whether we take the heuristics and biases approach emerging from behavioral economics and evolutionary psychology or the cultural cognition approach emerging from that field, the social and emotional nature of situated cognition cannot be ignored. I’ll briefly highlight two strategies for “rationalizing” aspects of the legal system to account for our affectively-influenced rationality – one addressing the design of legal institutions and the other addressing how legal and political decisions are communicated to the public.
Oliver Goodenough suggests that research on rational-affective mutual interference should inform how legal institutions are designed. Legal institutions may be anywhere on a continuum from physical to metaphorical, from court buildings to court systems, to the structure and concept of the jury, to professional norms and conventions. The structures of legal institutions influence how people within them engage in decision-making; certain institutional features may prompt people to bring to bear their more emotive (empathic), social-cognitive (“sheep”), or purely rational (“Vulcan”) capacities.
Goodenough does not claim that more rationality is always better; in some legal contexts, we might collectively value affect – empathy, mercy. In another, we might value cultural cognition – as when, for example, a jury in a criminal case must determine whether a defendant’s response to alleged provocation falls within the norms of the community. And in still other contexts, we might value narrow rationality above all. Understanding the triggers for our various cognitive modes could help address long-standing legal dilemmas. Jon Hanson’s work on the highly situated and situational nature of decision-making suggests that the physical and social contexts in which deliberation takes place may be crucial to the answers at which we arrive.
Cultural cognition may offer strategies for communicating with the public about important issues. The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it. If the primary question a person asks herself is, “What would my community think of this thing?” instead of “What is this thing?”, then very different communication strategies follow: Facts and information about the thing itself only become meaningful when embedded in information about the thing’s relevance to peoples’ communities. The cultural cognition project has developed specific recommendations for communication around lawmaking involving gun rights, the death penalty, climate change, and other ostensibly fact-bound but intensely polarizing topics.
To wrap this up by going back to the question: Ben, short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions, we are not going to unleash our latent (narrowly) rational powers. But it’s worth recalling that the historical, and now unpalatable term, for natural savants used to be “idiot-savant”: This phrase itself suggests that, without robust affective and social intelligence – which may make us “irrational” – we’re not very smart at all.
October 16, 2011 at 2:25 am · Tags: cultural cognition, emotion & cognition, irrationality, law & neuroscience, rationality · Posted in: Behavioral Law and Economics, Law and Psychology, Legal Theory, Philosophy of Social Science
posted by Daniel Solove
Lior Strahilevitz, Deputy Dean and Sidley Austin Professor of Law at the University of Chicago Law School recently published a brilliant new book, Information and Exclusion (Yale University Press 2011). Like all of Lior’s work, the book is creative, thought-provoking, and compelling. There are books that make strong and convincing arguments, and these are good, but then there are the rare books that not only do this, but make you think in a different way. That’s what Lior achieves in his book, and that’s quite an achievement.
I recently had the opportunity to chat with Lior about the book.
Daniel J. Solove (DJS): What drew you to the topic of exclusion?
Lior Jacob Strahilevitz (LJS): It was an observation I had as a college sophomore. I lived in the student housing cooperatives at Berkeley. Some of my friends who lived in the cooperatives told me they felt morally superior to people in the fraternities and sororities because the Greek system had an elaborate, exclusionary rush and pledge process. The cooperatives, by contrast, were open to any student. But as I visited friends who lived in the various cooperative houses, the individual houses often seemed no more heterogeneous than the fraternities and sororities. That made me curious. It was obvious that the pledging and rushing process – formal exclusion – created homogeneity in the Greek system. But what was it that was creating all this apparent homogeneity in a cooperative system that was open to everyone? That question was one I kept wondering about as a law student, lawyer, and professor.
That’s why page 1 of the book begins with a discussion of exclusion in the Greek system. I start with accounts of the rush process by sociologists who studied the proxies that fraternity members used to evaluate pledges in the 1950s (attire, diction, grooming, firm handshakes, etc.). The book then brings us to the modern era, when fraternity members peruse Facebook profiles that provide far more granular information about the characteristics of each pledge. Proxies still matter, but the proxies are different, and those differences alter the ways in which rushing students behave and fraternities exclude.
DJS: What is the central idea in your book?
LJS: The core idea is that asymmetric information largely determines which mechanisms are used to exclude people from particular groups, collective resources, and services. When the person who controls a resource knows a lot about the people who wish to use it, she will make decisions about who gets to access it. Where she lacks that information, she’ll develop a strategy that forces particular groups to exclude themselves from the resource, based on some criteria. There’s a historical ebb and flow between these two sorts of strategies for exclusion, but we seem to be in a critical transition period right now thanks to the decline of practical obscurity in the information age.
posted by Frank Pasquale
Marcia Angell has kicked off another set of controversies for the pharmaceutical sector in two recent review essays in the New York Review of Books. She favorably reviews meta-research that calls into question the effectiveness of many antidepressant drugs:
Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999—Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. . . . Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.
Angell discusses other research that indicates that placebos can often be nearly as effective as drugs for conditions like depression. Psychiatrist Peter Kramer, a long-time advocate of anti-depressant therapy, responded to her last Sunday. He admits that “placebo responses . . . have been steadily on the rise” in FDA data; “in some studies, 40 percent of subjects not receiving medication get better.” But he believes that is only because the studies focus on the mildly depressed:
The problem is so big that entrepreneurs have founded businesses promising to identify genuinely ill research subjects. The companies use video links to screen patients at central locations where (contrary to the practice at centers where trials are run) reviewers have no incentives for enrolling subjects. In early comparisons, off-site raters rejected about 40 percent of subjects who had been accepted locally — on the ground that those subjects did not have severe enough symptoms to qualify for treatment. If this result is typical, many subjects labeled mildly depressed in the F.D.A. data don’t have depression and might well respond to placebos as readily as to antidepressants.
Yves Smith finds Kramer’s response unconvincing:
The research is clear: the efficacy of antidepressants is (contrary to what [Kramer's] article suggests) lower than most drugs (70% is a typical efficacy rate; for antidepressants, it’s about 50%. The placebo rate is 20% to 30% for antidepressants). And since most antidepressants produce side effects, patients in trials can often guess successfully as to whether they are getting real drugs. If a placebo is chosen that produces a symptom, say dry mouth, the efficacy of antidepressants v. placebos is almost indistinguishable. The argument made in [Kramer's] article to try to deal with this inconvenient fact, that many of the people chosen for clinical trials really weren’t depressed (thus contending that the placebo effect was simply bad sampling) is utter[ly wrong]. You’d see the mildly/short-term depressed people getting both placebos and real drugs. You would therefore expect to see the efficacy rate of both the placebo and the real drug boosted by the inclusion of people who just happened to get better anyhow.
Felix Salmon also challenges Kramer’s logic:
[Kramer's view is that] lots of people were diagnosed with depression and put onto a trial of antidepressant drugs, even when they were perfectly healthy. Which sounds very much like the kind of thing that Angell is complaining about: the way in which, for instance, the number of children so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) was 35 times higher in 2007 than it was in 1987. And it’s getting worse: the editors of DSM-V, to be published in 2013, have written that “in primary care settings, approximately 30 percent to 50 percent of patients have prominent mental health symptoms or identifiable mental disorders, which have significant adverse consequences if left untreated.”
Those who would defend psychopharmacology, then, seem to want to have their cake and eat it: on the one hand it seems that serious mental health disorders have reached pandemic proportions, but on the other hand we’re told that a lot of people diagnosed with those disorders never really had them in the first place.
That is a very challenging point for the industry to consider as it responds to concerns like Angell’s. The diagnosis of mental illness will always have ineradicably economic dimensions and politically contestable aims. But doctors and researchers should insulate professional expertise and the interpretation of maladies as much as possible from inappropriate pressures.
How can they maintain that kind of independent clinical judgment? I think one key is to ensure that data from all trials are open to all researchers. Consider, for instance, these findings from a NEJM study on “selective publication”:
We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. . . . Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall. (emphasis added).
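The quoted figures hang together arithmetically; here is a back-of-the-envelope check (the breakdown below is inferred from the excerpt, not taken from the study's own tables):

```python
# Reconstructing the FDA-vs-journal percentages from the quoted counts.
total = 74
positive = 38                 # 37 published + 1 unpublished positive study
negative = total - positive   # 36 "negative or questionable" studies

pos_published = 37
neg_unpublished = 22
neg_spun_positive = 11        # published "conveying a positive outcome"
neg_published_negative = negative - neg_unpublished - neg_spun_positive  # the 3 exceptions

published = pos_published + neg_spun_positive + neg_published_negative   # journal articles
apparently_positive = pos_published + neg_spun_positive

print(f"FDA view:     {positive / total:.0%} positive")                    # ~51%
print(f"Journal view: {apparently_positive / published:.0%} positive")     # ~94%
print(f"Unpublished:  {(total - published) / total:.0%} of trials")        # ~31%
```

The 94-percent figure, in other words, is not a property of the drugs; it is a property of which trials made it into print.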
Melander, et al. also worried (in 2003) that, since “The degree of multiple publication, selective publication, and selective reporting differed between products,” “any attempt to recommend a specific selective serotonin reuptake inhibitor from the publicly available data only is likely to be based on biased evidence.” Without clearer “best practices” for data publication, clinical judgment may be impaired.
Full disclosure of study funding should also be mandatory and conspicuous, wherever results are published. Ernest R. House has reported that, “In a study of 370 ‘randomized’ drug trials, studies recommended the experimental drug as the ‘treatment of choice’ in 51% of trials sponsored by for-profit organizations compared to 16% sponsored by nonprofits.” The commodification of research has made it too easy to manipulate results, as Bartlett & Steele have argued:
One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval. There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called “rescue countries.” Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections. Ketek was developed in the 1990s by Aventis Pharmaceuticals, now Sanofi-Aventis. In 2004 . . . the F.D.A. certified Ketek as safe and effective. The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey.
The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data. . . . As the months ticked by, and the number of people taking the drug climbed steadily, the F.D.A. began to get reports of adverse reactions, including serious liver damage that sometimes led to death. . . . [C]ritics were especially concerned about an ongoing trial in which 4,000 infants and children, some as young as six months, were recruited in more than a dozen countries for an experiment to assess Ketek’s effectiveness in treating ear infections and tonsillitis. The trial had been sanctioned over the objections of the F.D.A.’s own reviewers. . . . In 2006, after inquiries from Congress, the F.D.A. asked Sanofi-Aventis to halt the trial. Less than a year later, one day before the start of a congressional hearing on the F.D.A.’s approval of the drug, the agency suddenly slapped a so-called black-box warning on the label of Ketek, restricting its use. (A black-box warning is the most serious step the F.D.A. can take short of removing a drug from the market.) By then the F.D.A. had received 93 reports of severe adverse reactions to Ketek, resulting in 12 deaths.
The great anti-depressant debate is part of a much larger “re-think” of the validity of data. Medical claims can spread virally without much evidence. According to a notable meta-researcher, “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.” The “decline effect” dogs science generally. Statisticians are also debunking ballyhooed efforts to target cancer treatments.
Max Weber once said that “radical doubt is the father of knowledge.” Perhaps DSM-VI will include a diagnosis for such debilitating skepticism. But I think there’s much to be learned from an insistence that true science is open, inspectable, and replicable. Harvard’s program on “Digital Scholarship” and the Yale Roundtable on Data and Code Sharing* have taken up this cause, as has the work of Victoria Stodden.
We often hear that the academic sector has to become more “corporate” if it is to survive and thrive. At least when it comes to health data, the reverse is true: corporations must become much more open about the sources and limits of the studies they conduct. We can’t resolve the “great anti-depressant debate,” or prevent future questioning of pharma’s bona fides, without such commitments.
*In the spirit of full disclosure: I did participate in this roundtable.
X-Posted: Health Law Profs Blog.
posted by Dave Hoffman
There’s a flurry of coverage about proposed anti-circumcision initiatives in California. (Sullivan, Volokh.) The posts I’ve been reading – and, granted, I’ve not read the field – have taken this issue oddly seriously. After all, these are merely (actual or proposed) ballot initiatives that haven’t been approved by the voters. And if they were approved, their constitutionality wouldn’t (contra Volokh) be determined by existing precedent. In my view, this is a slam-dunk example of an overdetermined constitutional issue.
But there’s another aspect of this fight that is, I think, worth some extended comment. As Sarah has pointed out on this blog, anti- and pro-circumcision advocates generally fight about circumcision’s health effects, and resist attacking (or defending) it as a cultural practice. To me, this looks quite like other contests in our society in which nominally empirical debates predominate – the fight over the HPV vaccine, gay and lesbian parenting, nanotechnology, global warming, etc. The Cultural Cognition Project illustrates that these fights very often appear to be about facts, but that the conclusions we express about the “facts” and “risks” involved follow our less-conscious values. Moreover, though we can perceive this tendency in others, we deny it in ourselves. This is the phenomenon of naive realism. What results? We come to believe that the people we disagree with in these value-laden fights (i.e., people who deny the health benefits of circumcision) are arguing in bad faith. They think the same of us. Winning, in the world of policy, becomes an exercise in defeating not just our opponents’ values, but in denying that those values are even at play. I am pretty sure that if we tested this hypothesis in the circumcision debate, we’d see a very strong set of cultural priors influencing how partisans interpret and process the medical-risk facts about circumcision, whether or not the American Academy of Pediatrics is vouching for those facts.
This leads to a concrete piece of advice for Andrew Sullivan and other hot-tempered advocates on either side of this fight. Cool it. Stop inciting fights with question-begging terms like “male genital mutilation.” Instead, affirm the values of those you disagree with by making clear that this isn’t – at root – a debate that can be resolved by reference to empirical facts. It’s (as Sarah has insightfully pointed out) a discussion about cultural practices, and the degree to which the greater society has the right to change them.
For what it’s worth, my view is that the government has about as much of a moral right to prohibit circumcision as it does to tell me that I must eat broccoli.
posted by UCLA Law Review
Volume 58, Issue 3 (February 2011)
Article | Author(s) | Page
Good Faith and Law Evasion | Samuel W. Buell | 611
Making Sovereigns Indispensable: Pimentel and the Evolution of Rule 19 | Katherine Florey | 667
The Need for a Research Culture in the Forensic Sciences | Jennifer L. Mnookin et al. | 725
Commentary on The Need for a Research Culture in the Forensic Sciences | Joseph P. Bono | 781
Commentary on The Need for a Research Culture in the Forensic Sciences | Judge Nancy Gertner | 789
Commentary on The Need for a Research Culture in the Forensic Sciences | Pierre Margot | 795
What’s Your Position? Amending the Bankruptcy Disclosure Rules to Keep Pace With Financial Innovation | Samuel M. Kidder | 803
Defendant Class Actions and Patent Infringement Litigation | Matthew K. K. Sumida | 843
February 25, 2011 at 1:19 pm. Posted in: Bankruptcy, Civil Procedure, Constitutional Law, Courts, Criminal Law, Criminal Procedure, Current Events, Economic Analysis of Law, Empirical Analysis of Law, Evidence Law, History of Law, Indian Law, Intellectual Property, International & Comparative Law, Jurisprudence, Law and Humanities, Law and Inequality, Law and Psychology, Law Practice, Law Rev (UCLA), Psychology and Behavior, Race, Sociology of Law, Supreme Court
posted by Dave Hoffman
The partisanship and bad faith of judges who disagree with us has never been more obvious, or more pernicious. For many, the most irritating personality flaw of judicial politicos (and their fellow-travelers) isn’t the bottom-line results of the opinions themselves; it’s that judges refuse to acknowledge their own biases, though it’s evident that they aren’t neutral umpires but rather players in the game. Indeed, almost every decision you read about these days comes accompanied by a reference to the political party of the appointing President – as if you needed the help! As Orin Kerr has brilliantly pointed out, “people who disagree with me are just arguing in bad faith.”
For the Cultural Cognition Project, the way that we talk about legal decisions – and decisionmakers – is a subject of study and concern. We decided to take a careful look at this topic, which we’ve previously touched on in work like Whose Eyes Are You Going to Believe? Our motivation was to investigate how constitutional norms requiring neutrality in fact finding interact with individuals’ tendencies to perceive facts and risks in ways congenial to their group identities. Building on Hastorf and Cantril’s social psychology classic, They Saw a Game: A Case Study, we’ve written a new piece about how motivated cognition can de-stabilize constitutional doctrine, render legal fact-finders blind to their own biases, and inflame the culture wars. The resulting paper, “They Saw a Protest”: Cognitive Illiberalism and the Speech-Conduct Distinction, grows out of my collaboration with Dan Kahan, Don Braman, Danieli Evans, and Jeff Rachlinski. The paper is just up on SSRN, and I figured I’d jump-start the conversation by using this post to talk about our experimental approach and findings. (I think Kahan will be blogging on Balkinization later in the week about the normative upshot of Protest.)
February 7, 2011 at 6:00 pm. Posted in: Articles and Books, Behavioral Law and Economics, Civil Procedure, Civil Rights, Law and Psychology, Law School (Scholarship), Psychology and Behavior, Sociology of Law
posted by John Jacobi
Thanks to Frank for inviting me to review Barak Richman, Daniel Grossman, and Frank Sloan’s chapter, Fragmentation in Mental Health Benefits and Services, in Our Fragmented Health Care System: Causes and Solutions (Einer Elhauge, ed. 2010). The book is important and provocative. The chapter on the fragmentation of mental health care couldn’t address a more timely issue.
People with serious mental illness, more than most other patients, struggle with health system fragmentation. As the Institute of Medicine described it,
Mental and substance-use (M/SU) problems and illnesses seldom occur in isolation. They frequently accompany each other, as well as a substantial number of general medical illnesses such as heart disease, cancers, diabetes, and neurological illnesses. *** Improving the quality of M/SU health care—and general health care—depends upon the effective collaboration of all mental, substance-use, general health care, and other human service providers in coordinating the care of their patients. *** However, these diverse providers often fail to detect and treat (or refer to other providers to treat) these co-occurring problems and also fail to collaborate in the care of these multiple health conditions—placing their patients’ health and recovery in jeopardy.
By some estimates, formerly institutionalized people with serious mental illness experience about 25 fewer years of life, mostly due to the effects of treatable physical illnesses such as cardiovascular, pulmonary and infectious diseases. The effects of this health system fragmentation are experienced notwithstanding parity legislation, and they are felt also by people in the community with less serious mental illness, often because their primary care providers can’t find mental health providers to whom they can refer.
In Fragmentation in Mental Health Benefits and Services, the authors approach mental health system fragmentation by telling a story of the relationship between health insurance structure and income redistribution. The authors address the interrelationship between insurance “carve-outs” for mental health care and the growth of mental health parity laws. They assert that the carve out of behavioral health coverage from medical insurance provokes states to pass mental health parity laws. According to the authors, these parity laws fail to help their “intended” beneficiaries, and instead serve to redistribute resources away from low income and non-White employees.
To make their case, they mine a database of claims data for privately insured North Carolina patients. These claims data allow them to track employees’ (and, presumably, their dependents’) use of mental health services. Along the way, they raise several important issues. For example, they suggest that care provided by mental health providers may not be particularly efficacious. (299) Few would disagree that in most areas of health care – including mental health care – comparative effectiveness research is essential. In addition, they suggest that access to and benefit from covered services varies by income and race. (298-99) It is undoubtedly true that there are class-based and race-based disparities in access to health care; this is so much discussed, in fact, that it is somewhat puzzling that the authors would characterize as a “regularly overlooked question” the fact that “equal insurance and access does not translate into equitable consumption.” (279)
On some points, the authors seem to go a bit beyond their data. First, the authors assert (without citation) that mental health parity is “often” pursued “to benefit low-income and traditionally vulnerable populations.” (284) Many advocates (myself included) have argued for parity as a civil rights matter: as people with physical illness have access to insurance coverage, so should people with mental illness. Certainly, insurance coverage is most valuable for those without the means to pay for care out of pocket, but that is as true for cardiac care as for mental health care. From this perspective, parity legislation seems no more a redistributive move than any other form of health insurance.
posted by Alicia Kelly
Married life is characterized by a sharing norm. As I described in an earlier post, spouses commit to and in fact engage deeply in sharing behavior, including a shared family economy. Overwhelmingly, spouses pool economic resources, including labor, and decide together how to allocate them to benefit the family as a whole.
In addition to its effects in the paid labor market (see my last post), sharing money matters inside a functioning marriage. It shapes the couple relationship as well as each partner individually. Research shows that in an ongoing marriage, money is a relational tool. For example, making money a communal asset is a way to demonstrate intimacy and commitment, and that can nurture a couple’s bond. Yet, in some circumstances, an assignment of resources to just one spouse can also be understood (by both partners) to be appropriate and deserved – a recognition of the individual within a sharing framework. Conversely, it is also possible that spouses’ monetary dealings can undermine individual autonomy and the relationship as well. For example, one person might exercise authority over money in a way that disregards the other. Accordingly, power to influence financial resource allocation within the family is important for individual spouses and for togetherness.
It becomes a special concern, then, that sharing patterns in marriage are gendered. As highlighted in my previous post, role specialization remains a part of modern intimate partner relations. This is particularly true for married couples: men continue to perform more as breadwinners, and women more as caregivers. As a result, women tend to have reduced earning power in the market. How does this market asymmetry translate into economic power at home? Happily, in a significant departure from the past, a majority of couples report that they share financial decisionmaking power roughly equally. Indeed, most married couples today endorse gender equality as an important value in their relationship. However, in a significant minority of marriages, spouses agree that husbands have more economic power. For some couples, then, a husband’s breadwinning role, and perhaps his gender, confer authority in contentious money matters.
How should law governing an ongoing marriage respond to these sharing dynamics? Consider this hypothetical fact situation. A husband has a stock account from which he plans to make a gift to his sister, who he feels really needs the money. The husband suspects that his wife would not approve of the gift. Even though the wife too loves the sister, she believes the sister is irresponsible with money. Let’s assume that the money in that stock account was acquired while the parties were married, and that it came from market wages earned by one or both spouses during the marriage. It was a product of the couple’s shared life. Does contemporary law allow the husband to give his sister the gift without his wife’s consent? Without even telling her? How should legal power over the money be allocated?
October 1, 2010 at 1:04 pm. Posted in: Family Law, Feminism and Gender, Law and Inequality, Law and Psychology, Legal Theory, Property Law, Psychology and Behavior, Uncategorized
posted by Glenn Cohen
Hypotheticals are a ubiquitous pedagogical tool in both the law and philosophy classrooms. I have recently been thinking about the different functions they serve and whether they are well-suited for the weight we give them. These reflections were prompted by a conference on “Moral Biology,” hosted by the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School (which I co-direct), in cooperation with The Project on Law and Mind Sciences at Harvard Law School, the Gruter Institute, the Harvard Program on Ethics and Health, and the MacArthur Law and Neuroscience Project.
I may blog a little bit later about some other of the marvelous things I learned over these two days, but for now I wanted to concentrate on some thoughts that stemmed from a public portion of the conference that can be seen here, involving Josh Greene from Harvard’s Psychology Department, William Fitzpatrick from the University of Rochester’s Philosophy Department, Adina Roskies from Dartmouth’s Philosophy Department, Walter Sinnott-Armstrong from Duke’s Philosophy Department, and Tim Scanlon, from Harvard’s philosophy department.
At around the 43-to-50-minute mark in the video, Josh discusses Trolley Problems (thought experiments asking participants whether to divert a runaway trolley from one track to another, in many variants) and an experiment on them run in Josh’s lab by Fiery Cushman (and a collaborator, Eric Schwitzgebel I believe; I could not find the actual paper). In the experiment, before being asked whether they would endorse the principle of double effect, ethicists with PhDs were asked to reason about variants of the Trolley Problem (switch vs. footbridge) presented in different orders. The experiment found that if one varied the order in which the versions were presented (but always presented all of them), ethicists reached different conclusions about whether they would endorse the principle. [This is Josh’s description in the video; again, if anyone can find the paper he is discussing, I will try to link to it.] The result is surprising in that it appears even those with PhD training in ethics are susceptible to order effects in reasoning about a very fundamental issue.
As Josh concedes, and as others (on the panel and in written pieces discussing his work) emphasize, the fact that these ordering effects occur is not itself fatal to the enterprise of philosophical analysis using intuitions. It depends on further views about how one uses these kinds of intuitions in the analysis. For present purposes, though, I want to partially side-step that question in favor of thinking about the law classroom, and how this experiment might make us a little more careful about the way we use hypotheticals.
August 13, 2010 at 8:22 am. Posted in: Bright Ideas, Empirical Analysis of Law, Jurisprudence, Law and Humanities, Law and Psychology, Law School, Law School (Teaching), Legal Theory, Teaching, Uncategorized
posted by Dave Hoffman
What is the relationship between litigation and settlement? In a new working paper, Christina Boyd and I explore that question using data from federal trial dockets. Our basic intuition is that motion practice propels cases toward faster settlements, as it unlocks information about the facts, the parties’ strategies, the resources they will spend on the case, and (sometimes) what the judge thinks of the merits. Our results essentially support these hypotheses: the mere filing of a motion speeds case settlement. Moreover, “motions which are granted are more immediately important to the settlement rate than motions denied, plaintiff victories are more important than defendant victories, motions about unclear areas of law are more important than motions about settled law, and motions later in cases are more important than motions earlier in cases.” These findings are suggestive. Though motion practice is often thought of as parasitic, driven by agency costs, and part of the problem of litigation, our results imply that it has significant pro-social consequences. Indeed, paying homage to Gilson, why not re-imagine lawyers as canny litigation-cost engineers?
We also found some nifty case effects. Women judges were on average (as Boyd had previously established) better at encouraging settlement than men: “the likelihood of a case settling in any given month is, on average, 25% larger when a female judge presides than when a male judge does.” Also, imbalance between the size of the firms representing the plaintiff and the defendant had a significant influence on compromise’s timing, as the figure below illustrates:
posted by Glenn Cohen
A recent faculty workshop by my witty and brilliant colleague Jonathan Zittrain on “ubiquitous human computing” (this YouTube video captures, in a different form, what he was talking about) prompted me to think about how platforms like Amazon’s Mechanical Turk interface with university research and research ethics in interesting ways.
For those unfamiliar, Mechanical Turk allows you to farm out a variety of small tasks (label this image, enter the data from this .pdf into a spreadsheet, take a photo of yourself with the sign “will turk for food,” etc.) at a price per unit you set. Millions of anonymous users can then do the task for you and collect the bounty – a form of microwork.
As Jonathan detailed, this raises a host of fascinating issues, but I want to focus on two that are closer to bioethics.
First, I have begun to see some legal academics recruiting populations for experimental work using Mechanical Turk, and there is an emerging literature on the pros and cons of subject recruitment from these populations. Are Mechanical Turkers “research subjects” within the legal sense of the term (primarily the Common Rule, if one receives federal funding) or the broader ethical sense? Should they be? Take as a tangible example the implicit bias research of the kind Mahzarin R. Banaji has made famous, and imagine it were done over something like Mechanical Turk. How (if at all) should the anonymity of the subjects, the lack of any subject-experimenter relationship, the piecemeal nature of the task, etc., change the way an institutional review board reviews the research? It is a mantra in the research ethics community that informed consent is supposed to be a “process,” not a document, but how can that process take place in this anonymous, static cyberspace environment?
Second, consider research assistance.
August 3, 2010 at 9:49 am. Posted in: Amazon, Anonymity, Bioethics, Bright Ideas, Google & Search Engines, Law and Psychology, Law School, Law School (Scholarship), Technology, Web 2.0
posted by UCLA Law Review
Volume 57, Issue 5 (June 2010)
Article | Author(s) | Page
Introduction to the Symposium Issue: Sexuality and Gender Law: The Difference a Field Makes | Nan D. Hunter | 1129
Elusive Coalitions: Reconsidering the Politics of Gender and Sexuality | Kathryn Abrams | 1135
The Sex Discount | Kim Shayo Buchanan | 1149
What Feminists Have to Lose in Same-Sex Marriage Litigation | Mary Ann Case | 1199
Lawyering for Marriage Equality | Scott L. Cummings & Douglas NeJaime | 1235
Sexual and Gender Variation in American Public Law: From Malignant to Benign to Productive | William N. Eskridge, Jr. | 1333
Sticky Intuitions and the Future of Sexual Orientation Discrimination | Suzanne B. Goldberg | 1375
The Dissident Citizen | Sonia K. Katyal | 1415
Raping Like a State | Teemu Ruskola | 1477
The Gay Tipping Point | Kenji Yoshino | 1537
July 5, 2010 at 7:12 pm. Posted in: Articles and Books, Constitutional Law, Current Events, Feminism and Gender, History of Law, Immigration, Law and Humanities, Law and Inequality, Law and Psychology, Law Practice, Law Rev (UCLA), Law School, Legal Theory, Politics, Psychology and Behavior, Supreme Court
VICTIMS’ UNDERSTANDINGS AND MOTIVATIONS IN PROCESSING HUMAN RIGHTS VIOLATIONS CASES IN THE GLOBAL SOUTH
posted by Tamara Relis
The proliferation of international human rights treaties, committees and courts over the last sixty years represents enormous achievement. International human rights laws are now asserted throughout the world by individuals of many cultures and traditions. Yet, at the same time human rights ideas and principles continue to have difficulty in establishing their relevance in the daily lives of those who are geographically and culturally distant from international institutions (Stacy, 2009). In my forthcoming piece in Human Rights Quarterly, I argue that notwithstanding the fact that giving voice to those oppressed is a main function of the international human rights movement (Baxi, 2009), and that the meaning of human rights must be grounded in local culture at grassroots levels, relatively little scholarship bases its analyses on the discourse of those actually involved in human rights violations cases in the Global South. What are victims’ conceptions and expectations of human rights and their agendas and experiences in formal and informal justice systems processing their cases? This knowledge is critical to enable greater understanding of victims’ needs, epistemologies and micro-realities in order to innovatively engage the controversies in international human rights theory and practice and to effect realizable change for the subjects of human rights in the Global South.
I provide some such data in my forthcoming book based on my empirical research in India, detailed in my earlier post. This includes voices of female victims of violence discussing their comprehensions, objectives, and practices in processing their cases (74 interviews with victims, and 24 with their family members). I link victims’ discourse to norm diffusion theory in international relations (Risse et al. 1999) and to vernacularization theory in law and anthropology (Merry, 2006), which engage the issue of permeation of human rights standards to grassroots levels.
Consider female victims of violence in India, where CEDAW was ratified in 1993. I show that notwithstanding state enactments of laws in line with international human rights obligations, and the dissemination of human rights concepts by transnational activists and domestic NGOs who work to make them meaningful within particular societies, the discourse of victims of violence in two major cities (Delhi, Bangalore) on their motivations and aims in approaching formal courts and informal justice mechanisms suggests little if any human rights emancipation. Those with little education had either never heard of human rights or lacked an understanding of their meaning. More educated victims, who had a general sense of human rights concepts, knew little of the specifics. Moreover, both groups generally felt that fundamental human rights ideas, though something positive, were primarily of use on an inspirational level.
June 1, 2010 at 11:27 pm. Posted in: Articles and Books, Civil Procedure, Civil Rights, Criminal Law, Criminal Procedure, Culture, Empirical Analysis of Law, Feminism and Gender, International & Comparative Law, Interviews, Law and Inequality, Law and Psychology, Sociology of Law
Paradoxes in Formal Courts versus Informal Justice / Quasi-Legal Processing of Human Rights Cases in India
posted by Tamara Relis
Continuing from my previous post, I will elaborate here on some of the initial arguments from my forthcoming book, INTERNATIONAL HUMAN RIGHTS AND VIOLENCE AGAINST WOMEN: THEORY, GLOBAL STANDARDS AND SOUTHERN ACTORS’ PRAXIS, based on the empirical research I conducted throughout India, which I described earlier. Some of these issues are discussed in my forthcoming article, International Human Rights and Southern Realities, 112 HUMAN RIGHTS QUARTERLY (2010), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1592042. There, I argue that, on the premise that a culturally plural universalism in human rights is an acceptable aim, we are in dire need of a new integrated analytical framework: one grounded not only in the understandings and perceptions of Southern actors (i.e., individuals from the Global South), but one that simultaneously embeds their perspectives within the realities of human rights case processing in the legally pluralistic Global South. This involves not only formal courts but also informal justice or quasi-legal non-state mechanisms processing human rights cases.
Paradoxically, the data suggest that the bulk of lawyer advocates and judges working in the lower criminal and civil courts, as well as in court-linked ‘lok adalats’ (mediations), who process great numbers of cases of serious violence against women (involving food deprivation as a means of punishment, physical and mental torture, and rape), utilize international human rights principles to a far lesser extent, if at all, than do some of the informal justice / quasi-legal mechanisms processing the very same types of cases. By contrast, the non-lawyer mediators/arbitrators in the informal justice mechanisms studied, who were not only not formally legally trained but in many cases had poor literacy skills, were far more geared towards resolving cases using principles of international human rights law, and CEDAW in particular (e.g., equality, autonomy).
May 24, 2010 at 8:49 pm Posted in: Civil Procedure, Civil Rights, Criminal Law, Criminal Procedure, Culture, Empirical Analysis of Law, Feminism and Gender, International & Comparative Law, Interviews, Law and Inequality, Law and Psychology, Law Practice, Sociology of Law, Uncategorized
INTERNATIONAL HUMAN RIGHTS AND VIOLENCE AGAINST WOMEN: THEORY, GLOBAL STANDARDS AND SOUTHERN ACTORS’ PRAXIS – Some highlights from a forthcoming book
posted by Tamara Relis
My second book is entitled INTERNATIONAL HUMAN RIGHTS AND VIOLENCE AGAINST WOMEN: THEORY, GLOBAL STANDARDS AND SOUTHERN ACTORS’ PRAXIS (forthcoming). It is based on data I collected over three years, in eight states of India and in seven languages, while I was a postdoctoral research fellow at Columbia Law School and the LSE (London School of Economics, Dept. of Law, where I continue to be a research fellow). The data were collected with the help of eight teams comprising about 200 research assistants throughout India. The United Nations Development Programme (Delhi), 11 law school deans, domestic judges, state legal services authorities, local district and high courts, NGOs, and human rights/public interest lawyers throughout India were also involved in the project. The dataset comprises 400 semi-structured in-depth interviews and questionnaires from victims, accused, lawyers, judges, arbitrators and mediators in 193 cases involving human rights violations of serious violence against women. It also includes observations of case hearings in lower formal courts, in court-linked mediations known as “lok adalats”, and in non-state, quasi-legal women’s arbitrations known as “mahila panchayats” and “nari adalats” (British Academy Award PDF/2006-09/64).
As in my first book, the South Asian research analyzes legal and lay actors’ understandings, objectives and experiences during case processing. However, it builds on, and takes in new directions, the theories and conceptual arguments I developed in PERCEPTIONS IN LITIGATION AND MEDIATION. In particular, it focuses on local, Southern actors’ perspectives (i.e., those of individuals from the Global South) on the permeation and perceived relevance of international human rights laws and norms in formal courts and in non-state informal justice mechanisms.
Drawing on interdisciplinary scholarship (the international relations, law & anthropology, law & development, and victimology literatures), the book asks how the current proliferation of international human rights has shaped case processing systems at the grassroots level. Expanding on my North American findings, it presents Southern legal and lay actors’ local perspectives on non-western models of formal courts and informal justice processes as forms of legal pluralism. I examine how, if at all, international human rights laws and norms (e.g., CEDAW 1979, ICCPR 1976, the UN Declaration of Basic Principles of Justice for Victims of Crime and Abuse of Power 1985) have permeated the processing of these cases, comparing how receptive the different spaces of lower courts and quasi-legal regimes are to claims made from the international sphere. I further examine the theoretical ideas informing these processes (including norm diffusion theory, universalism versus cultural relativism, restorative justice, and feminist critiques of mainstream human rights paradigms) and how those ideas are understood by people on the ground. The research also highlights the interdependence of all human rights and the much-debated link between human rights, women’s rights and development. Finally, the findings offer a critique of the boundaries created both between formal and informal justice, and between ratified international law and the actual permeation of international human rights norms in case processing at the grassroots level.
Interestingly, depending on arbitrary factors, including parties’ geographic and/or socioeconomic positions within India, the same types of cases might be heard either in criminal or civil lower courts (magistrates’/sessions/district) or in the court-linked or non-state quasi-legal mediations or arbitrations described above. The dataset additionally comprises “in-chambers mediations”, forms of American justice newly exported to India. These are case management tools, including ADR and plea bargaining methods, which have been and are being taught to Indian judges and advocates by a number of Californian judges and US Department of Justice representatives, with the aim of deflecting cases from the overburdened Indian courts, where waits of 10 years or more for trial are not uncommon. This is being done predominantly in the service of US commercial interests. However, these case management tools also affect the processing of violence against women cases.
May 17, 2010 at 8:54 pm Posted in: Articles and Books, Civil Procedure, Civil Rights, Criminal Law, Criminal Procedure, Culture, Empirical Analysis of Law, Feminism and Gender, International & Comparative Law, Interviews, Law and Inequality, Law and Psychology, Law Practice, Sociology of Law