
Tagged: law & neuroscience


Racey, Racey Neuro-Hype! Can a Pill Make You Less Racist?

Media outlets around the world reported yesterday that a pill can make people less racist.

“Heart disease drug ‘combats racism’” heralds the UK’s Telegraph.  “A Pill that Could Prevent Racism?” asks The Daily News.

Is this for real?

The answer is less racy – and less raced – but actually more interesting than the headlines suggest.

Researchers at the Oxford University Centre for Practical Ethics, led by Sylvia Terbeck, administered a common blood-pressure-lowering drug called propranolol to half of a group of white subjects and a placebo to the other half.  (Read the study’s press release here and the research paper here.)  The subjects then took a test that measures “implicit associations” – the rapid, automatic good/bad, scary/safe judgments we all make in a fraction of a second when we look at words and pictures.  The subjects who took the drug showed less of an automatic fear response to images of black people’s faces and were less likely to associate pictures of black people with negative words than the subjects who took the placebo.  Based on the study’s design, it is likely that results would be the same in trials involving racism by and against other racial and ethnic groups.
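
For readers curious about how an “implicit association” is actually scored, the sketch below illustrates the standard D-score logic used in Implicit Association Test research: compare how quickly subjects respond when categories are paired one way versus the other, scaled by the variability of their response times. This is a simplified illustration with invented reaction times and a hypothetical scoring function, not the Oxford team’s actual analysis.

```python
# Simplified, hypothetical illustration of IAT-style scoring.
# D-score ~ (mean latency in the "incompatible" pairing
#            - mean latency in the "compatible" pairing) / pooled SD.
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Positive values mean slower responses in the incompatible pairing,
    conventionally read as a stronger implicit association (bias)."""
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Invented reaction times (milliseconds) for one subject.
compatible = [650, 700, 620, 680, 710, 640]    # pairing that matches the bias
incompatible = [820, 790, 860, 800, 840, 810]  # pairing that conflicts with it
print(f"D-score: {iat_d_score(compatible, incompatible):.2f}")
```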

This looks like the pill treated racism in the research subjects.  But this isn’t so.

Researchers have long known that propranolol has a range of effects that include lethargy, sedation, and reductions in several kinds of brain activity.  In high-flown medical parlance, this drug makes people really chilled out.  I know: I’ve been on propranolol myself (unsuccessfully) for migraine prevention.  When I was on the drug, my biggest fear was falling asleep at work – and even that didn’t stress me as much as it should have.

Because propranolol muffles fear generally, it reduces automatic negative responses to just about anything.  Propranolol has been used to treat everything from “uncontrolled rage” to performance anxiety and is being explored for treating PTSD.  Very recent research shows that it generally reduces activity in the brain region called the amygdala (more on that below).

But the study remains interesting and important for a few reasons.  This is the first study to show that inhibiting activity in the amygdala, which is crucially involved in fear learning, directly reduces one measure of race bias.  This validates extensive research that has correlated race bias with heightened activity in that brain region.  (Some research, though, challenges that association.)  So this study helps support the idea of a causal relationship between automatic or pre-conscious race bias and conditioned fear learning.

The cure for racism born of conditioned fear learning is not to chemically dampen the brain’s response to fear generally – because fear is often useful – but to attack the causes of the conditioned associations that lead to bias in the first place.

The rest of this post will show how the fear response, claims about race, and the way the drug works all come together to point to the social nature of even “neurological” race bias – and to its economic and legal repercussions.

The fear response

When we see something that frightens or startles us, several regions of the brain become active – particularly the amygdala.  The amygdala has many functions, so a neuroimage showing activity in the amygdala does not necessarily mean that a person is experiencing fear.  But if a person has a frightening experience (loud noise!) or sees something she’s afraid of (snakes!), activity in the amygdala spikes.  This activity is pre-conscious and totally outside our control:  We startle first and then maybe stop to think about it.

The automaticity of fear serves us well in the face of real threats – but poorly in much of daily life.  Fear learning is overly easy: A single negative experience can create a lasting, automatic fear association.  Repeated, weak negative experiences can also form a strong fear association.  And, we can “catch” fear socially: If my friend tells me that she had a negative experience, I may form an automatic fear association as if I had been frightened or harmed myself.  Finally, fear lasts.  I can consciously tell myself not to be afraid of a particular thing but my automatic fear response is likely to persist.
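
One way to see why fear learning is “overly easy” and why it persists is through a standard associative-learning model. The toy sketch below uses the classic Rescorla-Wagner update rule as an illustration; the post does not itself invoke this model, and the learning rates and outcome intensities are invented for the example.

```python
# Rescorla-Wagner style update: V <- V + alpha * (lambda - V), where V is the
# learned fear association, alpha the learning rate, and lambda the intensity
# of the negative outcome on a trial. All parameters here are invented.

def update(v, alpha, lam):
    """One trial: move the association V part of the way toward the outcome."""
    return v + alpha * (lam - v)

# A single intense negative experience builds a strong association at once.
v_single = update(0.0, alpha=0.9, lam=1.0)

# Repeated weak negative experiences accumulate into a comparable association.
v_repeated = 0.0
for _ in range(20):
    v_repeated = update(v_repeated, alpha=0.15, lam=0.8)

# With no further negative outcomes (lam = 0) and a low "unlearning" rate,
# the association fades only slowly -- one reason conditioned fear lasts.
v_faded = v_single
for _ in range(20):
    v_faded = update(v_faded, alpha=0.02, lam=0.0)

print(f"one strong shock: {v_single:.2f}, many weak shocks: {v_repeated:.2f}, "
      f"after 20 safe exposures: {v_faded:.2f}")
```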

Race bias and the fear response

In neuroimaging studies using functional magnetic resonance imaging (fMRI) on white and black Americans, research subjects on average have a greater amygdalar response to images of black faces than to images of white faces.  Researchers have interpreted this as a pre-conscious fear response.  Indeed, the more that activity in a person’s amygdala increases in response to images of black faces, the more strongly he or she makes negative associations with images of black faces and with typically African-American names (see paper here).

These automatic fear responses matter because they literally shape our perceptions of reality.  For example, a subject might be asked to rate the facial expressions on a set of white and black faces, ranging from happy to neutral to angry.  A subject who has a strong amygdalar response to images of black faces is much more likely to misinterpret a neutral or even moderately happy expression on a black face as hostile or angry.  Fear changes our perceptions, which in turn change how we react to and treat other people, creating a self-reinforcing loop.

This kind of pre-conscious or automatic racism matters economically and legally:  A majority of white people who have taken these implicit association tests demonstrate some automatic bias against black faces both associationally and neurologically.  White people numerically and proportionally hold more positions as decision-makers about employment – like hiring and promotion – and about legal process and consequences – like whether to charge a suspect with a crime, the severity of the crime with which to charge him or her, and whether to offer a generous or harsh plea bargain.  A study of two hundred judges serving in jurisdictions across the United States has shown that judges, too, more readily make these automatic, negative associations about black people than they do about white people.  The implication is that automatic racial bias could play a role in pervasively tilting the scales against black people in every phase of economic life and in every phase of the legal process.

Yet, current anti-discrimination law only prohibits explicit racial bias.  An employer may not advertise a position as “whites only” nor fire nor refuse to promote a worker because the employer does not want to retain or advance a black person.  Systematic racial bias that creates unlawful “disparate impact” also rests on explicit racism: plaintiffs who claim that they are proportionally under-represented in, say, hiring and promotion by a particular employer must show that the disparate impact results from an intentional discriminatory purpose.

Automatic race bias, by contrast, takes a different form – a form not barred by law.  Automatic discrimination expresses itself when the white supervisor (or police officer, or prosecutor, or judge, or parole board member) just somehow feels that his or her black counterpart has the proverbial “bad attitude,” or doesn’t “fit” with the culture of the organization, or poses a greater risk to the public than an equivalent white offender and so should not be offered bail or a plea deal or be paroled after serving some part of his sentence.

Tying it all together

If current anti-discrimination law does not touch automatic bias, and automatic bias is pervasive, then does this point to a role for drugs?

On propranolol, an implicitly biased interviewer or boss might perceive a black candidate more fairly, unfiltered by automatic negative responses.  (She might, of course, still harbor conscious but unstated forms of bias; propranolol certainly would not touch race-biased beliefs about professionalism, competence, and the like.)  But the drug also would generally dampen the decision-maker’s automatic fear responses.  An overall reduction in automatic negative responses would not necessarily be a good thing:  while it might free decision-makers from some false negative judgments based on race, it also would likely impair their ability to pick up on real negative signals from other sources.

And the take-away …

That a fear-dampening drug reduces racial bias in subjects helps confirm that much racial bias is based in automatic negative responses, which result from conditioned fear learning.  Although this finding is hardly surprising, it is interesting and important.  Any person reading this study should ask him- or herself: How does automatic fear affect my decisions about other people?  How does it affect the judgments of important economic and legal decision-makers?  How can we make it less likely that the average white person sees the average black person through distorting fear goggles in the first place?

The problem with this study and the headlines hyping it is that they perpetuate the idea that racism is the individual racist’s problem (It’s in his brain! And we can fix it!).  A close reading of the study points to the importance of socially conditioned fear-learning about race – which then becomes neurologically represented in each of us.  Despite the headlines, racism is not a neurological problem but a cultural one, which means that the solutions are a lot more complex than popping a pill.


Neuroscience at Trial: Society for Neuroethics Convenes Panel of Front-Line Practitioners

Is psychopathy a birth defect that should exclude a convicted serial killer and rapist from the death penalty?  Are the results of fMRI lie-detection tests reliable enough to be admitted in court?  And if a giant brain tumor suddenly turns a law-abiding professional into a hypersexual who indiscriminately solicits females from ages 8 to 80, is he criminally responsible for his conduct?  These were the questions on the table when the International Neuroethics Society convened a fascinating panel last week at the Carnegie Institution for Science on the uses of neuroscience evidence in criminal and civil trials.

Moderated and organized by Hank Greely of Stanford Law School, the panel brought together:

  • Steven Greenberg, whose efforts to introduce neuroscience evidence on psychopathic disorder (psychopathy) in the Illinois capital sentencing of Brian Dugan have garnered attention from Nature to The Chicago Tribune;
  • Houston Gordon (an old-school trial attorney successful enough not to need his own website, hence no hyperlink), who has made the most assertive arguments so far to admit fMRI lie-detection evidence, in United States v. Semrau; and
  • Russell Swerdlow, a research and clinical professor of neurology (and three other sciences!).  Swerdlow’s brilliant diagnostic work detected the tumor in the newly-hypersexual patient, whom others had dismissed as a creep and a criminal.


In three upcoming short posts, I will feature the comments of each of these panelists and present for you, dear reader, some of the thornier issues raised by their talks.  These cases have been reported in publications ranging from the Archives of Neurology to USA Today, but Concurring Opinions brings you, direct and uncensored, the statements of the lawyers and scientists who made these cases happen … Can I say “stay tuned” on a blog?


An Irrational Undertaking: Why Aren’t We More Rational?

By unanimous reader demand – all one out of one readers voting, as of last week – this post will explore the small topic of the biological basis of “irrationality,” and its implications for law.  Specifically, Ben Daniels of Collective Conscious asked the fascinating question: “What neuro-mechanisms enforce irrational behavior in a rational animal?”

Ben’s question suggests that ostensibly rational human beings often act in irrational ways.  To prove his point, I’m actually going to address his enormous question within a blog post.  I hope you judge the effort valiant, if not complete.

The post will offer two perspectives on whether, as the question asks, we could be more rational than we are if certain “neuro-mechanisms” did not function to impair rationality.  The first view is that greater rationality might be possible – but might not confer greater benefits.  I call this the “anti-Vulcan hypothesis”:  While our affective capacities might suppress some of our computational power, they are precisely what make us both less than perfectly rational and gloriously human – Captain Kirk, rather than Mr. Spock.  A second, related perspective offered by the field of cultural cognition suggests that developmentally-acquired, neurally-ingrained cultural schemas cause people to evaluate new information not abstractly on its merits but in ways that conform to the norms of their social group.  In what I call the “sheep hypothesis,” cultural cognition theory suggests that our rational faculties often serve merely to rationalize facts in ways that fit our group-typical biases.  Yet, whether we are Kirk or Flossie, the implication for law may be the same:  Understanding how affect and rationality interact can allow legal decision-makers to modify legal institutions to favor the relevant ability, modify legal regimes to account for predictable limitations on rationality, and communicate in ways that privilege social affiliations and affective cues as much as factual information.

First, a slight cavil with the question:  The question suggests that people are “rational animal[s]” but that certain neurological mechanisms suppress rationality – as if our powerful rational engines were somehow constrained by neural cruise-control.  Latent in that question are a factual assumption about how the brain works (more on that later) and a normative inclination to see irrationality as a problem to which rationality is the solution.  Yet, much recent work on the central role of affect in decision-making suggests that, often, the converse may be true.  (Among many others, see Jonathan Haidt and Josh Greene; these links will open PDF articles in a new window.)  Rationality divorced from affect arguably may not even be possible for humans, much less desirable.  Indeed, the whole idea of “pure reason” as either a fact or a goal is taking a beating at the hands of researchers in behavioral economics, cognitive neuroscience, and experimental philosophy – and perhaps other fields as well.

Also, since “rational” can mean a lot of things, I’m going to define it as the ability to calculate which behavior under particular circumstances will yield the greatest short-term utility to the actor.  By this measure, people do irrational things all the time: we discount the future unduly, preferring a dollar today to ten dollars next month; we comically misjudge risk, shying away from the safest form of transportation (flying) in favor of the most dangerous (driving); we punish excessively; and the list goes on.
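
To put rough numbers on the “dollar today versus ten dollars next month” example: behavioral economists often model this pattern with hyperbolic discounting, which devalues near-term delays far more steeply than the exponential discounting a narrowly rational actor would apply. The sketch below is purely illustrative; the discount parameters are invented, not estimates from any study.

```python
# Toy comparison of exponential vs. hyperbolic discounting (invented rates).

def exponential_value(amount, days, daily_rate=0.0005):
    """Present value under constant-rate (narrowly 'rational') discounting."""
    return amount / ((1 + daily_rate) ** days)

def hyperbolic_value(amount, days, k=0.5):
    """Present value under hyperbolic discounting, which overweights 'now'."""
    return amount / (1 + k * days)

dollar_now = 1.00
patient = exponential_value(10.00, days=30)    # ~9.85: clearly worth the wait
impulsive = hyperbolic_value(10.00, days=30)   # ~0.63: feels worse than $1 now

print(f"exponential valuation of $10 in 30 days: ${patient:.2f}")
print(f"hyperbolic valuation of $10 in 30 days:  ${impulsive:.2f}")
print("hyperbolic chooser takes the dollar today:", impulsive < dollar_now)
```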

Despite these persistent and universal defects in rationality, experimental data indicates that our brains have the capacity to be more rational than our behaviors would suggest.  Apparently, certain strong affective responses interfere with activity in particular regions of the prefrontal cortex (pfc); these areas of the pfc are associated with rationality tasks like sequencing, comparing, and computing.  In experiments in which researchers used powerful magnets to temporarily “knock out” activity in limbic (affective) brain regions, otherwise typical subjects displayed savant-like abilities in spatial, visual, and computational skills.  This experimental result mimics what anecdotally has been reported in people who display savant abilities following brain injury or disease, and in people with autism spectrum disorders, who may have severe social and affective impairments yet also be savants.

So: Some evidence suggests the human brain may have massively more computing power than we can put to use because of general (and sometimes acute) affective interference.  It may be that social and emotional processing suck up all the bandwidth; or, prosocial faculties may suppress activity in computational regions.  Further, the rational cognition we can access can be swamped by sudden and strong affect.  With a nod to Martha Nussbaum, we might call this the “fragility of rationality.”

This fragility may be more boon than bane:  Rationality may be fragile because, in many situations, leading with affect might confer a survival advantage.  Simple heuristics and “gut” feelings, which are “fast and cheap,” let us respond quickly to complex and potentially dangerous situations.  Another evolutionary argument is that all-important social relationships can be disrupted by rational utility-maximizing behaviors – whether you call the actors free-riders or defectors.  To prevent humans from mucking up the enormous survival-enhancing benefits of community, selection would favor prosocial neuroendocrine mechanisms that suppress an individual’s desire to maximize short-term utility.  What’s appealing about this argument is that – if true – it means that what enables us to be human is precisely what keeps us from being purely rational.  This “anti-Vulcan” hypothesis is very much the thrust of the work by Antonio Damasio (and here), Dan Ariely, and Paul Zak, among many other notable scholars.

An arguably darker view of the relationship between prosociality and rationality comes from cultural cognition theory.  While evolutionary psychology and behavioral economics suggest that people have cognitive quirks as to certain kinds of mental tasks, cultural cognition suggests that people’s major beliefs about the state of the world – the issues that self-governance and democracy depend upon – are largely impervious to rationality.  In place of rationality, people quite unconsciously “conform their beliefs about disputed matters of fact … to values that define their cultural identities.”

On this view, it is not just that our prosocial adaptations make us bad at assessing risk or discounting the future, among other things.  Rather, from global warming to gun control, people unconsciously align their assessments of issues to conform to the beliefs and values of their social group.  Rationality operates, if at all, post hoc:  It allows people to construct rationalizations for relatively fact-independent but socially conforming conclusions.  (Note that different cultural groups assign different values to rational forms of thought and inquiry.  In a group that highly prizes those activities, pursuing rationally-informed questioning might itself be culturally conforming.  Children of academics and knowledge-workers: I’m looking at you.)

This reflexive conformity is not a deliberate choice; it’s quite automatic, feels wholly natural, and is resilient against narrowly rational challenges based in facts and data.  And that this cognitive mode inheres in us makes a certain kind of sense:  Most people face far greater immediate danger from defying their social group than from global warming or gun control policy.  The person who strongly or regularly conflicts with their group becomes a sort of socially stateless person, the exiled persona non grata.

To descend from Olympus to the village:  What could this mean for law?  Whether we take the heuristics-and-biases approach emerging from behavioral economics and evolutionary psychology or the approach emerging from cultural cognition research, the social and emotional nature of situated cognition cannot be ignored.  I’ll briefly highlight two strategies for “rationalizing” aspects of the legal system to account for our affectively-influenced rationality – one addressing the design of legal institutions and the other addressing how legal and political decisions are communicated to the public.

Oliver Goodenough suggests that research on rational-affective mutual interference should inform how legal institutions are designed.  Legal institutions may sit anywhere on a continuum from physical to metaphorical: from court buildings to court systems, to the structure and concept of the jury, to professional norms and conventions.  The structures of legal institutions influence how people within them engage in decision-making; certain institutional features may prompt people to bring to bear their more emotive (empathic), social-cognitive (“sheep”), or purely rational (“Vulcan”) capacities.

Goodenough does not claim that more rationality is always better; in some legal contexts, we might collectively value affect – empathy, mercy.  In another, we might value cultural cognition – as when, for example, a jury in a criminal case must determine whether a defendant’s response to alleged provocation falls within the norms of the community.  And in still other contexts, we might value narrow rationality above all.  Understanding the triggers for our various cognitive modes could help address long-standing legal dilemmas.  Jon Hanson’s work on the highly situated and situational nature of decision-making suggests that the physical and social contexts in which deliberation takes place may be crucial to the answers at which we arrive.

Cultural cognition may offer strategies for communicating with the public about important issues.  The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it.  If the primary question a person asks herself is, “What would my community think of this thing?” instead of “What is this thing?”, then very different communication strategies follow:  Facts and information about the thing itself only become meaningful when embedded in information about the thing’s relevance to peoples’ communities.  The cultural cognition project has developed specific recommendations for communication around lawmaking involving gun rights, the death penalty, climate change, and other ostensibly fact-bound but intensely polarizing topics.

To wrap this up by going back to the question: Ben, short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions, we are not going to unleash our latent (narrowly) rational powers.  But it’s worth recalling that the historical, and now unpalatable, term for natural savants was “idiot-savant”: the phrase itself suggests that, without robust affective and social intelligence – which may make us “irrational” – we’re not very smart at all.