Tagged: rationality


An Irrational Undertaking: Why Aren’t We More Rational?

By unanimous reader demand – all one out of one readers voting, as of last week – this post will explore the small topic of the biological basis of “irrationality,” and its implications for law.  Specifically, Ben Daniels of Collective Conscious asked the fascinating question: “What neuro-mechanisms enforce irrational behavior in a rational animal?”

Ben’s question suggests that ostensibly rational human beings often act in irrational ways.  To prove his point, I’m actually going to address his enormous question within a blog post.  I hope you judge the effort valiant, if not complete.

The post will offer two perspectives on whether, as the question asks, we could be more rational than we are if certain “neuro-mechanisms” did not function to impair rationality.  The first view is that greater rationality might be possible – but might not confer greater benefits.  I call this the “anti-Vulcan hypothesis”:  While our affective capacities might suppress some of our computational power, they are precisely what make us both less than perfectly rational and gloriously human – Captain Kirk, rather than Mr. Spock.  A second, related perspective offered by the field of cultural cognition suggests that developmentally-acquired, neurally-ingrained cultural schemas cause people to evaluate new information not abstractly on its merits but in ways that conform to the norms of their social group.  In what I call the “sheep hypothesis,” cultural cognition theory suggests that our rational faculties often serve merely to rationalize facts in ways that fit our group-typical biases.  Yet, whether we are Kirk or Flossie, the implication for law may be the same:  Understanding how affect and rationality interact can allow legal decision-makers to modify legal institutions to favor the relevant ability, modify legal regimes to account for predictable limitations on rationality, and communicate in ways that privilege social affiliations and affective cues as much as factual information.

First, a slight cavil with the question:  The question suggests that people are “rational animal[s]” but that certain neurological mechanisms suppress rationality – as if our powerful rational engines were somehow constrained by neural cruise-control.  Latent in that question are a factual assumption about how the brain works (more on that later) and a normative inclination to see irrationality as a problem to which rationality is the solution.  Yet, much recent work on the central role of affect in decision-making suggests that, often, the converse may be true.  (Among many others, see Jonathan Haidt and Josh Greene; these links will open PDF articles in a new window.)  Rationality divorced from affect arguably may not even be possible for humans, much less desirable.  Indeed, the whole idea of “pure reason” as either a fact or a goal is taking a beating at the hands of researchers in behavioral economics, cognitive neuroscience, and experimental philosophy – and perhaps other fields as well.

Also, since “rational” can mean a lot of things, I’m going to define it as the ability to calculate which behavior under particular circumstances will yield the greatest short-term utility to the actor.  By this measure, people do irrational things all the time: we discount the future unduly, preferring a dollar today to ten dollars next month; we comically misjudge risk, shying away from the safest form of transportation (flying) in favor of one of the most dangerous (driving); we punish excessively; and the list goes on.
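As a rough, back-of-the-envelope illustration of how extreme that first preference is (using only the hypothetical dollar amounts above and assuming constant month-to-month exponential discounting, not data from any study): indifference between a dollar today and ten dollars a month from now implies a monthly discount factor of

\[
1 = 10\,\delta \;\;\Rightarrow\;\; \delta = 0.1, \qquad \delta^{12} = 10^{-12} \text{ per year.}
\]

So anyone who genuinely prefers the dollar today is, on this simple model, treating money a year away as worth less than a trillionth of its face value, a degree of impatience that no standard account of rational choice would call optimal.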

Despite these persistent and universal defects in rationality, experimental data indicate that our brains have the capacity to be more rational than our behaviors would suggest.  Apparently, certain strong affective responses interfere with activity in particular regions of the prefrontal cortex (PFC); these areas of the PFC are associated with rationality tasks like sequencing, comparing, and computing.  In experiments in which researchers used powerful magnets to temporarily “knock out” activity in limbic (affective) brain regions, otherwise typical subjects displayed savant-like abilities in spatial, visual, and computational skills.  This experimental result mimics what anecdotally has been reported in people who display savant abilities following brain injury or disease, and in people with autism spectrum disorders, who may have severe social and affective impairments yet also be savants.

So: Some evidence suggests the human brain may have massively more computing power than we can put to use, because of general (and sometimes acute) affective interference.  It may be that social and emotional processing suck up all the bandwidth; or, prosocial faculties may suppress activity in computational regions.  Further, the rational cognition we can access can be totally swamped by sudden and strong affect.  With a nod to Martha Nussbaum, we might call this the “fragility of rationality.”

This fragility may be more boon than bane:  Rationality may be fragile because, in many situations, leading with affect might confer a survival advantage.  Simple heuristics and “gut” feelings, which are “fast and cheap,” let us respond quickly to complex and potentially dangerous situations.  Another evolutionary argument is that all-important social relationships can be disrupted by rational utility-maximizers – whether you call them free-riders or defectors.  To prevent humans from mucking up the enormous survival-enhancing benefits of community, selection would favor prosocial neuroendocrine mechanisms that suppress an individual’s desire to maximize short-term utility.  What’s appealing about this argument is that – if true – it means that what enables us to be human is precisely what makes us not purely rational.  This “anti-Vulcan” hypothesis is very much the thrust of the work by Antonio Damasio (and here), Dan Ariely, and Paul Zak, among many other notable scholars.

An arguably darker view of the relationship between prosociality and rationality comes from cultural cognition theory.  While evolutionary psychology and behavioral economics suggest that people have cognitive quirks as to certain kinds of mental tasks, cultural cognition suggests that people’s major beliefs about the state of the world – the issues that self-governance and democracy depend upon – are largely impervious to rationality.  In place of rationality, people quite unconsciously “conform their beliefs about disputed matters of fact … to values that define their cultural identities.”

On this view, people aren’t just bad at understanding risk and temporal discounting, among other things, because our prosocial adaptations suppress the relevant computational capacities.  Rather, from global warming to gun control, people unconsciously align their assessments of issues to conform to the beliefs and values of their social group.  Rationality operates, if at all, post hoc:  It allows people to construct rationalizations for relatively fact-independent but socially conforming conclusions.  (Note that different cultural groups assign different values to rational forms of thought and inquiry.  In a group that highly prizes those activities, pursuing rationally-informed questioning might itself be culturally conforming.  Children of academics and knowledge-workers: I’m looking at you.)

This reflexive conformity is not a deliberate choice; it’s quite automatic, feels wholly natural, and is resilient against narrowly rational challenges based in facts and data.  That this cognitive mode inheres in us makes a certain kind of sense:  Most people face far greater immediate danger from defying their social group than from global warming or gun control policy.  The person who strongly or regularly conflicts with their group becomes a sort of socially stateless person, the exiled persona non grata.

To descend from Olympus to the village:  What could this mean for law?  Whether we take the heuristics-and-biases approach emerging from behavioral economics and evolutionary psychology or the approach emerging from cultural cognition research, the social and emotional nature of situated cognition cannot be ignored.  I’ll briefly highlight two strategies for “rationalizing” aspects of the legal system to account for our affectively-influenced rationality – one addressing the design of legal institutions and the other addressing how legal and political decisions are communicated to the public.

Oliver Goodenough suggests that research on rational-affective mutual interference should inform how legal institutions are designed.  Legal institutions may be anywhere on a continuum from physical to metaphorical, from court buildings to court systems, to the structure and concept of the jury, to professional norms and conventions.  The structures of legal institutions influence how people within them engage in decision-making; certain institutional features may prompt people to bring to bear their more emotive (empathic), social-cognitive (“sheep”), or purely rational (“Vulcan”) capacities.

Goodenough does not claim that more rationality is always better; in some legal contexts, we might collectively value affect – empathy, mercy.  In others, we might value cultural cognition – as when, for example, a jury in a criminal case must determine whether a defendant’s response to alleged provocation falls within the norms of the community.  And in still other contexts, we might value narrow rationality above all.  Understanding the triggers for our various cognitive modes could help address long-standing legal dilemmas.  Jon Hanson’s work on the highly situated and situational nature of decision-making suggests that the physical and social contexts in which deliberation takes place may be crucial to the answers at which we arrive.

Cultural cognition may offer strategies for communicating with the public about important issues.  The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it.  If the primary question a person asks herself is, “What would my community think of this thing?” instead of “What is this thing?”, then very different communication strategies follow:  Facts and information about the thing itself only become meaningful when embedded in information about the thing’s relevance to people’s communities.  The cultural cognition project has developed specific recommendations for communication around lawmaking involving gun rights, the death penalty, climate change, and other ostensibly fact-bound but intensely polarizing topics.

To wrap this up by going back to the question: Ben, short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions, we are not going to unleash our latent (narrowly) rational powers.  But it’s worth recalling that the historical, and now unpalatable, term for natural savants was “idiot savant”: the phrase itself suggests that, without robust affective and social intelligence – which may be what makes us “irrational” – we’re not very smart at all.