# Are You an Intuitive or a Deliberative Information Processor?

Quick — take this test:

(1) A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

____ cents

(2) If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

____ minutes

(3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

____ days

(see below for answers)

In an interesting new article, Blinking on the Bench: How Judges Decide Cases, Chris Guthrie, Jeffrey Rachlinski, and Andrew Wistrich report the answers of 252 Florida trial judges to this Cognitive Reflection Test (CRT), which is designed to have a “correct answer that is easy to discern upon reflection, [as well as] an intuitive–but incorrect–answer that almost immediately comes to mind.” The judges scored, well, slightly better than the average undergraduate student subject at Michigan and slightly worse than the average undergraduate student subject at Harvard. Almost one-third of these judges didn’t answer any of the questions correctly; another third answered one question correctly; less than a quarter of the judges answered two questions correctly; and only one seventh answered all three correctly. Their mean score of 1.23 compares unfavorably to student subjects at MIT (2.18), Carnegie Mellon (1.51), and Harvard (1.43).

So what does this all mean? Looking at these data alongside other studies, the authors argue that judges often make decisions intuitively rather than deliberatively. This is not always a problem; indeed, the authors note that the “conversion of deliberative judgment into intuitive judgment might be the hallmark of expertise.” But judges who respond intuitively, as the test results show, might make inaccurate decisions. The paper concludes with several suggestions for limiting “bad” intuitive decision-making (more time and resources, requiring written opinions, training and feedback, use of scripts and checklists, and separating out decision-making authority), very similar to suggestions that my co-authors and I made in our recent article, Refugee Roulette, which describes disparities in decision-making in the asylum process. Sounds like we might need to stop those asylum adjudicators from blinking on the bench . . .

So could you cut it amongst those MIT students? Here are the answers:

(1) Five cents. If the ball costs five cents, the bat costs $1.05, and the total is $1.10. The intuitive and incorrect answer is that the ball costs ten cents — but then of course the bat costs $1.10 and the total is $1.20.

(2) Five minutes. The intuitive and incorrect answer is 100 minutes; each machine makes one widget in 5 minutes, so increasing the number of machines increases the number of widgets accordingly.

(3) Forty-seven days. The intuitive and incorrect answer is 24 days — if the patch of lily pads doubles each day and covers the entire lake on the last day, it must cover half the lake the day before.
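For readers who want to double-check the arithmetic, here is a small Python sketch (my own, not from the paper) that verifies all three answers:

```python
# Sanity-check the three CRT answers.

# (1) Bat and ball: ball + (ball + 1.00) = 1.10, so ball = 0.10 / 2.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
assert abs((ball + bat) - 1.10) < 1e-9
print(f"ball costs {ball * 100:.0f} cents")      # 5 cents

# (2) Widgets: 5 machines make 5 widgets in 5 minutes, so each machine
# makes one widget per 5 minutes (assuming they work independently).
rate = 5 / (5 * 5)                               # widgets per machine-minute
minutes = 100 / (100 * rate)
print(f"{minutes:.0f} minutes")                  # 5 minutes

# (3) Lily pads: the patch doubles daily and covers the lake on day 48.
# Walk forward from a starting patch of 2**-48 of the lake on day 0.
coverage, day = 2.0 ** -48, 0
while coverage < 0.5:
    coverage *= 2
    day += 1
print(f"half the lake on day {day}")             # 47 days
```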

I took a quick look at the paper, but I did not see how long subjects had to answer the questions. If it is 10 seconds or something like that, I wouldn’t be surprised if almost everyone got it wrong. Personally, on the first question the intuitive answer came to mind first, but I immediately recognized it was wrong. Sad to say, I had to take out a pen and paper and start the algebra before I was sure of the 5c answer. Obviously the process took longer than 3 seconds.

The problem in finding a remedy is the two distinct roles that trial judges play: resolving cases on motions (to dismiss or for summary judgment) and making in-trial decisions on evidence. I think the tendency to rely on intuitive answers comes more in the latter situation than the former. But how can that be remedied within the flow of trial?

I have been involved in the study of psychiatric classification and decision-making for a number of years. One of the issues that I think is missing here is the base rate problem and its relationship with decisions. Mental scripts (intuitions?) develop for dealing with “standard” situations. These are the most common, and intuitive reasoning is actually a very efficient way of dealing with cases like this. The mechanism breaks down when someone encounters a “non-standard” case and uses the same script to deal with it. Unfortunately, “non-standard” cases are rare, so experience with this phenomenon is limited for most practitioners, and, because of the low numbers, the kind of non-standard cases they have to deal with is highly idiosyncratic. This leads to further bias. So, in my opinion, the kind of reasoning to use is not black or white. The important point is to recognize non-standard cases and shift from relying on mental scripts toward a more “discovery-research-like” approach.

Thanks for these comments. On the first, while there may be other problems with the methodology, the potential timing concern that “anon” raises is not one of them. The paper suggests that the judges had as long as they needed to respond to the test (administered as part of a longer survey), and in any case had approximately fifteen minutes to answer the survey. The trial point is a great one, and as an Evidence prof, one that I’m very interested in. It would be interesting to investigate how many of these intuitive rulings are incorrect, and then propose remedies — shifting more heavily towards pre-trial motions on evidentiary questions, more training, etc. And on the final point, the authors do explain that intuition is generally used in “standard” or “easy” cases — e.g. the first question in the CRT looks “easy” so more people answer it intuitively and get it wrong, while the second looks a bit more complicated, so test-takers tend to deliberate more and get it right. The authors recognize that there’s no simple answer and even support intuitive decision-making in some situations. I hope you’ll take a look at the paper itself, as a short blog post is simply not enough space to detail all of their theories and arguments.

Also, note to fellow readers: your own ability to answer the questions is no clue about how easy or difficult they are — we were all cued to answer the questions carefully by the post subject, etc. So we all popped into system 2 thinking.

Were any such cues given to the judges? If so, it’s scary that they still scored so badly. Anyone who is in system 2 thinking should be able to nail all three without effort.

One thing that is not appreciated is how much law is a “right-brained” profession. For instance, I have a friend who had birth defects that effectively cut off his entire right brain. He was admitted into a very prestigious law school, but he reported that it was a real struggle because he fundamentally thought differently than his classmates.

I am weird because, where most people lean toward the left or right of their brains, I am actually right in the middle. For instance, I got all three of those questions right. As a lawyer, it helps me very often, because this is increasingly a technology-laden profession. For instance, being a left-brainer helps you run those searches in Westlaw or Lexis. I have a reputation for being able to find the case that no one else can find.

Btw, when it comes to logic, I highly recommend Edgar Allan Poe’s mystery stories, such as The Murders in the Rue Morgue, The Mystery of Marie Rogêt, and The Purloined Letter. I wouldn’t be shocked if they are free online somewhere. Each of these also reads like a treatise on logic. The interesting thing is that Poe argued that logic, once the domain of poets and the like, was becoming the domain of mathematicians, but to Poe these people were unqualified for the job. What he was getting at was the left-brained/right-brained divide. And, interestingly, while right-brained people are typically intuitive, the actual solution to the problem is also right-brained.

Btw, the second problem is dubious in this sense: it requires the reader to assume that each machine works independently. If they work collaboratively, then it would actually be impossible to determine the answer. I only guessed, in a right-brained sense, that you wanted us to make this incorrect assumption.

To illustrate, imagine if we changed the terms of the problem to exaggerate the obvious effect of collaboration to this:

> It takes 5 men 50 hours to make five cars. How long will it take 100 men to make 100 cars?

If we scaled naively, as in the intuitive answer to the widget problem, we would say 1,000 hours (under the independence assumption, the answer would be 50 hours, since each man builds one car in 50 hours). But is either really true? There are economies of scale at work here, so 5 guys working together would almost certainly put a car together faster than 5 guys working separately. But at the same time, 100 guys working on one car is incredibly inefficient. So, as my example illustrates, the correct answer is really “we don’t know.”
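To make the two readings explicit, here is a quick sketch (mine, just to spell out the assumptions the commenter is questioning) computing both answers for the cars variant:

```python
# Cars variant: 5 men take 50 hours to make 5 cars.
men0, hours0, cars0 = 5, 50, 5

# Naive proportional scaling: 20x the cars -> 20x the hours.
naive_hours = hours0 * (100 / cars0)
print(naive_hours)            # 1000.0

# Independent workers: each man builds one car in 50 hours,
# so 100 men build 100 cars in parallel.
rate_per_man = cars0 / (men0 * hours0)          # cars per man-hour
independent_hours = 100 / (100 * rate_per_man)
print(independent_hours)      # 50.0
```

Neither number accounts for collaboration effects, which is the commenter's point: the problem statement alone does not pin down the answer.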