Category: Psychology and Behavior


Reforming the Non-Medical IRB: A Shift from Preventing Harm to Doing Good

As some of you know (grandma), my area is law and mind sciences. To date, most of my scholarship has involved applying existing insights from social psychology, social cognition, and other fields to legal topics. However, over the last few months, I’ve been working on designing a set of experiments with a cognitive psychologist and, as a result, I have had a chance to engage with the institutional review board (IRB) process for the first time.

I must say that while the people running the IRBs at Drexel and Penn seem well-intentioned and nice enough, the process is utterly befuddling to me. As has been noted on this blog previously, more legal academics are doing work that is potentially covered by IRBs than ever before, and it is worth pausing to think about whether radical changes to the existing approach might be appropriate.

(I certainly do not purport to be the first person to advocate reform in this area or to have thought about it as much as others; my hope is that this post will provoke some readers to consider their experiences and whether they feel like the current IRB process is worth its costs.)

I’d like to focus on the non-medical IRB (covering social and behavioral research, ethnographic studies, etc.) and I’d like to propose eliminating review completely in this area. No more paperwork, no more calls, no more meetings. Instead, we would simply rely on professional norms to channel behavior and existing legal mechanisms to deter the most harmful conduct. (I will leave to the side, in this post, the sticky issue of university liability.)

Now, this doesn’t mean that everyone is off the hook. All of the money and energy that universities currently expend on the IRB process would simply be redirected. The idea is to use resources to directly improve people’s lives, rather than to try to avoid harms that may or may not arise. All of the time previously spent filling out paperwork, asking and answering questions on the phone, taking human-subjects tests, and filing updates, among other things, would now be spent actively participating in socially beneficial endeavors.

As a licensed attorney, what if I used every hour I would expend on IRB compliance volunteering at a legal aid clinic instead? Or what if I used that time to help high school students in North Philadelphia work on their college essays, or to remove trash from the Schuylkill River? What if all of the staff at the Office of Research Compliance spent their days finding and coordinating opportunities for professors to volunteer in the community? I would argue that the social good likely to result would considerably outweigh the potential costs of not subjecting non-medical experiments to formal review.

The truth is that the new regime would not be perfect—people would occasionally be harmed—but the magnitude of this threat might be less than imagined. When a person goes to design a psychology experiment there are many factors that act as constraints on the design: Do my colleagues approve of my proposal? Will members of my field look favorably on this experiment? Will resulting harms negatively impact my tenure review (remember that Stanley Milgram was denied tenure at Harvard)? Does this align with my sense of morality? Will my friends/parents/wife/children think less of me if someone is hurt on my watch? How does this experiment compare to other experiments that were conducted in the past and how did people react to those projects?

The IRB process is not the primary reason why the vast majority of non-medical experiments today do not pose major risks to human subjects. It seems to me that while the process prevents some harms, it does not prevent enough of them to justify its existence, and thinking about alternative uses of the resources currently dedicated to IRBs has the potential to leave us all better off.


In Support of Activist Officiating

Dave’s post earlier today on referees and judging (linking to a fascinating discussion of “whistleblower” bad-boy Tim Donaghy’s new book, Blowing the Whistle) has got me thinking.

While on a certain level, I’m outraged at the thought that refs do not follow the rules of the game with objectivity and dispassion, I’m not sure that I want officials to just call “balls” and “strikes.”

The reason that I never bought into the Chief Justice’s analogy of judging to umpiring is that sports, for me, are not just about fairness and a level playing field. They’re about fun and entertainment. I want to watch a good game and I don’t care if there is a little “tweak” here or there to ensure an enjoyable match for the spectators.

Although it is dangerous to admit in my new home of Philadelphia, I am a party to an abusive lifelong relationship with the Washington Redskins and Wizards (née Bullets). Hoping to break the cycle of repeated psychological mistreatment, a number of years back I also started following English Premier League soccer (I’m a Liverpool supporter, although I tend to watch whatever pops up on Fox Soccer Channel).

In EPL and other European soccer matches, one of the things that always irks me is when a ref sends off a player on the weaker team in the opening minutes. It really doesn’t matter to me that the official was following the letter of the law in giving the red card. When I sit down, my goal is 90 minutes of pleasure. Dismissing a key player in the fourth minute spoils the proceedings. (Of course, I’m only advocating “tweaking” here; I’m not asking a ref to turn a blind eye to a deliberate two-footed, studs-up challenge aimed at an opponent’s head.)

Yes, I might feel differently if I were a gambling man or if the Redskins returned to their glory days, but maybe not. I’ll always choose an exciting overtime game over a blowout, even if I’m on the right side of the rout.


Asteroidgate: The Rocket, Not the Asteroid, Packs the Punch

Eric Posner muses about Asteroidgate:

Suppose that astronomers around the world alerted us that a large asteroid is headed in our direction, and might collide with the earth in the year 2012.  The astronomers cannot give us a precise probability of collision because of many imponderables . . .  To build a defense system—say, rockets that would intercept the asteroid and knock it off course—would cost hundreds of billions of dollars . . . As is always the case, there are a few dissenters . . .   A scandal erupts when emails at the West Anglia Space Research Unit are released, and shows that some scientists tried to arrange a boycott of a journal that published a few articles of the skeptics.  At the same time, thousands of astronomers not connected with the West Anglia Unit continue to insist that the risk of a collision is very high . . . A few questions.  In this scenario, would there emerge an industry of non-credentialed “astronomy skeptics” in the press and public comparable to the current batch of “climate skeptics”?  My instinct is that the world would quickly get to work building the rocket system, and disregard the views of the skeptics.  Is this right or wrong?  If it is right, is there some reason to think that climate science and astronomy are different, justifying the skepticism about climate science that does not (yet) exist about astronomy?

This is a clever scenario, and it gives me a launching pad to talk about why climate-change skeptics and believers have reacted so differently to the same set of information: namely, the stolen East Anglia emails.

The Cultural Cognition Project has a perspective on this problem that may be helpful. Dan Kahan, Don Braman, Paul Slovic, John Gastil, and Geoffrey Cohen wrote a paper called The Second National Risk and Culture Study: Making Sense of – and Making Progress In – The American Culture War of Fact. Using a large, random, nationally representative sample, the paper confirms that Americans are deeply divided over basic questions about the climate, such as “how much risk does global warming pose for people in our society?” Those divisions track the cultural identities the project has often explored, which relate back to the pioneering group-grid theory of anthropologist Mary Douglas.

Of particular interest, Kahan et al. tested the hypothesis that individuals’ perceptions of the same facts about the severity of the problem turned on what policy solutions were recommended to deal with it. When the policy solution was nuclear power, hierarchical and individualist Americans were far less likely to discredit global warming facts than when the solution was an expanded set of anti-pollution measures. Such individuals find expanded anti-pollution policy threatening to their identities: it suggests restriction of market activities (upsetting to individualists) and it implicitly challenges the legitimacy of the ruling order (upsetting to hierarchs). Confronted with such a threat, individuals are less likely to credit information about increased risks of warming. Conversely, egalitarians and communitarians were more likely to see global warming as a severe threat when the solution was anti-pollution control.

What does such research teach us? Well, for one, it makes reactions to “climate-gate” easier to understand. We know that people are looking at the benefit/risk calculus in highly polarized ways. The East Anglia emails, which go to the weight of the evidence about warming, are yet more fodder in that filtered debate. This polarization is (notably) neither partisan nor conscious.

More importantly, the research suggests a very concrete strategy for those who worry about climate change and who want to see their position persuade unbelievers: you should be more attentive to finding politically congenial solutions, and spend less energy trying to use data to convince those you disagree with. Thus, former VP Gore’s approach, which focused on staking out a data-driven position on the scope of the problem, has at best produced a fragile coalition in support of change, which will be undermined quickly when individuals are presented with alternative data, information about imperfect scientists, or threatening policy solutions.

Rounding back to Eric’s post, the reason that Asteroidgate seems like a clear example where an organized opposition would not emerge is that neither the underlying disaster nor the policy solution poses a threat to the identities of large and discrete groups of Americans. Expensive rockets simply aren’t the bogeymen that private-property-destroying pollution controls are. The case would be different if the solution to our asteroid problem were to unequally burden a minority group. In that scenario, egalitarians and communitarians would be much less likely to credit the risks of a massive asteroid than would hierarchs and individualists.


Professional Responsibility Meets Facebook, Another Oops for the Bar

Every year, my small section reads a New Yorker “On the Town” squib called “Oops” to kick off a discussion on care and professional responsibility in the students’ legal careers. “Oops” tells the story of a summer associate who, in 2003, mistakenly sent the following email to lawyers with whom he worked on a deal: “I’m busy doing jack shit. Went to a nice 2hr sushi lunch today at Sushi Zen. Nice place. Spent the rest of the day typing e-mails and bullshitting with people.” The summer associate signed off the email: “So yeah, Corporate Love hasn’t worn off yet. But give me time.” The summer associate meant to send the email to his friend. Oops.

For a moment, let’s put aside the stark difference between the world (and law firm environment) facing the summer associates of 2003 and the one facing the summers of 2009, and turn to Sunday’s New York Times story “A Legal Battle: Online Attitude Vs. Rules of Bar.” The Times discussed recent cases in which lawyers did violence to their careers through their online activities. Lawyers blogged about judges: one wrote that he thought a named judge was an “Evil, Unfair, Witch” and questioned the judge’s competence. Another lawyer friended a judge on Facebook and later posted about his/her drinking and motorbiking. The problem: the lawyer asked the judge to delay a trial because of a death in the family in the same week that the lawyer shared the drinking tales with his/her social network. The lawyers in those cases have suffered serious consequences (the first is facing a reprimand from the bar; the second faced the wrath of his/her firm after the judge told the lawyer’s bosses what happened).

Now, the 2003 summer associate made a big mistake, but perhaps not on the same order as the lawyers covered in yesterday’s Times. The summer associate had a slip of the finger, perhaps, a hasty moment that changed the way those in his firm saw him. But the lawyers arguably dove into the pool of their fate head first: one might say that they knowingly risked their careers and should suffer the consequences (to the extent the Bar desires and the First Amendment permits). Social scientists like Alessandro Acquisti and danah boyd and legal scholars like James Grimmelmann offer an explanation for why people are so foolish online. People write carelessly not because they have “a reduced sense of privacy” but because they feel anonymous. As danah boyd explains, social network participants “live by ‘security through obscurity’ where they assume that as long as no one cares about them, no one will come knocking.” They operate under the norm that people with no social connection to them “could look at your profile, but shouldn’t.” They assume that only close friends are paying attention to their online activities. All of this is to say that perhaps President Obama shouldn’t just talk to young people about the perils of oversharing online. Maybe lawyers need the lesson too.



Bernie Madoff and the Unfortunate Consequences of Celebrity Bias

Celebrity is intoxicating. We have long been willing to play the fool to the rich and powerful, even if that means turning a blind eye to signs of trickery. In the late 1980s, a 37-year-old con artist convinced Duke University administrators and students that he hailed from the wealthy Rothschild family of France despite the fact that he spoke no French, drove a run-down car, and offered clipped-out magazine articles to show his family’s homes. During a two-year charade, the imposter borrowed (stole) thousands of dollars from Duke and joined a fraternity. (I was a Duke undergraduate at the time, but alas did not know him.) More recently, Christopher Chichester tricked many into believing that he was a Rockefeller despite his gauche manners and outrageous claims (e.g., that he owned “the key to Rockefeller Center”). As Clark Rockefeller, he gained admission to exclusive clubs and married a partner at McKinsey Consulting. Only after Mr. Chichester kidnapped his daughter from his ex-wife did the police discover his true identity and connection to unsolved murders.

Perhaps such celebrity bias played some role in the SEC’s bungling of the Bernie Madoff fiasco. On Thursday, the SEC Inspector General’s report explored why the agency missed so many “red flags” about Madoff dating back to 1992. The report discussed missed leads, bureaucratic snafus, and investigators’ inexperience. Investigators were far too credulous because they were simply awed by Madoff. One investigator described Madoff as “a wonderful storyteller” and a “captivating speaker.” As with the faux Rockefeller and Rothschild incidents, Madoff’s ruse worked for so long despite the clues of foul play perhaps because investigators and investors could not shake their sense of Madoff as a rich, powerful, and trusted financial guru. Madoff’s celebrity reputation anchored their thinking, permitting him to get away with his scheme for far longer than he should have. As Madoff’s victims’ stories attest, celebrity bias had profoundly destructive consequences.



Football and Judicial Politics

My colleague Joanna Shepherd and I are working on a project analyzing judicial voting on election law cases in state court. Although there is a sophisticated literature about judicial politics and political influences on judges, there is actually little quantitative work examining those influences in explicitly political cases, such as election contests, redistricting, and ballot access questions. Thinking generally about judicial politics for this project gives me a different perspective on the state court review of the NFL suspensions of two players from the Minnesota Vikings.

Last September, the NFL suspended Kevin Williams and Pat Williams of the Minnesota Vikings for four games each after they failed drug tests. The two star defensive tackles, who together comprise Minnesota’s “Williams Wall,” tested positive for bumetanide, a prescription diuretic banned under the NFL collective bargaining agreement as a masking agent for steroids. After exhausting the appeals process with the NFL, the two Williamses and the NFL Players Association challenged the suspensions in Minnesota state court.

Here’s the judicial politics angle: The Minnesota district court that heard the Williamses’ claims issued a temporary restraining order last December, immediately after the Williamses’ final internal appeals with the NFL were rejected. The TRO postponed any suspension until the end of the 2008 season, which kept both Williamses on the field and helped ensure Minnesota a playoff spot last year. The NFL removed the case to federal court, which then dismissed all but two state law claims and remanded those two claims to state court. This summer, on remand, the Minnesota district court issued another TRO, blocking the NFL from enforcing its suspensions of the Williamses until after the upcoming 2009 season. I don’t know enough about Minnesota labor law, the NFL collective bargaining agreement, or the relevant preemption issues to assess the state court TROs that helped both Williamses postpone their suspensions for almost two full seasons, but one commentator who considered these issues noted that even the issuing judge expressed doubts about the likelihood that the Williamses’ claims would prevail on the merits, and at least one Vikings blogger suspected a home-court advantage for the Williamses on their legal claims.

Of course, I have no real idea whether the Minnesota judge in this case was consciously or subconsciously affected by the possible political consequences of denying the TROs. I have little reason to doubt the integrity of this judge in particular, who I assume has nothing but the best intentions. But it might be reasonable to wonder whether a state judge in his position, who must run for re-election to keep his job, could be influenced by the prospect of hometown football fans unhappy that a judge has effectively sidelined their star players for a quarter of a season. My colleague Joanna Shepherd concludes from her research that state judges are routinely re-elected unless they do something controversial and attract negative publicity. Whether or not this particular judge was consciously affected by the possibility, there’s no doubt that denying the latest TRO and putting Kevin and Pat Williams on the sideline for the beginning of the season, right after the Vikings stirred up fan excitement by signing Brett Favre as their new quarterback, would’ve attracted lots of negative attention. If nothing else, this case offers fed courts professors a very salient example for discussing the risk of a home-court advantage in state court and a foreign defendant’s interest in removal to federal court.

Thinking along the same lines, Gregg Easterbrook, an astute NFL commentator (and brother of Frank), suggested that former NFL wide receiver Plaxico Burress might have fared better in his recent gun possession case, if he had rallied local football support to his side by re-signing with the New York Giants immediately before trial. As Easterbrook put it, “Had Burress remained a Giant, he would have had the most popular organization between Washington and Boston in his corner, and it’s simply human nature that prosecutors and judges might have looked sympathetically upon his case.” Instead, Burress received two years in prison for violating New York’s gun permit law. Football matters intensely to many people, which surely has political consequences. One study finds that public universities with Division I-A football programs receive about six percent more in state appropriations than public universities without football programs, and for those football universities, a victory over an in-state rival is correlated with an additional increase in appropriations the following year. Maybe football shouldn’t matter so much to courts and legislatures, but it seems that sometimes it really does.


Health Care Reform, Public Opinion, and Personal Experience as Information

James Surowiecki describes an interesting recent shift in public opinion about the health care system in the United States. Last year, polling found that only 29 percent of Americans rated the health care system as “good” or “excellent”; asked the same question today, 48 percent of the public gives that answer. Why the sudden increase given that, as Surowiecki notes, “[t]he American health-care system didn’t suddenly improve over the past eleven months”? Surowiecki attributes the rapid increase to the endowment effect. Now that health care reform is actively under consideration, people are focused on “what we might lose rather than on what we might get.” Psychologists have shown that when people face uncertainty about trading what they already have for something else, they tend to overvalue what they have and gravitate toward a natural instinct to keep things as they are.

The endowment effect is a plausible explanation for the suddenness of the shift in public opinion, but I have a different intuition than Surowiecki. Although I have not studied public opinion in the current debate over health care reform, I have done empirical research on public opinion during the health care reform debates of the early 1990s that could be relevant. Political scientists generally find that people do not normally draw inferences about national conditions directly from their own personal situations. For instance, people who are struggling financially do not assume that their personal situation indicates that the national economy is doing poorly overall. Just so, during the late 1980s and early 1990s, people who had undergone unpleasant experiences with their personal health care did not necessarily assume that the health care system was in bad shape. Their evaluations of the health care system as a whole did not vary from everyone else’s nearly as much as you might expect. However, when Democrats began championing health care reform during the early 1990s and arguing that there was an unaddressed crisis in American health care, people who had undergone negative experiences with their personal health care suddenly began to credit those negative experiences as a source of information for evaluating the system overall. Accordingly, compared to their fellow citizens, their overall views of the system changed very abruptly in a negative direction once political leaders substantiated the perceived reasonableness of that inference.

Although I cannot say definitively, it’s worth considering whether the abrupt shift in public opinion today that Surowiecki identifies is actually a mirror image of what happened during the early 1990s. Remember that, as I mentioned in an earlier post, the American public by and large report positive feelings about their personal health care today. Surowiecki, in fact, observes in the article that a clear majority of the public reports satisfaction with their insurance coverage, and public satisfaction with health care costs in particular has increased from the early 1990s into this decade. A year ago, Democratic supporters of reform probably had the edge in leading public perceptions about the system as a whole in a negative direction. But now, with Republican opponents of health care reform touting the virtues of the American health care system, people who are happy with their health care situation may be crediting their personal situation as a source of information about the system overall in a positive direction. The abrupt shift in public opinion may be less about the endowment effect than about a portion of the public suddenly drawing stronger connections between their good personal experiences with health care and their sociotropic evaluations of the system as a whole. Such inferences from personal experience could explain not only the direction of the shift in public opinion about the health care system, but also the speed with which it occurred.


More Moneyball

In my previous post, I argued that Michael Lewis’s influential bestseller Moneyball, widely cited in academia, ultimately relies too much on hyperbole to make its claims about the superiority of Billy Beane’s statistical methods in managing the Oakland Athletics baseball team. In this post, I assess the value of those Moneyball methods by examining the results of Oakland’s 2002 draft, a central event glamorized in the book as a showcase of Beane’s “scientific selection of amateur baseball players.”

This is a long, baseball-heavy post, so let me cut to the chase at the outset. Oakland’s Moneyball draft of 2002 was not the smash success that Lewis’s book forecast, and the seven years since the draft bear that out. Beane had seven first-round picks that year but drafted none of the eleven all-stars signed out of the 2002 draft, despite exercising almost twenty percent of all first-round choices in the draft. In fact, only half the players on Beane’s wish list of the top twenty players in the draft ended up playing even a game in the major leagues. To tell a compelling story of Beane’s superiority, Lewis overstates what Cass Sunstein and Richard Thaler call the “blunders and the confusions of those who run baseball teams.” It turns out that other teams do a pretty good job of identifying talent too. In the 2002 draft, Oakland did not draft the best players available, while other teams using traditional methods did equally well or better.

So, what is the lesson of Moneyball? Lewis describes how Beane imported quantitative methods from the academic and financial worlds to identify assets, in this case baseball players, undervalued by the market. The modest notion that statistical methods can be invaluable in the search for these undervalued assets, particularly in an industry as obsessively numbers-oriented as baseball, is unassailable. But Lewis’s claim that Moneyball demonstrates the outright superiority of Beane’s quantitative methods in identifying the best talent is much more difficult to sustain, and Moneyball does not convincingly establish the backwardness of other teams, which had already begun erasing whatever advantages Oakland possessed by the time of Moneyball’s publication.

Instead, Moneyball demonstrates a slightly but importantly different lesson about Oakland’s successes during the early 2000s.



Moneyball Revisited

Michael Lewis’s bestselling book Moneyball occupies a unique convergence of academic, sports, and popular fascination. Moneyball profiles Billy Beane and his management of the Oakland Athletics baseball team, with particular attention to Beane’s use of cutting-edge quantitative analysis in an industry portrayed as bound by tradition and decisionmaking by anecdote. Moneyball recently garnered attention again after the movie version, starring Brad Pitt, suddenly halted production just five days before shooting was to begin in July. The event, or nonevent, brought forth several commentaries on Moneyball’s legacy, six years after its publication. Today’s post begins to explain my ambivalence about Moneyball’s place in the academic imagination; my next post continues by arguing that, perhaps to the surprise of its academic enthusiasts, Moneyball actually gets a good chunk of its baseball wrong and, in the end, may tell a slightly different story than usually thought.

Baseball fans from outside academia would be shocked at how influential and popular the book Moneyball has been within academic circles. Cass Sunstein and Richard Thaler wrote a book review of Moneyball for the Michigan Law Review, and professors have cited Moneyball as inspiration for new approaches to everything from faculty hiring to election administration to health care reform. There’s even a Moneyball-inspired blawg called Moneylaw. The great contribution of Moneyball was to puncture a certain overconfidence in untested conventional wisdom based on unsystematic anecdotal information. Moneyball offered a colorful example from baseball, now widely cited in academia, of how inefficiencies in markets can be exploited by canny operators who identify objective metrics of value underappreciated by traditional practices. As Sunstein and Thaler note, “If Lewis is right about the blunders and the confusions of those who run baseball teams, then his tale has a lot to tell us about blunders and confusions in many other domains.”

The problem with Moneyball is the hyperbole deployed to construct Lewis’s lesson of absolute quantitative triumph. A key element of Moneyball’s influence is the vividness and persuasiveness of Lewis’s account of the Oakland Athletics’ success, but it is so vivid and persuasive at least in part because it exaggerates the brilliance of Billy Beane and his quantitative approach to baseball.



Nudging the Exam Takers

I’ve recently been reading Dan Ariely’s book, Predictably Irrational. One fascinating chapter is about the psychology of dishonesty. The experimenters gave two versions of a test: one graded by proctors, the other entirely self-graded, with a small monetary reward (10 cents per correct answer). They found evidence of cheating in the self-graded answers; no surprise there.

The experimenters gave the same tests to another group of students, but first made those students do a task designed to make them think about morality and honesty. One group of students had to write out as many of the Ten Commandments as they could remember. Another group was asked simply to sign the statement, “I understand that this study falls under the MIT honor system.”

These moral reminders had the effect of eliminating all (!) of the statistically significant cheating: with either one of those reminders in place, the students in the self-graded group had results that were statistically indistinguishable from those of the proctor-graded group.
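To make “statistically indistinguishable” a bit more concrete, here is a minimal sketch in Python, using made-up numbers rather than Ariely’s actual data, of how one might compare self-reported scores against proctor-graded scores with a two-sample t-test.

# Illustrative sketch only: hypothetical scores, not Ariely's data.
import numpy as np
from scipy import stats
rng = np.random.default_rng(0)
# Hypothetical number of problems reported solved (out of 20) per participant.
proctor_graded = rng.binomial(20, 0.35, size=100)      # externally verified scores
self_no_reminder = rng.binomial(20, 0.45, size=100)    # inflated self-reports
self_with_reminder = rng.binomial(20, 0.35, size=100)  # self-reports after a moral reminder
for label, group in [("no reminder", self_no_reminder), ("moral reminder", self_with_reminder)]:
    t, p = stats.ttest_ind(group, proctor_graded, equal_var=False)  # Welch's t-test
    print(f"self-graded ({label}) vs. proctor-graded: t = {t:.2f}, p = {p:.3f}")

In this toy setup, the inflated no-reminder scores should yield a very small p-value, while the reminder group, drawn from the same distribution as the proctor-graded baseline, will typically show no statistically significant difference, which is the pattern the chapter describes.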