Archive for the ‘Philosophy of Social Science’ Category
posted by Frank Pasquale
Brett Frischmann’s book is a summa of infrastructural theory. Its tone and content approach the catechetical, patiently instructing the reader in each dimension and application of his work. It applies classic economic theory of transport networks and environmental resources to information age dilemmas. It thus takes its place among the liberal “big idea” books of today’s leading Internet scholars (including Benkler’s Wealth of Networks, van Schewick’s Internet Architecture and Innovation, Wu’s Master Switch, Zittrain’s Future of the Internet, and Lessig’s Code). So careful is its drafting, and so myriad its qualifications and nuances, that it is likely consistent with 95% of the policies (and perhaps theories) endorsed in those compelling books. And yet the US almost certainly won’t make the necessary investments in roads, basic research, and other general-purpose inputs that Frischmann promotes. Why is that?
Lawrence Lessig’s career suggests an answer. He presciently “re-marked” on Frischmann’s project in a Minnesota Law Review article. But after a decade at the cutting edge of Internet law, Lessig switched direction entirely. He committed himself to cleaning up the Augean stables of influence on Capitol Hill. He knew that even the best academic research would have no practical impact in a corrupted political sphere.
Were Lessig to succeed, I have little doubt that the political system would be more open to ideas like Frischmann’s. Consider, for instance, the moral imperative and economic good sense of public investment in an era of insufficient aggregate demand and near-record-low interest rates:
The cost of borrowing to fund infrastructure projects, [as Economic Policy Institute analyst Ethan Pollack] points out, has hit record “low levels.” And the private construction companies that do infrastructure work remain desperate for contracts. They’re asking for less to do infrastructure work. “In other words,” says Pollack, “we’re getting much more bang for our buck than we usually do.”
And if we spend those bucks on infrastructure, we would also be creating badly needed jobs that could help juice up the economy. Notes Pollack: “This isn’t win-win, this is win-win-win-win.” Yet our political system seems totally incapable of seizing this “win-win-win-win” moment. What explains this incapacity? Center for American Progress analysts David Madland and Nick Bunker see inequality as the prime culprit.
April 26, 2012 at 8:17 am Posted in: Economic Analysis of Law, Infrastructure Symposium, Innovation, Law and Inequality, Philosophy of Social Science, Political Economy, Politics, Symposium (Infrastructure), Technology Print This Post 2 Comments
posted by Deven Desai
Andrew Morin and six others have argued for open access to the source code behind scientific publishing, so that the work can be tested and live up to the promise of the scientific method. At least, I think that is the claim. Ah, irony: the piece is in Science and behind, oh yes, a paywall! As Morin says in Scientific American:
“Far too many pieces of code critical to the reproduction, peer-review and extension of scientific results never see the light of day,” said Andrew Morin, a postdoctoral fellow in the structural biology research and computing lab at Harvard University. “As computing becomes an ever larger and more important part of research in every field of science, access to the source code used to generate scientific results is going to become more and more critical.”
If the essay were available, we might assess it better too.
Victoria Stodden is assistant professor of Statistics at Columbia University and serves as a member of the National Science Foundation’s Advisory Committee on Cyberinfrastructure (ACCI), and on Columbia University’s Senate Information Technologies Committee. She is one of the creators of SparseLab, a collaborative platform for reproducible computational research, and has developed an award-winning licensing structure to facilitate open and reproducible computational research, called the Reproducible Research Standard. She is currently working on the NSF-funded project: “Policy Design for Reproducibility and Data Sharing in Computational Science.”
Victoria is serving on the National Academies of Science committee on “Responsible Science: Ensuring the Integrity of the Research Process” and the American Statistical Association’s “Committee on Privacy and Confidentiality” (2013).
In other words, if you are interested in this area, you may want to contact Victoria as well as Mr. Morin.
posted by Frank Pasquale
Paul A. Lombardo published an essay “Legal Archaeology: Recovering the Stories behind the Cases” in the Fall 2008 issue of the Journal of Law, Medicine, and Ethics. It reminded me of the wonderful chapters in this volume of “health law stories.” Here are some excerpts that may be of interest:
Every lawsuit is a potential drama: a story of conflict, often with victims and villains, leading to justice done or denied. Yet a great deal, if not all, that we learn about the most noteworthy of lawsuits — the truly great cases — comes from reading the opinion of an appellate court, written by a judge who never saw the parties of the case, who worked at a time and a place far removed from the events that gave rise to litigation.
Rarely do we admit that the official factual account contained in an appellate opinion may have only the most tenuous relationship to the events that actually led the parties to court. The complex stories — turning on small facts, seemingly trivial circumstances, and inter-contingent events — fade away as the “case” takes on a life of its own as it leaves the court of appeals.
How can a law professor correct this bias? Here are some of Lombardo’s suggestions:
posted by Biella Coleman
Inspired by Orin Kerr’s question (“is your work focused on the internal narratives and ideologies that people use to describe/justify what they do, or is it focused externally on the actual conduct of what people do?”) below I will give a sense of how I walk the line between what we might call idealism and practice among the geeks and hackers I study.
One of the toughest parts about working with the type of technologists I focus on—intelligent, opinionated, online a lot of the time—is that many will unabashedly dissect my every word, statement, and media appearance. This attribute of my research, unsurprisingly, has been the source of considerable anxiety, only made worse in recent times with Anonymous, as I have to make “authoritative” statements about them in the midst of studying them, in other words, in the midst of having incomplete information.
All of this is to say I am deliberate and diplomatic when it comes to word choice, framing, and arguments. But most of the time, examining practice in light of or up against idealism does not take the somewhat noxious form of “exposing” secrets, the implication being that people are so mystified and deluded that you, the outsider, are there to inform the world of what is really going on. (There is a long-standing tradition in the humanities and social sciences, loosely inspired by Karl Marx and especially Pierre Bourdieu, that takes this stance; it is not my favorite strain of analysis unless done sparingly and very well.)
Much of what I do is to unearth those dynamics which may not be natively theorized but are certainly in operation. Take for instance the following example at the nexus of law and politics: during fieldwork it was patently clear that many free software hackers were wholly uninterested in politics outside of software freedom and those aligned with open source explicitly disavowed even this narrowly defined political agenda. Many were also repelled by the law (as one developer put it, “writing an algorithm in legalese should be punished with death…. a horrible one, by preference”) and yet weeks into research it was obvious that many developers are nimble legal thinkers, which helps explain how they have built, in a relatively short time period, a robust alternative body of legal theory and laws. One reason for this facility is that the skills, mental dispositions, and forms of reasoning necessary to read and analyze a formal, rule-based system like the law parallel the operations necessary to code software. Both are logic-oriented, internally consistent textual practices that require great attention to detail. Small mistakes in both law and software—a missing comma in a contract or a missing semicolon in code—can jeopardize the integrity of the system and compromise the intention of the author of the text. Both lawyers and programmers develop mental habits for making, reading, and parsing what are primarily utilitarian texts and this makes a lot of free software hackers, who already must pay attention to the law in light of free software licenses, adept legal thinkers, although of course this does not necessarily mean they would make good lawyers.
posted by Amanda Pustilnik
By unanimous reader demand – all one out of one readers voting, as of last week – this post will explore the small topic of the biological basis of “irrationality,” and its implications for law. Specifically, Ben Daniels of Collective Conscious asked the fascinating question: “What neuro-mechanisms enforce irrational behavior in a rational animal?”
Ben’s question suggests that ostensibly rational human beings often act in irrational ways. To prove his point, I’m actually going to address his enormous question within a blog post. I hope you judge the effort valiant, if not complete.
The post will offer two perspectives on whether, as the question asks, we could be more rational than we are if certain “neuro-mechanisms” did not function to impair rationality. The first view is that greater rationality might be possible – but might not confer greater benefits. I call this the “anti-Vulcan hypothesis”: While our affective capacities might suppress some of our computational power, they are precisely what make us both less than perfectly rational and gloriously human – Captain Kirk, rather than Mr. Spock. A second, related perspective offered by the field of cultural cognition suggests that developmentally-acquired, neurally-ingrained cultural schemas cause people to evaluate new information not abstractly on its merits but in ways that conform to the norms of their social group. In what I call the “sheep hypothesis,” cultural cognition theory suggests that our rational faculties often serve merely to rationalize facts in ways that fit our group-typical biases. Yet, whether we are Kirk or Flossie, the implication for law may be the same: Understanding how affect and rationality interact can allow legal decision-makers to modify legal institutions to favor the relevant ability, modify legal regimes to account for predictable limitations on rationality, and communicate in ways that privilege social affiliations and affective cues as much as factual information.
First, a slight cavil with the question: The question suggests that people are “rational animal[s]” but that certain neurological mechanisms suppress rationality – as if our powerful rational engines were somehow constrained by neural cruise-control. Latent in that question are a factual assumption about how the brain works (more on that later) and a normative inclination to see irrationality as a problem to which rationality is the solution. Yet, much recent work on the central role of affect in decision-making suggests that, often, the converse may be true. (Among many others, see Jonathan Haidt and Josh Greene; these links will open PDF articles in a new window.) Rationality divorced from affect arguably may not even be possible for humans, much less desirable. Indeed, the whole idea of “pure reason” as either a fact or a goal is taking a beating at the hands of researchers in behavioral economics, cognitive neuroscience, and experimental philosophy – and perhaps other fields as well.
Also, since “rational” can mean a lot of things, I’m going to define it as the ability to calculate which behavior under particular circumstances will yield the greatest short-term utility to the actor. By this measure, people do irrational things all the time: we discount the future unduly, preferring a dollar today to ten dollars next month; we comically misjudge risk, shying away from the safest form of transportation (flying) in favor of the most dangerous (driving); we punish excessively; and the list goes on.
Despite these persistent and universal defects in rationality, experimental data indicate that our brains have the capacity to be more rational than our behaviors would suggest. Apparently, certain strong affective responses interfere with activity in particular regions of the prefrontal cortex (PFC); these areas of the PFC are associated with rationality tasks like sequencing, comparing, and computing. In experiments in which researchers used powerful magnets to temporarily “knock out” activity in limbic (affective) brain regions, otherwise typical subjects displayed savant-like abilities in spatial, visual, and computational skills. This experimental result mimics what anecdotally has been reported in people who display savant abilities following brain injury or disease, and in people with autism spectrum disorders, who may have severe social and affective impairments yet also be savants.
So: Some evidence suggests the human brain may have massively more computing power than we can put to use because of general (and sometimes acute) affective interference. It may be that social and emotional processing suck up all the bandwidth; or, prosocial faculties may suppress activity in computational regions. Further, the rational cognition we can access can be totally swamped out by sudden and strong affect. With a nod to Martha Nussbaum, we might call this the “fragility of rationality.”
This fragility may be more boon than bane: Rationality may be fragile because, in many situations, leading with affect might confer a survival advantage. Simple heuristics and “gut” feelings, which are “fast and cheap,” let us respond quickly to complex and potentially dangerous situations. Another evolutionary argument is that all-important social relationships can be disrupted by rational utility-maximizing behaviors – whether you call them free-riders or defectors. To prevent humans from mucking up the enormous survival-enhancing benefits of community, selection would favor prosocial neuroendocrine mechanisms that suppress an individual’s desire to maximize short-term utility. What’s appealing about this argument is that – if true – it means that that which enables us to be human is precisely that which makes us not purely rational. This “anti-Vulcan” hypothesis is very much the thrust of the work by Antonio Damasio (and here), Dan Ariely, and Paul Zak, among many other notable scholars.
An arguably darker view of the relationship between prosociality and rationality comes from cultural cognition theory. While evolutionary psychology and behavioral economics suggest that people have cognitive quirks as to certain kinds of mental tasks, cultural cognition suggests that people’s major beliefs about the state of the world – the issues that self-governance and democracy depend upon – are largely impervious to rationality. In place of rationality, people quite unconsciously “conform their beliefs about disputed matters of fact … to values that define their cultural identities.”
On this view, people aren’t just bad at understanding risk and temporal discounting, among other things, because our prosocial adaptations suppress rationality. Rather, from global warming to gun control, people unconsciously align their assessments of issues to conform to the beliefs and values of their social group. Rationality operates, if at all, post hoc: It allows people to construct rationalizations for relatively fact-independent but socially conforming conclusions. (Note that different cultural groups assign different values to rational forms of thought and inquiry. In a group that highly prizes those activities, pursuing rationally-informed questioning might itself be culturally conforming. Children of academics and knowledge-workers: I’m looking at you.)
This reflexive conformity is not a deliberate choice; it’s quite automatic, feels wholly natural, and is resilient against narrowly rational challenges based in facts and data. And that this cognitive mode inheres in us makes a certain kind of sense: Most people face far greater immediate danger from defying their social group than from global warming or gun control policy. The person who strongly or regularly conflicts with their group becomes a sort of socially stateless person, the exiled persona non grata.
To descend from Olympus to the village: What could this mean for law? Whether we take the heuristics and biases approach emerging from behavioral economics and evolutionary psychology or the cultural cognition approach emerging from that field, the social and emotional nature of situated cognition cannot be ignored. I’ll briefly highlight two strategies for “rationalizing” aspects of the legal system to account for our affectively-influenced rationality – one addressing the design of legal institutions and the other addressing how legal and political decisions are communicated to the public.
Oliver Goodenough suggests that research on rational-affective mutual interference should inform how legal institutions are designed. Legal institutions may be anywhere on a continuum from physical to metaphorical, from court buildings to court systems, to the structure and concept of the jury, to professional norms and conventions. The structures of legal institutions influence how people within them engage in decision-making; certain institutional features may prompt people to bring to bear their more emotive (empathic), social-cognitive (“sheep”), or purely rational (“Vulcan”) capacities.
Goodenough does not claim that more rationality is always better; in some legal contexts, we might collectively value affect – empathy, mercy. In another, we might value cultural cognition – as when, for example, a jury in a criminal case must determine whether a defendant’s response to alleged provocation falls within the norms of the community. And in still other contexts, we might value narrow rationality above all. Understanding the triggers for our various cognitive modes could help address long-standing legal dilemmas. Jon Hanson’s work on the highly situated and situational nature of decision-making suggests that the physical and social contexts in which deliberation takes place may be crucial to the answers at which we arrive.
Cultural cognition may offer strategies for communicating with the public about important issues. The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it. If the primary question a person asks herself is, “What would my community think of this thing?” instead of “What is this thing?”, then very different communication strategies follow: Facts and information about the thing itself only become meaningful when embedded in information about the thing’s relevance to peoples’ communities. The cultural cognition project has developed specific recommendations for communication around lawmaking involving gun rights, the death penalty, climate change, and other ostensibly fact-bound but intensely polarizing topics.
To wrap this up by going back to the question: Ben, short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions, we are not going to unleash our latent (narrowly) rational powers. But it’s worth recalling that the historical, and now unpalatable, term for natural savants was “idiot-savant”: This phrase itself suggests that, without robust affective and social intelligence – which may make us “irrational” – we’re not very smart at all.
October 16, 2011 at 2:25 am Tags: cultural cognition, emotion & cognition, irrationality, law & neuroscience, rationality Posted in: Behavioral Law and Economics, Law and Psychology, Legal Theory, Philosophy of Social Science, Uncategorized Print This Post 11 Comments
posted by Frank Pasquale
It’s becoming clearer that classic Keynesian stimulus—ranging from Obama’s minimalist jobs program to the robust visions of a Krugman or DeLong—won’t be enough to get us out of the Great Recession/Lesser Depression. The exhaustion of conventional macroeconomic thought (chronicled in outlets like the Real World Economics Review) has cleared some space for more imaginative thinkers. As John Kay observes:
Economics is not a technique in search of problems but a set of problems in need of solution. Such problems are varied and the solutions will inevitably be eclectic. Such pragmatic thinking requires not just deductive logic but an understanding of the processes of belief formation, of anthropology, psychology and organisational behaviour, and meticulous observation of what people, businesses and governments do.
In this post, I want to briefly highlight Bernard Harcourt’s work in crossing disciplinary boundaries to engage in the synthesis necessary to truly understand our plight.
posted by Frank Pasquale
The US faced two great crises during the first decade of the 21st century: the attacks of September 2001, and the meltdown of its financial system in September 2008. In the case of 9/11, the country reluctantly concluded that it had made a category mistake about the threat posed by terrorism. The US had relied on cooperation among the Federal Aviation Administration, local law enforcement, and airlines to prevent hijacking. Assuming that, at most, a hijacked or bombed airplane would kill the passengers aboard the plane, government officials believed that national, local, and private authorities had adequate incentives to invest in an optimal level of deterrence. Until the attack occurred, no high official had deeply considered and acted on the possibility that an airplane itself could be weaponized, leading to the deaths of thousands of civilians.
After the attack, a new Department of Homeland Security took the lead in protecting the American people from internal threats, while existing intelligence agencies refocused their operations to better monitor internal threats to domestic order. The government massively upgraded its surveillance capabilities in the search for terrorists. DHS collaborated with local law enforcement officials and private critical infrastructure providers. Federal agencies, including DHS, now gather information in conjunction with state and local law enforcement officials in what Congress has deemed the “Information Sharing Environment” (ISE), held together by information “fusion centers” and other hubs. My co-blogger Danielle Citron and I wrote about some of the consequences in an article that recently appeared in the Hastings Law Journal:
In a speech at the Washington National Cathedral three days after 9/11, then-President George W. Bush proclaimed that America’s “responsibility to history is already clear[:] . . . [to] rid the world of evil.” For the next seven years, the Bush administration tried many innovations to keep that promise, ranging from preemptive war in Iraq to . . . changes in law enforcement and domestic intelligence . . . Fusion centers are a lasting legacy of the Administration’s aspiration to “eradicate evil,” a great leap forward in both technical capacity and institutional coordination. Their goal is to eliminate both the cancer of terror and lesser diseases of the body politic.
September 12, 2011 at 2:59 pm Posted in: Current Events, Cyberlaw, Philosophy of Social Science, Politics, Privacy, Privacy (Law Enforcement), Privacy (National Security), Sociology of Law Print This Post 9 Comments
posted by Olivier Sylvain
Like Professor Zick, I am grateful for the invitation to share my view of the world with Concurring Opinions. I’d like to pick up where his post on strange expressive acts left off and, along the way, perhaps answer his question.
Flash mobs have been eliciting wide-eyed excitement for the better part of the past decade now. They were playful and glaringly pointless in their earliest manifestations; mobbers back then were content with the performance art of the thing. Early proponents, at the same time, breathlessly lauded the flash mob “movement.”
Today, the flash mob has matured into something much more complex than these early proponents prophesied. For one, flash mobs now involve unsupported and disaffected young people of color in cities on the one hand and, on the other, anxious and unprepared law enforcement officials. A fateful mix.
In North London in early August, mobile online social networking and messaging probably helped outrage over the police shooting of a young black man morph into misanthropic madness. Race-inflected flash mob mischief hit the U.S. this summer, too. Most major metropolitan newspapers and cable news channels this summer have run stories about young black people across the country using their idle time and fleet thumbs to organize shoplifting, beatings, and general indiscipline. This is not the first time the U.S. has seen the flash mob or something like it. (Remember the 2000 recount in Florida?) But the demographic and commercial politics of these events in particular ought to raise eyebrows.
Read the rest of this post »
September 5, 2011 at 11:52 pm Posted in: Constitutional Law, Culture, Current Events, First Amendment, Media Law, Philosophy of Social Science, Politics, Race, Social Network Websites, Sociology of Law, Technology, Web 2.0 Print This Post 8 Comments
posted by Dave Hoffman
Among its many other vices, does legal education teach you to argue less persuasively and in a way that unsettles civil society? That accusation is implicit in Dan Kahan’s magisterial new HLR Foreword, Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law. In Some Problems, Kahan considers the Supreme Court’s perceived legitimacy deficit when it resolves high-stakes cases. Rejecting the common criticism that focuses on the ideal of neutrality, Kahan argues that the Court’s failure is one of communication. The issues that the Court considers are hard, and they often turn on disputed policy judgments. But the Justices resort to language which is untempered by doubt, and which advances empirical support that is said to be conclusive. Like scientists’ claims, judges’ empirical messages are read by elites, and thus understood through polarizing filters. As a result, Justices on the other sides of these fights quickly seek to undermine these purported empirical foundations, as Justice Scalia argued last term in Plata:
“[It] is impossible for judges to make “factual findings” without inserting their own policy judgments, when the factual findings are policy judgments. What occurred here is no more judicial factfinding in the ordinary sense than would be the factual findings that deficit spending will not lower the unemployment rate, or that the continued occupation of Iraq will decrease the risk of terrorism.”
Kahan resists Scalia’s cynicism — and says that in fact Scalia is making the problem worse. Overconfident display encourages people to take polarized views of law, to distrust the good faith of the Court and of legal institutions, and to experience the malady of cognitive illiberalism. Kahan concludes that courts ought to show doubt & humility – aporia – when deciding cases, so as to signal to the other justices & the public that the losing side has been heard. Such a commitment to humble rhetoric would strengthen the idea of neutrality, which currently is attacked by all comers. Moreover, there is evidence that these sorts of on-the-one-hand/on-the-other-hand arguments do work. As Dan Simon and co-authors have found, people will generally accept as legitimate those arguments whose outcomes they find congenial. But when they dislike outcomes, people are better persuaded by arguments that are explicitly two-sided: that is, the very muscular rhetoric typical of SCOTUS decisions is likely to be seen, by those who disagree with the Court’s outcomes, as particularly unpersuasive, illegitimate, and biased.
I love this paper — it’s an outgrowth of the cultural cognition project, and it lays the groundwork for some really neat experiments. So the point of the post is partly to encourage you to go read it. But I wanted to try as well to connect this line of research to the recent “debate” about Law Schools.
posted by Frank Pasquale
Marcia Angell has kicked off another set of controversies for the pharmaceutical sector in two recent review essays in the New York Review of Books. She favorably reviews meta-research that calls into question the effectiveness of many antidepressant drugs:
Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999—Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. . . .Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.
Angell discusses other research that indicates that placebos can often be nearly as effective as drugs for conditions like depression. Psychiatrist Peter Kramer, a long-time advocate of anti-depressant therapy, responded to her last Sunday. He admits that “placebo responses . . . have been steadily on the rise” in FDA data; “in some studies, 40 percent of subjects not receiving medication get better.” But he believes that is only because the studies focus on the mildly depressed:
The problem is so big that entrepreneurs have founded businesses promising to identify genuinely ill research subjects. The companies use video links to screen patients at central locations where (contrary to the practice at centers where trials are run) reviewers have no incentives for enrolling subjects. In early comparisons, off-site raters rejected about 40 percent of subjects who had been accepted locally — on the ground that those subjects did not have severe enough symptoms to qualify for treatment. If this result is typical, many subjects labeled mildly depressed in the F.D.A. data don’t have depression and might well respond to placebos as readily as to antidepressants.
Yves Smith finds Kramer’s response unconvincing:
The research is clear: the efficacy of antidepressants is (contrary to what [Kramer's] article suggests) lower than most drugs (70% is a typical efficacy rate; for antidepressants, it’s about 50%. The placebo rate is 20% to 30% for antidepressants). And since most antidepressants produce side effects, patients in trials can often guess successfully as to whether they are getting real drugs. If a placebo is chosen that produces a symptom, say dry mouth, the efficacy of antidepressants v. placebos is almost indistinguishable. The argument made in [Kramer's] article to try to deal with this inconvenient fact, that many of the people chosen for clinical trials really weren’t depressed (thus contending that the placebo effect was simply bad sampling) is utter[ly wrong]. You’d see the mildly/short-term depressed people getting both placebos and real drugs. You would therefore expect to see the efficacy rate of both the placebo and the real drug boosted by the inclusion of people who just happened to get better anyhow.
Felix Salmon also challenges Kramer’s logic:
[Kramer's view is that] lots of people were diagnosed with depression and put onto a trial of antidepressant drugs, even when they were perfectly healthy. Which sounds very much like the kind of thing that Angell is complaining about: the way in which, for instance, the number of children so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) was 35 times higher in 2007 than it was in 1987. And it’s getting worse: the editors of DSM-V, to be published in 2013, have written that “in primary care settings, approximately 30 percent to 50 percent of patients have prominent mental health symptoms or identifiable mental disorders, which have significant adverse consequences if left untreated.”
Those who would defend psychopharmacology, then, seem to want to have their cake and eat it: on the one hand it seems that serious mental health disorders have reached pandemic proportions, but on the other hand we’re told that a lot of people diagnosed with those disorders never really had them in the first place.
That is a very challenging point for the industry to consider as it responds to concerns like Angell's. The diagnosis of mental illness will always have ineradicable economic dimensions and politically contestable aims. But doctors and researchers should insulate professional expertise and the interpretation of maladies as much as possible from inappropriate pressures.
How can they maintain that kind of independent clinical judgment? I think one key is to ensure that data from all trials are open to all researchers. Consider, for instance, these findings from a NEJM study on "selective publication":
We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. . . . Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall. (emphasis added).
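The arithmetic behind those headline percentages is worth spelling out. The sketch below simply recomputes them from the study counts quoted in the passage (following the passage's own accounting, which groups the 3 "exceptions" with the published studies):

```python
# Study counts from the NEJM review of 74 FDA-registered antidepressant trials.
positive_published     = 37  # FDA-positive, published
positive_unpublished   = 1   # FDA-positive, never published
negative_unpublished   = 22  # FDA-negative/questionable, never published
negative_spun_positive = 11  # published in a way conveying a positive outcome
negative_published     = 3   # the "exceptions": published as negative

total = (positive_published + positive_unpublished +
         negative_unpublished + negative_spun_positive + negative_published)

published = positive_published + negative_spun_positive + negative_published
apparent_positive_rate = (positive_published + negative_spun_positive) / published
fda_positive_rate = (positive_published + positive_unpublished) / total

print(f"{total} trials registered, {published} published")
print(f"Literature: {apparent_positive_rate:.0%} positive")
print(f"FDA data:   {fda_positive_rate:.0%} positive")
```

A reader consulting only the journals sees 48 of 51 published trials as positive (94%); the full FDA record shows 38 of 74 (51%). Nothing in the published articles themselves signals the gap, which is exactly why open trial registries matter.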
Melander, et al. also worried (in 2003) that, since “The degree of multiple publication, selective publication, and selective reporting differed between products,” “any attempt to recommend a specific selective serotonin reuptake inhibitor from the publicly available data only is likely to be based on biased evidence.” Without clearer “best practices” for data publication, clinical judgment may be impaired.
Full disclosure of study funding should also be mandatory and conspicuous, wherever results are published. Ernest R. House has reported that, “In a study of 370 ‘randomized’ drug trials, studies recommended the experimental drug as the ‘treatment of choice’ in 51% of trials sponsored by for-profit organizations compared to 16% sponsored by nonprofits.” The commodification of research has made it too easy to manipulate results, as Bartlett & Steele have argued:
One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval. There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called “rescue countries.” Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections. Ketek was developed in the 1990s by Aventis Pharmaceuticals, now Sanofi-Aventis. In 2004 . . . the F.D.A. certified Ketek as safe and effective. The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey.
The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data. . . . As the months ticked by, and the number of people taking the drug climbed steadily, the F.D.A. began to get reports of adverse reactions, including serious liver damage that sometimes led to death. . . . [C]ritics were especially concerned about an ongoing trial in which 4,000 infants and children, some as young as six months, were recruited in more than a dozen countries for an experiment to assess Ketek’s effectiveness in treating ear infections and tonsillitis. The trial had been sanctioned over the objections of the F.D.A.’s own reviewers. . . . In 2006, after inquiries from Congress, the F.D.A. asked Sanofi-Aventis to halt the trial. Less than a year later, one day before the start of a congressional hearing on the F.D.A.’s approval of the drug, the agency suddenly slapped a so-called black-box warning on the label of Ketek, restricting its use. (A black-box warning is the most serious step the F.D.A. can take short of removing a drug from the market.) By then the F.D.A. had received 93 reports of severe adverse reactions to Ketek, resulting in 12 deaths.
The great anti-depressant debate is part of a much larger “re-think” of the validity of data. Medical claims can spread virally without much evidence. According to a notable meta-researcher, “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.” The “decline effect” dogs science generally. Statisticians are also debunking ballyhooed efforts to target cancer treatments.
Max Weber once said that “radical doubt is the father of knowledge.” Perhaps DSM-VI will include a diagnosis for such debilitating skepticism. But I think there’s much to be learned from an insistence that true science is open, inspectable, and replicable. Harvard’s program on “Digital Scholarship” and the Yale Roundtable on Data and Code Sharing* have taken up this cause, as has the work of Victoria Stodden.
We often hear that the academic sector has to become more “corporate” if it is to survive and thrive. At least when it comes to health data, the reverse is true: corporations must become much more open about the sources and limits of the studies they conduct. We can’t resolve the “great anti-depressant debate,” or prevent future questioning of pharma’s bona fides, without such commitments.
*In the spirit of full disclosure: I did participate in this roundtable.
X-Posted: Health Law Profs Blog.
posted by Frank Pasquale
Daniel Altman’s book Outrageous Fortunes is consistently smart, engaging, and counterintuitive. Ambitious in scope, it discusses several important forces shaping the global economy over the next few decades.
Very long-term thinking has two characteristic pitfalls. As the Village's deficit obsession shows, panic over distant threats can sometimes derail attention from much more pressing ones. There's also little accountability for long-term prognosticators. A lot can happen between now and 2030, and as Philip Tetlock has shown, media and academic elites rarely lose visibility or credibility in the wake of even grotesquely wrong predictions. The futuristic novel can be a much safer place to conjure up ensuing decades.
But unlike speculative fiction, or the slightly less speculative macro-predictive fare of a “Megatrends” or “Bold New World,” Altman’s book is grounded in a deep engagement with current economic dilemmas. His analysis works on two levels. First, for a self-interested investor, it’s good to be aware of the long-run influences on productivity and power that Altman outlines. For example, his discussion of the new colonialism demonstrates both the short-term profits and long-term risks that arise when countries like China and Saudi Arabia start buying rights to agricultural land and other resources in poorer places. He also challenges conventional wisdom on disintermediation, making a compelling case that certain middlemen and arbitrageurs can only gain from market integration.
Outrageous Fortunes also succeeds as a work for wonks, taking its place in the often noble genre dubbed by David Brin the self-preventing prophecy. As Altman puts it, “a frequent goal of prediction is to alter the future – to warn of impending danger so that it can be avoided.” The book describes many impending dangers, including increasing inequality driven by global warming, accelerating brain drains, and an enormous financial black market that is developing outside of traditional financial centers. Altman’s description of that black market is particularly acute, and worth discussing in some detail.
Read the rest of this post »
posted by Frank Pasquale
Google has been in the news a lot this past month. Concerned about the quality of its search results, it is imposing new penalties on "content farms" and certain firms, including JC Penney and Overstock.com. Accusations are flying fast and furious; the "antichrist of Silicon Valley" has flatly told the Googlers to "stop cheating."
As the debate heats up and accelerates in internet time, it's a pleasure to turn to Siva Vaidhyanathan's The Googlization of Everything, a carefully considered take on the company composed over the past five years. After this week is over, no one is really going to care whether Google properly punished JC Penney for scheming its way to the top non-paid search slot for "grommet top curtains." But our culture will be influenced in ways large and small by Google's years of dominance, whatever comes next. I don't have time to write a full review now, but I do want to highlight some key concepts in Googlization, since they will have lasting relevance for studies of technology, law, and media.
Dan Solove helped shift the privacy conversation from "Orwell to Kafka" in a number of works over the past decade. Other scholars of surveillance have first used, and then criticized, the concept of the "Panopticon" as a master metaphor for the conformity-inducing pressures of ubiquitous monitoring. Vaidhyanathan observes that monitoring is now so ubiquitous that most people have given up trying to conform. As he puts it,
[T]he forces at work in Europe, North America, and much of the rest of the world are the opposite of a Panopticon: they involve not the subjection of the individual to the gaze of a single, centralized authority, but the surveillance of the individual, potentially by all, always by many. We have a “cryptopticon” (for lack of a better word). Unlike Bentham’s prisoners, we don’t know all the ways in which we are being watched or profiled—we simply know that we are. And we don’t regulate our behavior under the gaze of surveillance: instead, we don’t seem to care.
Of course, that final “we” is a bit overinclusive, for as Vaidhyanathan later shows in a wonderful section on the diverging cultural responses to Google Street View, there are bastions of resistance to the technology:
Read the rest of this post »
March 12, 2011 at 12:38 pm Posted in: Cyberlaw, First Amendment, Google & Search Engines, Philosophy of Social Science, Privacy, Privacy (Electronic Surveillance), Social Network Websites, Technology
posted by Frank Pasquale
Brian McKenna published an interesting piece in the Society for Applied Anthropology Newsletter, which is reprinted here. He quotes Financial Times Managing Editor Gillian Tett on one underexplored reason for lack of public attention to “financial innovation” pre-2008: “Once something is labeled boring, it’s the easiest way to hide it in plain sight.” He also reproduces a fascinating reflection from Annelise Riles, whose work Collateral Knowledge: Legal Reasoning in the Global Financial Markets will soon be released:
I think Tett’s diagnosis should cause academics to ask some hard questions about why we did not do more to highlight and critique the problems in the financial markets prior to the crash. For myself, for example, fieldwork in the derivatives markets had convinced me long before the crash that all was not well in these markets. My husband (also an ethnographer of finance) and I often joked way back around 2002 that our research had convinced us not to put a penny of our own money in these markets.
But our own disciplinary silo made us feel that it was impossible to counter the enthusiasm for financial models out there in the economics departments, the business schools, the law schools, the corridors of regulatory institutions. There surely was some truth to our sense that no one wanted to hear that markets were not rational in the sense assumed by the firms’ and regulators’ models. But maybe we should have tried a bit harder; it turns out many other people also had doubts and thought they too were alone. What might have happened if we had all found a way to link our skepticisms?
At this point, it may well be the case that most financial economists have so barren a theory of the social purpose of financial markets that they really are only teaching people how to succeed within the current system, rather than improving the system overall. It’s a bit like a divinity school run by “believers,” rather than a religious studies department trying to study the religious (to borrow a distinction from Paul Kahn’s Cultural Study of Law).
Read the rest of this post »
posted by Frank Pasquale
There is an excellent review essay by Simon Head on the future of British universities in the NYRB. It discusses the Strategic Plan of the Higher Education Funding Council for England (HEFCE), including the “Research Assessment Exercise (RAE) led every six or seven years.” As of 2008, panels of 10 to 20 specialists in 67 fields evaluate work during RAEs. As the author explains,
The panels must award each submitted work one of four grades, ranging from 4*, the top grade, for work whose “quality is world leading in terms of originality, significance and rigor,” to the humble 1*, “recognized nationally in terms of originality, significance, and rigour.” The anthropologist John Davis . . . has written of exercises such as the RAE that their “rituals are shallow because they do not penetrate to the core.”
I have yet to meet anyone who seriously believes that the RAE panels—underpaid, under pressure of time, and needing to sift through thousands of scholarly works—can possibly do justice to the tiny minority of work that really is “world leading in terms of originality, significance and rigour.” But to expect the panels to do this is to miss the point of the RAE. Its roots are in the corporate, not the academic, world. It is really a “quality control” exercise imposed on academics by politicians; and the RAE grades are simply the raw material for Key Performance Indicators [KPIs], which politicians and bureaucrats can then manipulate in order to show that academics are (or are not) providing value for taxpayers’ money.
Imagine “needing to sift through thousands of scholarly works” in short order; what a bizarre process. There are many critics of RAE; this essay is particularly worth reading because it connects the dots between corporate-speak and the new academic order:
Read the rest of this post »
posted by Frank Pasquale
Search neutrality is on the rise in Europe, and on the ropes in the US (or at least should be, according to James Grimmelmann). We barely have net neutrality here, and the tech press bridles at the thought of a sclerotic DC agency regulating god-like Googlers. I want to question that conventional wisdom by showing how modest the "search neutrality" agenda now is, and how well it fits with classic ideals of neutrality in law.
There are many reasons to think that Google will continue to dominate the general purpose search field. Sure, searchers and advertisers can access a vibrant field of also-rans. But most users will always want a shot at Google for serious searching and advertising, just as a mobile internet connection is no substitute for a high bandwidth one for many important purposes.
Given these parallels, I’ve compared principles of broadband non-discrimination and search non-discrimination. But virtually every time the term “search neutrality” comes up in conversation, people tend to want to end the argument by saying “there is no one best way to order search results—editorial discretion is built into the process of ranking sites.” (See, for example, Clay Shirky’s response to my position in this documentary.) To critics, a neutral search engine would have to perform the (impossible) task of ranking every site according to some Platonic ideal of merit.
But on my account of neutrality, a neutral search engine must merely avoid certain suspect behaviors, including:
Read the rest of this post »
posted by Frank Pasquale
Paul Caron brings news of the ranking system from Thomas M. Cooley School of Law, which pegs itself at #2, between Harvard and Georgetown. Caron calls it “the most extreme example of the phenomenon we observed [in 2004]: in every alternative ranking of law schools, the ranker’s school ranks higher than it does under U.S. News.” I just wanted to note a few other problems with such systems, apart from what I’ve discussed in earlier blog posts and articles on search engine rankings.
In the 1980s, statisticians at Bell Laboratories studied the data from the 1985 “Places Rated Almanac,” which ranked 329 American cities on how desirable they were as places to live. (This book is still published every couple of years.) My colleagues at Bell Labs tried to assess the data objectively. To summarize a lot of first-rate statistical analysis and exposition in a few sentences, what they showed was that if one combines flaky data with arbitrary weights, it’s possible to come up with pretty much any order you like. They were able, by juggling the weights on the nine attributes of the original data, to move any one of 134 cities to first position, and (separately) to move any one of 150 cities to the bottom. Depending on the weights, 59 cities could rank either first or last! [emphasis added]
To illustrate the problem in a local setting, suppose that US News rated universities only on alumni giving rate, which today is just one of their criteria. Princeton is miles ahead on this measure and would always rank first. If instead the single criterion were SAT score, we’d be down in the list, well behind MIT and California Institute of Technology. . . . I often ask students in COS 109: Computers in Our World to explore the malleability of rankings. With factors and weights loosely based on US News data that ranks Princeton first, their task is to adjust the weights to push Princeton down as far as possible, while simultaneously raising Harvard up as much as they can.
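The malleability the Bell Labs statisticians found is easy to demonstrate in miniature. In the sketch below, every school name and score is invented for illustration; the point is only the mechanism. With just two criteria and three schools, shifting the weights reverses the ranking end to end:

```python
def rank(scores, weights):
    """Order items by their weighted sum of criterion scores, best first."""
    def total(item):
        return sum(w * s for w, s in zip(weights, scores[item]))
    return sorted(scores, key=total, reverse=True)

# Invented scores on two criteria: (alumni giving rate, median SAT percentile).
scores = {
    "School A": (0.9, 0.6),
    "School B": (0.5, 0.95),
    "School C": (0.7, 0.8),
}

print(rank(scores, (0.8, 0.2)))  # weight giving heavily: School A comes first
print(rank(scores, (0.2, 0.8)))  # weight SAT heavily: School B comes first
```

If three schools and two attributes already permit a complete reversal, the Bell Labs finding that nine weighted attributes let 59 of 329 cities rank either first or last should be no surprise at all.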
posted by Marcus Boon
Congratulations to all involved on the publication of the A2K volume! I think A2K is a provocative way of framing some contemporary debates around knowledge, information, community, and property, intellectual or otherwise. It feels like every week brings some new shift that is being linked to A2K issues: Tunisia, Egypt, and WikiLeaks, to name just a few. In many of these situations, what's at stake is the way that knowledge is legally characterized as property: state property, private property, and so on. Also at stake are the ways in which our ability to reproduce and disseminate knowledge radically shifts our understanding of what an object or subject of knowledge is, bringing into being new publics and new kinds of archive.
For me, the point made at the end of Amy's introduction, about the need to separate "knowledge" from "information," is a key one: if all knowledge is rendered as information, and more specifically as information stored and passed around in digital data networks, then knowledge has already been reified or turned into a commodity. Perhaps I might even wonder if there is a more fundamental kind of access than "access to knowledge" at stake in contemporary struggles over intellectual property. For example, if communities and individuals are constituted by practices of copying, things like pleasure, affect, and relation are all there, even "being." It's always possible to instrumentalize those things as forms of knowledge, or "ethical know-how," as Buddhist neuroscientist Francisco Varela termed it. But something important may get lost if one overemphasizes knowledge at the expense of other forms of being in the world.
In my own work, I've emphasized practice as important in itself, regardless of its "content." How do we defend particular practices of copying that may or may not be centered on knowledge production but which are nonetheless culturally significant? There's an important body of work in critical theory, from Bataille and Blanchot through Agamben and Nancy, on the importance of "nonknowledge" and "unworking" (désoeuvrement). These concepts can seem very abstract and removed from the concrete struggles of social activists, but I wonder to what degree they might be helpful in thinking and making spaces where openness and sharing prevail, spaces that can't necessarily be defined in advance as public domain or commons.
posted by Frank Pasquale
The New Museum of Contemporary Art has hosted an exhibit called “The Last Newspaper” the past few months. Part of the exhibit centers around newspaper-based art. Another focus has been a “hybrid of journalism and performance art,” as groups of editors and writers developed “last newspaper sections” in areas ranging from real estate to sports to leisure. I co-edited the business section, which is available here in a low-res copy. I’m posting our editorial statement below.
I like how the various articles (contributed by entrepreneurs, theorists, designers, and others) hang together. The terrific design work is a refreshing change from the barren pages of business blogs, law reviews, and academic books (though it looks like some legal scholars are renewing interest in visual aspects of justice).
December 27, 2010 at 10:16 pm Posted in: Architecture, Cyberlaw, Economic Analysis of Law, Just for Fun, Law and Inequality, Philosophy of Social Science, Politics, Technology
posted by Frank Pasquale
Momus once predicted that, on the Internet, everyone will be famous for 15 people. But there are still valiant warriors against media fragmentation. Epagogix tries to find the movie scripts that will appeal to wide audiences. Now James Frey is assembling writers, Andy Warhol factory-style, to try to find the next Twilight:
This is the essence of the terms being offered by Frey’s company Full Fathom Five: In exchange for delivering a finished book within a set number of months, the writer would receive $250 (some contracts allowed for another $250 upon completion), along with a percentage of all revenue generated by the project, including television, film, and merchandise rights—30 percent if the idea was originally Frey’s, 40 percent if it was originally the writer’s. The writer would be financially responsible for any legal action brought against the book but would not own its copyright.
Full Fathom Five could use the writer’s name or a pseudonym without his or her permission, even if the writer was no longer involved with the series, and the company could substitute the writer’s full name for a pseudonym at any point in the future. The writer was forbidden from signing contracts that would “conflict” with the project; what that might be wasn’t specified. The writer would not have approval over his or her publicity, pictures, or biographical materials. There was a $50,000 penalty if the writer publicly admitted to working with Full Fathom Five without permission.
At this point, perhaps a purely mechanized "writing program" would be a better approach for Frey. Kurzweil has patented a poetry generator, and the Dada Engine can use recursive grammars to compose text. Whatever the method, I have a sense that the story of the motivations of the creator of the writing machine/collective will always be more interesting than whatever it manages to produce (just as the NYM article about Frey's work is more interesting than Frey's company, and Richard Powers's Galatea 2.2 won't be surpassed by the machines it describes). The article mentions that Frey is inspired by artists; I wonder if one of them is Jean Tinguely?
Image Credit: Photo of Dadaist sculpture by acb.
posted by Frank Pasquale
Rakesh Khurana’s book From Higher Aims to Hired Hands: The Social Transformation of American Business Schools and the Unfulfilled Promise of Management as a Profession is a profound contribution to sociology and institutional analysis. It is also a persuasive critique of some of the most disturbing trends in the American economy. While B-schools may seem of marginal relevance to the actual conduct of CEOs, Khurana observes in the book that they “occupy the commanding heights of higher education . . . and the kinds of knowledge and skill they purvey [are] now seemingly more essential to the tasks of university—and indeed societal—leadership than anything taught elsewhere on campus” (367). Khurana describes how leading B-Schools gained a world of power, prestige, and influence in the 20th Century, but lost their soul along the way.
The Biblical echo here is intentional: like Weber, Khurana traces the religious origins of the concepts of vocation and higher education. His focus on values—as well as his harsh indictments of business education past and present—could easily lead Khurana to jeremiads or charismatic prophecy, but he skillfully resists both of these temptations. He offers a sober vision for hope in the future of business education. Khurana’s work should inspire legal academics as well as business school professors (as it already has in a conference at the University of St. Thomas Law School (pdf) last year).
Khurana's book has several points of interest for legal scholars. He focuses on the role of community and norms as sources of values distinct from markets and governmental hierarchies. As post-crisis interventions in health care, finance, energy, and transport have demonstrated, the old debates over "market vs. government" solutions, or "private vs. public" spending, are of fading relevance for serious social theory in the US (however potent they may be on the campaign trail). Flaws in the "government" are all too often rooted in flaws in the "market," which are in turn rooted in past flaws in policy, ad infinitum. Recent liberalization of campaign finance rules will only accelerate that dynamic of capture. Institutions that generate values are some of the few entities capable of short-circuiting this pernicious circularity.
Read the rest of this post »