Archive for the ‘Philosophy of Social Science’ Category
posted by Frank Pasquale
The LSE has a consistently illuminating podcast series, but Nick Couldry’s recent lecture really raised the bar. He seamlessly integrates cutting edge media theory into a comprehensive critique of social media’s role in shaping events for us. I was also happy to hear him praise the work of two American scholars I particularly admire: former Co-Op guest blogger Joseph Turow (whose Daily You was described as one of the most influential books of the past decade in media studies), and Julie Cohen (whose Configuring the Networked Self was featured in a symposium here).
I plan on posting some excerpts if I can find a transcript, or a published version of the talk. In the meantime, some more brilliant thoughts on social media, this time from Ian Bogost:
For those of us lucky enough to be employed, we’re really hyperemployed—committed to our usual jobs and many other jobs as well. . . . Hyperemployment offers a subtly different way to characterize all the tiny effort we contribute to Facebook and Instagram and the like. It’s not just that we’ve been duped into contributing free value to technology companies (although that’s also true), but that we’ve tacitly agreed to work unpaid jobs for all these companies. . . . We do tiny bits of work for Google, for Tumblr, for Twitter, all day and every day.
Today, everyone’s a hustler. But now we’re not even just hustling for ourselves or our bosses, but for so many other, unseen bosses. For accounts payable and for marketing; for the Girl Scouts and the Youth Choir; for Facebook and for Google; for our friends via their Kickstarters and their Etsy shops; for Twitter, which just converted years of tiny, aggregated work acts into $78 of fungible value per user.
posted by Frank Pasquale
The reader of Talent Wants to be Free effectively gets two books for the price of one. As one of the top legal scholars on the intersection of employment and intellectual property law, Prof. Lobel skillfully describes key concepts and disputes in both areas. Lobel has distilled years of rigorous, careful legal analysis into a series of narratives, theories, and key concepts. Lobel brings legal ideas to life, dramatizing the workplace tensions between loyalty and commitment, control and creativity, better than any work I’ve encountered over the past decade. Her enthusiasm for the subject matter animates the work throughout, making the book a joy to read. Most of the other participants in this symposium have already commented on how successful this aspect of the book is, so I won’t belabor their points.
Talent Wants to Be Free also functions as a second kind of book: a management guide. The ending of the first chapter sets up this project, proposing to advise corporate leaders on how to “meet the challenge” of keeping the best performers from leaving, and how “to react when, inevitably, some of these most talented people become competitors” (26). This is a work not only destined for law schools, but also for business schools: for captains of industry eager for new strategies to deploy in the great game of luring and keeping “talent.” Reversing Machiavelli’s famous prescription, Lobel advises the Princes of modern business that it is better to be loved than feared. They should celebrate mobile workers, and should not seek to bind their top employees with burdensome noncompete clauses. Drawing on the work of social scientists like AnnaLee Saxenian (68), Lobel argues that an ecology of innovation depends on workers’ ability to freely move to where their talents are best appreciated.
For Lobel, many restrictions on the free flow of human capital are becoming just as much of a threat to economic prosperity as excess copyright, patent, and trademark protection. Both sets of laws waste resources combating the free flow of information. A firm that trains its workers may want to require them to stay for several years, to recoup its investment (28-29). But Lobel exposes the costs of such a strategy: human capital controls “restrict careers and connections that are born between people” (32). They can also hurt the development of a local talent pool that could, in all likelihood, redound to the benefit of the would-be controlling firm. Trapped in their firms by rigid Massachusetts custom and law, Route 128’s talent tended to stagnate. California refused to enforce noncompete clauses, encouraging its knowledge workers to find the firms best able to use their skills.
I have little doubt that Lobel’s book will be assigned in B-schools from Stanford to Wharton. She tells a consistently positive, upbeat story about management techniques to reconcile the seeming incompatibles of personal fulfillment, profit maximization, and regional advantage. But for every normative term that animates her analysis (labor mobility, freedom of contract, innovation, creative or constructive destruction) there is a shadow term (precarity, exploitation, disruption, waste) that goes unexplored. I want to surface a few of these terms, and explore the degree to which they limit the scope or force of Lobel’s message. My worry is that managers will be receptive to the book not because they want talent to be free in the sense of “free speech,” but rather, in the sense of “free beer”: interchangeable cog(nitive unit)s desperately pitching themselves on MTurk and TaskRabbit.
November 13, 2013 at 9:59 am Posted in: Book Reviews, Corporate Law, Employment Law, Intellectual Property, Philosophy of Social Science, Political Economy, Sociology of Law, Symposium (Talent Wants to be Free)
posted by Frank Pasquale
Back in the Bush years, health policy was all about making sure patients (or rather, “consumers”) had “skin in the game,” and faced real costs whenever they sought care. More cautious voices worried that patients often didn’t know when to avoid unnecessary care, and when failure to visit a doctor would hurt them. Now there is renewed evidence that the cautionary voices were right:
One-third of US workers now have high-deductible health plans, and those numbers are expected to grow in 2014 as implementation of the Affordable Care Act continues. There is concern that high-deductible health plans might cause enrollees of low socioeconomic status to forgo emergency care as a result of burdensome out-of-pocket costs. . . .Our findings suggest that plan members of low socioeconomic status at small firms responded inappropriately to high-deductible plans and that initial reductions in high-severity ED visits might have increased the need for subsequent hospitalizations. Policy makers and employers should consider proactive strategies to educate high-deductible plan members about their benefit structures or identify members at higher risk of avoiding needed care. They should also consider implementing means-based deductibles.
To put this in more concrete terms: “skin in the game” for many poor families may mean choosing whether to “tough out” a peritonsillar abscess or appendicitis, knowing that the temporary pain may allow them to pay rent, but also may lead to sepsis, necrosis, peritonitis, or death. As Corey Robin has observed, there is a philosophical vision affirming the imposition of such choices, but it’s not utilitarian:
By imposing this drama of choice, the economy becomes a theater of self-disclosure, the stage upon which we discover and reveal our ultimate ends. It is not in the casual chatter of a seminar or the cloistered pews of a church that we determine our values; it is in the duress—the ordeal—of our lived lives, those moments when we are not only free to choose but forced to choose. “Freedom to order our own conduct in the sphere where material circumstances force a choice upon us,” Hayek wrote, “is the air in which alone moral sense grows and in which moral values are daily re-created.”
For some, the choice is between investing in gold or cryptocurrencies; for others, between searing pain and eviction. But the market, in the “skin in the game” vision, is at least distributing these opportunities for self-disclosure through choice to all.
posted by Michael Simkovic
(Reposted from Brian Leiter’s Law School Reports)
BT Claim 2: Using more years of data would reduce the earnings premium
Response: Using more years of historical data is as likely to increase the earnings premium as to reduce it
We have doubts about the effect of more data, even if Professor Tamanaha does not.
Without seeing data that would enable us to calculate earnings premiums, we can’t know for sure if introducing more years of comparable data would increase our estimates of the earnings premium or reduce it.
The issue is not simply the state of the legal market or entry level legal hiring—we must also consider how our control group of bachelor’s degree holders (who appear to be similar to the law degree holders but for the law degree) was doing. To measure the value of a law degree, we must measure earnings premiums, not absolute earnings levels.
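To make that distinction concrete, here is a minimal sketch in Python, using invented numbers purely for illustration (the actual study estimates regression-adjusted premiums from SIPP microdata, not simple differences like these):

# Hypothetical annual earnings for law graduates and a matched control group
# in a boom year and a recession year. All figures are invented for illustration.
jd_boom, jd_bust = 120_000, 110_000   # law degree holders
ba_boom, ba_bust = 70_000, 58_000     # similar bachelor's-only workers

# Absolute earnings of law graduates fall in the recession...
print(jd_bust - jd_boom)              # -10000

# ...but the earnings premium (law graduate earnings minus control group
# earnings) rises at the same time, because the control group falls faster.
print(jd_boom - ba_boom)              # 50000 premium in the boom year
print(jd_bust - ba_bust)              # 52000 premium in the recession year

The point of the sketch is only that the premium and absolute earnings can move in opposite directions; what matters for valuing the degree is the gap relative to the control group, not the level.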
As a commenter on Tamanaha’s blog helpfully points out:
“I think you make far too much of the exclusion of the period from 1992-1995. Entry-level employment was similar to 1995-98 (as indicated by table 2 on page 9).
But this does not necessarily mean that the earnings premium was the same or lower. One cannot form conclusions about all JD holders based solely on entry-level employment numbers. As S&M’s data suggests, the earnings premium tends to be larger during recessions and their immediate aftermath and the U.S. economy only began an economic recovery in late 1992.
Lastly, even if you are right about the earnings premium from 1992-1995, what about 1987-91 when the legal economy appeared to be quite strong (as illustrated by the same chart referenced above)? Your suggestion to look at a twenty year period excludes this time frame even though it might offset the diminution in the earnings premium that would allegedly occur if S&M considered 1992-95.”
There is nothing magical about 1992. If good quality data were available, why not go back to the 1980s or beyond? Stephen Diamond and others make this point.
The 1980s are generally believed to be a boom time in the legal market. Assuming for the sake of the argument that law degree earnings premiums are pro-cyclical (we are not sure if they are), inclusion of more historical data going back past 1992 is just as likely to increase our earnings premium as to reduce it. Older data might suggest an upward trend in education earnings premiums, which could mean that our assumption of flat earnings premiums may be too conservative. Leaving aside the data quality and continuity issues we discussed before (which led us to pick 1996 as our start year), there is no objective reason to stop in the early 1990s instead of going back further to the 1980s.
Our sample from 1996 to 2011 includes both good times and bad for law graduates and for the overall economy, and in every part of the cycle, law graduates appear to earn substantially more than similar individuals with only bachelor’s degrees.
This might be as good a place as any to affirm that we certainly did not pick 1996 for any nefarious purpose. Having worked with the SIPP before and being aware of the change in design, we chose 1996 purely because of the benefits we described here. Once again, should Professor Tamanaha or any other group wish to use the publicly available SIPP data to extend the series farther back, we’ll be interested to see the results.
July 29, 2013 at 11:38 am Tags: Economic Value of a Law Degree, economics, law and economics Posted in: Accounting, Economic Analysis of Law, Education, Empirical Analysis of Law, Law Practice, Law School, Philosophy of Social Science
posted by Michael Simkovic
(Reposted from Brian Leiter’s Law School Reports)
BT Claim: We could have used more historical data without introducing continuity and other methodological problems
BT quote: “Although SIPP was redesigned in 1996, there are surveys for 1993 and 1992, which allow continuity . . .”
Response: Using more historical data from SIPP would likely have introduced continuity and other methodological problems
SIPP does indeed go back farther than 1996. We chose that date because it was the beginning of an updated and revitalized SIPP that continues to this day. SIPP was substantially redesigned in 1996 to increase sample size and improve data quality. Combining different versions of SIPP could have introduced methodological problems. That doesn’t mean one could not do it in the future, but it might raise as many questions as it would answer.
Had we used earlier data, it could be difficult to know to what extent changes to our earnings premiums estimates were caused by changes in the real world, and to what extent they were artifacts caused by changes to the SIPP methodology.
Because SIPP has developed and improved over time, the more recent data is more reliable than older historical data. All else being equal, a larger sample size and more years of data are preferable. However, data quality issues suggest focusing on more recent data.
If older data were included, it probably would have been appropriate to weight more recent and higher quality data more heavily than older and lower quality data. We would likely also have had to make adjustments for differences that might have been caused by changes in survey methodology. Such adjustments would inevitably have been controversial.
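A minimal sketch of what such weighting might look like, assuming purely hypothetical per-panel premium estimates and quality weights (nothing here reflects the actual SIPP data or the paper’s method):

import numpy as np

# Invented log-earnings-premium estimates by SIPP panel year, with lower
# weights on the older, smaller, pre-1996 panels. Illustrative only.
years    = np.array([1990, 1993, 1996, 2001, 2004, 2008])
premiums = np.array([0.45, 0.52, 0.57, 0.60, 0.55, 0.61])
weights  = np.array([0.5,  0.5,  1.0,  1.0,  1.0,  1.0])   # down-weight pre-1996 panels

weighted = np.average(premiums, weights=weights)    # recent, higher-quality data counts more
unweighted = premiums.mean()
print(round(weighted, 3), round(unweighted, 3))

Any such weights would themselves be judgment calls, which is the sense in which the adjustments would inevitably be controversial.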
Because the sample size increased dramatically after 1996, including a few years of pre-1996 data would not provide as much new data or have the potential to change our estimates by nearly as much as Professor Tamanaha believes. There are also gaps in SIPP data from the 1980s because of insufficient funding.
These issues and the 1996 changes are explained at length in the Survey of Income and Program Participation User’s Guide.
Changes to the new 1996 version of SIPP include:
Roughly doubling the sample size
This improves the precision of estimates and shrinks standard errors (see the short sketch after this list)
Lengthening the panels from 3 years to 4 years
This reduces the severity of the regression to the median problem
Introducing computer assisted interviewing to improve data collection and reduce errors or the need to impute for missing data
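On the first of those changes: standard errors shrink with the square root of the sample size, so doubling the sample cuts them by roughly 29 percent. A quick sketch with synthetic data (illustrative only, not SIPP figures):

import numpy as np

rng = np.random.default_rng(0)
population = rng.lognormal(mean=11, sigma=0.8, size=1_000_000)  # synthetic earnings

for n in (10_000, 20_000):
    sample = rng.choice(population, size=n, replace=False)
    standard_error = sample.std(ddof=1) / np.sqrt(n)
    print(n, round(standard_error, 1))

# Doubling n multiplies the standard error of the mean by about 1/sqrt(2),
# i.e. roughly 0.71.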
Most government surveys topcode income data—that is, there is a maximum income that they will report. This is done to protect the privacy of high-income individuals who could more easily be identified from ostensibly confidential survey data if their incomes were revealed.
Because law graduates tend to have higher incomes than bachelor’s degree holders, topcoding introduces downward bias into earnings premium estimates. Midstream changes to topcoding procedures can change this bias and create problems with respect to consistency and continuity.
Without going into more detail, the topcoding procedure that began in 1996 appears to be an improvement over the earlier topcoding procedure.
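To see the direction of the bias, here is a small simulation sketch; the earnings distributions and the 150,000 cap are invented for illustration and are not SIPP’s actual topcoding values:

import numpy as np

rng = np.random.default_rng(42)

# Synthetic earnings with a fatter right tail for law graduates. Illustrative only.
jd = rng.lognormal(mean=11.5, sigma=0.7, size=50_000)   # law degree holders
ba = rng.lognormal(mean=11.0, sigma=0.6, size=50_000)   # bachelor's-only control group

topcode = 150_000   # hypothetical cap, not the survey's real topcode

true_premium = jd.mean() - ba.mean()
topcoded_premium = np.minimum(jd, topcode).mean() - np.minimum(ba, topcode).mean()

# The capped estimate is smaller, because the cap trims far more income from
# the higher-earning group than from the control group.
print(round(true_premium), round(topcoded_premium))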
These are only a subset of the problems extending the SIPP data back past 1996 would have introduced. For us, the costs of backfilling data appear to outweigh the benefits. If other parties wish to pursue that course, we’ll be interested in what they find, just as we hope others were interested in our findings.
July 28, 2013 at 5:01 pm Tags: economic rec, Economic Value of a Law Degree, economics Posted in: Accounting, Blogging, Corporate Finance, Economic Analysis of Law, Education, Empirical Analysis of Law, Law Practice, Law School, Philosophy of Social Science, Sociology of Law
posted by Michael Simkovic
(Cross posted from Brian Leiter’s Law School Reports)
Brian Tamanaha previously told Inside Higher Education that our research only looked at average earnings premiums and did not consider the low end of the distribution. Dylan Matthews at the Washington Post reported that Professor Tamanaha’s description of our research was “false”.
In his latest post, Professor Tamanaha combines interesting critiques with some not very interesting errors and claims that are not supported by data. Responding to his blog post is a little tricky as his ongoing edits rendered it something of a moving target. While we’re happy with improvements, a PDF of the version to which we are responding is available here just so we all know what page we’re on.
Some of Tamanaha’s new errors are surprising, because they come after an email exchange with him in which we addressed them. For example, Tamanaha’s description of our approach to ability sorting constitutes a gross misreading of our research. Tamanaha also references the wrong chart for earnings premium trends and misinterprets confidence intervals. And his description of our present value calculations is way off the mark.
Here are some quick bullet point responses, with details below in subsequent posts:
- Forecasting and Backfilling
- Using more historical data from SIPP would likely have introduced continuity and other methodological problems
- Using more years of data is as likely to increase the historical earnings premium as to reduce it
- If pre-1996 historical data finds lower earnings premiums, that may suggest a long term upward trend and could mean that our estimates of flat future earnings premiums are too conservative and the premium estimates should be higher
- The earnings premium in the future is just as likely to be higher as it is to be lower than it was in 1996-2011
- In the future, the earnings premium would have to be lower by 85 percent for an investment in law school to destroy economic value at the median
- Data sufficiency
- 16 years of data is more than is used in similar studies to establish a baseline. This includes studies Tamanaha cited and praised in his book.
- Our data includes both peaks and troughs in the cycle. Across the cycle, law graduates earn substantially more than bachelor’s degree holders.
- Tamanaha’s errors and misreading
- We control for ability sorting and selection using extensive controls for socio-economic, academic, and demographic characteristics
- This substantially reduces our earnings premium estimates
- Any lingering ability sorting and selection is likely offset by response bias in SIPP, topcoding, and other problems that cut in the opposite direction
- Tamanaha references the wrong chart for earnings premium trends and misinterprets confidence intervals
- Tamanaha is confused about present value, opportunity cost, and discounting
- Our in-school earnings are based on data, but, in any event, “correcting” to zero would not meaningfully change our conclusions
- Tamanaha’s best line
- “Let me also confirm that [Simkovic & McIntyre’s] study is far more sophisticated than my admittedly crude efforts.”
July 26, 2013 at 1:26 pm Tags: Economic Value of a Law Degree, economics Posted in: Blogging, Corporate Finance, Economic Analysis of Law, Education, Empirical Analysis of Law, Law Practice, Law School, Philosophy of Social Science
posted by Frank Pasquale
Fables have been in the politico-economic air of late. The FT’s Martin Wolf considered the locust part of a master metaphor for the future of the global economy. He concluded that “the financial crisis was the product of an unstable interaction between ants (excess savers), grasshoppers (excess borrowers) and locusts (the financial sector that intermediated between the two).”
Now Geoff Mulgan has entered the fray with the excellent book The Locust and the Bee: Predators and Creators in Capitalism’s Future. As Mulgan observes,
If you want to make money, you can choose between two fundamentally different strategies. One is to create genuinely new value by bringing resources together in ways that serve people’s wants and needs. The other is to seize value through predation, taking resources, money, or time from others, whether they like it or not.
posted by Frank Pasquale
Brett Frischmann’s book is a summa of infrastructural theory. Its tone and content approach the catechetical, patiently instructing the reader in each dimension and application of his work. It applies classic economic theory of transport networks and environmental resources to information age dilemmas. It thus takes its place among the liberal “big idea” books of today’s leading Internet scholars (including Benkler’s Wealth of Networks, van Schewick’s Internet Architecture and Innovation, Wu’s Master Switch, Zittrain’s Future of the Internet, and Lessig’s Code). So careful is its drafting, and so myriad its qualifications and nuances, that it is likely consistent with 95% of the policies (and perhaps theories) endorsed in those compelling books. And yet the US almost certainly won’t make the necessary investments in roads, basic research, and other general-purpose inputs that Frischmann promotes. Why is that?
Lawrence Lessig’s career suggests an answer. He presciently “re-marked” on Frischmann’s project in a Minnesota Law Review article. But after a decade at the cutting edge of Internet law, Lessig switched direction entirely. He committed himself to cleaning up the Augean stables of influence on Capitol Hill. He knew that even the best academic research would have no practical impact in a corrupted political sphere.
Were Lessig to succeed, I have little doubt that the political system would be more open to ideas like Frischmann’s. Consider, for instance, the moral imperative and economic good sense of public investment in an era of insufficient aggregate demand and near-record-low interest rates:
The cost of borrowing to fund infrastructure projects, [as Economic Policy Institute analyst Ethan Pollack] points out, has hit record “low levels.” And the private construction companies that do infrastructure work remain desperate for contracts. They’re asking for less to do infrastructure work. “In other words,” says Pollack, “we’re getting much more bang for our buck than we usually do.”
And if we spend those bucks on infrastructure, we would also be creating badly needed jobs that could help juice up the economy. Notes Pollack: “This isn’t win-win, this is win-win-win-win.” Yet our political system seems totally incapable of seizing this “win-win-win-win” moment. What explains this incapacity? Center for American Progress analysts David Madland and Nick Bunker see inequality as the prime culprit.
April 26, 2012 at 8:17 am Posted in: Economic Analysis of Law, Infrastructure Symposium, Innovation, Law and Inequality, Philosophy of Social Science, Political Economy, Politics, Symposium (Infrastructure), Technology
posted by Deven Desai
Andrew Morin and six others have argued for open access to the source code behind scientific publications so that the work can be tested and live up to the promise of the scientific method. At least, I think that is the claim. Ah, the irony: the piece is in Science and behind, oh yes, a paywall! As Morin says in Scientific American:
“Far too many pieces of code critical to the reproduction, peer-review and extension of scientific results never see the light of day,” said Andrew Morin, a postdoctoral fellow in the structural biology research and computing lab at Harvard University. “As computing becomes an ever larger and more important part of research in every field of science, access to the source code used to generate scientific results is going to become more and more critical.”
If the essay were available, we might assess it better too.
Victoria Stodden is an assistant professor of statistics at Columbia University and serves as a member of the National Science Foundation’s Advisory Committee on Cyberinfrastructure (ACCI), and on Columbia University’s Senate Information Technologies Committee. She is one of the creators of SparseLab, a collaborative platform for reproducible computational research, and has developed an award-winning licensing structure to facilitate open and reproducible computational research, called the Reproducible Research Standard. She is currently working on the NSF-funded project: “Policy Design for Reproducibility and Data Sharing in Computational Science.”
Victoria is serving on the National Academies of Science committee on “Responsible Science: Ensuring the Integrity of the Research Process” and the American Statistical Association’s “Committee on Privacy and Confidentiality” (2013).
In other words, if you are interested in this area, you may want to contact Victoria as well as Mr. Morin.
posted by Frank Pasquale
Paul A. Lombardo published an essay “Legal Archaeology: Recovering the Stories behind the Cases” in the Fall 2008 issue of the Journal of Law, Medicine, and Ethics. It reminded me of the wonderful chapters in this volume of “health law stories.” Here are some excerpts that may be of interest:
Every lawsuit is a potential drama: a story of conflict, often with victims and villains, leading to justice done or denied. Yet a great deal, if not all, that we learn about the most noteworthy of lawsuits — the truly great cases — comes from reading the opinion of an appellate court, written by a judge who never saw the parties of the case, who worked at a time and a place far removed from the events that gave rise to litigation.
Rarely do we admit that the official factual account contained in an appellate opinion may have only the most tenuous relationship to the events that actually led the parties to court. The complex stories — turning on small facts, seemingly trivial circumstances, and inter-contingent events — fade away as the “case” takes on a life of its own as it leaves the court of appeals.
How can a law professor correct this bias? Here are some of Lombardo’s suggestions:
posted by Biella Coleman
Inspired by Orin Kerr’s question (“is your work focused on the internal narratives and ideologies that people use to describe/justify what they do, or is it focused externally on the actual conduct of what people do?”) below I will give a sense of how I walk the line between what we might call idealism and practice among the geeks and hackers I study.
One of the toughest parts about working with the type of technologists I focus on—intelligent, opinionated, online a lot of the time—is that many will unabashedly dissect my every word, statement, and media appearance. This attribute of my research, unsurprisingly, has been the source of considerable anxiety, only made worse in recent times with Anonymous as I have to make “authoritative” statements about them in the midst of studying them, in other words, in the midst of having incomplete information.
All of this is to say I am deliberate and diplomatic when it comes to word choice, framing, and arguments. But most of the time examining practice in light of or up against idealism does not take the somewhat noxious form of “exposing” secrets, the implication being that people are so mystified and deluded that you, the outsider, are there to inform the world of what is really going on (there is a long-standing tradition in the humanities and social sciences, loosely inspired by Karl Marx and especially Pierre Bourdieu, taking this stance; it is not my favorite strain of analysis unless it is done only when really needed, and done very well).
Much of what I do is to unearth those dynamics which may not be natively theorized but are certainly in operation. Take for instance the following example at the nexus of law and politics: during fieldwork it was patently clear that many free software hackers were wholly uninterested in politics outside of software freedom and those aligned with open source explicitly disavowed even this narrowly defined political agenda. Many were also repelled by the law (as one developer put it, “writing an algorithm in legalese should be punished with death…. a horrible one, by preference”) and yet weeks into research it was obvious that many developers are nimble legal thinkers, which helps explain how they have built, in a relatively short time period, a robust alternative body of legal theory and laws. One reason for this facility is that the skills, mental dispositions, and forms of reasoning necessary to read and analyze a formal, rule-based system like the law parallel the operations necessary to code software. Both are logic-oriented, internally consistent textual practices that require great attention to detail. Small mistakes in both law and software—a missing comma in a contract or a missing semicolon in code—can jeopardize the integrity of the system and compromise the intention of the author of the text. Both lawyers and programmers develop mental habits for making, reading, and parsing what are primarily utilitarian texts and this makes a lot of free software hackers, who already must pay attention to the law in light of free software licenses, adept legal thinkers, although of course this does not necessarily mean they would make good lawyers.
posted by Amanda Pustilnik
By unanimous reader demand – all one out of one readers voting, as of last week – this post will explore the small topic of the biological basis of “irrationality,” and its implications for law. Specifically, Ben Daniels of Collective Conscious asked the fascinating question: “What neuro-mechanisms enforce irrational behavior in a rational animal?”
Ben’s question suggests that ostensibly rational human beings often act in irrational ways. To prove his point, I’m actually going to address his enormous question within a blog post. I hope you judge the effort valiant, if not complete.
The post will offer two perspectives on whether, as the question asks, we could be more rational than we are if certain “neuro-mechanisms” did not function to impair rationality. The first view is that greater rationality might be possible – but might not confer greater benefits. I call this the “anti-Vulcan hypothesis”: While our affective capacities might suppress some of our computational power, they are precisely what make us both less than perfectly rational and gloriously human – Captain Kirk, rather than Mr. Spock. A second, related perspective offered by the field of cultural cognition suggests that developmentally-acquired, neurally-ingrained cultural schemas cause people to evaluate new information not abstractly on its merits but in ways that conform to the norms of their social group. In what I call the “sheep hypothesis,” cultural cognition theory suggests that our rational faculties often serve merely to rationalize facts in ways that fit our group-typical biases. Yet, whether we are Kirk or Flossie, the implication for law may be the same: Understanding how affect and rationality interact can allow legal decision-makers to modify legal institutions to favor the relevant ability, modify legal regimes to account for predictable limitations on rationality, and communicate in ways that privilege social affiliations and affective cues as much as factual information.
First, a slight cavil with the question: The question suggests that people are “rational animal[s]” but that certain neurological mechanisms suppress rationality – as if our powerful rational engines were somehow constrained by neural cruise-control. Latent in that question are a factual assumption about how the brain works (more on that later) and a normative inclination to see irrationality as a problem to which rationality is the solution. Yet, much recent work on the central role of affect in decision-making suggests that, often, the converse may be true. (Among many others, see Jonathan Haidt and Josh Greene; these links will open PDF articles in a new window.) Rationality divorced from affect arguably may not even be possible for humans, much less desirable. Indeed, the whole idea of “pure reason” as either a fact or a goal is taking a beating at the hands of researchers in behavioral economics, cognitive neuroscience, and experimental philosophy – and perhaps other fields as well.
Also, since “rational” can mean a lot of things, I’m going to define it as the ability to calculate which behavior under particular circumstances will yield the greatest short-term utility to the actor. By this measure, people do irrational things all the time: we discount the future unduly, preferring a dollar today to ten dollars next month; we comically misjudge risk, shying away from the safest form of transportation (flying) in favor of the most dangerous (driving); we punish excessively; and the list goes on.
Despite these persistent and universal defects in rationality, experimental data indicates that our brains have the capacity to be more rational than our behaviors would suggest. Apparently, certain strong affective responses interfere with activity in particular regions of the prefrontal cortex (pfc); these areas of the pfc are associated with rationality tasks like sequencing, comparing, and computing. In experiments in which researchers used powerful magnets to temporarily “knock out” activity in limbic (affective) brain regions, otherwise typical subjects displayed savant-like abilities in spatial, visual, and computational skills. This experimental result mimics what anecdotally has been reported in people who display savant abilities following brain injury or disease, and in people with autism spectrum disorders, who may have severe social and affective impairments yet also be savants.
So: Some evidence suggests the human brain may have massively more computing power than we can put to use because of general (and sometimes acute) affective interference. It may be that social and emotional processing suck up all the bandwidth; or, prosocial faculties may suppress activity in computational regions. Further, the rational cognition we can access can be totally swamped out by sudden and strong affect. With a nod to Martha Nussbaum, we might call this the “fragility of rationality.”
This fragility may be more boon than bane: Rationality may be fragile because, in many situations, leading with affect might confer a survival advantage. Simple heuristics and “gut” feelings, which are “fast and cheap,” let us respond quickly to complex and potentially dangerous situations. Another evolutionary argument is that all-important social relationships can be disrupted by rational utility-maximizing behaviors – whether you call them free-riders or defectors. To prevent humans from mucking up the enormous survival-enhancing benefits of community, selection would favor prosocial neuroendocrine mechanisms that suppress an individual’s desire to maximize short-term utility. What’s appealing about this argument is that – if true – it means that that which enables us to be human is precisely that which makes us not purely rational. This “anti-Vulcan” hypothesis is very much the thrust of the work by Antonio Damasio (and here), Dan Ariely, and Paul Zak, among many other notable scholars.
An arguably darker view of the relationship between prosociality and rationality comes from cultural cognition theory. While evolutionary psychology and behavioral economics suggest that people have cognitive quirks as to certain kinds of mental tasks, cultural cognition suggests that people’s major beliefs about the state of the world – the issues that self-governance and democracy depend upon – are largely impervious to rationality. In place of rationality, people quite unconsciously “conform their beliefs about disputed matters of fact … to values that define their cultural identities.”
On this view, people aren’t just bad at understanding risk and temporal discounting, among other things, because our prosocial adaptations suppress it. Rather, from global warming to gun control, people unconsciously align their assessments of issues to conform to the beliefs and values of their social group. Rationality operates, if at all, post hoc: It allows people to construct rationalizations for relatively fact-independent but socially conforming conclusions. (Note that different cultural groups assign different values to rational forms of thought and inquiry. In a group that highly prizes those activities, pursuing rationally-informed questioning might itself be culturally conforming. Children of academics and knowledge-workers: I’m looking at you.)
This reflexive conformity is not a deliberate choice; it’s quite automatic, feels wholly natural, and is resilient against narrowly rational challenges based in facts and data. And that this cognitive mode inheres in us makes a certain kind of sense: Most people face far greater immediate danger from defying their social group than from global warming or gun control policy. The person who strongly or regularly conflicts with their group becomes a sort of socially stateless person, the exiled persona non grata.
To descend from Olympus to the village: What could this mean for law? Whether we take the heuristics and biases approach emerging from behavioral economics and evolutionary psychology or the cultural cognition approach emerging from that field, the social and emotional nature of situated cognition cannot be ignored. I’ll briefly highlight two strategies for “rationalizing” aspects of the legal system to account for our affectively-influenced rationality – one addressing the design of legal institutions and the other addressing how legal and political decisions are communicated to the public.
Oliver Goodenough suggests that research on rational-affective mutual interference should inform how legal institutions are designed. Legal institutions may be anywhere on a continuum from physical to metaphorical, from court buildings to court systems, to the structure and concept of the jury, to professional norms and conventions. The structures of legal institutions influence how people within them engage in decision-making; certain institutional features may prompt people to bring to bear their more emotive (empathic), social-cognitive (“sheep”), or purely rational (“Vulcan”) capacities.
Goodenough does not claim that more rationality is always better; in some legal contexts, we might collectively value affect – empathy, mercy. In another, we might value cultural cognition – as when, for example, a jury in a criminal case must determine whether a defendant’s response to alleged provocation falls within the norms of the community. And in still other contexts, we might value narrow rationality above all. Understanding the triggers for our various cognitive modes could help address long-standing legal dilemmas. Jon Hanson’s work on the highly situated and situational nature of decision-making suggests that the physical and social contexts in which deliberation takes place may be crucial to the answers at which we arrive.
Cultural cognition may offer strategies for communicating with the public about important issues. The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it. If the primary question a person asks herself is, “What would my community think of this thing?” instead of “What is this thing?”, then very different communication strategies follow: Facts and information about the thing itself only become meaningful when embedded in information about the thing’s relevance to peoples’ communities. The cultural cognition project has developed specific recommendations for communication around lawmaking involving gun rights, the death penalty, climate change, and other ostensibly fact-bound but intensely polarizing topics.
To wrap this up by going back to the question: Ben, short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions, we are not going to unleash our latent (narrowly) rational powers. But it’s worth recalling that the historical, and now unpalatable, term for natural savants was “idiot-savant”: This phrase itself suggests that, without robust affective and social intelligence – which may make us “irrational” – we’re not very smart at all.
October 16, 2011 at 2:25 am Tags: cultural cognition, emotion & cognition, irrationality, law & neuroscience, rationality Posted in: Behavioral Law and Economics, Law and Psychology, Legal Theory, Philosophy of Social Science, Uncategorized
posted by Frank Pasquale
It’s becoming clearer that classic Keynesian stimulus—ranging from Obama’s minimalist jobs program to the robust visions of a Krugman or DeLong—won’t be enough to get us out of the Great Recession/Lesser Depression. The exhaustion of conventional macroeconomic thought (chronicled in outlets like the Real World Economics Review) has cleared some space for more imaginative thinkers. As John Kay observes:
Economics is not a technique in search of problems but a set of problems in need of solution. Such problems are varied and the solutions will inevitably be eclectic. Such pragmatic thinking requires not just deductive logic but an understanding of the processes of belief formation, of anthropology, psychology and organisational behaviour, and meticulous observation of what people, businesses and governments do.
In this post, I want to briefly highlight Bernard Harcourt’s work in crossing disciplinary boundaries to engage in the synthesis necessary to truly understand our plight.
posted by Frank Pasquale
The US faced two great crises during the first decade of the 21st century: the attacks of September 2001, and the meltdown of its financial system in September 2008. In the case of 9/11, the country reluctantly concluded that it had made a category mistake about the threat posed by terrorism. The US had relied on cooperation among the Federal Aviation Administration, local law enforcement, and airlines to prevent hijacking. Assuming that, at most, a hijacked or bombed airplane would kill the passengers aboard the plane, government officials believed that national, local, and private authorities had adequate incentives to invest in an optimal level of deterrence. Until the attack occurred, no high official had deeply considered and acted on the possibility that an airplane itself could be weaponized, leading to the deaths of thousands of civilians.
After the attack, a new Department of Homeland Security took the lead in protecting the American people from internal threats, while existing intelligence agencies refocused their operations to better monitor internal threats to domestic order. The government massively upgraded its surveillance capabilities in the search for terrorists. DHS collaborated with local law enforcement officials and private critical infrastructure providers. Federal agencies, including the Department of Homeland Security, gather information in conjunction with state and local law enforcement officials in what Congress has deemed the “Information Sharing Environment” (ISE), held together by information “fusion centers” and other hubs. My co-blogger Danielle Citron and I wrote about some of the consequences in an article that recently appeared in the Hastings Law Journal:
In a speech at the Washington National Cathedral three days after 9/11, then-President George W. Bush proclaimed that America’s “responsibility to history is already clear[:] . . . [to] rid the world of evil.” For the next seven years, the Bush administration tried many innovations to keep that promise, ranging from preemptive war in Iraq to . . . changes in law enforcement and domestic intelligence . . . Fusion centers are a lasting legacy of the Administration’s aspiration to “eradicate evil,” a great leap forward in both technical capacity and institutional coordination. Their goal is to eliminate both the cancer of terror and lesser diseases of the body politic.
September 12, 2011 at 2:59 pm Posted in: Current Events, Cyberlaw, Philosophy of Social Science, Politics, Privacy, Privacy (Law Enforcement), Privacy (National Security), Sociology of Law
posted by Olivier Sylvain
Like Professor Zick, I am grateful for the invitation to share my view of the world with Concurring Opinions. I’d like to pick up where his post on strange expressive acts left off and, along the way, perhaps answer his question.
Flash mobs have been eliciting wide-eyed excitement for the better part of the past decade now. They were playful and glaringly pointless in their earliest manifestations. Mobbers back then were content with the performance art of the thing. Early proponents, at the same time, breathlessly lauded the flash mob “movement.”
Today, the flash mob has matured into something much more complex than these early proponents prophesied. For one, they involve unsupported and disaffected young people of color in cities on the one hand and, on the other, anxious and unprepared law enforcement officials. A fateful mix.
In North London in early August, mobile online social networking and messaging probably helped outrage over the police shooting of a young black man morph into misanthropic madness. Race-inflected flash mob mischief hit the U.S. this summer, too. Most major metropolitan newspapers and cable news channels this summer have run stories about young black people across the country using their idle time and fleet thumbs to organize shoplifting, beatings, and general indiscipline. This is not the first time the U.S. has seen the flash mob or something like it. (Remember the 2000 recount in Florida?) But the demographic and commercial politics of these events in particular ought to raise eyebrows.
September 5, 2011 at 11:52 pm Posted in: Constitutional Law, Culture, Current Events, First Amendment, Media Law, Philosophy of Social Science, Politics, Race, Social Network Websites, Sociology of Law, Technology, Web 2.0
posted by Dave Hoffman
Among its many other vices, does legal education teach you to argue less persuasively and in a way that unsettles civil society? That accusation is implicit in Dan Kahan’s new magisterial HLR Foreword, Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law. In Some Problems, Kahan considers the Supreme Court’s perceived legitimacy deficit when it resolves high-stakes cases. Rejecting the common criticism that focuses on the ideal of neutrality, Kahan argues that the Court’s failure is one of communication. The issues that the Court considers are hard, and they often turn on disputed policy judgments. But the Justices resort to language which is untempered by doubt, and which advances empirical support that is said to be conclusive. Like scientists’ findings, judges’ empirical messages are read by elites, and thus understood through polarizing filters. As a result, Justices on the other sides of these fights quickly seek to undermine these purported empirical foundations – as Justice Scalia argued last term in Plata:
“[It] is impossible for judges to make “factual findings” without inserting their own policy judgments, when the factual findings are policy judgments. What occurred here is no more judicial factfinding in the ordinary sense than would be the factual findings that deficit spending will not lower the unemployment rate, or that the continued occupation of Iraq will decrease the risk of terrorism.”
Kahan resists Scalia’s cynicism — and says that in fact Scalia is making the problem worse. Overconfident display encourages people to take polarized views of law, to distrust the good faith of the Court and of legal institutions, and to experience the malady of cognitive illiberalism. Kahan concludes that Courts ought to show doubt & humility – aporia – when deciding cases, so as to signal to the other justices & the public that the losing side has been heard. Such a commitment to humble rhetoric would strengthen the idea of neutrality, which currently is attacked by all comers. Moreover, there is evidence that these sorts of on-the-one-hand/on-the-other-hand arguments do work. As Dan Simon and co-authors have found, people are generally inclined to consider legitimate those arguments whose outcomes they find congenial. But when they dislike outcomes, people are better persuaded by arguments that are explicitly two-sided: that is, the very muscular rhetoric typical in SCOTUS decisions is likely to be seen, by those who disagree with the Court’s outcomes, as particularly unpersuasive, illegitimate, and biased.
I love this paper — it’s an outgrowth of the cultural cognition project, and it lays the groundwork for some really neat experiments. So the point of the post is partly to encourage you to go read it. But I wanted to try as well to connect this line of research to the recent “debate” about Law Schools.
posted by Frank Pasquale
Marcia Angell has kicked off another set of controversies for the pharmaceutical sector in two recent review essays in the New York Review of Books. She favorably reviews meta-research that calls into question the effectiveness of many antidepressant drugs:
Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999—Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. . . .Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.
Angell discusses other research that indicates that placebos can often be nearly as effective as drugs for conditions like depression. Psychiatrist Peter Kramer, a long-time advocate of anti-depressant therapy, responded to her last Sunday. He admits that “placebo responses . . . have been steadily on the rise” in FDA data; “in some studies, 40 percent of subjects not receiving medication get better.” But he believes that is only because the studies focus on the mildly depressed:
The problem is so big that entrepreneurs have founded businesses promising to identify genuinely ill research subjects. The companies use video links to screen patients at central locations where (contrary to the practice at centers where trials are run) reviewers have no incentives for enrolling subjects. In early comparisons, off-site raters rejected about 40 percent of subjects who had been accepted locally — on the ground that those subjects did not have severe enough symptoms to qualify for treatment. If this result is typical, many subjects labeled mildly depressed in the F.D.A. data don’t have depression and might well respond to placebos as readily as to antidepressants.
Yves Smith finds Kramer’s response unconvincing:
The research is clear: the efficacy of antidepressants is (contrary to what [Kramer's] article suggests) lower than most drugs (70% is a typical efficacy rate; for antidepressants, it’s about 50%. The placebo rate is 20% to 30% for antidepressants). And since most antidepressants produce side effects, patients in trials can often guess successfully as to whether they are getting real drugs. If a placebo is chosen that produces a symptom, say dry mouth, the efficacy of antidepressants v. placebos is almost indistinguishable. The argument made in [Kramer's] article to try to deal with this inconvenient fact, that many of the people chosen for clinical trials really weren’t depressed (thus contending that the placebo effect was simply bad sampling) is utter[ly wrong]. You’d see the mildly/short-term depressed people getting both placebos and real drugs. You would therefore expect to see the efficacy rate of both the placebo and the real drug boosted by the inclusion of people who just happened to get better anyhow.
Felix Salmon also challenges Kramer’s logic:
[Kramer's view is that] lots of people were diagnosed with depression and put onto a trial of antidepressant drugs, even when they were perfectly healthy. Which sounds very much like the kind of thing that Angell is complaining about: the way in which, for instance, the number of children so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) was 35 times higher in 2007 than it was in 1987. And it’s getting worse: the editors of DSM-V, to be published in 2013, have written that “in primary care settings, approximately 30 percent to 50 percent of patients have prominent mental health symptoms or identifiable mental disorders, which have significant adverse consequences if left untreated.”
Those who would defend psychopharmacology, then, seem to want to have their cake and eat it: on the one hand it seems that serious mental health disorders have reached pandemic proportions, but on the other hand we’re told that a lot of people diagnosed with those disorders never really had them in the first place.
That is a very challenging point for the industry to consider as it responds to concerns like Angell’s. The diagnosis of mental illness will always have ineradicably economic dimensions and politically contestable aims. But doctors and researchers should insulate professional expertise and the interpretation of maladies as much as possible from inappropriate pressures.
How can they maintain that kind of independent clinical judgment? I think one key is to assure that data from all trials is open to all researchers. Consider, for instance, these findings from a NEJM study on “selective publication:”
We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. . . . Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall. (emphasis added).
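The gap between 94 percent and 51 percent follows directly from the counts in the quoted passage; here is a quick back-of-the-envelope check (using only the numbers reported above, not the underlying data):

# Study counts as reported in the quoted NEJM findings.
positive_published = 37        # positive per FDA, published
positive_unpublished = 1       # positive per FDA, never published
negative_unpublished = 22      # negative or questionable, never published
negative_spun_positive = 11    # published in a way conveying a positive outcome
negative_published = 3         # the "3 exceptions," published as negative

total = (positive_published + positive_unpublished + negative_unpublished
         + negative_spun_positive + negative_published)                    # 74 studies
published = positive_published + negative_spun_positive + negative_published  # 51 studies

print(round(100 * (positive_published + negative_spun_positive) / published))  # ~94% positive in the literature
print(round(100 * (positive_published + positive_unpublished) / total))        # ~51% positive per the FDA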
Melander, et al. also worried (in 2003) that, since “The degree of multiple publication, selective publication, and selective reporting differed between products,” “any attempt to recommend a specific selective serotonin reuptake inhibitor from the publicly available data only is likely to be based on biased evidence.” Without clearer “best practices” for data publication, clinical judgment may be impaired.
Full disclosure of study funding should also be mandatory and conspicuous, wherever results are published. Ernest R. House has reported that, “In a study of 370 ‘randomized’ drug trials, studies recommended the experimental drug as the ‘treatment of choice’ in 51% of trials sponsored by for-profit organizations compared to 16% sponsored by nonprofits.” The commodification of research has made it too easy to manipulate results, as Bartlett & Steele have argued:
One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval. There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called “rescue countries.” Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections. Ketek was developed in the 1990s by Aventis Pharmaceuticals, now Sanofi-Aventis. In 2004 . . . the F.D.A. certified Ketek as safe and effective. The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey.
The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data. . . . As the months ticked by, and the number of people taking the drug climbed steadily, the F.D.A. began to get reports of adverse reactions, including serious liver damage that sometimes led to death. . . . [C]ritics were especially concerned about an ongoing trial in which 4,000 infants and children, some as young as six months, were recruited in more than a dozen countries for an experiment to assess Ketek’s effectiveness in treating ear infections and tonsillitis. The trial had been sanctioned over the objections of the F.D.A.’s own reviewers. . . . In 2006, after inquiries from Congress, the F.D.A. asked Sanofi-Aventis to halt the trial. Less than a year later, one day before the start of a congressional hearing on the F.D.A.’s approval of the drug, the agency suddenly slapped a so-called black-box warning on the label of Ketek, restricting its use. (A black-box warning is the most serious step the F.D.A. can take short of removing a drug from the market.) By then the F.D.A. had received 93 reports of severe adverse reactions to Ketek, resulting in 12 deaths.
The great anti-depressant debate is part of a much larger “re-think” of the validity of data. Medical claims can spread virally without much evidence. According to a notable meta-researcher, “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.” The “decline effect” dogs science generally. Statisticians are also debunking ballyhooed efforts to target cancer treatments.
Max Weber once said that “radical doubt is the father of knowledge.” Perhaps DSM-VI will include a diagnosis for such debilitating skepticism. But I think there’s much to be learned from an insistence that true science is open, inspectable, and replicable. Harvard’s program on “Digital Scholarship” and the Yale Roundtable on Data and Code Sharing* have taken up this cause, as has the work of Victoria Stodden.
We often hear that the academic sector has to become more “corporate” if it is to survive and thrive. At least when it comes to health data, the reverse is true: corporations must become much more open about the sources and limits of the studies they conduct. We can’t resolve the “great anti-depressant debate,” or prevent future questioning of pharma’s bona fides, without such commitments.
*In the spirit of full disclosure: I did participate in this roundtable.
X-Posted: Health Law Profs Blog.
posted by Frank Pasquale
Daniel Altman’s book Outrageous Fortunes is consistently smart, engaging, and counterintuitive. Ambitious in scope, it discusses several important forces shaping the global economy over the next few decades.
Very long-term thinking has two characteristic pitfalls. As the Village’s deficit obsession shows, panic over a distant threat can sometimes crowd out attention to far more pressing concerns. There’s also little accountability for long-term prognosticators. A lot can happen between now and 2030, and as Philip Tetlock has shown, media and academic elites rarely lose visibility or credibility in the wake of even grotesquely wrong predictions. The futuristic novel can be a much safer place to conjure up the decades ahead.
But unlike speculative fiction, or the slightly less speculative macro-predictive fare of a “Megatrends” or “Bold New World,” Altman’s book is grounded in a deep engagement with current economic dilemmas. His analysis works on two levels. First, for a self-interested investor, it’s good to be aware of the long-run influences on productivity and power that Altman outlines. For example, his discussion of the new colonialism demonstrates both the short-term profits and long-term risks that arise when countries like China and Saudi Arabia start buying rights to agricultural land and other resources in poorer places. He also challenges conventional wisdom on disintermediation, making a compelling case that certain middlemen and arbitrageurs can only gain from market integration.
Outrageous Fortunes also succeeds as a work for wonks, taking its place in the often noble genre dubbed by David Brin the self-preventing prophecy. As Altman puts it, “a frequent goal of prediction is to alter the future – to warn of impending danger so that it can be avoided.” The book describes many impending dangers, including increasing inequality driven by global warming, accelerating brain drains, and an enormous financial black market that is developing outside of traditional financial centers. Altman’s description of that black market is particularly acute, and worth discussing in some detail.
Read the rest of this post »
posted by Frank Pasquale
Google’s been in the news a lot the past month. Concerned about the quality of its search results, the company is imposing new penalties on “content farms” and on firms such as JC Penney and Overstock.com. Accusations are flying fast and furious; the “antichrist of Silicon Valley” has flatly told the Googlers to “stop cheating.”
As the debate heats up and accelerates in internet time, it’s a pleasure to turn to Siva Vaidhyanathan’s The Googlization of Everything, a carefully considered take on the company composed over the past five years. After this week is over, no one will really care whether Google properly punished JC Penney for scheming its way to the top non-paid search slot for “grommet top curtains.” But our culture will be shaped, in ways large and small, by Google’s years of dominance, whatever happens next. I don’t have time to write a full review now, but I do want to highlight some key concepts in Googlization, since they will have lasting relevance for studies of technology, law, and media.
Dan Solove helped shift the privacy conversation from “Orwell to Kafka” in a number of works over the past decade. Other scholars of surveillance have first used, and then criticized, the concept of the “Panopticon” as a master metaphor for the conformity-inducing pressures of ubiquitous monitoring. Vaidhyanathan argues that monitoring is now so ubiquitous that most people have given up trying to conform. As he observes,
[T]he forces at work in Europe, North America, and much of the rest of the world are the opposite of a Panopticon: they involve not the subjection of the individual to the gaze of a single, centralized authority, but the surveillance of the individual, potentially by all, always by many. We have a “cryptopticon” (for lack of a better word). Unlike Bentham’s prisoners, we don’t know all the ways in which we are being watched or profiled—we simply know that we are. And we don’t regulate our behavior under the gaze of surveillance: instead, we don’t seem to care.
Of course, that final “we” is a bit overinclusive, for as Vaidhyanathan later shows in a wonderful section on the diverging cultural responses to Google Street View, there are bastions of resistance to the technology:
Read the rest of this post »
posted by Frank Pasquale
Brian McKenna published an interesting piece in the Society for Applied Anthropology Newsletter, which is reprinted here. He quotes Financial Times Managing Editor Gillian Tett on one underexplored reason for lack of public attention to “financial innovation” pre-2008: “Once something is labeled boring, it’s the easiest way to hide it in plain sight.” He also reproduces a fascinating reflection from Annelise Riles, whose work Collateral Knowledge: Legal Reasoning in the Global Financial Markets will soon be released:
I think Tett’s diagnosis should cause academics to ask some hard questions about why we did not do more to highlight and critique the problems in the financial markets prior to the crash. For myself, for example, fieldwork in the derivatives markets had convinced me long before the crash that all was not well in these markets. My husband (also an ethnographer of finance) and I often joked way back around 2002 that our research had convinced us not to put a penny of our own money in these markets.
But our own disciplinary silo made us feel that it was impossible to counter the enthusiasm for financial models out there in the economics departments, the business schools, the law schools, the corridors of regulatory institutions. There surely was some truth to our sense that no one wanted to hear that markets were not rational in the sense assumed by the firms’ and regulators’ models. But maybe we should have tried a bit harder; it turns out many other people also had doubts and thought they too were alone. What might have happened if we had all found a way to link our skepticisms?
At this point, it may well be the case that most financial economists have so barren a theory of the social purpose of financial markets that they really are only teaching people how to succeed within the current system, rather than improving the system overall. It’s a bit like a divinity school run by “believers,” rather than a religious studies department trying to study the religious (to borrow a distinction from Paul Kahn’s Cultural Study of Law).
Read the rest of this post »