Archive for the ‘Legal Theory’ Category
posted by Samir Chopra
Ugo Pagallo, with whom I had a very productive email exchange a few months ago, has written a very useful response to A Legal Theory for Autonomous Artificial Agents. I find it useful because I think that on each of his four allegedly critical points, we are in greater agreement than Ugo imagines.
February 19, 2012 at 6:40 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Articles and Books, Cyberlaw, Legal Theory, Symposium (Autonomous Artificial Agents), Technology
posted by Samir Chopra
I want to thank Harry Surden for his rich, technically informed response to A Legal Theory for Autonomous Artificial Agents, and especially for seizing on an important distinction we make early in the book when we say:
There are two views of the goals of artificial intelligence. From an engineering perspective, as Marvin Minsky noted, it is the “science of making machines do things that would require intelligence if done by men” (Minsky 1969, v). From a cognitive science perspective, it is to design and build systems that work the way the human mind does (Shanahan 1997, xix). In the former perspective, artificial intelligence is deemed successful along a performative dimension; in the latter, along a theoretical one. The latter embodies Giambattista Vico’s perspective of verum et factum convertuntur, “the true and the made are…convertible” (Vico 2000); in such a view, artificial intelligence would be reckoned the laboratory that validates our best science of the human mind. This perspective sometimes shades into the claim artificial intelligence’s success lies in the replication of human capacities such as emotions, the sensations of taste and self-consciousness. Here, artificial intelligence is conceived of as building artificial persons, not just designing systems that are “intelligent.”
The latter conception of AI as being committed to building ‘artificial persons’ is what, it is pretty clear, causes much of the angst that LTAAA’s claims seem to occasion. And even though I have sought to separate the notion of ‘person’ from that of ‘legal person’, it seems that some conflation has continued to occur in our discussions thus far.
I’ve personally never understood why artificial intelligence was taken to be, or ever took itself to be, dedicated to the task of replicating human capacities, faithfully attempting to build “artificial persons” or “artificial humans”. This always seemed like such a boring, pointlessly limited task. Sure, the pursuit of cognitive science is entirely justified; the greater the understanding we have of our own minds, the better we will be able to understand our place in nature. But as for replicating and mimicking them faithfully: Why bother with the ersatz when we have the real? We already have a perfectly good way to make humans or persons and it is way more fun than doing mechanical engineering or writing code. The real action, it seems to me, lay in the business of seeing how we could replicate our so-called intellectual capacities without particular regard for the method of implementation; if the best method of implementation happened to be one that mapped on well to what seemed like the human mind’s way of doing it, then that would be an added bonus. The multiple-realizability of our supposedly unique cognitive abilities would do wonders to displace our sense of uniqueness, acknowledge the possibility of other modes of existence, and re-invoke the sense of wonder about the elaborate tales we tell ourselves about our intentionality, consciousness, autonomy or freedom of will.
Having said this, I can now turn to responding to Harry’s excellent post.
February 19, 2012 at 3:26 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Articles and Books, Cyberlaw, Legal Theory, Psychology and Behavior, Symposium (Autonomous Artificial Agents), Technology
posted by Samir Chopra
Andrea Matwyshyn’s reading of the agency analysis of contracting (offered in A Legal Theory for Autonomous Artificial Agents and also available at SSRN) is rigorous and raises some very interesting questions. I thank her for her careful and attentive reading of the analysis and will do my best to respond to her concerns here. The doctrinal challenges that Andrea raises are serious and substantive for the extension and viability of our doctrine. As I note below, accommodating some of her concerns is the perfect next step.
At the outset, I should state what some of our motivations were for adopting agency doctrine for artificial agents in contracting scenarios (these helped inform the economic incentivizing argument for maintaining some separation between artificial agents and their creators or their deployers).
[A]pplying agency doctrine to artificial agents would permit the legal system to distinguish clearly between the operator of the agent i.e., the person making the technical arrangements for the agent’s operations, and the user of the agent, i.e., the principal on whose behalf the agent is operating in relation to a particular transaction.
Embracing agency doctrine would also allow a clear distinction to be drawn between the authority of the agent to bind the principal and the instructions given to the agent by its operator.
Third, an implicit, unstated economic incentive.
February 19, 2012 at 2:10 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Contract Law & Beyond, Cyberlaw, Economic Analysis of Law, Legal Theory, Symposium (Autonomous Artificial Agents), Technology, Tort Law
posted by Samir Chopra
I am gratified that Deborah DeMott, whose work on agency doctrines was so influential in our writing, has written such an engaged (and, if I may say so, positive) response to our attempt, in A Legal Theory for Autonomous Artificial Agents, to co-opt the common law agency doctrine for use with artificial agents. We did so knowing the fit would be neither exact nor precise, and certainly would not mesh with all established intuitions.
February 18, 2012 at 12:47 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Articles and Books, Contract Law & Beyond, Cyberlaw, Legal Theory, Symposium (Autonomous Artificial Agents)
posted by Ian Kerr
In thinking about what Samir and Lawrence offer us in their new book, A Legal Theory for Autonomous Artificial Agents, I am reminded of the old Gothic castle described in Blackstone’s Commentaries, whose “magnificent and venerable” spaces had been badly neglected and whose “inferior apartments” had been retro-fitted “for a modern inhabitant”.
Feel me, here, I am not dissing the book but, rather, sympathizing about law’s sometimes feeble ability to adapt to modern times and its need to erect what Blackstone described as a mass of legal “fictions and circuities”, leaving the law not unlike the stairways in its castle—“winding and difficult.”
Understanding this predicament all too well, I am not surprised to see Ryan Calo’s disappointment in light of the title and description of the book, which seemed to me also to promise something much more than a mere retrofitting of the castle—offering up instead a legal theory aimed at resurrecting the magnificent and venerable halls of a jurisprudence unmuddled by these strange new entities in a realm no longer populated exclusively by human agents.
Samir and Lawrence know full well that I am totally on board in thinking that the law of agency has plenty to offer to the legal assessment of the operations of artificial entities. I first wrote about this in 1999, when Canada’s Uniform Law Commission asked me to determine whether computers could enter into contracts which no human had reviewed or, for that matter, even knew existed. In my report, later republished as an article called “Spirits in the Material World,” I proposed a model based on the law of agency as a preferable approach to the one in place at the time (and still), which merely treats machine systems as an extension of the human beings utilizing them.
At the time, I believed the law of agency held much promise for software bots and robots. The “slave morality” programmed into these automatic beasts seemed in line with those imagined in the brutal jus civile of ancient Rome, itself programmed in a manner that would allow brutish Roman slaves to interact in commerce with Roman citizens despite having no legal status. The Roman system had no problem with these non-status entities implicating their owners. After all: Qui facit per alium facit per se (a fancy Latin phrase designating the Roman law fiction that treats one who acts through another as having himself so acted). What a brilliant way to get around capacity and status issues! And the modern law of agency, as it subsequently developed, offers up fairly nuanced notions like the “authority” concept that can also be used to limit the responsibility of the person who acts through an (artificial) other.
The book does a great job at carrying out the analysis in various domains and, much to my delight, extends the theory to a range of situations beyond contracting bots.
In my view, the genius of agency law as a means of resurrecting the castle is that it can recognize and respond to the machine system without having to worry about or even entertain the possibility that the machine is a person. (For that reason, I would have left out the chapter on personhood, proposals for which I think have been the central reason why this relatively longstanding set of issues has yet to be taken seriously by those who have not taken the blue pill.) Agency law permits us simply to treat the bot like a child who lacks the capacity to contract but still manages to generate an enforceable reliance interest in some third party when making a deal while purporting to act on the authority of a parent.
But in my view—I thought it then and I think it still—using agency rules to solve the contracting problem is still little more than scaffolding used to retrofit the castle. As my fave American jurist, Lon Fuller, might have described it, the need to treat bots and robots as though they were legal agents in and of itself represents the pathology of law:
“When all goes well and the established legal rules encompass neatly the social life they are intended to regulate, there is little occasion for fictions. There is also little occasion for philosophizing, for the law then proceeds with a transparent simplicity suggesting no need for reflective scrutiny. Only in illness, we are told, does the body reveal its complexity. Only when legal reasoning falters and reaches out clumsily for help do we recognize what a complex undertaking the law is.”
The legal theories of both Blackstone and Fuller tell me that there is good reason to be sympathetic to the metaphors and legal fictions that Samir and Lawrence offer us—even if they are piecemeal. To be clear: although the “legal fiction” label is sometimes pejorative, I am not using it in that sense. Rather, I am suggesting that the approach in the book resembles a commonly used juridical device of extremely high value. Legal fictions of this sort exhibit what Fuller recognized as an “exploratory” function; they allow a kind of intellectual experimentation that will help us inch towards a well-entrenched legal theory.
Exploring the limits of the agency rules may indeed solve a number of doctrinal problems associated with artificial entities.
But (here I need a new emoticon that expresses that the following remark is offered in the spirit of sincerity and kindness) to pretend that the theory offered in this book does more than it does or to try to defend its approach as a cogent, viable, and doctrinally satisfying unified field theory of robotics risks missing all sorts of important potential issues and outcomes and may thwart a broader multi-pronged analysis that is crucial to getting things right.
I take it that Samir is saying in his replies to Ryan that he in fact holds no such pretense and that he does not claim to have all of the answers. But that, in my view, was not Ryan’s point at all.
My take-away from that exchange, and from my own reflections on the book, is that it will also be very important to consider various automation scenarios where agency is not the right model and ask ourselves why it is not. This is something I have not yet investigated or thought about very deeply. Still, I am willing to bet a large pizza (at the winner’s choice of location) that there are at least as many robo-scenarios where thinking of the machine entity as an artificial agent in the legal sense does more harm than good. If this is correct, agency law may offer some doctrinal solutions (as my previous work suggests) but that doesn’t in and of itself provide us with a legal theory of artificial agents.
When asked to predict the path of cyberlaw in 1995, Larry Lessig very modestly said that if he had to carve the meaning of the 1st Amendment into silicon, he was certain that he would get it fundamentally wrong. There hadn’t been enough time for the culture of the medium to evolve to be sure of right answers. And for that very reason, he saw the slow and steady march of common law as the best possible antidote.
I applaud the bravery of Chopra and White in their attempt to cull a legal theory for bots, robots and the like. But I share Ryan’s concerns about the shortcomings in the theory of artificial agents as offered. And in addressing his concerns, rather than calling Ryan’s own choice of intellectual metaphors “silly” or “inappropriate,” it might be more valuable to start thinking about scenarios in which the agency analysis offered falls short or is inapplicable and what other models we also might consider and for what situations.
I surely do not fault the authors for failing to come up with the unified field theory of robotics—we can save that for Michael Froomkin’s upcoming conference in Miami!!!—but I would like us to think also about what the law of agency cannot tell us about the range of legal and ethical implications that will arise from the social implementation of automation, robotics, and artificial intelligence across various sectors.
posted by Danielle Citron
Our guest blogger Neil Richards, a Professor of Law at Washington University School of Law, turns his sights on video privacy in this guest blog post. It whets our appetite for his forthcoming book on Intellectual Privacy. So here is Professor Richards’s post:
The House of Representatives recently passed an amendment to a fairly obscure law known as the Video Privacy Protection Act. This law protects the privacy of our video rental records. It ensures that companies who have information about what videos we watch keep them confidential, and it requires them to get meaningful consent from us before they publish them. The House, at the urging of Netflix and Facebook, has passed an amendment that would allow these companies to share our movie watching habits much more easily. The Video Privacy Act was passed after the Washington City Paper obtained the video rental records of Supreme Court nominee Robert Bork and published them in order to politically discredit him. It worked. The Video Privacy Act rests on the enduring wisdom that what we watch is our own business, regardless of our politics. It allows us to share films we’ve watched on our own terms and not those of video stores or online video providers.
What’s at stake is something privacy scholars call “intellectual privacy” – the idea that records of our reading habits, movie watching habits, and private conversations deserve special protection beyond that afforded to other kinds of personal information. The films we watch, the books we read, and the web sites we visit are essential to the ways we make sense of the world and make up our minds about political and non-political issues. Intellectual privacy protects our ability to think for ourselves, without worrying that other people might judge us based on what we read. It allows us to explore ideas that other people might not approve of, and to figure out our politics, sexuality, and personal values, among other things. It lets us watch or read whatever we want without fear of embarrassment or being outed. This is the case whether we’re reading communist or anti-globalization books; or visiting web sites about abortion, gun control, cancer, or coming out as gay; or watching videos of pornography, or documentaries by Michael Moore, or even “The Hangover 2.”
For generations, librarians have understood this. Libraries were the Internet before computers – they presented the world of reading to us, and let us as patrons read (and watch) freely for ourselves. But librarians understood that intellectual privacy matters. A good library lets us read freely, but keeps our records confidential in order to safeguard our intellectual privacy. But we are told by Netflix, Facebook, and other companies that the world has changed. “Sharing” as they call it is the way of the future. I disagree. Sharing can be good, and sharing of what we watch and read is very important. But the way we share is essential. Telling our friends “hey – read this – it’s important” or “watch this movie – it’s really moving” is one of the great things that the Internet has made easier. But sharing has to be done on our terms, not on those that are most profitable for business. Sharing doesn’t mean a norm of publishing everything we read on the Internet. It means giving us a conscious choice about when we are sharing our intellectual habits, and when we are not.
Industry groups are fond of saying that good privacy practices require consumer notice and consumer choice. The current Video Privacy Act is one of the few laws that does give consumers meaningful choice about protecting their sensitive personal information. Now is not the time to cut back on the VPPA’s protections. Now is the time to extend its protections to the whole range of intellectual records – the books we buy, our internet search histories, and ISP logs of what we read on the Internet. As a first step, we should reject this attempt to eviscerate our intellectual privacy.
posted by Danielle Citron
My brilliant colleague and co-author Leslie Meltzer Henry is a thought leader on dignity’s jurisprudential and philosophical implications. University of Pennsylvania Law Review just published her engrossing and important piece entitled “The Jurisprudence of Dignity.” I’m hoping to have a longer conversation about the piece in the future. For now, here is the abstract:
Few words play a more central role in modern constitutional law without appearing in the Constitution than “dignity.” The term appears in more than nine hundred Supreme Court opinions, but despite its popularity, dignity is a concept in disarray. Its meanings and functions are commonly presupposed but rarely articulated. The result is a cacophony of uses so confusing that some critics argue the word ought to be abandoned altogether.
This Article fills a void in the literature by offering the first empirical study of Supreme Court opinions that invoke dignity and then proposing a typology of dignity based on an analysis of how the term is used in those opinions. The study reveals three important findings. First, the Court’s reliance on dignity is increasing, and the Roberts Court is accelerating that trend. Second, in contrast to its past use, dignity is now as likely to be invoked by the more conservative Justices on the Court as by their more liberal counterparts. Finally, the study demonstrates that dignity is not one concept, as other scholars have theorized, but rather five related concepts.
The typology refers to these conceptions of dignity as institutional status as dignity, equality as dignity, liberty as dignity, personal integrity as dignity, and collective virtue as dignity. This Article traces each type of dignity to its epistemic origins and describes the substantive dignitary interests each protects. Importantly, the typology offers more than a clarification of the conceptual chaos surrounding dignity. It provides tools to track the Court’s use of different types of dignity over time. This permits us to detect doctrinally transformative moments, in such areas as state sovereign immunity and abortion jurisprudence, that arise from shifting conceptions of dignity.
posted by Daniel Solove
The longstanding attacks on legal scholarship all seem to assume a particular relationship between theory and practice, one that I believe is flawed. Recently, I responded to one such critique. There are others, with Chief Justice Roberts and many other judges and practitioners claiming that legal scholarship isn’t worth their attention and isn’t useful to the practice of law.
It seems to me that those making these critiques assume that the primary value of legal scholarship should be to (1) describe current legal doctrine to make legal research easier for practitioners; or (2) influence an immediate and direct change in the law. In an earlier post, I argued that #2 above is an unreasonable standard. Legal change is slow, and rarely will one article have a direct influence; change typically occurs through the indirect influence of numerous sources. Only in the movies or in simplistic historical accounts will we see one article or book lead to dramatic changes. Of course, it occasionally happens, but rarely.
In this post, I want to tackle claim #1. Treatise writing and the doctrinal legal scholarship of yesteryear have diminished, though they aren’t gone. Last I checked, there were quite a lot of treatises written by quite a lot of law professors. But there is today a lot more theoretical scholarship. Is this scholarship valuable if it doesn’t help in legal research?
The answer is yes for many reasons:
1. As with all humanities, the value of any particular work is hard to quantify. What’s the value of Kafka’s The Trial or works by Shakespeare? What’s the value of reading history? What’s the value of learning things that don’t have direct application to one’s career? I believe there’s a lot of value. Reading these works opens up new ways of thinking, sparks new ideas, and helps people understand the world differently. This can indirectly affect one’s legal practice skills by enhancing creativity, improving one’s writing style, or making one see the facts of a case in a different light. It is interesting that many of the great jurists were also avid readers of literature. Indeed, many of the great thinkers and writers throughout history had wide-ranging intellectual interests and reading habits. Would people like Benjamin Franklin or Thomas Jefferson be as creative if their intellectual exploration had been narrower and more workmanlike? Probably not. Would Justice Holmes have been as great without his love of the humanities? I doubt it.
2. There is a value in critiquing legal decisions and laws, even if the critique winds up remaining in dissent. Why do justices bother to write dissents? After all, it often takes decades if not 40-50 years for the Supreme Court to change the law. They write dissents in the hope that one day the Court will see things differently. They write them to make a record. There is a value in criticizing legal opinions and laws even if it doesn’t immediately result in a change. Indeed, many of the critiques of legal decisions and laws that I read in legal scholarship are very powerful ones. Courts and lawmakers should pay more attention, as the scholarship often reveals logical flaws in reasoning, clear errors in applying precedent, assumptions that are based on faulty facts, assumptions that are wrong based on empirical evidence, or assumptions that are contrary to widely-accepted conclusions in science or social science. Courts and legislatures may hide their heads in the sand, but that shouldn’t be a justification for criticizing legal scholarship — it should be a basis for criticizing courts and lawmakers.
posted by Daniel Solove
A reader of my post about the N.Y. Times critique of legal education writes, in regard to the value of legal scholarship:
I happen to be on the editorial board of a T14 law school’s law review, so I have to cite check and read articles regularly. Of those I’ve read, I can’t think of a single one I thought would be useful to a practicing lawyer. The problem is, in my experience, most seem to advocate a fundamental change in philosophy to an area of law that diverges from what precedent would suggest. To me, this seems extremely unhelpful, because A. Lower courts aren’t likely to accept a grand new theory that seems to contradict what SCOTUS is saying, B. As far as I can tell SCOTUS seems not to usually change its theory either, and C. I don’t think most policymakers tend to read law review articles.
This leads me to be inclined to believe that most law review articles are useless. Are you saying my sample is unrepresentative of what’s out there? Or do I simply have a narrower definition of usefulness? Could you perhaps suggest some articles from the past year that in your mind represented useful legal scholarship?
This commentator assumes that usefulness is the equivalent of being accepted by the courts. I quarrel with this view for many reasons:
1. An article can have an influence on cases, even if difficult to demonstrate. Many courts don’t cite law review articles even when they rely on them. Judges are notorious for not being particularly charitable with citations. They often copy verbatim parts of briefs, for example. If a law professor relies on a scholarly work even in a minor way, the professor will typically cite to the work. Not so for courts.
2. Most articles will not change the law. Changing the law is quite difficult, and if most law review articles changed the law, the law would be ridiculously more dynamic than it currently is.
3. No matter what discipline or area, most of the things produced are not going to be great. Most inventions are flops. Most books, songs, movies, TV shows, art works, architecture, or anything produced are quite forgettable and will likely be forgotten. Great lasting works only come around infrequently, no matter what the field.
4. Most people are forgettable too. In the law, most practitioners and judges have been forgotten. Only a few great ones are remembered. Of the judges who are most well-known, it is interesting that many were more theoretical in nature and had a major impact in changing the law — typically in ways law professors might change the law. Think of Benjamin Cardozo, who wrote many articles and books and who radically changed the law. Think of Felix Frankfurter, a former law professor. Think of Louis Brandeis. Think of Oliver Wendell Holmes. These were jurists who were thinkers. They were readers. They were literary. They were writers of scholarship too. Maybe the forgettable practitioners and judges are the ones who ignore legal scholarship.
posted by Amanda Pustilnik
By unanimous reader demand – all one out of one readers voting, as of last week – this post will explore the small topic of the biological basis of “irrationality,” and its implications for law. Specifically, Ben Daniels of Collective Conscious asked the fascinating question: “What neuro-mechanisms enforce irrational behavior in a rational animal?”
Ben’s question suggests that ostensibly rational human beings often act in irrational ways. To prove his point, I’m actually going to address his enormous question within a blog post. I hope you judge the effort valiant, if not complete.
The post will offer two perspectives on whether, as the question asks, we could be more rational than we are if certain “neuro-mechanisms” did not function to impair rationality. The first view is that greater rationality might be possible – but might not confer greater benefits. I call this the “anti-Vulcan hypothesis”: While our affective capacities might suppress some of our computational power, they are precisely what make us both less than perfectly rational and gloriously human – Captain Kirk, rather than Mr. Spock. A second, related perspective offered by the field of cultural cognition suggests that developmentally-acquired, neurally-ingrained cultural schemas cause people to evaluate new information not abstractly on its merits but in ways that conform to the norms of their social group. In what I call the “sheep hypothesis,” cultural cognition theory suggests that our rational faculties often serve merely to rationalize facts in ways that fit our group-typical biases. Yet, whether we are Kirk or Flossie, the implication for law may be the same: Understanding how affect and rationality interact can allow legal decision-makers to modify legal institutions to favor the relevant ability, modify legal regimes to account for predictable limitations on rationality, and communicate in ways that privilege social affiliations and affective cues as much as factual information.
First, a slight cavil with the question: The question suggests that people are “rational animal[s]” but that certain neurological mechanisms suppress rationality – as if our powerful rational engines were somehow constrained by neural cruise-control. Latent in that question are a factual assumption about how the brain works (more on that later) and a normative inclination to see irrationality as a problem to which rationality is the solution. Yet, much recent work on the central role of affect in decision-making suggests that, often, the converse may be true. (Among many others, see Jonathan Haidt and Josh Greene; these links will open PDF articles in a new window.) Rationality divorced from affect arguably may not even be possible for humans, much less desirable. Indeed, the whole idea of “pure reason” as either a fact or a goal is taking a beating at the hands of researchers in behavioral economics, cognitive neuroscience, and experimental philosophy – and perhaps other fields as well.
Also, since “rational” can mean a lot of things, I’m going to define it as the ability to calculate which behavior under particular circumstances will yield the greatest short-term utility to the actor. By this measure, people do irrational things all the time: we discount the future unduly, preferring a dollar today to ten dollars next month; we comically misjudge risk, shying away from the safest form of transportation (flying) in favor of the most dangerous (driving); we punish excessively; and the list goes on.
Despite these persistent and universal defects in rationality, experimental data indicates that our brains have the capacity to be more rational than our behaviors would suggest. Apparently, certain strong affective responses interfere with activity in particular regions of the prefrontal cortex (pfc); these areas of the pfc are associated with rationality tasks like sequencing, comparing, and computing. In experiments in which researchers used powerful magnets to temporarily “knock out” activity in limbic (affective) brain regions, otherwise typical subjects displayed savant-like abilities in spatial, visual, and computational skills. This experimental result mimics what anecdotally has been reported in people who display savant abilities following brain injury or disease, and in people with autism spectrum disorders, who may have severe social and affective impairments yet also be savants.
So: Some evidence suggests the human brain may have massively more computing power than we can put to use because of general (and sometimes acute) affective interference. It may be that social and emotional processing suck up all the bandwidth; or, prosocial faculties may suppress activity in computational regions. Further, the rational cognition we can access can be totally swamped out by sudden and strong affect. With a nod to Martha Nussbaum, we might call this the “fragility of rationality.”
This fragility may be more boon than bane: Rationality may be fragile because, in many situations, leading with affect might confer a survival advantage. Simple heuristics and “gut” feelings, which are “fast and cheap,” let us respond quickly to complex and potentially dangerous situations. Another evolutionary argument is that all-important social relationships can be disrupted by rational utility-maximizing behaviors – whether you call the actors free-riders or defectors. To prevent humans from mucking up the enormous survival-enhancing benefits of community, selection would favor prosocial neuroendocrine mechanisms that suppress an individual’s desire to maximize short-term utility. What’s appealing about this argument is that – if true – it means that that which enables us to be human is precisely that which makes us not purely rational. This “anti-Vulcan” hypothesis is very much the thrust of the work by Antonio Damasio, Dan Ariely, and Paul Zak, among many other notable scholars.
An arguably darker view of the relationship between prosociality and rationality comes from cultural cognition theory. While evolutionary psychology and behavioral economics suggest that people have cognitive quirks with respect to certain kinds of mental tasks, cultural cognition suggests that people’s major beliefs about the state of the world – the issues on which self-governance and democracy depend – are largely impervious to rationality. In place of rationality, people quite unconsciously “conform their beliefs about disputed matters of fact … to values that define their cultural identities.”
On this view, it isn’t just that people are bad at understanding risk and temporal discounting, among other things, because our prosocial adaptations suppress rationality. Rather, from global warming to gun control, people unconsciously align their assessments of issues to conform to the beliefs and values of their social group. Rationality operates, if at all, post hoc: It allows people to construct rationalizations for relatively fact-independent but socially conforming conclusions. (Note that different cultural groups assign different values to rational forms of thought and inquiry. In a group that highly prizes those activities, pursuing rationally-informed questioning might itself be culturally conforming. Children of academics and knowledge-workers: I’m looking at you.)
This reflexive conformity is not a deliberate choice; it’s quite automatic, feels wholly natural, and is resilient against narrowly rational challenges based in facts and data. And that this cognitive mode inheres in us makes a certain kind of sense: Most people face far greater immediate danger from defying their social group than from global warming or gun control policy. The person who strongly or regularly conflicts with their group becomes a sort of socially stateless person, the exiled persona non grata.
To descend from Olympus to the village: What could this mean for law? Whether we take the heuristics-and-biases approach emerging from behavioral economics and evolutionary psychology or the approach emerging from cultural cognition research, the social and emotional nature of situated cognition cannot be ignored. I’ll briefly highlight two strategies for “rationalizing” aspects of the legal system to account for our affectively-influenced rationality – one addressing the design of legal institutions and the other addressing how legal and political decisions are communicated to the public.
Oliver Goodenough suggests that research on rational-affective mutual interference should inform how legal institutions are designed. Legal institutions may be anywhere on a continuum from physical to metaphorical, from court buildings to court systems, to the structure and concept of the jury, to professional norms and conventions. The structures of legal institutions influence how people within them engage in decision-making; certain institutional features may prompt people to bring to bear their more emotive (empathic), social-cognitive (“sheep”), or purely rational (“Vulcan”) capacities.
Goodenough does not claim that more rationality is always better; in some legal contexts, we might collectively value affect – empathy, mercy. In others, we might value cultural cognition – as when, for example, a jury in a criminal case must determine whether a defendant’s response to alleged provocation falls within the norms of the community. And in still other contexts, we might value narrow rationality above all. Understanding the triggers for our various cognitive modes could help address long-standing legal dilemmas. Jon Hanson’s work on the highly situated and situational nature of decision-making suggests that the physical and social contexts in which deliberation takes place may be crucial to the answers at which we arrive.
Cultural cognition may offer strategies for communicating with the public about important issues. The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it. If the primary question a person asks herself is, “What would my community think of this thing?” instead of “What is this thing?”, then very different communication strategies follow: Facts and information about the thing itself only become meaningful when embedded in information about the thing’s relevance to peoples’ communities. The cultural cognition project has developed specific recommendations for communication around lawmaking involving gun rights, the death penalty, climate change, and other ostensibly fact-bound but intensely polarizing topics.
To wrap this up by going back to the question: Ben, short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions, we are not going to unleash our latent (narrowly) rational powers. But it’s worth recalling that the historical, and now unpalatable, term for natural savants was “idiot-savant”: This phrase itself suggests that, without robust affective and social intelligence – which may make us “irrational” – we’re not very smart at all.
October 16, 2011 at 2:25 am Tags: cultural cognition, emotion & cognition, irrationality, law & neuroscience, rationality Posted in: Behavioral Law and Economics, Law and Psychology, Legal Theory, Philosophy of Social Science
posted by Danielle Citron
Professor Gregory Keating has two new pieces up on SSRN, both illuminating and important. A quick, overly brief primer: Keating’s fairness theory provides the “moral logic” for treating strict enterprise liability as the modern default rule for tort law. It requires an enterprise to compensate individuals injured by its risky yet profitable activities if the victim does not benefit from those activities to the same extent that the enterprise does. In that sense, strict liability exacts a just price for an enterprise’s freedom to engage in profitable activities where the victim did not similarly enjoy such a liberty but nonetheless suffered injury. In the abstract included below for Recovering Rylands: An Essay for Bob Rabin (forthcoming DePaul Law Review), Keating celebrates and builds upon Robert Rabin’s article “The Historical Development of the Fault Principle,” providing a moral and historical account of Rylands v. Fletcher’s strict liability alternative to fault liability while recognizing its practical limitations. After the jump, I will include the abstract for Keating’s Nuisance as a Strict Liability Wrong. Here is the abstract for Recovering Rylands:
This paper, written for a Clifford Symposium Festschrift for Robert Rabin, comments on his lovely, widely admired, and yet still underappreciated paper The Historical Development of the Fault Principle: A Reinterpretation. Rabin’s paper teaches us something essential about the character and structure of modern tort law at the moment of its genesis, and it reminds us of the even more general truth that what the law does not cover is at least as important as what it does cover. The Historical Development of the Fault Principle is constructed around a simple, but powerful, distinction between fault as a breach of duty and fault as a cause of action. Negligence as a cause of action is an institution, a system of related rules, concepts, principles and policies. This simple but penetrating observation transforms the question of just what is at stake in the conventional thesis that the late nineteenth century was the heyday of “universal fault liability.”
Whether or not fault liability was “universal” at the end of the nineteenth century turns, Rabin teaches, not on whether tort liability for accidental injury is constructed around fault or strict liability. The “universality” of fault liability is, rather, a question about the percentage of the legal landscape for unintentional harm that the institution of negligence liability governs. Building on this point, The Historical Development of the Fault Principle shows that the age of “universal fault liability” is better described as an age where “no duty” predominated. Tort liability – fault liability – retreated whenever contract was capable of taking hold of a domain of accidental injury. It retreated both in the presence of contractual relations (in the workplace context) and in the absence of contractual relations (in the product context). Property, contract, and “no duty” all trumped tort. This insight not only changes our understanding of the rise of fault liability; it also provides a powerful rebuttal of the still influential, if waning, view that the common law of torts circa 1870-1905 was economically efficient.
Rabin’s critique leaves intact the thesis that negligence liability itself emerged as a freestanding form of tort liability at the end of the nineteenth century. Prior to that time, negligence was merely the mental element of a number of discrete, nominate torts. Late in the nineteenth century, negligence transforms into a norm of conduct and thereby emerges as a distinctive form of tort liability. This development sets the stage for the expansion of fault liability into the domains of product accidents, landowner liability, and some forms of pure economic and emotional harm. The late nineteenth century thus sets the stage for the “universal fault liability” that it so conspicuously fails to achieve.
Recovering Rylands argues that Rylands v. Fletcher represents a parallel development with respect to strict liability. Rylands generalizes ancient forms of liability in nuisance and trespass into a coherent, general alternative to fault liability. The opinions in the case both articulate strict liability as a general principle of responsibility for harm done and clarify the fundamental perception on which strict liability rests, namely, that harm justifiably inflicted – harm which is unavoidable in the sense that it should be inflicted – can trigger responsibilities of repair. The idea that the justified infliction of harm gives rise to responsibilities of repair stands in sharp contrast to the root premise of fault liability, and accounts for the enduring significance of strict liability as a form of legal responsibility for harm done.
After excavating the basis and nature of strict liability in Rylands, the paper traces the ebb and flow of the strand of strict liability that it inspired over the past century and a half. On the one hand, that history shows that fault liability is never universal, though generally dominant. On the other hand, that history suggests that the difficulty of attributing harms to activities without deploying a fault criterion may be a permanent, insurmountable barrier to universal, common law strict liability. Last, but surely not least, Rylands’ articulation of strict liability as a general idea is an essential part of the formative moment of modern tort law that Bob Rabin did so much to help us understand. Adding an account of Rylands is a way of building on his seminal contribution.
posted by Danielle Citron
Cornell Law Review just published Professor David Super’s article Against Flexibility, a forceful and engrossing indictment of flexibility, with legal procrastination at its core. Here is the abstract:
Contemporary legal thinking is in the thrall of a cult of flexibility. We obsess about avoiding decisions without all possible relevant information while ignoring the costs of postponing decisions until that information becomes available. We valorize procrastination and condemn investments of decisional resources in early decisions.
Both public and private law should be understood as a productive activity converting information, norms, and decisional and enforcement capacity into outputs of social value. Optimal timing depends on changes in these inputs’ scarcity and in the value of the decision they produce. Our legal culture tends to overestimate the value of information that may become available in the future while discounting declines over time in decisional resources and the utility of decisions. Even where postponing some decisions is necessary, a sophisticated appreciation of discretion’s components often exposes aspects of decisions that can and should be made earlier.
Disaster response illustrates the folly of legal procrastination as it shrinks the supply of decisional resources while increasing the demand for them. After Hurricane Katrina, programs built around flexibility failed badly through a combination of late and defective decisions. By contrast, those that appreciated the scarcity of decisional resources and had developed detailed regulatory templates in advance provided quick and effective relief.
posted by Daniel Solove
Lior Strahilevitz, Deputy Dean and Sidley Austin Professor of Law at the University of Chicago Law School, recently published a brilliant new book, Information and Exclusion (Yale University Press 2011). Like all of Lior’s work, the book is creative, thought-provoking, and compelling. There are books that make strong and convincing arguments, and these are good, but then there are the rare books that not only do this, but make you think in a different way. That’s what Lior achieves in his book, and that’s quite an achievement.
I recently had the opportunity to chat with Lior about the book.
Daniel J. Solove (DJS): What drew you to the topic of exclusion?
Lior Jacob Strahilevitz (LJS): It was an observation I had as a college sophomore. I lived in the student housing cooperatives at Berkeley. Some of my friends who lived in the cooperatives told me they felt morally superior to people in the fraternities and sororities because the Greek system had an elaborate, exclusionary rush and pledge process. The cooperatives, by contrast, were open to any student. But as I visited friends who lived in the various cooperative houses, the individual houses often seemed no more heterogeneous than the fraternities and sororities. That made me curious. It was obvious that the pledging and rushing process – formal exclusion – created homogeneity in the Greek system. But what was it that was creating all this apparent homogeneity in a cooperative system that was open to everyone? That question was one I kept wondering about as a law student, lawyer, and professor.
That’s why page 1 of the book begins with a discussion of exclusion in the Greek system. I start with accounts of the rush process by sociologists who studied the proxies that fraternity members used to evaluate pledges in the 1950s (attire, diction, grooming, firm handshakes, etc.). The book then brings us to the modern era, when fraternity members peruse Facebook profiles that provide far more granular information about the characteristics of each pledge. Proxies still matter, but the proxies are different, and those differences alter the ways in which rushing students behave and fraternities exclude.
DJS: What is the central idea in your book?
LJS: The core idea is that asymmetric information largely determines which mechanisms are used to exclude people from particular groups, collective resources, and services. When the person who controls a resource knows a lot about the people who wish to use it, she will make decisions about who gets to access it. Where she lacks that information, she’ll develop a strategy that forces particular groups to exclude themselves from the resource, based on some criteria. There’s a historical ebb and flow between these two sorts of strategies for exclusion, but we seem to be in a critical transition period right now thanks to the decline of practical obscurity in the information age.
posted by Matthew Lister
(Thanks to Danielle and the Co-Op crowd for letting me stick around a bit longer.)
I am interested in how we should think about treaties. More specifically, I am interested in different ways we might think about treaties, and why different ways might be appropriate in different circumstances. At one extreme we might think of treaties as establishing sacred duties, as being based on oaths with deep religious implications. (Jeremy Waldron has a very interesting discussion of the history of this idea in his recent Charles E. Test lectures, “A Religious View of the Foundations of International Law”.) I think that there’s a case to be made that the supposed principle of international law (or of natural law, depending on one’s account), pacta sunt servanda, depends on this understanding, though I won’t try to make that case here. (If so, this would be interesting in light of the fact that Hans Kelsen at one point held, I believe, pacta sunt servanda to be the “basic norm” of international law, though he later abandoned this.)
posted by Alexander Tsesis
Jack M. Balkin’s profound book, Constitutional Redemption, develops an aspirational interpretation of the Constitution. The presentation is not nostalgic; rather, Balkin provides a hopeful picture of an evolving form of constitutional interpretation. His methodology requires the reexamination of existing social morality and political forms but not an abandonment of the Constitution’s commitments to standards and principles of justice.
Balkin’s narrative of redemption speaks of unfulfilled promises made at the nation’s founding. These promises, he argues, should guide reform. Improvement, amendment, and advancement are not merely results of blind flux, but concerted efforts to achieve the “promise[s] of the past.” He neither seeks nor engages in constitutional idolatry; rather, he holds a belief that the ideals of liberty and equality embedded in the document can mold public opinion against injustices that violate them.
Such a grand vision is based on faith that the Constitution’s flexible framework will be instrumental to the achievement of social justice. Balkin’s perspective is positioned with the leanings of scholars like Mark Tushnet, Sanford Levinson, William Eskridge, and Larry Kramer, who regard social and political movements as important actors for “shifting the boundaries” of what are considered to be reasonable and plausible alternatives to existing inequalities. According to Balkin’s perspective, the effect of civil rights groups on our understanding of the Constitution is reflected in cases like Brown v. Board of Education, Reed v. Reed, and Lawrence v. Texas. These decisions, indeed, bear witness to the ability of litigation groups–like the National Association for the Advancement of Colored People, Women’s Rights Project, and the Lambda Legal Defense and Education Fund–to integrate visionary popular activism into a constitutional framework compelling enough to alter Supreme Court decisionmaking.
I believe that in Balkin’s redemptive vision of constitutional interpretation lies, arguably, the central paradox of American history. The nation was built on the principled foundations of the Declaration of Independence, which recognizes universal inalienable rights like life, liberty and the pursuit of happiness, but from its inception the United States failed to fully carry those ideals into law. The Declaration too, I argue in a forthcoming book, offers the sort of visionary (or in Balkin’s language redemptive) possibilities that drove Abraham Lincoln’s vision of federal government and Martin Luther King, Jr.’s advocacy of reform.
While the founding document spoke in terms of liberal equality, not quite twelve years after the Declaration was signed (on June 21, 1788, when New Hampshire became the ninth state to adopt the Constitution) the Constitution’s notorious protections of slavery became binding. That is, the Constitution was not merely a step forward in the establishment of binding institutions pregnant with redemptive possibilities but also a document that compromised some of the ideals of the Revolution. Even the ratification of the Reconstruction Amendments did not lead to immediate redemptions of those original ideals. But I believe that Balkin is correct, that the Constitution, just like its legal forerunner, the Declaration of Independence, contains the necessary kernels of wisdom that allow for the national and human evolution of understanding about the significance of due process, equal protection, and the pursuit of happiness.
Balkin correctly points out that the many failures to live up to the nation’s ideals do not diminish the value of anti-classist promises the nation made to improve people’s welfare. His redemptive model helps explain why abolitionists could condemn the nation for its gross failures while clinging to its ideals. The original documents were useful for those who condemned the nation’s existing practices and for those who sought a jubilaic plan for its reform.
A letter published in abolitionist Frederick Douglass’s newspaper, The North Star, mocked the Declaration of Independence’s assertion that “all men are created equal.” The author insisted that the document should be rewritten to say, “All men are created equal; but many are made by their Creator, of baser material, and inferior origin, and are doomed now and forever to the sufferance of certain wrongs–amongst which is Slavery!” To blacks, the writer went on to say, the Fourth of July was “but a mockery and an insult.” To the advocates of slavery, he surmised, “liberty and equality” meant no more than the noises of firecrackers, raised flags, and other raucous festivities. J.D. “The Ever-glorious Fourth”, North Star (Rochester, NY), July 13, 1849.
But there was more to be said about America; it was not merely a composite of its failures but also a set of affective and effective norms. Despite the nation’s failures, the Declaration of Independence committed the country to liberal equality. In this context, an ex-slave’s daughter described her father’s awakening when he heard the Declaration read aloud. From that moment, she wrote, “he resolved that he would be free, and to this early determination, the cause of human freedom is indebted for one of its most effective advocates.” Biography of an American Bondman, by His Daughter 15-16 (1856). Her father, William Wells Brown, successfully escaped in 1834, later to become a prolific novelist and abolitionist lecturer.
The letter in Douglass’s paper reflects the failure to live up to the substance of freedom. But Brown’s experience speaks to the power of unfulfilled aspiration to inspire and guide individuals, and perhaps even the nation, to liberal equality. This ability to animate hope even in the course of culturally accepted injustice demonstrates the Constitution’s redemptive quality, providing visionary revitalization of existing institutions and leading to socially beneficial revision.
posted by Douglas NeJaime
I want to thank Danielle Citron for inviting me to participate in this symposium. And I want to thank Jack Balkin for giving me the great honor of commenting on his wonderful book. In Constitutional Redemption, Balkin offers an important, insightful, and useful corrective to the pessimism that pervades a significant amount of legal scholarship on the left. His constitutional optimism suggests the potential and possibilities of constitutional mobilization.
Balkin’s book offers incredible amounts of rich material. He provides a descriptive account of constitutional change, a normative vision of democratic culture, and an interpretative theory aimed at fulfilling the Constitution’s promises. In showing how social movements believe in and agitate for constitutional redemption, Balkin redeems the Constitution for legal scholarship, reminding us that the Constitution serves both as a potent symbol of social change and as a vehicle for continued reform. In this commentary, I first want to focus on why I think Balkin’s descriptive account is accurate by pointing to two essential moves I see him making. I then want to show Balkin’s theory in action in the marriage equality context as a way to translate his analysis into a useful lesson for liberals and progressives.
To my mind, two key moves allow Balkin to see what many others miss and thereby to bridge the often vast divide between constitutional theory and on-the-ground social movement activity. First, Balkin decenters adjudication, and in a sense detaches constitutional claims-making from constitutional decision-making. Of course, Balkin discusses at great length the decisions of the Supreme Court on various significant issues – from race to abortion to labor – and these decisions are crucial to an account of social change. But he analyzes adjudication through the lens of political and movement mobilization, showing the evolution of constitutional principles through the symbiotic relationship among courts, culture, and social movements. (Balkin, p. 63)
By deemphasizing adjudication, Balkin suggests that the most significant effects of constitutional claims emerge from the claims-making process itself. The claim is not merely instrumental – to convince a judge to grant some right or benefit to the plaintiff. Rather, the claim may be transformative and may articulate a vision that holds power regardless of judicial validation. In fact, when the judge validates the plaintiff’s claim, it is often because that claim has already affected the culture more generally.
Balkin’s second key move, which follows from the first, is his contextualization of courts within a broader political and cultural world. (Balkin, pp. 97-98) For Balkin, constitutional claims-making is political and moral claims-making. (Balkin, p. 118) Through this lens, courts cannot (and generally do not) go it alone. Instead, courts participate in an ongoing dialogue with other social change agents, including social movements and political actors.
August 1, 2011 at 9:00 am Tags: balkin, constitutional redemption, lgbt rights, marriage equality Posted in: Constitutional Law, Constitutional Redemption Symposium, Courts, Legal Theory, LGBT, Sociology of Law
posted by Ari Waldman
I begin my Co-Op blogging stint with deep appreciation for Danielle Citron’s invitation and for the entire Co-Op community’s indulgence. I am honored to be a small part of a wonderful online community that brings out the best in us and, for that matter, Web 2.0. My name is Ari; I am a Legal Scholar Teaching Fellow (just like a VAP) at California Western School of Law, and I am a student of the interplay among the First Amendment, the Internet and other modern technologies, and their effects on minority populations, like gays and lesbians. I go on the professor job market this Fall. I have a weekly blog (every Wednesday) over at the country’s most popular gay news site, Towleroad, for those interested in perspectives on LGBT legal issues for a mass audience. I also have a healthy relationship with physical fitness and an unhealthy relationship with the store Jack Spade. If there’s counseling for the latter, I’d appreciate a reference. Kidding…
For my month of blogging, I hope to engage with you in a few conversations, mostly about cyberharassment and the First Amendment, and hopefully with a healthy dose of humor.
My current project is the third in a series of projects about cyberharassment. The previous articles, available here, address the effects of cyberharassment on LGBT youth, argue for the use of affirmative “soft power” rather than after-the-fact criminalization to solve the problem, and create a new analytical framework for adjudicating student free speech defenses to a school’s authority to punish cyberaggressors. Now I am considering the effect that cyberharassment, particularly harassment of a minority group, has on civic participation and the realization of democratic values. I argue that Internet intermediaries’ self-regulation of their sites and services to filter out hate, sexual harassment, and other aggression conforms with long-standing First Amendment values.
As President Obama likes to say, let me be clear. I do not mean to suggest that the First Amendment applies as a limit on the activities of private actors like Facebook or MySpace or Google; rather, I think that, contrary to libertarian First Amendment scholars, we can expect these online intermediaries to regulate content, and we can say that doing so reflects the democratic interests that underlie the First Amendment.
Here’s the draft argument in brief that I am currently working out: The view of the Internet as an unencumbered and unfettered town square deserving the same Rawlsian liberal approach to free speech is wrong. Every online interaction is governed by intermediaries of varying kinds, all of which serve as the filters through which our online speech reaches our online communities. Traditional intermediaries have the power to regulate content consistent with the First Amendment, especially when not doing so would interfere with their and their users’ ability to participate in civil society. We see this more Aristotelian/communitarian approach to First Amendment values in intermediary jurisprudence — from publishers to book stores, and from schools to workplaces. And, like schools and workplaces, which can regulate their members’ speech in order to fulfill the institutions’ purposes, so too can online intermediaries like Facebook.
This project is in the early stages, and I always welcome comments/suggestions/evisceration of the argument. More to come…
I look forward to continuing this and other discussions with this splendid community.
posted by Stefan Bird-Pollan
Kant’s Doctrine of Right: A Commentary by B. Sharon Byrd and Joachim Hruschka, Cambridge University Press, 2010.
B. Sharon Byrd and Joachim Hruschka bill their new book on Kant’s legal philosophy as a commentary but it is really much more than that. It is an authoritative and comprehensive systematization of Kant’s legal philosophy. What makes it a commentary is that the authors deal with all of the central ideas in Kant’s Doctrine of Right rather than just selecting those which fit their thesis. The authors argue that Kant is the first to present us with “one single model designed to ensure peace on the national, international, and cosmopolitan levels.” (1) This is an ambitious project and only a few political philosophers have followed Kant in seeking a complete theory along these lines. Hegel is an obvious example but few 20th Century theorists come to mind.
Such a theory requires sound philosophical footing, and one of the achievements of Byrd and Hruschka’s commentary is that they are particularly strong on the philosophical foundations of Kant’s system, both with regard to how the legal theory relates to the moral theory and to how the overall structure of law relates to the different concrete legal spheres. These are the elements that I will concentrate on in this review.
A perennial problem in Kant scholarship has been the question of how Kant’s legal and moral philosophies relate. Kant characterizes the universal law of right thus: “Act externally so that the free use of your choice [can] coexist with everyone’s freedom according to a universal law”. (10, Akademie Ausgabe (AA) 6:231) The problem is that while the categorical imperative (“Act only in accordance with that maxim through which you can at the same time will that it become a universal law.” (AA 4:421)) applies to purely rational beings (who are not affected by their bodily conditions), the universal law of right has to take our embodiment into account because it deals precisely with the external relations between people. The question thus becomes: how is the moral law, which applies to humans qua purely rational beings, related to humans qua rational embodied beings? It may be that, as some commentators have urged, our embodiment cannot play any role in the specification of actual human laws. (This is the position of Arthur Ripstein, whose Force and Freedom I reviewed in this space a year ago. http://www.concurringopinions.com/archives/2010/03/book-review-ripsteins-force-and-freedom-kants-legal-and-political-philosophy.html) Or it may be, as H. L. A. Hart has argued, following Hume, that our specific embodiment does play an important role in the sorts of laws we legislate for ourselves. This is the gist of Hart’s giant crab example in “Positivism and the Separation of Law and Morality” (Harvard Law Review, 1958, 623).
posted by Lawrence Cunningham
The hottest book of the century, on corporate law, is in production, thanks to editors Brett McDonnell and Claire Hill, both of Minnesota. As part of a series investigating the economics of particular legal subjects, overseen by Richard Posner and Francesco Parisi, this Research Handbook on the Economics of Corporate Law promises a comprehensive canvass of the broadest definition of this field of law as it has been structured by economic theories over the past forty years.
My contribution addresses the influence of law and economics on the sub-field of law and accounting, which I suggest takes the form of “two steps forward, one step back.” You can read a draft of my chapter (comments welcome!), available free here, accompanied by the following abstract:
Theory can have profound effects on practice, some intended and desirable, others unintended and undesirable. That’s the story of the influence the field of law and economics has had on the domain of law and accounting. That influence comes primarily from agency theory and modern finance theory, specifically through the efficient capital market hypothesis and capital asset pricing model. Those theories have forged considerable change in federal securities regulation, accounting standard setting, state corporation law, and financial auditing. Affected areas include the nature of disclosure, the measure of financial concepts, the limits of shareholder protection, and the scope of auditor duty.
The analysis reveals how agency theory and finance theory often, but not always, point to the same policy implications; it also reveals how finance theory’s assumptions and limitations are often, but not always, respected in policy development. As a result, while these theories sometimes produced policy changes that were both intended and desirable, some policy changes were both unintended and undesirable, while others were intended but undesirable. The examination stresses the power of ideas and how they are used, and it cautions creators and users of ideas to take care to appreciate the limits of theory when shaping practice. That is vital, since the effects of law and economics on law and accounting remain debated in many contexts.
Other contributions to the book similarly available in draft form are by Matt Bodie (St. Louis), David Walker (BU) and Charles Whitehead (Cornell). The following scholars are also contributing chapters: Bobby Ahdieh (Emory), Steve Bainbridge (UCLA), Margaret Blair (Vandy), Rob Daines (Stanford), Steve Davidoff (Ohio State), Jill Fisch (Penn), Tamar Frankel (BU), Ron Gilson (Stanford/Columbia), Jeff Gordon (Columbia), Sean Griffith (Fordham), Don Langevoort (GT), Ian Lee (Toronto), Richard Painter (Minnesota), Frank Partnoy (SD), Gordon Smith (BYU), Randall Thomas (Vandy), and Bob Thompson (GT).
posted by Sasha Romanosky
In previous posts (here and here), I suggested that analytical modeling can be useful for better understanding data breaches, information disclosure laws, and the costs these laws impose on both companies and individuals. I’d like to now expand on those ideas.
To be clear, there are many kinds of models and modeling approaches, but what I’m interested in is the economic analysis of tort law. For those not familiar with it, this approach is concerned with the cost of accidents to an injurer and a victim, and it analyzes how various policy rules (typically regulation or liability) can minimize the sum of those costs.
The way I’ve come to interpret and apply models (e.g., mathematical equations) is as illustrations of how agents’ incentives change under different policy interventions. For example, if companies are forced to notify consumers of a data breach, will they be induced to spend more or less money protecting consumer data? Will individuals take more or less care once notified? Will these actions, together, increase or decrease overall social costs?
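To make the flavor of this kind of analysis concrete, here is a minimal sketch in Python of a toy unilateral-care model in the spirit of the economic analysis of tort law. Everything in it — the functional form of expected harm, the parameter values, and the two liability regimes compared — is a hypothetical assumption chosen for illustration, not a model drawn from the posts above. The point is only to show how a liability rule shifts a firm’s privately optimal level of care toward (or away from) the socially optimal level.

```python
# Toy unilateral-care model (illustrative assumptions throughout):
# a firm chooses a level of "care" (spending on data security); expected
# harm to consumers falls as care rises. We compare the firm's privately
# optimal care under no liability vs. strict liability with the care
# level that minimizes total social cost.

def expected_harm(care, base_loss=100.0):
    """Hypothetical harm function: expected breach loss shrinks with care."""
    return base_loss / (1.0 + care)

def social_cost(care):
    """Social cost = firm's spending on care + expected harm to consumers."""
    return care + expected_harm(care)

def firm_cost(care, liability_share):
    """Firm's private cost: its care spending plus the share of harm it bears.
    liability_share = 0.0 models no liability; 1.0 models strict liability."""
    return care + liability_share * expected_harm(care)

def argmin(cost_fn, grid):
    """Return the grid point with the lowest cost (simple grid search)."""
    return min(grid, key=cost_fn)

# Search care levels 0.00 .. 50.00 in steps of 0.01.
grid = [i / 100.0 for i in range(5001)]

care_no_liability = argmin(lambda c: firm_cost(c, 0.0), grid)
care_strict      = argmin(lambda c: firm_cost(c, 1.0), grid)
care_social      = argmin(social_cost, grid)

print(care_no_liability)  # 0.0 — the firm externalizes the harm entirely
print(care_strict)        # 9.0 — strict liability internalizes the harm
print(care_social)        # 9.0 — matches the socially optimal care level
```

Under no liability the firm bears none of the expected harm, so it spends nothing on care; under strict liability its private cost function coincides with the social cost function, so its chosen care level is the socially optimal one. A notification duty could be grafted onto this sketch as an extra cost term triggered by a breach, which is exactly the kind of comparative exercise the modeling approach described above supports.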