
Category: Legal Theory


Robots in the Castle

In thinking about what Samir and Lawrence offer us in their new book, A Legal Theory for Autonomous Artificial Agents, I am reminded of the old Gothic castle described in Blackstone’s Commentaries, whose “magnificent and venerable” spaces had been badly neglected and whose “inferior apartments” had been retro-fitted “for a modern inhabitant”.

Feel me, here: I am not dissing the book but, rather, sympathizing about law’s sometimes feeble ability to adapt to modern times and its need to erect what Blackstone described as a mass of legal “fictions and circuities”, leaving the law not unlike the stairways in its castle—“winding and difficult.”

Understanding this predicament all too well, I am not surprised to see Ryan Calo’s disappointment in light of the title and description of the book, which seemed to me also to promise something much more than a mere retrofitting of the castle—offering up instead a legal theory aimed at resurrecting the magnificent and venerable halls of a jurisprudence unmuddled by these strange new entities in a realm no longer populated exclusively by human agents.

Samir and Lawrence know full well that I am totally on board in thinking that the law of agency has plenty to offer to the legal assessment of the operations of artificial entities. I first wrote about this in 1999, when Canada’s Uniform Law Commission asked me to determine whether computers could enter into contracts which no human had reviewed or, for that matter, even knew existed. In my report, later republished as an article called “Spirits in the Material World,” I proposed a model based on the law of agency as a preferable approach to the one in place at the time (and still), which merely treats machine systems as an extension of the human beings utilizing them.

At the time, I believed the law of agency held much promise for software bots and robots. The “slave morality” programmed into these automatic beasts seemed in line with those imagined in the brutal jus civile of ancient Rome, itself programmed in a manner that would allow brutish Roman slaves to interact in commerce with Roman citizens despite having no legal status. The Roman system had no problem with these non-status entities implicating their owners. After all: Qui facit per alium facit per se (a fancy Latin phrase designating the Roman law fiction that treats one who acts through another as having himself so acted). What a brilliant way to get around capacity and status issues! And the modern law of agency, as it subsequently developed, offers up fairly nuanced notions like the “authority” concept that can also be used to limit the responsibility of the person who acts through an (artificial) other.

The book does a great job at carrying out the analysis in various domains and, much to my delight, extends the theory to a range of situations beyond contracting bots.

In my view, the genius of agency law as a means of resurrecting the castle is that it can recognize and respond to the machine system without having to worry about or even entertain the possibility that the machine is a person. (For that reason, I would have left out the chapter on personhood, proposals for which I think have been the central reason why this relatively longstanding set of issues has yet to be taken seriously by those who have not taken the blue pill.) Agency law permits us simply to treat the bot like the child who lacks the capacity to contract but still manages to generate an enforceable reliance interest in some third party by making a deal while purporting to act on the authority of a parent.

But in my view—I thought it then and I think it still—using agency rules to solve the contracting problem is still little more than scaffolding used to retrofit the castle. As my fave American jurist, Lon Fuller, might have described it, the need to treat bots and robots as though they were legal agents in and of itself represents the pathology of law:

“When all goes well and the established legal rules encompass neatly the social life they are intended to regulate, there is little occasion for fictions. There is also little occasion for philosophizing, for the law then proceeds with a transparent simplicity suggesting no need for reflective scrutiny. Only in illness, we are told, does the body reveal its complexity. Only when legal reasoning falters and reaches out clumsily for help do we recognize what a complex undertaking the law is.”

The legal theory of both Blackstone and Fuller tells me that there is good reason to be sympathetic to the metaphors and legal fictions that Samir and Lawrence offer us—even if they are piecemeal. To be clear: although the “legal fiction” label is sometimes pejorative, I am not using it in that sense. Rather, I am suggesting that the approach in the book resembles a commonly used juridical device of extremely high value. Legal fictions of this sort exhibit what Fuller recognized as an “exploratory” function; they allow a kind of intellectual experimentation that will help us inch towards a well-entrenched legal theory.

Exploring the limits of the agency rules may indeed solve a number of doctrinal problems associated with artificial entities.

But (here I need a new emoticon that expresses that the following remark is offered in the spirit of sincerity and kindness) to pretend that the theory offered in this book does more than it does or to try to defend its approach as a cogent, viable, and doctrinally satisfying unified field theory of robotics risks missing all sorts of important potential issues and outcomes and may thwart a broader multi-pronged analysis that is crucial to getting things right.

I take it that Samir is saying in his replies to Ryan that he in fact holds no such pretense and that he does not claim to have all of the answers. But that, in my view, was not Ryan’s point at all.

My take-away from that exchange, and from my own reflections on the book, is that it will also be very important to consider various automation scenarios where agency is not the right model and ask ourselves why it is not. This is something I have not yet investigated or thought about very deeply. Still, I am willing to bet a large pizza (at the winner’s choice of location) that there are at least as many robo-scenarios where thinking of the machine entity as an artificial agent in the legal sense does more harm than good. If this is correct, agency law may offer some doctrinal solutions (as my previous work suggests), but that doesn’t in and of itself provide us with a legal theory of artificial agents.

When asked to predict the path of cyberlaw in 1995, Larry Lessig very modestly said that if he had to carve the meaning of the 1st Amendment into silicon, he was certain that he would get it fundamentally wrong. There hadn’t been enough time for the culture of the medium to evolve to be sure of right answers. And for that very reason, he saw the slow and steady march of common law as the best possible antidote.

I applaud the bravery of Chopra and White in their attempt to cull a legal theory for bots, robots, and the like. But I share Ryan’s concerns about the shortcomings in the theory of artificial agents as offered. And in addressing his concerns, rather than calling Ryan’s own choice of intellectual metaphors “silly” or “inappropriate,” it might be more valuable to start thinking about the scenarios in which the agency analysis offered falls short or is inapplicable, what other models we might also consider, and for what situations.

I surely do not fault the authors for failing to come up with the unified field theory of robotics—we can save that for Michael Froomkin’s upcoming conference in Miami!!!—but I would like us to think also about what the law of agency cannot tell us about a range of legal and ethical implications that will arise from the social implementation of automation, robotics, and artificial intelligence across various sectors.


Neil Richards on Why Video Privacy Matters

Our guest blogger Neil Richards, a Professor of Law at Washington University School of Law, turns his sights on video privacy in this guest blog post.  It whets our appetite for his forthcoming book on Intellectual Privacy.  So here is Professor Richards’s post:

The House of Representatives recently passed an amendment to a fairly obscure law known as the Video Privacy Protection Act.  This law protects the privacy of our video rental records.  It ensures that companies that have information about what videos we watch keep that information confidential, and it requires them to get meaningful consent from us before they publish it.  The House, at the urging of Netflix and Facebook, has passed an amendment that would allow these companies to share our movie watching habits much more easily.  The Video Privacy Act was passed after the Washington City Paper obtained the video rental records of Supreme Court nominee Robert Bork and published them in order to politically discredit him.  It worked.  The Video Privacy Act rests on the enduring wisdom that what we watch is our own business, regardless of our politics.  It allows us to share films we’ve watched on our own terms and not those of video stores or online video providers.

What’s at stake is something privacy scholars call “intellectual privacy” – the idea that records of our reading habits, movie watching habits, and private conversations deserve special protection beyond that given to other kinds of personal information.  The films we watch, the books we read, and the web sites we visit are essential to the ways we make sense of the world and make up our minds about political and non-political issues.  Intellectual privacy protects our ability to think for ourselves, without worrying that other people might judge us based on what we read.  It allows us to explore ideas that other people might not approve of, and to figure out our politics, sexuality, and personal values, among other things.  It lets us watch or read whatever we want without fear of embarrassment or being outed.  This is the case whether we’re reading communist or anti-globalization books; or visiting web sites about abortion, gun control, cancer, or coming out as gay; or watching videos of pornography, or documentaries by Michael Moore, or even “The Hangover 2.”

For generations, librarians have understood this.  Libraries were the Internet before computers – they presented the world of reading to us, and let us as patrons read (and watch) freely for ourselves.  But librarians understood that intellectual privacy matters.  A good library lets us read freely, but keeps our records confidential in order to safeguard our intellectual privacy.  But we are told by Netflix, Facebook, and other companies that the world has changed.  “Sharing,” as they call it, is the way of the future.  I disagree.  Sharing can be good, and sharing of what we watch and read is very important.  But the way we share is essential.  Telling our friends “hey – read this – it’s important” or “watch this movie – it’s really moving” is one of the great things that the Internet has made easier.  But sharing has to be done on our terms, not on those that are most profitable for business.  Sharing doesn’t mean a norm of publishing everything we read on the Internet.  It means giving us a conscious choice about when we are sharing our intellectual habits, and when we are not.

Industry groups are fond of saying that good privacy practices require consumer notice and consumer choice.  The current Video Privacy Act is one of the few laws that does give consumers meaningful choice about protecting their sensitive personal information.  Now is not the time to cut back on the VPPA’s protections.  Now is the time to extend its protections to the whole range of intellectual records – the books we buy, our internet search histories, and ISP logs of what we read on the Internet.  As a first step, we should reject this attempt to eviscerate our intellectual privacy.


Understanding Dignity

My brilliant colleague and co-author Leslie Meltzer Henry is a thought leader on dignity’s jurisprudential and philosophical implications.  University of Pennsylvania Law Review just published her engrossing and important piece entitled “The Jurisprudence of Dignity.”  I’m hoping to have a longer conversation about the piece in the future.  For now, here is the abstract:

Few words play a more central role in modern constitutional law without appearing in the Constitution than “dignity.” The term appears in more than nine hundred Supreme Court opinions, but despite its popularity, dignity is a concept in disarray. Its meanings and functions are commonly presupposed but rarely articulated. The result is a cacophony of uses so confusing that some critics argue the word ought to be abandoned altogether.

This Article fills a void in the literature by offering the first empirical study of Supreme Court opinions that invoke dignity and then proposing a typology of dignity based on an analysis of how the term is used in those opinions. The study reveals three important findings. First, the Court’s reliance on dignity is increasing, and the Roberts Court is accelerating that trend. Second, in contrast to its past use, dignity is now as likely to be invoked by the more conservative Justices on the Court as by their more liberal counterparts. Finally, the study demonstrates that dignity is not one concept, as other scholars have theorized, but rather five related concepts.

The typology refers to these conceptions of dignity as institutional status as dignity, equality as dignity, liberty as dignity, personal integrity as dignity, and collective virtue as dignity. This Article traces each type of dignity to its epistemic origins and describes the substantive dignitary interests each protects. Importantly, the typology offers more than a clarification of the conceptual chaos surrounding dignity. It provides tools to track the Court’s use of different types of dignity over time. This permits us to detect doctrinally transformative moments, in such areas as state sovereign immunity and abortion jurisprudence, that arise from shifting conceptions of dignity.


The Relationship Between Theory and Practice

The longstanding attacks on legal scholarship all seem to assume a particular relationship between theory and practice, one that I believe is flawed.  Recently, I responded to one such critique.  There are others, with Chief Justice Roberts and many other judges and practitioners claiming that legal scholarship isn’t worth their attention and isn’t useful to the practice of law.

It seems to me that those making these critiques assume that the primary value of legal scholarship should be to (1) describe current legal doctrine to make legal research easier for practitioners; or (2) influence an immediate and direct change in the law.  In an earlier post, I argued that #2 above is an unreasonable standard.  Legal change is slow, and rarely will any one article have a direct influence; change typically occurs through the indirect influence of numerous sources.  Only in the movies or in simplistic historical accounts will we see one article or book lead to dramatic changes.  Of course, it occasionally happens, but rarely.

In this post, I want to tackle claim #1.  The treatise writing and doctrinal legal scholarship of yesteryear have diminished, though they aren’t gone.  Last I checked, there were quite a lot of treatises written by quite a lot of law professors.  But there is today a lot more theoretical scholarship.  Is this scholarship valuable if it doesn’t help in legal research?

The answer is yes for many reasons:

1. As with all the humanities, the value of any particular work is hard to quantify.  What’s the value of Kafka’s The Trial or works by Shakespeare?  What’s the value of reading history?  What’s the value of learning things that don’t have direct application to one’s career?  I believe there’s a lot of value.  Reading these works opens up new ways of thinking, sparks new ideas, and helps people understand the world differently.  This can indirectly affect one’s legal practice skills by enhancing creativity, improving one’s writing style, or making one see the facts of a case in a different light.  It is interesting that many of the great jurists were also avid readers of literature.  Indeed, many of the great thinkers and writers throughout history had wide-ranging intellectual interests and reading habits.  Would people like Benjamin Franklin or Thomas Jefferson have been as creative if their intellectual explorations had been narrower and more workmanlike?  Probably not.  Would Justice Holmes have been as great without his love of the humanities?  I doubt it.

2. There is a value in critiquing legal decisions and laws, even if the critique winds up remaining in dissent.  Why do justices bother to write dissents?  After all, it often takes decades if not 40-50 years for the Supreme Court to change the law.  They write dissents in the hope that one day the Court will see things differently.  They write them to make a record.  There is a value in criticizing legal opinions and laws even if it doesn’t immediately result in a change.  Indeed, many of the critiques of legal decisions and laws that I read in legal scholarship are very powerful ones.  Courts and lawmakers should pay more attention, as the scholarship often reveals logical flaws in reasoning, clear errors in applying precedent, assumptions that are based on faulty facts, assumptions that are wrong based on empirical evidence, or assumptions that are contrary to widely-accepted conclusions in science or social science.   Courts and legislatures may hide their heads in the sand, but that shouldn’t be a justification for criticizing legal scholarship — it should be a basis for criticizing courts and lawmakers.



The Usefulness of Legal Scholarship

A reader of my post about the N.Y. Times critique of legal education writes, in regard to the value of legal scholarship:

I happen to be on the editorial board of a T14 law school’s law review, so I have to cite check and read articles regularly. Of those I’ve read, I can’t think of a single one I thought would be useful to a practicing lawyer. The problem is, in my experience, most seem to advocate a fundamental change in philosophy to an area of law that diverges from what precedent would suggest. To me, this seems extremely unhelpful, because A. Lower courts aren’t likely to accept a grand new theory that seems to contradict what SCOTUS is saying, B. As far as I can tell SCOTUS seems not to usually change its theory either, and C. I don’t think most policymakers tend to read law review articles.

This leads me to be inclined to believe that most law review articles are useless. Are you saying my sample is unrepresentative of what’s out there? Or do I simply have a narrower definition of usefulness? Could you perhaps suggest some articles from the past year that in your mind represented useful legal scholarship?

This commentator assumes that usefulness is the equivalent of being accepted by the courts.  I quarrel with this view for many reasons:

1. An article can have an influence on cases, even if difficult to demonstrate.  Many courts don’t cite law review articles even when they rely on them.  Judges are notorious for not being particularly charitable with citations.  They often copy verbatim parts of briefs, for example.  If a law professor relies on a scholarly work even in a minor way, the professor will typically cite to the work.  Not so for courts.

2. Most articles will not change the law.  Changing the law is quite difficult, and if most law review articles changed the law, the law would be ridiculously more dynamic than it currently is.

3. No matter what discipline or area, most of the things produced are not going to be great.  Most inventions are flops.  Most books, songs, movies, TV shows, art works, architecture, or anything produced are quite forgettable and will likely be forgotten.  Great lasting works only come around infrequently, no matter what the field.

4. Most people are forgettable too.  In the law, most practitioners and judges have been forgotten.  Only a few great ones are remembered.  Of the judges who are most well-known, it is interesting that many were more theoretical in nature and had a major impact in changing the law — typically in ways law professors might change the law.  Think of Benjamin Cardozo, who wrote many articles and books and who radically changed the law.  Think of Felix Frankfurter, a former law professor.  Think of Louis Brandeis.  Think of Oliver Wendell Holmes.  These were jurists who were thinkers.  They were readers.  They were literary.  They were writers of scholarship too.  Maybe the forgettable practitioners and judges are the ones who ignore legal scholarship.



An Irrational Undertaking: Why Aren’t We More Rational?

By unanimous reader demand – all one out of one readers voting, as of last week – this post will explore the small topic of the biological basis of “irrationality,” and its implications for law.  Specifically, Ben Daniels of Collective Conscious asked the fascinating question: “What neuro-mechanisms enforce irrational behavior in a rational animal?”

Ben’s question suggests that ostensibly rational human beings often act in irrational ways.  To prove his point, I’m actually going to address his enormous question within a blog post.  I hope you judge the effort valiant, if not complete.

The post will offer two perspectives on whether, as the question asks, we could be more rational than we are if certain “neuro-mechanisms” did not function to impair rationality.  The first view is that greater rationality might be possible – but might not confer greater benefits.  I call this the “anti-Vulcan hypothesis”:  While our affective capacities might suppress some of our computational power, they are precisely what make us both less than perfectly rational and gloriously human – Captain Kirk, rather than Mr. Spock.  A second, related perspective offered by the field of cultural cognition suggests that developmentally-acquired, neurally-ingrained cultural schemas cause people to evaluate new information not abstractly on its merits but in ways that conform to the norms of their social group.  In what I call the “sheep hypothesis,” cultural cognition theory suggests that our rational faculties often serve merely to rationalize facts in ways that fit our group-typical biases.  Yet, whether we are Kirk or Flossie, the implication for law may be the same:  Understanding how affect and rationality interact can allow legal decision-makers to modify legal institutions to favor the relevant ability, modify legal regimes to account for predictable limitations on rationality, and communicate in ways that privilege social affiliations and affective cues as much as factual information.

First, a slight cavil with the question:  The question suggests that people are “rational animal[s]” but that certain neurological mechanisms suppress rationality – as if our powerful rational engines were somehow constrained by neural cruise-control.  Latent in that question are a factual assumption about how the brain works (more on that later) and a normative inclination to see irrationality as a problem to which rationality is the solution.  Yet, much recent work on the central role of affect in decision-making suggests that, often, the converse may be true.  (Among many others, see Jonathan Haidt and Josh Greene; these links will open PDF articles in a new window.)  Rationality divorced from affect arguably may not even be possible for humans, much less desirable.  Indeed, the whole idea of “pure reason” as either a fact or a goal is taking a beating at the hands of researchers in behavioral economics, cognitive neuroscience, and experimental philosophy – and perhaps other fields as well.

Also, since “rational” can mean a lot of things, I’m going to define it as the ability to calculate which behavior under particular circumstances will yield the greatest short-term utility to the actor.  By this measure, people do irrational things all the time: we discount the future unduly, preferring a dollar today to ten dollars next month; we comically misjudge risk, shying away from the safest form of transportation (flying) in favor of the most dangerous (driving); we punish excessively; and the list goes on.

Despite these persistent and universal defects in rationality, experimental data indicate that our brains have the capacity to be more rational than our behaviors would suggest.  Apparently, certain strong affective responses interfere with activity in particular regions of the prefrontal cortex (pfc); these areas of the pfc are associated with rationality tasks like sequencing, comparing, and computing.  In experiments in which researchers used powerful magnets to temporarily “knock out” activity in limbic (affective) brain regions, otherwise typical subjects displayed savant-like abilities in spatial, visual, and computational skills.  This experimental result mimics what anecdotally has been reported in people who display savant abilities following brain injury or disease, and in people with autism spectrum disorders, who may have severe social and affective impairments yet also be savants.

So: Some evidence suggests the human brain may have massively more computing power than we can put to use because of general (and sometimes acute) affective interference.  It may be that social and emotional processing suck up all the bandwidth; or, prosocial faculties may suppress activity in computational regions.  Further, the rational cognition we can access can be totally swamped out by sudden and strong affect.  With a nod to Martha Nussbaum, we might call this the “fragility of rationality.”

This fragility may be more boon than bane:  Rationality may be fragile because, in many situations, leading with affect might confer a survival advantage.  Simple heuristics and “gut” feelings, which are “fast and cheap,” let us respond quickly to complex and potentially dangerous situations.  Another evolutionary argument is that all-important social relationships can be disrupted by rational utility-maximizing behaviors – whether you call them free-riders or defectors.  To prevent humans from mucking up the enormous survival-enhancing benefits of community, selection would favor prosocial neuroendocrine mechanisms that suppress an individual’s desire to maximize short-term utility.  What’s appealing about this argument is that – if true – it means that that which enables us to be human is precisely that which makes us not purely rational.  This “anti-Vulcan” hypothesis is very much the thrust of the work by Antonio Damasio (and here), Dan Ariely, and Paul Zak, among many other notable scholars.

An arguably darker view of the relationship between prosociality and rationality comes from cultural cognition theory.  While evolutionary psychology and behavioral economics suggest that people have cognitive quirks as to certain kinds of mental tasks, cultural cognition suggests that people’s major beliefs about the state of the world – the issues that self-governance and democracy depend upon – are largely impervious to rationality.  In place of rationality, people quite unconsciously “conform their beliefs about disputed matters of fact … to values that define their cultural identities.”

On this view, people aren’t just bad at understanding risk and temporal discounting, among other things, because our prosocial adaptations suppress it.  Rather, from global warming to gun control, people unconsciously align their assessments of issues to conform to the beliefs and values of their social group.  Rationality operates, if at all, post hoc:  It allows people to construct rationalizations for relatively fact-independent but socially conforming conclusions.  (Note that different cultural groups assign different values to rational forms of thought and inquiry.  In a group that highly prizes those activities, pursuing rationally-informed questioning might itself be culturally conforming.  Children of academics and knowledge-workers: I’m looking at you.)

This reflexive conformity is not a deliberate choice; it’s quite automatic, feels wholly natural, and is resilient against narrowly rational challenges based in facts and data.  And that this cognitive mode inheres in us makes a certain kind of sense:  Most people face far greater immediate danger from defying their social group than from global warming or gun control policy.  The person who strongly or regularly conflicts with their group becomes a sort of socially stateless person, the exiled persona non grata.

To descend from Olympus to the village:  What could this mean for law?  Whether we take the heuristics and biases approach emerging from behavioral economics and evolutionary psychology or the cultural cognition approach emerging from that field, the social and emotional nature of situated cognition cannot be ignored.  I’ll briefly highlight two strategies for “rationalizing” aspects of the legal system to account for our affectively-influenced rationality – one addressing the design of legal institutions and the other addressing how legal and political decisions are communicated to the public.

Oliver Goodenough suggests that research on rational-affective mutual interference should inform how legal institutions are designed.  Legal institutions may be anywhere on a continuum from physical to metaphorical, from court buildings to court systems, to the structure and concept of the jury, to professional norms and conventions.  The structures of legal institutions influence how people within them engage in decision-making; certain institutional features may prompt people to bring to bear their more emotive (empathic), social-cognitive (“sheep”), or purely rational (“Vulcan”) capacities.

Goodenough does not claim that more rationality is always better; in some legal contexts, we might collectively value affect – empathy, mercy.  In another, we might value cultural cognition – as when, for example, a jury in a criminal case must determine whether a defendant’s response to alleged provocation falls within the norms of the community.  And in still other contexts, we might value narrow rationality above all.  Understanding the triggers for our various cognitive modes could help address long-standing legal dilemmas.  Jon Hanson’s work on the highly situated and situational nature of decision-making suggests that the physical and social contexts in which deliberation takes place may be crucial to the answers at which we arrive.

Cultural cognition may offer strategies for communicating with the public about important issues.  The core insight of cultural cognition is that people react to new information not primarily by assessing it in the abstract, on its merits, but by intuiting their community’s likely reaction and conforming to it.  If the primary question a person asks herself is, “What would my community think of this thing?” instead of “What is this thing?”, then very different communication strategies follow:  Facts and information about the thing itself only become meaningful when embedded in information about the thing’s relevance to peoples’ communities.  The cultural cognition project has developed specific recommendations for communication around lawmaking involving gun rights, the death penalty, climate change, and other ostensibly fact-bound but intensely polarizing topics.

To wrap this up by going back to the question: Ben, short of putting every person into a TMS machine that makes us faux-savants by knocking out affective and social functions, we are not going to unleash our latent (narrowly) rational powers.  But it’s worth recalling that the historical, and now unpalatable, term for natural savants was “idiot-savant”: this phrase itself suggests that, without robust affective and social intelligence – which may make us “irrational” – we’re not very smart at all.


Recommended Reading: Gregory Keating’s Fairness Theory, New Papers on Rylands and Nuisance

Professor Gregory Keating has two new pieces up on SSRN, both illuminating and important.  A quick, overly brief primer: Keating’s fairness theory provides the “moral logic” for treating strict enterprise liability as the modern default rule for tort law.  It requires an enterprise to compensate individuals injured by its risky yet profitable activities if the victim does not benefit from those activities to the same extent that the enterprise does.  In that sense, strict liability exacts a just price for an enterprise’s freedom to engage in profitable activities where the victim did not similarly enjoy such a liberty but nonetheless suffered injury.  In the abstract included below for Recovering Rylands: An Essay for Bob Rabin (forthcoming DePaul Law Review), Keating celebrates and builds upon Robert Rabin’s article “The Historical Development of the Fault Principle,” providing a moral and historical account of Rylands v. Fletcher’s strict liability alternative to fault liability while recognizing its practical limitations.  After the jump, I will include the abstract for Keating’s Nuisance as a Strict Liability Wrong.  Here is the abstract for Recovering Rylands:

This paper, written for a Clifford Symposium Festschrift for Robert Rabin, comments on his lovely, widely admired, and yet still underappreciated paper The Historical Development of the Fault Principle: A Reinterpretation. Rabin’s paper teaches us something essential about the character and structure of modern tort law at the moment of its genesis, and it reminds us of the even more general truth that what the law does not cover is at least as important as what it does cover. The Historical Development of the Fault Principle is constructed around a simple, but powerful, distinction between fault as a breach of duty and fault as a cause of action. Negligence as a cause of action is an institution, a system of related rules, concepts, principles and policies. This simple but penetrating observation transforms the question of just what is at stake in the conventional thesis that the late nineteenth century was the heyday of “universal fault liability.”

Whether or not fault liability was “universal” at the end of the nineteenth century turns, Rabin teaches, not on whether tort liability for accidental injury is constructed around fault or strict liability. The “universality” of fault liability is, rather, a question about the percentage of the legal landscape for unintentional harm that the institution of negligence liability governs. Building on this point, The Historical Development of the Fault Principle shows that the age of “universal fault liability” is better described as an age where “no duty” predominated. Tort liability – fault liability – retreated whenever contract was capable of taking hold of a domain of accidental injury. It retreated both in the presence of contractual relations (in the workplace context) and in the absence of contractual relations (in the product context). Property, contract, and “no duty” all trumped tort. This insight not only changes our understanding of the rise of fault liability; it also provides a powerful rebuttal of the still influential, if waning, view that the common law of torts circa 1870-1905 was economically efficient.

Rabin’s critique leaves intact the thesis that negligence liability itself emerged as a freestanding form of tort liability at the end of the nineteenth century. Prior to that time, negligence was merely the mental element of a number of discrete, nominate torts. Late in the nineteenth century, negligence transforms into a norm of conduct and thereby emerges as a distinctive form of tort liability. This development sets the stage for the expansion of fault liability into the domains of product accidents, landowner liability, and some forms of pure economic and emotional harm. The late nineteenth century thus sets the stage for the “universal fault liability” that it so conspicuously fails to achieve.

Recovering Rylands argues that Rylands v. Fletcher represents a parallel development with respect to strict liability. Rylands generalizes ancient forms of liability in nuisance and trespass into a coherent, general alternative to fault liability. The opinions in the case both articulate strict liability as a general principle of responsibility for harm done and clarify the fundamental perception on which strict liability rests, namely, that harm justifiably inflicted – harm which is unavoidable in the sense that it should be inflicted – can trigger responsibilities of repair. The idea that the justified infliction of harm gives rise to responsibilities of repair stands in sharp contrast to the root premise of fault liability, and accounts for the enduring significance of strict liability as a form of legal responsibility for harm done.

After excavating the basis and nature of strict liability in Rylands, the paper traces the ebb and flow of the strand of strict liability that it inspired over the past century and a half. On the one hand, that history shows that fault liability is never universal, though generally dominant. On the other hand, that history suggests that the difficulty of attributing harms to activities without deploying a fault criterion may be a permanent, insurmountable barrier to universal, common law strict liability. Last, but surely not least, Rylands’ articulation of strict liability as a general idea is an essential part of the formative moment of modern tort law that Bob Rabin did so much to help us understand. Adding an account of Rylands is a way of building on his seminal contribution.


Recommended Reading: David A. Super’s Against Flexibility

Cornell Law Review just published Professor David Super’s article Against Flexibility, a forceful and engrossing indictment of flexibility and of the legal procrastination at its core.  Here is the abstract:

Contemporary legal thinking is in the thrall of a cult of flexibility. We obsess about avoiding decisions without all possible relevant information while ignoring the costs of postponing decisions until that information becomes available. We valorize procrastination and condemn investments of decisional resources in early decisions.

Both public and private law should be understood as a productive activity converting information, norms, and decisional and enforcement capacity into outputs of social value. Optimal timing depends on changes in these inputs’ scarcity and in the value of the decision they produce. Our legal culture tends to overestimate the value of information that may become available in the future while discounting declines over time in decisional resources and the utility of decisions. Even where postponing some decisions is necessary, a sophisticated appreciation of discretion’s components often exposes aspects of decisions that can and should be made earlier.

Disaster response illustrates the folly of legal procrastination as it shrinks the supply of decisional resources while increasing the demand for them. After Hurricane Katrina, programs built around flexibility failed badly through a combination of late and defective decisions. By contrast, those that appreciated the scarcity of decisional resources and had developed detailed regulatory templates in advance provided quick and effective relief. 


Q&A with Lior Strahilevitz about Information and Exclusion

Lior Strahilevitz, Deputy Dean and Sidley Austin Professor of Law at the University of Chicago Law School, recently published a brilliant new book, Information and Exclusion (Yale University Press 2011).  Like all of Lior’s work, the book is creative, thought-provoking, and compelling.  There are books that make strong and convincing arguments, and these are good, but then there are the rare books that not only do this, but make you think in a different way.  That’s what Lior achieves in his book, and that’s quite an achievement.

I recently had the opportunity to chat with Lior about the book. 

Daniel J. Solove (DJS): What drew you to the topic of exclusion?

Lior Jacob Strahilevitz (LJS):  It was an observation I had as a college sophomore.  I lived in the student housing cooperatives at Berkeley.  Some of my friends who lived in the cooperatives told me they felt morally superior to people in the fraternities and sororities because the Greek system had an elaborate, exclusionary rush and pledge process.  The cooperatives, by contrast, were open to any student.  But as I visited friends who lived in the various cooperative houses, the individual houses often seemed no more heterogeneous than the fraternities and sororities.  That made me curious.  It was obvious that the pledging and rushing process – formal exclusion – created homogeneity in the Greek system.  But what was it that was creating all this apparent homogeneity in a cooperative system that was open to everyone?  That question was one I kept wondering about as a law student, lawyer, and professor.

That’s why page 1 of the book begins with a discussion of exclusion in the Greek system.  I start with accounts of the rush process by sociologists who studied the proxies that fraternity members used to evaluate pledges in the 1950s (attire, diction, grooming, firm handshakes, etc.).  The book then brings us to the modern era, when fraternity members peruse Facebook profiles that provide far more granular information about the characteristics of each pledge.  Proxies still matter, but the proxies are different, and those differences alter the ways in which rushing students behave and fraternities exclude.

DJS: What is the central idea in your book?

LJS: The core idea is that asymmetric information largely determines which mechanisms are used to exclude people from particular groups, collective resources, and services.  When the person who controls a resource knows a lot about the people who wish to use it, she will make decisions about who gets to access it.  Where she lacks that information, she’ll develop a strategy that forces particular groups to exclude themselves from the resource, based on some criteria.  There’s a historical ebb and flow between these two sorts of strategies for exclusion, but we seem to be in a critical transition period right now thanks to the decline of practical obscurity in the information age.



What is a treaty? Is that the right question?

(Thanks to Danielle and the Co-Op crowd for letting me stick around a bit longer.)

I am interested in how we should think about treaties.  More specifically, I am interested in different ways we might think about treaties, and why different ways might be appropriate in different circumstances.  At one extreme we might think of treaties as establishing sacred duties, as being based on oaths with deep religious implications.  (Jeremy Waldron has a very interesting discussion of the history of this idea in his recent Charles E. Test lectures, “A Religious View of the Foundations of International Law”.)  I think that there’s a case to be made that the supposed principle of international law (or of natural law, depending on one’s account), pacta sunt servanda, depends on this understanding, though I won’t try to make that case here.  (If so, this would be interesting in light of the fact that Hans Kelsen at one point held, I believe, pacta sunt servanda to be the “basic norm” of international law, though he later abandoned this.)