Author: Ian Kerr


Some More Play (Symposium on Configuring the Networked Self)

I remember meeting Julie Cohen for the first time. It was right around the time she was writing “Lochner in Cyberspace.” We were hanging out during one of the breaks at the Berkman Center’s first really big conference on the Internet & Society, eating ice cream cones with Mark Lemley.

If you had told me then that the (then) University of Pittsburgh professor of law would one day turn her academic focus to many of Nietzsche’s preoccupations in La Gaya Scienza, I am not sure that I would have believed you. The Heraclitian privileging of becoming over being? Human maturity consisting in the seriousness of a child at play? Remaining faithful to the body? What does any of that have to do with digital rights management or a right to be anonymous?


Robots in the Castle

In thinking about what Samir and Lawrence offer us in their new book, A Legal Theory for Autonomous Artificial Agents, I am reminded of the old Gothic castle described in Blackstone’s Commentaries, whose “magnificent and venerable” spaces had been badly neglected and whose “inferior apartments” had been retrofitted “for a modern inhabitant”.

Feel me here: I am not dissing the book but, rather, sympathizing about law’s sometimes feeble ability to adapt to modern times and its need to erect what Blackstone described as a mass of legal “fictions and circuities”, leaving the law not unlike the stairways in its castle—“winding and difficult.”

Understanding this predicament all too well, I am not surprised to see Ryan Calo’s disappointment in light of the title and description of the book, which seemed to me also to promise something much more than a mere retrofitting of the castle—offering up instead a legal theory aimed at resurrecting the magnificent and venerable halls of a jurisprudence unmuddled by these strange new entities in a realm no longer populated exclusively by human agents.

Samir and Lawrence know full well that I am totally on board in thinking that the law of agency has plenty to offer to the legal assessment of the operations of artificial entities. I first wrote about this in 1999, when Canada’s Uniform Law Commission asked me to determine whether computers could enter into contracts which no human had reviewed or, for that matter, even knew existed. In my report, later republished as an article called “Spirits in the Material World,” I proposed a model based on the law of agency as a preferable approach to the one in place at the time (and still), which merely treats machine systems as an extension of the human beings utilizing them.

At the time, I believed the law of agency held much promise for software bots and robots. The “slave morality” programmed into these automatic beasts seemed in line with those imagined in the brutal jus civile of ancient Rome, itself programmed in a manner that would allow brutish Roman slaves to interact in commerce with Roman citizens despite having no legal status. The Roman system had no problem with these non-status entities implicating their owners. After all: Qui facit per alium facit per se (a fancy Latin phrase designating the Roman law fiction that treats one who acts through another as having himself so acted). What a brilliant way to get around capacity and status issues! And the modern law of agency, as it subsequently developed, offers up fairly nuanced notions like the “authority” concept that can also be used to limit the responsibility of the person who acts through an (artificial) other.

The book does a great job at carrying out the analysis in various domains and, much to my delight, extends the theory to a range of situations beyond contracting bots.

In my view, the genius of agency law as a means of resurrecting the castle is that it can recognize and respond to the machine system without having to worry about or even entertain the possibility that the machine is a person. (For that reason, I would have left out the chapter on personhood, proposals for which I think have been the central reason why this relatively longstanding set of issues has yet to be taken seriously by those who have not taken the blue pill.) Agency law permits us to simply treat the bot like the child who lacks the capacity to contract but still manages to generate an enforceable reliance interest in some third party when making a deal while purporting to act on the authority of a parent.

But in my view—I thought it then and I think it still—using agency rules to solve the contracting problem is still little more than scaffolding used to retrofit the castle. As my fave American jurist, Lon Fuller, might have described it, the need to treat bots and robots as though they were legal agents in and of itself represents the pathology of law:

“When all goes well and the established legal rules encompass neatly the social life they are intended to regulate, there is little occasion for fictions. There is also little occasion for philosophizing, for the law then proceeds with a transparent simplicity suggesting no need for reflective scrutiny. Only in illness, we are told, does the body reveal its complexity. Only when legal reasoning falters and reaches out clumsily for help do we recognize what a complex undertaking the law is.”

The legal theory of both Blackstone and Fuller tells me that there is good reason to be sympathetic to the metaphors and legal fictions that Samir and Lawrence offer us—even if they are piecemeal. To be clear: although the “legal fiction” label is sometimes pejorative, I am not using it in that sense. Rather, I am suggesting that the approach in the book resembles a commonly used juridical device of extremely high value. Legal fictions of this sort exhibit what Fuller recognized as an “exploratory” function; they allow a kind of intellectual experimentation that will help us inch towards a well-entrenched legal theory.

Exploring the limits of the agency rules may indeed solve a number of doctrinal problems associated with artificial entities.

But (here I need a new emoticon that expresses that the following remark is offered in the spirit of sincerity and kindness) to pretend that the theory offered in this book does more than it does or to try to defend its approach as a cogent, viable, and doctrinally satisfying unified field theory of robotics risks missing all sorts of important potential issues and outcomes and may thwart a broader multi-pronged analysis that is crucial to getting things right.

I take it that Samir is saying in his replies to Ryan that he in fact holds no such pretense and that he does not claim to have all of the answers. But that, in my view, was not Ryan’s point at all.

My take-away from that exchange, and from my own reflections on the book, is that it will also be very important to consider various automation scenarios where agency is not the right model and ask ourselves why it is not. This is something I have not yet investigated or thought about very deeply. Still, I am willing to bet a large pizza (at the winner’s choice of location) that there are at least as many robo-scenarios where thinking of the machine entity as an artificial agent in the legal sense does more harm than good. If this is correct, agency law may offer some doctrinal solutions (as my previous work suggests), but that doesn’t in and of itself provide us with a legal theory of artificial agents.

When asked to predict the path of cyberlaw in 1995, Larry Lessig very modestly said that if he had to carve the meaning of the 1st Amendment into silicon, he was certain that he would get it fundamentally wrong. There hadn’t been enough time for the culture of the medium to evolve to be sure of right answers. And for that very reason, he saw the slow and steady march of common law as the best possible antidote.

I applaud the bravery of Chopra and White in their attempt to cull a legal theory for bots, robots and the like. But I share Ryan’s concerns about the shortcomings in the theory of artificial agents as offered. And in addressing his concerns, rather than calling Ryan’s own choice of intellectual metaphors “silly” or “inappropriate,” it might be more valuable to start thinking about scenarios in which the agency analysis offered falls short or is inapplicable and what other models we also might consider and for what situations.

I surely do not fault the authors for failing to come up with the unified field theory of robotics—we can save that for Michael Froomkin’s upcoming conference in Miami!!!—but I would like us to think also about what the law of agency cannot tell us about a range of legal and ethical implications that will arise from the social implementation of automation, robotics and artificial intelligence across various sectors.