Camel, Weasel, Whale
posted by Ryan Calo
Samir Chopra—whom I consider to be something of a pioneer in thinking through the philosophic and legal issues around artificial intelligence—did not much care for my initial thoughts about his and Lawrence White’s new book, A Legal Theory For Autonomous Agents. The gist of my remarks was that, while interesting and well researched, the book does not deliver on its promise of advancing “a legal theory.” Mostly what the book does (I read the book cover to cover, as you can see!) is identify new and old ways the law might treat complex software to advance various, seemingly unrelated goals. The book is largely about removing conceptual obstacles to treating software as “agreeing,” “knowing,” or “taking responsibility,” should we be inclined to do so in particular cases for independent policy reasons.
In the second chapter of the book, for instance, Chopra and White argue that treating software capable of calculating, offering, and appearing to accept terms as legal agents is not only coherent, but results in greater economic efficiency. The upshot is less contractual liability than would result from treating the software as a mere instrument because, in instances where software makes the right kind of mistake, the entity that deployed the software—usually a sophisticated corporation—will not be held to the agreement. In the third chapter, the authors abandon economic efficiency entirely. Here the argument is that we ought to look to agency law in order to attribute more information to corporations because “[o]nly such a treatment would do justice to the reality of the increased power of the corporation, which is a direct function of the knowledge at its disposal.” In other words, by treating its software as agents rather than tools, we can either limit corporate liability for reasons of efficiency, or expand it for reasons of fairness.
In reply to my initial post, Chopra writes:
It is also clear that you don’t (or choose not to) understand the conditional nature of the claim we make when we say that intentional stance is to be chosen if it results in the best predictive and explanatory position. Read James Grimmelmann’s post; he gets to the heart of the matter when he notes that the complexity of these systems is key. Your example of the fire is silly; you are the one dabbling in inappropriate metaphor here; we always have the physical mode of description available here as the best explanatory device. We note in the book that the intentional stance will become the best strategy when we lose epistemic hegemony over these agents; on other occasions it will be available to us and we can use it to facilitate certain kinds of discourse – as in when we want to treat artificial agents as legal agents.
The exact opposite is true. I see very well that Chopra and White would adopt the intentional stance only if doing so “results in the best predictive and explanatory position.” I am trying to figure out what “the best predictive and explanatory position” might be. The point of each of my examples is that reasonable minds will disagree about “the best explanatory device” for a given phenomenon; they will disagree as to whether it is better, as a policy matter, for the law to treat Google’s algorithm as though it were a human employee. Perhaps it is better from the perspective of consumer privacy but worse from that of government surveillance.
A Legal Theory For Autonomous Agents continues the project—dating back to at least Sam Lehman-Wilzig’s 1981 essay Frankenstein Unbound—of identifying options for how the law might treat software of ever-increasing complexity and independence. Maybe complex software is like a child, a slave, a corporation, a ship, an animal, an agent. We are still like Polonius and Hamlet discussing the cloud. I admire Chopra and White’s book for its sustained attention to agency law as a possible means to reach desirable results in some cases involving complex software. I admire its commitment to interdisciplinary study. But I do not see the book as delivering “a prescriptive legal theory to guide our interactions with artificial agents.” Chopra and other participants are free to disagree.