Archive for the ‘Symposium (Autonomous Artificial Agents)’ Category
posted by Samir Chopra
I want to wrap up discussion in this wonderful online symposium on A Legal Theory for Autonomous Artificial Agents that Frank Pasquale and the folks at Concurring Opinions put together. I appreciate you letting me hijack your space for a week! Obviously, this symposium would not have been possible without its participants–Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian and Harry Surden–and I thank them all for their responses. You’ve all made me think very hard about the book’s arguments (I hope to continue these conversations over at my blog at samirchopra.com and on my Twitter feed at @EyeOnThePitch). As I indicated to Frank by email, I’d need to write a second book in order to do justice to them. I don’t want to waffle on too long, so let me just quote from the book to make clear what our position is with regard to artificial agents and their future legal status:
posted by Samir Chopra
I would like to respond to a series of related posts made by Ken Anderson, Giovanni Sartor, Lawrence Solum, and James Grimmelmann during the LTAAA symposium. In doing so, I will touch on topics that recurred throughout the debate here: the intentional stance, complexity, legal fictions (even zombies!) and the law. My remarks will also respond to the very substantive, engaged comments made by Patrick O’Donnell and AJ Sutter to my responses over the weekend. (I have made some responses to Patrick and AJ in the comment spaces where their remarks were originally made.)
February 20, 2012 at 4:32 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Articles and Books, Cyberlaw, Legal Theory, Symposium (Autonomous Artificial Agents), Technology
posted by Samir Chopra
Ugo Pagallo, with whom I had a very productive email exchange a few months ago, has written a very useful response to A Legal Theory for Autonomous Artificial Agents. I find it useful because I think that, on each of his four allegedly critical points, we are in greater agreement than Ugo imagines.
February 19, 2012 at 6:40 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Articles and Books, Cyberlaw, Legal Theory, Symposium (Autonomous Artificial Agents), Technology
posted by Samir Chopra
I want to thank Harry Surden for his rich, technically informed response to A Legal Theory for Autonomous Artificial Agents and, especially, for seizing on an important distinction we make early in the book when we say:
There are two views of the goals of artificial intelligence. From an engineering perspective, as Marvin Minsky noted, it is the “science of making machines do things that would require intelligence if done by men” (Minsky 1969, v). From a cognitive science perspective, it is to design and build systems that work the way the human mind does (Shanahan 1997, xix). In the former perspective, artificial intelligence is deemed successful along a performative dimension; in the latter, along a theoretical one. The latter embodies Giambattista Vico’s perspective of verum et factum convertuntur, “the true and the made are…convertible” (Vico 2000); in such a view, artificial intelligence would be reckoned the laboratory that validates our best science of the human mind. This perspective sometimes shades into the claim artificial intelligence’s success lies in the replication of human capacities such as emotions, the sensations of taste and self-consciousness. Here, artificial intelligence is conceived of as building artificial persons, not just designing systems that are “intelligent.”
The latter conception of AI, as committed to building ‘artificial persons’, is pretty clearly what causes much of the angst that LTAAA’s claims seem to occasion. And even though I have sought to separate the notion of ‘person’ from that of ‘legal person’, it seems that some conflation has continued to occur in our discussions thus far.
I’ve personally never understood why artificial intelligence was taken to be, or ever took itself to be, dedicated to the task of replicating human capacities, faithfully attempting to build “artificial persons” or “artificial humans”. This always seemed like such a boring, pointlessly limited task. Sure, the pursuit of cognitive science is entirely justified; the greater the understanding we have of our own minds, the better we will be able to understand our place in nature. But as for replicating and mimicking them faithfully: why bother with the ersatz when we have the real? We already have a perfectly good way to make humans or persons, and it is way more fun than doing mechanical engineering or writing code. The real action, it seems to me, lies in the business of seeing how we could replicate our so-called intellectual capacities without particular regard for the method of implementation; if the best method of implementation happened to be one that mapped well onto what seemed like the human mind’s way of doing it, then that would be an added bonus. The multiple realizability of our supposedly unique cognitive abilities would do wonders to displace our sense of uniqueness, acknowledge the possibility of other modes of existence, and re-invoke a sense of wonder about the elaborate tales we tell ourselves about our intentionality, consciousness, autonomy or freedom of will.
Having said this, I can now turn to responding to Harry’s excellent post.
February 19, 2012 at 3:26 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Articles and Books, Cyberlaw, Legal Theory, Psychology and Behavior, Symposium (Autonomous Artificial Agents), Technology
posted by Samir Chopra
Andrea Matwyshyn’s reading of the agency analysis of contracting (offered in A Legal Theory for Autonomous Artificial Agents and also available at SSRN) is rigorous and raises some very interesting questions. I thank her for her careful and attentive reading of the analysis and will do my best to respond to her concerns here. The doctrinal challenges that Andrea raises are serious and substantive for the extension and viability of our proposed doctrine. As I note below, accommodating some of her concerns is the perfect next step.
At the outset, I should state some of our motivations for adopting agency doctrine for artificial agents in contracting scenarios (these helped inform the economic incentivizing argument for maintaining some separation between artificial agents and their creators or deployers):
[A]pplying agency doctrine to artificial agents would permit the legal system to distinguish clearly between the operator of the agent i.e., the person making the technical arrangements for the agent’s operations, and the user of the agent, i.e., the principal on whose behalf the agent is operating in relation to a particular transaction.
Embracing agency doctrine would also allow a clear distinction to be drawn between the authority of the agent to bind the principal and the instructions given to the agent by its operator.
Third, an implicit, unstated economic incentive.
February 19, 2012 at 2:10 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Contract Law & Beyond, Cyberlaw, Economic Analysis of Law, Legal Theory, Symposium (Autonomous Artificial Agents), Technology, Tort Law
posted by Samir Chopra
I’d like to thank Andrew Sutter for his largely critical, but very thought-provoking, response to A Legal Theory for Autonomous Artificial Agents. In responding to Andrew I will often touch on themes that I might have already tackled. I hope this repetition comes across as emphasis, rather than as redundancy. I’m also concentrating on responding to broader themes in Andrew’s post as opposed to the specific doctrinal concerns (like service-of-process or registration; my attitude in these matters is that the law will find a way if it can discern the broad outlines of a desirable solution just ahead; service-of-process seemed intractable for anonymous bloggers but it was solved somehow).
February 18, 2012 at 3:05 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Articles and Books, Cyberlaw, Symposium (Autonomous Artificial Agents), Technology
posted by Samir Chopra
I am gratified that Deborah DeMott, whose work on agency doctrines was so influential in our writing, has written such an engaged (and, if I may say so, positive) response to our attempt, in A Legal Theory for Autonomous Artificial Agents, to co-opt common law agency doctrine for use with artificial agents. We did so knowing the fit would be neither exact nor precise, and certainly would not mesh with all established intuitions.
February 18, 2012 at 12:47 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Articles and Books, Contract Law & Beyond, Cyberlaw, Legal Theory, Symposium (Autonomous Artificial Agents)
posted by Samir Chopra
In this post, I’d like to make some brief remarks on the question of legal personhood for artificial agents, and in so doing, offer a response to Sonia Katyal’s and Ramesh Subramanian’s thoughtful posts on A Legal Theory for Autonomous Artificial Agents. I’d like to thank Sonia for making me think more about the history of personhood jurisprudence, and Ramesh for prompting me to think more about the aftermath of granting legal personhood, especially the issues of “Reproduction, Representation, and Termination” (and for alerting me to Gillick v West Norfolk and Wisbech Area Health Authority).
I have to admit that I don’t have, as yet, any clearly formed thoughts on the issues Ramesh raises. This is not because they won’t be real issues down the line; indeed, I think automated judging is more than just a gleam in the eye of those folks who attend ICAIL conferences. Rather, those issues will perhaps snap into sharper focus once artificial agents acquire more functionality, become more ubiquitous, and, more interestingly, come to occupy roles formerly occupied by humans. We will then, I think, have a clearer idea of how to frame those questions more precisely with respect to a particular artificial agent and a particular factual scenario.
February 18, 2012 at 10:54 am Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Articles and Books, Contract Law & Beyond, Cyberlaw, Symposium (Autonomous Artificial Agents)
posted by Ken Anderson
My first encounters with legal issues of autonomous artificial agents came a few years ago in the international law of autonomous lethal weapons systems. In an email exchange with an eminent computer scientist working on the problems of engineering systems that could follow the fundamental laws of war, I expressed some doubt that it would be quite so easy as all that to come up with algorithms that could, in effect, “do Kant” (in giving effect to the categorical legal imperative not to target civilians). Or, even more problematically, “do Bentham and Mill” (in providing a proportionality calculus of civilian harm set against military necessity). Indeed (I noted primly, clutching my Liberal Arts degree firmly in the Temple of STEM), we humans didn’t have an agreed-upon way of addressing the proportionality calculus ourselves, given that it seemed to invoke incomparable and incommensurable values. So how was the robot going to do what we couldn’t?
The engineer’s answer was simultaneously cheering and alarming, but mostly insouciant: ‘I don’t have to solve the philosophical problems. My machine programming just has to do on average as well or slightly better than human soldiers do.’ Which, in effect, sets up what we might call an “ethical Turing Test” for the ideal autonomous artificial agent. “Ethics for Robot Soldiers,” as Matthew Waxman and I are calling it in a new project on autonomous robotic weapons. If, in practice, we can’t tell which is the human and which is the machine in matters of ethical decision-making, then it turns out not to matter how we get to that point. Getting there means, in this case, not so much human versus machine, but instead behaviorism versus intentionality.
It is on account of reflections on the autonomous robot soldiers of the (possible) future that I so eagerly read Samir Chopra and Laurence White’s book. It does not disappoint. It is the only general theory of what might emerge across multiple areas of law over the next few decades. Still more importantly, in my view, it is the only account on offer that manages to find the sweet spot between, on the one hand, a sci-fi speculation so rampant that it merely assumes away the problems by making artificial agents into human beings, and, on the other, an analysis so granular that it offers not a theory of agents and agency but only a collection of discrete legal problems. It accomplishes all this splendidly.
But it is precisely because the text finds that sweet spot that I have a nagging question – one that is perhaps answered in the book but which I simply didn’t adequately understand. Let me put it directly, as a way of understanding the book’s fundamental frame. In the struggle between behaviorism and the “intentional stance” that runs throughout the book, particularly in its encounters with the law of agency as found in the Restatement, I was not sure where the argument finally comes down regarding the status of intentionality. At some points, intentionality did seem to be an irreducible aspect of certain behaviors, insofar as those behaviors could only be what they are under an intentional description – human relationships, for example. But sometimes it seemed as though intentionality was an irreducible aspect of human behavior – even though the artificial agent might still pass the Turing Test on a purely behavioral basis and be indistinguishable from the human.
At still other points, I thought I was to understand that intentionality was no longer an ontological status, but something closer to an “organizational heuristic” for how human beings direct themselves toward particular goals – a human methodology, true, but merely one way of going about means to ends behaviors, in which an artificial agent might accomplish the task quite differently. And in that case, I had a further question as to whether the underlying view of the “formation of judgment” was one that assumed the model of “supply ends, I’ll supply means” – or whether, instead, it held, at least as far as human judgment goes, a view that the formation of judgment does not cleanly separate them in this way. It seemed to matter, at least as far as the conceptualization of how the artificial agent made its judgments, and in what they would finally consist.
It is entirely possible that I have not understood something fundamental in the book, and that the answer to what “intention” means in the text is actually quite plain. But this question, in relation to behaviorism and the artificial agent, is what I have found hardest to grasp. I suppose this is particularly so because, for good reasons, the book is mostly about behavior, not intention. The reason I find the question important is that many of the crucial relationships (and also judgments, per the worry above) that might be permitted, or ascribed, to artificial agents seem to depend upon a certain relation – that of a fiduciary, for example, with all the peculiar “relational” positioning that is implied in that special form of agency.
Does being a fiduciary, then, at least in the strong sense of exercising discretion, imply relationships that only exist under a certain intention? Or relationships that might be said to exist only under a certain affect – love, for example? And does it finally matter? Or is the position taken by the book finally one that either reduces the intention to the sum of behaviors, or else suggests that for the purposes for which we create – “endow,” more precisely – artificial agents, behavior is enough, without it being under any kind of description? I apologize for being overly abstract and obscure here. Reduced to the most basic: what is the status, on this general theory, of intention? And with that question, let me say again: Outstanding book; congratulations!
posted by Ian Kerr
In thinking about what Samir and Lawrence offer us in their new book, A Legal Theory for Autonomous Artificial Agents, I am reminded of the old Gothic castle described in Blackstone’s Commentaries, whose “magnificent and venerable” spaces had been badly neglected and whose “inferior apartments” had been retro-fitted “for a modern inhabitant”.
Feel me, here: I am not dissing the book but, rather, sympathizing about law’s sometimes feeble ability to adapt to modern times and its need to erect what Blackstone described as a mass of legal “fictions and circuities”, leaving the law not unlike the stairways in its castle – “winding and difficult.”
Understanding this predicament all too well, I am not surprised to see Ryan Calo’s disappointment in light of the title and description of the book, which seemed to me also to promise something much more than a mere retrofitting of the castle—offering up instead a legal theory aimed at resurrecting the magnificent and venerable halls of a jurisprudence unmuddled by these strange new entities in a realm no longer populated exclusively by human agents.
Samir and Lawrence know full well that I am totally on board in thinking that the law of agency has plenty to offer to the legal assessment of the operations of artificial entities. I first wrote about this in 1999, when Canada’s Uniform Law Commission asked me to determine whether computers could enter into contracts which no human had reviewed or, for that matter, even knew existed. In my report, later republished as an article called “Spirits in the Material World,” I proposed a model based on the law of agency as a preferable approach to the one in place at the time (and still), which merely treats machine systems as an extension of the human beings utilizing them.
At the time, I believed the law of agency held much promise for software bots and robots. The “slave morality” programmed into these automatic beasts seemed in line with those imagined in the brutal jus civile of ancient Rome, itself programmed in a manner that would allow brutish Roman slaves to interact in commerce with Roman citizens despite having no legal status. The Roman system had no problem with these non-status entities implicating their owners. After all: Qui facit per alium facit per se (a fancy Latin phrase designating the Roman law fiction that treats one who acts through another as having himself so acted). What a brilliant way to get around capacity and status issues! And the modern law of agency, as it subsequently developed, offers up fairly nuanced notions like the “authority” concept that can also be used to limit the responsibility of the person who acts through an (artificial) other.
The book does a great job at carrying out the analysis in various domains and, much to my delight, extends the theory to a range of situations beyond contracting bots.
In my view, the genius of agency law as a means of resurrecting the castle is that it can recognize and respond to the machine system without having to worry about, or even entertain, the possibility that the machine is a person. (For that reason, I would have left out the chapter on personhood, proposals for which I think have been the central reason why this relatively longstanding set of issues has yet to be taken seriously by those who have not taken the blue pill.) Agency law permits us simply to treat the bot like the child who lacks the capacity to contract but still manages to generate an enforceable reliance interest in some third party when making a deal while purporting to act on the authority of a parent.
But in my view—I thought it then and I think it still—using agency rules to solve the contracting problem is still little more than scaffolding used to retrofit the castle. As my fave American jurist, Lon Fuller, might have described it, the need to treat bots and robots as though they were legal agents in and of itself represents the pathology of law:
“When all goes well and the established legal rules encompass neatly the social life they are intended to regulate, there is little occasion for fictions. There is also little occasion for philosophizing, for the law then proceeds with a transparent simplicity suggesting no need for reflective scrutiny. Only in illness, we are told, does the body reveal its complexity. Only when legal reasoning falters and reaches out clumsily for help do we recognize what a complex undertaking the law is.”
The legal theories of both Blackstone and Fuller tell me that there is good reason to be sympathetic to the metaphors and legal fictions that Samir and Lawrence offer us – even if they are piecemeal. To be clear: although the “legal fiction” label is sometimes pejorative, I am not using it in that sense. Rather, I am suggesting that the approach in the book resembles a commonly used juridical device of extremely high value. Legal fictions of this sort exhibit what Fuller recognized as an “exploratory” function; they allow a kind of intellectual experimentation that helps us inch towards a well-entrenched legal theory.
Exploring the limits of the agency rules may indeed solve a number of doctrinal problems associated with artificial entities.
But (here I need a new emoticon to express that the following remark is offered in the spirit of sincerity and kindness) to pretend that the theory offered in this book does more than it does, or to try to defend its approach as a cogent, viable, and doctrinally satisfying unified field theory of robotics, risks missing all sorts of important potential issues and outcomes, and may thwart a broader, multi-pronged analysis that is crucial to getting things right.
I take it that Samir is saying in his replies to Ryan that he in fact holds no such pretense and that he does not claim to have all of the answers. But that, in my view, was not Ryan’s point at all.
My take-away from that exchange, and from my own reflections on the book, is that it will also be very important to consider various automation scenarios where agency is not the right model and ask ourselves why it is not. This is something I have not yet investigated or thought about very deeply. Still, I am willing to bet a large pizza (at the winner’s choice of location) that there are at least as many robo-scenarios where thinking of the machine entity as an artificial agent in the legal sense does more harm than good. If this is correct, agency law may offer some doctrinal solutions (as my previous work suggests), but that doesn’t in and of itself provide us with a legal theory of artificial agents.
When asked to predict the path of cyberlaw in 1995, Larry Lessig very modestly said that if he had to carve the meaning of the 1st Amendment into silicon, he was certain that he would get it fundamentally wrong. There hadn’t been enough time for the culture of the medium to evolve to be sure of right answers. And for that very reason, he saw the slow and steady march of common law as the best possible antidote.
I applaud the bravery of Chopra and White in their attempt to cull a legal theory for bots, robots and the like. But I share Ryan’s concerns about the shortcomings of the theory of artificial agents as offered. And in addressing his concerns, rather than calling Ryan’s own choice of intellectual metaphors “silly” or “inappropriate,” it might be more valuable to start thinking about the scenarios in which the agency analysis falls short or is inapplicable, and about what other models we might also consider, and for what situations.
I surely do not fault the authors for failing to come up with the unified field theory of robotics – we can save that for Michael Froomkin’s upcoming conference in Miami!!! – but I would like us also to think about what the law of agency cannot tell us about the range of legal and ethical implications that will arise from the social implementation of automation, robotics and artificial intelligence across various sectors.
posted by Andrew Sutter
Is this the book to separate the legal issues of “autonomous artificial agents” from the more controversial questions of whether code or silicon can function as “people”? The one that can stick to the practical issues of contract formation, tort liability and the like, without blurring the boundaries between legal personhood and personhood in a fuller sense?
I think this was the intention of the authors (C&W). And I certainly agree with other participants in the forum that they’ve done a wonderful job of identifying and analyzing many key legal and philosophical issues in this field; no doubt the book will frame the debate about autonomous artificial “agents” (AAAs) for years to come. But the style of C&W’s argument and the philosophical positions they take may make it hard to warm up to some of their analysis and recommendations unless you’re happy to take a rather expansive view of the capabilities of artificial intelligence – such as imputing a moral consciousness to programs and robots. And even if you’re happy to do so, what about everyone else? I’ll explain below the fold.
posted by Harry Surden
This post expands upon a theme from Samir Chopra and Lawrence White’s excellent and thought-provoking book, A Legal Theory for Autonomous Artificial Agents. One question pervades the text: to what extent should lawmakers import or extend existing legal frameworks – frameworks originally created to regulate human actors – to cover the activities of autonomous (or partially autonomous) computer systems and machines? For example, the authors query whether the doctrines and principles of agency law can be mapped onto actions carried out by automated systems on behalf of their users. As the book notes, autonomous systems are already an integral part of some commercial areas (e.g. finance) and may be poised to emerge in others over the next few decades (e.g. autonomous, self-driving automobiles). However, it is helpful to expand further upon one dimension raised by the text: the relationship between the technology underlying autonomous agents and the activity or results produced by that technology.
Two Views of Artificial Intelligence
The emergence of partially autonomous systems – computer programs (or machines) carrying out activities, at least partially in a self-directed way, on behalf of their users – is closely aligned with the field of Artificial Intelligence (AI) and developments therein. (AI is a sub-discipline of computer science.) What is the goal of AI research? There is probably no universally agreed-upon answer to this question, as there has been a range of approaches and criteria for systems considered to be successful advances in the field. However, some AI researchers have helpfully clarified two dimensions along which we can think about AI developments. Consider a spectrum of possible criteria under which one might label a system a “successful” AI product:
View 1) We might consider a system to be artificially intelligent only if it produces “intelligent” results based upon processes that model, approach or replicate the high-level cognitive abilities or abstract reasoning skills of humans; or
View 2) We might evaluate a system primarily based upon the quality of the output it produces – if it produces results that humans would consider accurate and helpful – even if the results or output came about through processes that do not necessarily model, approach, or resemble actual human cognition, understanding, or reasoning.
We can understand the first view as concerned with creating systems that replicate, to some degree, something approaching human thinking and understanding, whereas the second is more concerned with producing results or output from computer agents that would be considered “intelligent” and useful, even if produced by systems that likely do not approach human cognitive processes (Russell and Norvig, Artificial Intelligence: A Modern Approach, 3rd ed., 2009, 1-5). These views represent poles on a spectrum, and many actual positions fall in between. However, this distinction is more than philosophical: it bears on whether it is sensible to extend existing legal doctrines to cover the activities of artificial agents. Let us consider each view briefly in turn, along with some possible implications for law.
View 1 – Artificial Intelligence as Replicating Some or All Human Cognition
The first characterization – that computer systems will be successful within AI when their activities result from processes approaching the high-level cognitive abilities of humans – is considered an expansive, and perhaps more ambitious, characterization of the goals of AI. It also seems to be the one most closely associated with AI research in the public imagination. In popular culture, artificially intelligent systems replicate and instantiate – to varying degrees – the thinking faculties of humans (e.g. the ability to engage in abstract thought, carry on an intelligent conversation, or understand and philosophize about concepts at a depth associated with intelligence). I raise this variant primarily to note that, despite (what I believe is) a common lay view of the state of the research, this “strong” vision of AI is not something that has been realized (or is necessarily near realization) in the existing state-of-the-art systems that are considered successful products of AI research. As I will suggest shortly, this nuance may not be something within the awareness of the lawmakers and judges who will be the arbiters of decisions concerning systems that are labeled artificially intelligent. Although AI research has not yet produced artificial human-level cognition, that does not mean that AI research has been unsuccessful. Quite the contrary – over the last 20 years AI research has produced a series of more limited, but spectacularly successful, systems as judged by the second view.
View 2 – “Intelligent” Results (Even if Produced by Non-Cognitive Processes)
The second characterization of AI is perhaps more modest, and can be considered more “results oriented.” This view considers a computer system (or machine) to be a success within artificial intelligence based upon whether it produces output or activities that people would agree (colloquially speaking) are “good,” “accurate,” and “intelligent-looking.” In other words, a useful AI system in this view is characterized by results or output that are likely to approach or exceed what would have been produced by a human performing the same task. Under this view, if the system or machine produces useful, human-like results, it is a successful AI machine – irrespective of whether those results were produced by a computer-based process instantiating or resembling human cognition, intelligence or abstract reasoning.
In this second view, AI “success” is measured by whether the autonomous system produces “intelligent” (or useful) output or results, using what would be considered “intelligent” conduct by a similarly situated human as a comparator. If a modern autopilot system is capable of landing airplanes in difficult conditions (such as thick fog) at a success rate that meets or exceeds that of human pilots under similar conditions, we might label it a successful AI system under this second approach. This would be the case even if we all agreed that the autonomous autopilot system had no meaningful understanding of the concepts of “airplanes,” “runways,” or “airports.” Similarly, we might label IBM’s Jeopardy-playing “Watson” computer system a successful AI system, since it was capable of producing highly accurate answers to a surprisingly wide and difficult range of questions – the same answers that strong human Jeopardy champions would have produced. However, there is no suggestion that Watson’s results came from the same high-level cognitive understanding and processes that likely animated the responses of human champions like Ken Jennings. Rather, Watson’s accurate output came from techniques such as highly sophisticated statistical machine-learning algorithms that were able to quickly rank possible candidate answers through immense parallel processing of large numbers of existing written documents that happened to contain a great deal of knowledge about the world.
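To make the contrast concrete, here is a deliberately cartoonish sketch of evidence-based candidate ranking. Nothing below reflects Watson’s actual architecture; the documents, clue terms, and scoring rule are invented for illustration. The point is only that plausible answers can be ranked by shallow statistical evidence, with no understanding of the clue at all:

```python
# A toy illustration of ranking candidate answers by textual evidence.
# The documents and scoring rule are invented; real systems like Watson
# used far more sophisticated machine-learned rankers over huge corpora.
documents = [
    "Ken Jennings won 74 consecutive games of Jeopardy",
    "The capital of France is Paris",
    "Paris is the largest city in France",
    "Lyon is a city in France",
]

def rank_candidates(clue_terms, candidates):
    """Score each candidate by how much clue-term evidence co-occurs
    with it across the document collection, then sort best-first."""
    scores = {}
    for cand in candidates:
        score = 0
        for doc in documents:
            d = doc.lower()
            if cand.lower() in d:
                score += sum(term.lower() in d for term in clue_terms)
        scores[cand] = score
    return sorted(candidates, key=lambda c: scores[c], reverse=True)

# Clue: "This city is the capital of France."
print(rank_candidates(["capital", "France"], ["Paris", "Lyon"]))
# ['Paris', 'Lyon'] -- ranked by co-occurrence counts, not understanding
```

Under the second view, a system like this "succeeds" to the extent its top-ranked answers match those a human would give, regardless of the crudeness of the mechanism.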
Machine-Translation: Automated Translation as an Example
To understand this distinction between AI views rooted in computer-based cognition and those rooted in “intelligent” or accurate results, it is helpful to examine the history of computer-based language translation (e.g., English to French). Translation (at least superficially) appears to be a task deeply connected to the human understanding of the meaning of language, and the conscious replication of that meaning in the target language. Early approaches to machine translation followed this cue, and sought to convey to the computer system aspects that might mimic the internal structures undergirding human cognition and translation, such as the rules of grammar in both languages and the pairing of words with the same meanings in each. However, this meaning- and rules-based approach proved limited, and surprised researchers by producing rather poor results from rule matching and syntactical construction. Such systems had difficulty determining whether the word “plant” in English should be translated to the equivalent of “houseplant” or “manufacturing plant” in French. Further efforts attempted to “teach” the computer rules for understanding and making more accurate distinctions for ambiguously situated words, but still did not produce marked improvements in translation quality.
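The ambiguity problem can be made concrete with a deliberately naive sketch. The tiny lexicon below is invented for illustration (real rule-based systems used far richer grammars), but it exhibits the same wall: when "plant" maps to multiple French words, the rules give the system no principled way to choose:

```python
# A minimal caricature of dictionary/rules-based translation. The lexicon is
# hypothetical; the point is the ambiguity, not the coverage.

LEXICON = {
    "the": "le",
    "plant": ["plante", "usine"],  # houseplant vs. manufacturing plant
    "grows": "pousse",
}

def translate(sentence):
    out = []
    for word in sentence.lower().split():
        entry = LEXICON.get(word, word)  # unknown words pass through
        if isinstance(entry, list):
            # The rules offer no way to choose among senses: guess the first.
            entry = entry[0]
        out.append(entry)
    return " ".join(out)

print(translate("The plant grows"))  # "le plante pousse" -- but "plante"
# was only a blind guess; in a factory context "usine" would be correct.
```

Note that even the article is wrong ("le plante" rather than "la plante"): word-for-word substitution misses agreement as well as word sense, which is exactly the limitation the early systems ran into.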
Machine Learning Algorithms: Using Statistics to Produce Surprisingly Good Translations
However, over the last 10-15 years, a markedly different approach to computer translation emerged, made famous by Google and others. This approach was not primarily based upon top-down communication of the basics of constructing and conveying knowledge to a computer system (e.g., language pairings and rules of meaning). Rather, many of the successful translation techniques developed were largely statistical in nature, relying on machine-learning algorithms to scour large amounts of data and create a complex representation of correlations between languages. Google Translate and other similar statistical approaches work in part by leveraging vast amounts of data that have previously been translated by humans. For example, the United Nations and the European Union frequently translate official documents into multiple languages using professional translators. This “corpus” of millions of paired, translated documents became publicly available electronically to researchers over the last 20 years. Systems such as Google Translate are able to process vast numbers of documents and leverage these paired translations to create statistical models which are able to produce surprisingly accurate translation results, using probabilities, for arbitrary new texts.
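A drastically simplified sketch of the statistical idea: estimate which French word most often co-occurs with each English word across aligned sentence pairs. The three-sentence "corpus" below is invented (real systems train on millions of UN/EU document pairs and model phrases, alignment, and fluency far more carefully), but it shows how translation choices can fall out of counting rather than rules:

```python
# Toy statistical translation sketch: count co-occurrences of English and
# French words across paired sentences, then pick the most frequent partner.
# The corpus is hypothetical and tiny; real systems use millions of pairs.
from collections import Counter, defaultdict

corpus = [
    ("green plant", "plante verte"),
    ("the plant closed", "l'usine a fermé"),
    ("plant grows", "plante pousse"),
]

cooc = defaultdict(Counter)
for en_sentence, fr_sentence in corpus:
    for e in en_sentence.split():
        for f in fr_sentence.split():
            cooc[e][f] += 1  # crude stand-in for alignment probabilities

def best_translation(english_word):
    """Most frequently co-occurring French word: a crude probability model."""
    return cooc[english_word].most_common(1)[0][0]

print(best_translation("plant"))  # "plante" -- it co-occurs most often
```

Nothing in the model "knows" what a plant is; the preference for "plante" is purely a by-product of the human translators whose paired sentences supplied the counts, which is the "leveraging prior human cognition" point made below.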
Machine-Learning Models: Producing “intelligent”, highly useful results
The important point is that these statistical and probability-based machine-learning models (often combined with logical, knowledge-based rules about the world) often produce high-quality and effective results (not quite up to par with nuanced human translators at this point), without any assertion that the computers are engaging in profound understanding of the underlying “meaning” of the translated sentences or employing processes whose analytical abilities approach human-level cognition (e.g., view 1). (It is important to note that the machine-learning translation approach does not achieve translation on its own, but “leverages” previous human cognition through the efforts of the original UN translators who made the paired translations.) Thus, for certain limited tasks, these systems have shown that it is possible for contemporary autonomous agents to produce “intelligent” results without relying upon what we would consider processes approaching human-level cognition.
Distinguishing “intelligent results” and actions produced via cognitive intelligence
The reason to flag this distinction is that such successful AI systems (as judged by their results) will pose a challenge to the task of importing and extending existing legal doctrinal frameworks (which were mostly designed to regulate people) into the domain of autonomous computer agents. Existing “type 2” systems that produce surprisingly sophisticated, useful, and accurate results without approaching human cognition are the basis of many products now emerging from earlier AI research, and are becoming (or are poised to become) integrated into daily life. These include IBM’s Watson, Apple’s Siri, Google Search, and, in perhaps the next decade or two, Stanford’s and Google’s autonomous self-driving cars and autonomous music-composing software. These systems often use statistics to leverage existing, implicit human knowledge. Since they produce output or activities that in some cases appear to approach or exceed human performance in particular tasks, and their autonomously produced results are often surprisingly sophisticated and seemingly intelligent, such “results-oriented,” task-specific (e.g., driving, answering questions, landing planes) systems seem to be the near path of much AI research.
However, the fact that these intelligent-seeming results do not come from systems approaching human cognition is a nuance that should not be lost on policymakers (and judges) seeking to develop doctrine in the area of autonomous agents. Much, perhaps most, of law is designed and intended to regulate the behavior of humans (or organizations run by humans). Thus, embedded in many existing legal doctrines are underlying assumptions about cognition and intentionality that are implicit and so basic that they are often not articulated. The implicitness of these assumptions makes them easy to overlook.
Given current trends, many contemporary (and likely future) AI systems that will be integrated into society (and therefore more likely to be the subject of legal regulation) will use algorithmic techniques focused upon producing “useful results” (view 2), rather than aiming to replicate human-level cognition, self-reflection, and abstraction (view 1). If lawmakers merely follow the verbiage (e.g., a system labeled “artificially intelligent” did X or resulted in Y) and employ only a superficial understanding of AI research, without more closely understanding these technical nuances, there is the possibility of conflation in extending existing legal doctrines to circumstances involving “intelligent-seeming” autonomous results. For example, the book’s authors explore the concept of imposing fiduciary duties on autonomous systems in some circumstances. But it will take a careful judge or lawmaker to distinguish existing fiduciary/agency doctrines with embedded (and often unarticulated) assumptions of human-level intentionality among agents (e.g., self-dealing) from those that may be more functional in nature (e.g., duties to invest trust funds). In other words, an in-depth understanding of the technology underlying particular autonomous agents should not be viewed as a merely technical issue. Rather, it is a serious consideration that should be understood in some detail by lawmakers in any decision to extend existing legal doctrine, or create new doctrine, to cover situations involving autonomous agents.
February 16, 2012 at 10:41 pm Tags: A Legal Theory for Autonomous Artificial Agents, Automated law, computer agents Posted in: Cyberlaw, Google and Search Engines, Symposium (Autonomous Artificial Agents) Print This Post 2 Comments
posted by Deborah DeMott
Many thanks to the organizers for asking me to comment on Samir Chopra and Laurence White’s book, A Legal Theory for Autonomous Artificial Agents. I enjoyed thinking about the issues the book raises. My focus as a reader was the common law of agency. I served as the Reporter for the American Law Institute’s Restatement (Third) of Agency (2006), to which the book frequently refers. My immersion in agency law of course shapes my reading.
One concern I had as I read through the book is its possible conflation of two different kinds of claims: (1) the status of an autonomous artificial agent (hereinafter “AAA”) under present law; and (2) how the law should or could change in response to AAAs. At points I wondered whether the book implicitly flirted with a romance of the “ideal legal agent” (p. 23), an alluring prospect because such an agent is “incapable of the kinds of misconduct associated with human agents.” As a scholar of the law, I was struck that the authors explicitly rejected the possibility that AAAs might best be termed “constructive” agents (p. 24) and that they do not explicitly engage with the large literature on fictions in the law. For one might read the book as an intriguing exercise in thinking “as if,” or as an extended construction of a metaphor or an analogy.
I’ll turn first to points concerning claims in category (1). The book might have benefited from a more robust account early on of the requisites for a relationship of common-law agency. Although agency does not require a contract between principal and agent, agency is a relationship grounded in mutual consent. The book appears to discard consent as a requirement on p. 18, but mutual consent underpins much that follows in the specifics of agency doctrine. Indeed, the law recognizes non-consensual relationships in which one person has power to represent another and take action affecting the represented person’s legal position (such as the designation by statute of a secretary of state as an agent for service of process), but these relationships are not within the ambit of common-law agency. Consent, a concept that carries operative significance in many bodies of law, could be defined as an uncoerced expression of a person’s will. Thus, including AAAs within the ambit of present-day agency law requires that they be persons that can meaningfully be said to have wills capable of uncoerced expression. Late in the book (p. 175) an AAA may be “said to possess” free will, but many assumptions precede this claim. Separately, AAAs are said to have duties to their principals early in the book (p. 21), but meaningful liabilities follow only much later, once AAAs hold dependent or independent legal personality.
To be sure, parallels can be drawn between AAAs and agents as the law conventionally understands them; one might delegate tasks to an AAA just as one might delegate tasks to a human agent or a legal-person agent such as a corporation (p. 23). But the fact that task delegation is possible does not establish agency. I delegate the task of mobility to my car, much as the dog-owner in the Restatement illustration delegates to his trained pet the task of fetching beer1 (p. 55), but the fact of delegation does not itself make either one my agent. I think this is so even if the car is equipped with computer-enabled functions and the dog can learn from experience (perhaps that beer has a delectable taste).
Chapter 2 on contracting problems, somewhat to my surprise, does not deal with the fundamental challenge of accommodating agency relationships within conventional accounts of how contractual obligations are formed. Just as it is difficult to understand how a contract could be formed via an AAA when the parties’ intentions are not referable to a particular offer and acceptance (p. 34), so it seems to be a broader predicament how a principal could be bound by a contract entered into by an agent when the principal was unaware of the specifics of the offer or acceptance. How could the principal be bound when the principal has not consented to the particular transaction? Agency resolves this predicament not by demanding transaction-by-transaction assent from the principal, but in characterizing the principal’s conferral of authority on the agent as an advance expression of the principal’s willingness to be bound which thereafter lurks in the background of the agent’s dealings with third parties. (Or appears so to do, when the principal can be bound only on the basis of the agent’s apparent authority). For a fuller account, see Gerard McMeel, Philosophical foundations of the law of agency, 116 L.Q.R. 387 (2000).
I turn now to (2), and how the law could or should change in response to AAAs. The book details many of the specifics. In particular, the questions in chapter 3 about attribution of knowledge are thought-provoking and novel. It’s not clear to me, though, why or how ascribing legal personality to AAAs would be a good solution. More generally, at points (explicitly on p. 43) the book may assume that increasing the usage of AAAs is self-evidently attractive, and thus that legal rules should be modified “to limit the potential moral and/or legal responsibility of principals for agents’ behavior.” Why this should be so is not clear. On this point, the history and pragmatics of ascribing legal personality to corporations could be informative. More pragmatically, would legal change through legislation be preferable to change through case-by-case litigation?
Of course, there’s much more I could say and much more in the book I admire. My reservations aside, it’s gratifying that agency law has found a lively and ingenious audience!
1 The hypothetical facts underlying this illustration were shared with me as present “in a real case” but diligent research never located a citation.
posted by Giovanni Sartor
The book “A Legal Theory for Autonomous Artificial Agents” by Samir Chopra and Laurence White provides a very comprehensive and well-written account of a challenging issue, namely, how the law should address the creation and deployment of intelligent artefacts capable of goal-oriented action and social interaction. No comparable work is available today, and thus I think this is a very valuable contribution at the interface of ICT (and in particular AI) and law.
As some commentators have already observed, the title words “A legal theory” may be a bit misleading, since one does not find in the book a new approach to legal theory inspired by artificial agents, but rather a theoretically grounded analysis of the legal implications of this new socio-technological phenomenon. However, the book shows awareness of legal theory, and various legal-theoretical themes are competently discussed.
The fundamental idea developed in the first chapters is that when interacting with such artificial agents we need to adopt the intentional stance, and understand their behaviour as resulting from the agents’ beliefs and goals. Often, indeed, there is no other strategy available to us: we have no power, no ability, and in any case no time to examine the internal structure and functioning of such artificial entities. The only chance we have to make sense of their behaviour is to assume that they tend to achieve their goals on the basis of the information they collect and process, namely, that they are endowed with a certain kind and extent of theoretical and practical rationality: they can track the relevant aspects of their (physical or virtual) environment, and adopt plans of action for achieving their goals in such an environment.
As an example quite remote from the domain considered by the authors of the book, consider an autopilot system for an aircraft. The system has a complex goal to achieve (bring the airplane to its destination, safely, on time, consuming as little fuel as possible), collects through various sensors information from the environment (height, wind speed, expected weather conditions, obstacles on the ground and incoming aircraft, etc.) and from the airplane itself (available fuel, temperature, etc.), draws theoretical conclusions (the distance still to be covered, the speed needed for getting to the destination on time, the expected evolution of the weather, etc.), and makes choices on various matters (speed, path, etc.) on this basis. Moreover, it receives and sends messages concerning the performance of its task, interacting with pilots, with air traffic systems, and with other manned and unmanned aircraft. Clearly, the pilot has little idea of the internal structure of the autopilot (probably he or she has only a vague idea of the autopilot’s architecture, does not know what procedures are included in its software, let alone the instructions composing each such procedure) and has no direct access to the information being collected by automatic sensors and processed by the system. The only way to sensibly understand what the autopilot is doing, and the messages it is sending, is indeed to assume that it is performing a cognitive goal-directed activity, namely, adopting actions on the basis of its goals and its representations of the context of its action, as well as communicating what it assumes to hold in its environment (what it believes), the objectives it is currently pursuing (its goals), and what it is going to do next (its intentions or commitments).
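The predictive power of the intentional stance can be illustrated with a minimal sketch. The class, the numbers, and the action names below are invented for illustration; the point is that an observer who sees only the actions can predict them by ascribing a goal ("reach cruise altitude") and beliefs ("it thinks it is at 9,500 feet"), without ever reading the code:

```python
# A minimal belief/goal agent, as an illustration of the intentional stance.
# All names and numbers are hypothetical; real autopilots are vastly richer.

class Autopilot:
    def __init__(self, target_altitude):
        self.goal = target_altitude   # its "desire": the cruise altitude
        self.belief_altitude = None   # its "belief": where it thinks it is

    def sense(self, altimeter_reading):
        """Update beliefs from the environment (here, one sensor)."""
        self.belief_altitude = altimeter_reading

    def act(self):
        """Choose the action that serves the goal, given current beliefs."""
        if self.belief_altitude < self.goal:
            return "climb"
        if self.belief_altitude > self.goal:
            return "descend"
        return "hold"

ap = Autopilot(target_altitude=10000)
ap.sense(9500)
print(ap.act())  # an observer predicts "climb" by ascribing goal and belief
```

Even for these few lines, "it believes it is below its target altitude, so it will climb" is a far more usable description than a trace of the conditionals, and the gap only widens as the system grows.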
As autopilot systems become more and more sophisticated (approaching the HAL of 2001: A Space Odyssey), take on new functions (such as controlling distances, avoiding collisions, and governing takeoff and landing), and use an increasing amount of information, their autonomy increases, as do their communication capacities. Thus it becomes more natural and useful (inevitable, I would say) to adopt the intentional stance toward them.
I have myself addressed the need to adopt the intentional stance toward certain artificial entities (in Cognitive Automata and the Law), where the intentional stance was discussed at some length, and the legal relevance of Daniel Dennett’s distinction between the physical, design, and intentional stances was considered. An aspect I considered there, which is not addressed in the book (though it is quite significant for legal theory), is whether the cognitive states we attribute to an artificial entity exist only in the eye of the observer, according to a behaviouristic approach to intentionality (only the behaviour of a system verifies or falsifies any assertions concerning its intentional states, regardless of the system’s internal conditions), or whether such cognitive states also concern specific internal features of the entity to which they are attributed. I have sided with the second approach, on the basis of a functional understanding of mental states. For instance, a belief may be viewed as an internal state that co-varies with environmental conditions, in such a way that the co-variation enables appropriate reactions to such conditions. Having such a realist approach to the cognitive states of artificial agents enables us to distinguish ontologically cases where agents have a cognitive state from cases where they only appear to have it (a distinction which is different from the issue of what evidence may justifiably support such conclusions, and what behaviour justifies one’s reliance on the existence of certain mental states). This is not usually relevant in private law, and in particular with regard to contracts (we are entitled to assume that people have the mental states they appear to have, for the sake of reliance, regardless of whether they really have such states), but it may be significant in some contexts, such as criminal law or even some parts of civil liability (intentional torts).
Another idea I find useful for distinguishing agents from mere tools is the idea of cognitive delegation (also discussed in the above contribution). While we can delegate various simple tasks to our tools (e.g., we use a spreadsheet for making calculations or a thermometer for measuring temperature), we can delegate only to agents those tasks that pertain to the deployment of practical cognition (determining what to do, given certain goals, in a certain environment). It is because agents engage in practical cognition, as they have been required to do, that we can (and should) understand their actions according to the intentional stance.
In conclusion, not only do I fully agree with the book’s idea of adopting the intentional stance with regard to artificial agents, but I think that this idea should be further developed, and that this may lead to a better understanding of how the law takes into account both human and artificial minds. I think that this may indeed be the way in which the book can most contribute to legal theory.
posted by Danielle Citron
Thanks so much to everyone participating in the LTAAA symposium: what a terrific discussion. Given my work on Technological Due Process, I could not help but think about the troubled public benefits system in Colorado known as CBMS. Ever since 2004, the system has been riddled with delays, faulty law embedded in code, and system crashes. As the Denver Post reports, the state has a $44 million contract with Deloitte consultants to overhaul the system; its initial installation cost $223 million with other private contractors. CBMS is a mess, with thousands of overpayments, underpayments, delayed benefits, faulty notices, and erroneous eligibility determinations. And worse. In the summer of 2009, 9-year-old Zumante Lucero died after a pharmacy, relying upon the CBMS system, wouldn’t fill his asthma prescription despite proof that the family qualified for Medicaid help. In February 2011, CBMS failed eight different tests in a federal review, with auditors pointing to new “serious” problems while saying past failures were “nearly the same” despite five years of fixes. The federal Centers for Medicare and Medicaid Services (CMS), which provides billions of dollars each year for state medical aid, said Colorado risks losing federal money for programs if it doesn’t make the changes flagged by the audit. All of this brings to mind the question whether a legal theory of automated personhood moves the ball forward. Does it help us sort through the mess of opacity, insufficient notice, and troubling and likely unintended delegation of lawmaking to computer programmers? Something for me to chew on as the discussion proceeds.
February 15, 2012 at 5:55 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Administrative Law, Architecture, Symposium (Autonomous Artificial Agents) Print This Post 3 Comments
posted by Andrea Matwyshyn
In an extremely forward-looking and thought-provoking book, Samir Chopra and Laurence F. White rekindle important legal questions with respect to autonomous artificial agents, or bots. It was a pleasure to engage with the questions that the authors raise in A Legal Theory for Autonomous Artificial Agents, and the book is a valuable scholarly contribution. In particular, because of my own research interests, Chapter 2, “Artificial Agents and Contracts,” was of special interest to me.
In Chapter 2, the authors apply the agency theory that they advocate in Chapter 1 to the context of contracts. They challenge the view that bots are “mere tools” used for extension of the self by contracting parties. In doing so, they assert differences between “closed” and “open” systems and various theoretical types of bots, arguing that parties who use bots as part of contracting should be protected from contract liability in some cases of bot error or malfunction. From my reading, they argue in favor of using principles of agency law to replace some traditional contract law constructs when bots are involved in contracts.
Their argument is nuanced and thoughtful from an economic and agency law perspective. In the comments that follow, I raise five sets of questions for thought, admittedly from the perspective of my own research on contract law, consumer privacy in technology contexts, and information security law.
1. Private ordering and accepting responsibility for imprudent technology risks. The authors are concerned with providing better liability protection to contracting parties who use bots. They assert that “[a] strict liability principle [which views bots as mere tools or means of communication] may not be fair to those operators and users who do not anticipate the contractual behavior of the agent in all possible circumstances and cannot reasonably be said to be consenting to every contract entered into by the agent.” As I was reading this chapter, I pondered whether bots do indeed warrant special contract law rules. How is a failure to anticipate the erratic behavior of a potentially poorly-coded bot not simply one of numerous categories of business risk that parties may fail to foresee? Applying a contract law perspective, one might argue that the authors’ approach usurps for law what should be left to private ordering and risk management. No one forces a party to use a bot in contracting; perhaps choosing to do so is simply an information risk that should be planned around with insurance?
2. Traditional contract law breach and damages analysis and the expectations of the harmed party. The authors opt away from a discussion of traditional breach analysis and damages remedies when addressing bot failures. Instead, they apply a tort-like calculation of a lowest-cost-avoider principle, which they argue “correctly allocate[s] the risk of specification or induction errors to the principal/operator and of malfunction to the principal or user, depending on which was the least-cost avoider.” However, should we perhaps temper this analysis by recognizing that contract law, as embodied by the UCC and caselaw, is not concerned solely or even primarily with efficiency in contractual relationships? How does the authors’ efficiency analysis square with traditional consideration sufficiency (versus adequacy) analysis, where courts regularly enforce contracts with bad deal terms, choosing not to question the choices of the parties? A harmed consumer who was not using a bot in a contract, pitted against a sophisticated company using a poorly-coded bot (because it chose to hire a bargain programmer), may indeed have inefficient needs, but is not the consumer the party more in need of the court’s protection as a matter of equity?
The authors note that, for example, prices quoted by a bot are akin to pricing details provided by a third party, a scenario that they assert may make it unfair to bind the bot-using party to the terms of a contract executed by his bot when he does not pre-approve each individual deal: “In many realistic settings involving medium-complexity agents, such as modern shopping websites, the principal cannot be said to have a pre-existing ‘intention’ in respect of the particular contract that is ‘communicated’ to the user.” Again, to what extent are such bot dynamics truly unforeseeable? Can it be argued that coding up your bot to offer very specific deal terms when a consumer clicks on something constitutes an indication of actual knowledge and intention similar to a price list? Is a coding error with a wrong price not simply akin to a mismarked price tag in real space? But even assuming that we agree with the argument that coding up a bot is a relinquishment of control to a third party of sorts, how would the bot dynamics at issue differ from those in real-space contracts where prices are specified using a variable third-party index or where performance details are left variable (dynamics that have been found unproblematic in real-space contract cases)?
3. The bot problems that currently exist in contract law. The authors take us through two bot cases, eBay v. Bidder’s Edge and Register.com v. Verio, analyzing them primarily through the lens of tort, particularly trespass to chattels. I found myself wondering about the authors’ agency analysis in the contract-driven bot cases where the trespass-to-chattels line of argument was deemed unpersuasive. For example, how would the authors’ agency analysis apply in the context of the two Ticketmaster v. Tickets.com bot cases, particularly the second, where the trespass to chattels claim was dismissed and the contract count was the only count to survive summary judgment? Also, I would be curious to hear more about the extrapolation of their agency approach to the current wave of bot cases that blend contract formation questions with allegations of computer intrusion, such as Facebook v. Power Ventures and United States v. Lowson.
4. Duties of information security. Turning to information security, the authors point out that a party may try to hack a bot used by the other party in order to gain a contracting advantage. While this is a valid computer intrusion concern, another pressing contract concern is that a malicious third party (who is not one of the parties seeking to be in privity) will choose to hack the bot to steal money on an ongoing basis. If the bot is vulnerable to easy attack because of information security deficits in its coding, should the party using it get a free pass for its failure to exercise due care in information security? Is it fair to impose information security losses on the other contracting party, who was prudent enough not to use a vulnerable bot in contracting? Would a straightforward ‘your vulnerability, your responsibility’ approach create better incentives for close monitoring and better information security practices, a goal already recognized by Congress as a social good?
5. The broader implications for “b0rked” code. The separateness of bots from their creators came across to me as an underlying premise for the entirety of the authors’ conversation. For example, the authors reference situations where the bot autonomously “misrepresents” information that its wielding party would not approve. Is it not perhaps more accurate to say that the bot contains programming bugs its wielding party failed to catch and rectify? Is not a bot simply lines of code written by a human (who may or may not be skilled in a particular code language) that will always be full of errors (because a human authored it)? Is perhaps the appropriate goal not to protect bots but to incentivize bot creators to make fewer errors and rectify errors once they are found after “shipping” the code?
The authors argue that holding a contracting party accountable for bot malfunction is “unjust” in some circumstances. Is this consonant with the contract law approach that drafting errors and ambiguities are construed against the drafter? Is the author/operator of the error-ridden code considered the drafter here? How is choosing a bad programmer to build your flawed bot different from choosing a bad lawyer to draft the flawed language of your contract?
I found the analogy of a bot to a rogue beer-fetching dog particularly apt. Many scholars would argue that, much like having a pet, using a code-based creation such as a bot in contracting is a choice and an assumption of responsibility. Both dogs and bots are optional and limited in their capacities: we choose to unleash them on the rest of the world. If a dog or a bot causes harm, even when the owner has not expressly directed it to do so, isn’t it always the owner’s failure to supervise that is to blame? I fear that comparing a bot to a human of any sort (slave, child, employee) at the current juncture for purposes of crafting law may be premature. No machine is capable of replicating human behavior smoothly at present. Will one arrive in the future? Yes, it is likely. However, I fear that aggressively untethering the legal responsibility of the coder from her coded creation may send us down an undesirable path of uncompensated consumer harms in our march toward our brave new cyborg world.
The book’s purposes are ambitious, and I truly enjoyed pondering the questions it raises. I thank the organizers for allowing me to participate in this symposium.
 p. 36
 p. 31-32
 p. 35-6
 The authors appear to argue from the perspective that encouraging the use of bots in contracting is a good thing and, as such, merits special legal protection. While it is clear that digital contracts and physical-space contracts are to be afforded legal parity, is it indeed clear that our legislatures and courts have decided to encourage parties to use bots instead of humans in contracting? Perhaps encouraging the use of more humans in operations and contracting is instead the preferable policy goal, and the one that warrants the more protective legal regime?
 p. 48
 Indeed, the consumer protection analysis that is omnipresent in contract law does not seem to be a dominant thread in the authors’ analysis. When a sophisticated company using a bot is contracting with a consumer, the power imbalance that already exists between these parties – a traditional concern of contract law – is exacerbated by the presence of the bot, which arguably favors protecting the consumer more aggressively in any technology malfunction related to the formation of the contract.
 p. 36
 See, e.g., UCC §2-305; Eastern Air Lines, Inc. v. Gulf Oil Corporation, 415 F. Supp. 429 (1975).
 The situation where a party seeks to gain advantage in a contracting relationship by hacking the other party’s bot, I would argue, is not primarily a contract law question. This is arguably an active computer intrusion best left for analysis under the Computer Fraud and Abuse Act.
 p. 50
 I have argued that it is the responsibility of businesses who use code to interact with consumers and other entities to warn of, protect against, and repair the unsafe code environments to which they subject others.
 p. 55
 As upcoming work with my coauthor Miranda Mowbray will explain, the most sophisticated Twitter bots have now become quite good at approximating the speech patterns of humans, and humans seem to like interacting with them; however, even they eventually give themselves away as mere code-based creations. When a Cylon-like code creation finally arrives, it may be nothing like what we expect it to be.
February 15, 2012 at 5:52 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Contract Law & Beyond, Cyberlaw, Symposium (Autonomous Artificial Agents), Uncategorized Print This Post 2 Comments
posted by James Grimmelmann
In my first post on A Legal Theory for Autonomous Artificial Agents, I discussed some of the different kinds of complex systems law deals with. I’d like to continue by considering some of the different ways law deals with them.
Chopra and White focus on personhood: treating the entity as a single coherent “thing.” The success of this approach depends not just on the entity’s being amenable to reason, reward, and punishment, but also on it actually cohering as an entity. Officers’ control over corporations is directed to producing just such a coherence, which is a good reason that personhood seems to fit. But other complex systems aren’t so amenable to being treated as a single entity. You can’t punish the market as a whole; if a mob is a person, it’s not one you can reason with. In college, I made this mistake for a term project: we tried to “reward” programs that share resources nicely with each other by giving them more time to execute. Of course, the programs were blithely ignorant of how we were trying to motivate them: there was no feedback loop we could latch on to.
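The term-project anecdote above describes a one-way incentive: a scheduler that grants extra execution time to cooperative programs which never read the reward signal. A toy sketch of that broken feedback loop (all names hypothetical, not the actual project code) might look like this:

```python
# Toy sketch of the "reward" scheduler described above: cooperative
# programs earn bonus time slices, but since the programs never consult
# the reward signal, the incentive cannot change their behavior.

def run_round(programs, base_slices=1, bonus=2):
    """Allocate time slices for one round; sharing earns a bonus."""
    allocation = {}
    for name, behavior in programs.items():
        shares_nicely = behavior()  # the program acts; it ignores any reward
        allocation[name] = base_slices + (bonus if shares_nicely else 0)
    return allocation

# Each program's behavior is fixed code, not a policy that reads rewards.
programs = {
    "hog": lambda: False,     # always grabs the resource
    "sharer": lambda: True,   # always shares it
}

# Round after round, the allocation never changes what the programs do:
# there is no feedback loop to latch on to.
for _ in range(3):
    allocation = run_round(programs)
```

The point of the sketch is the absence of any line where a program inspects its allocation and adjusts; that missing arrow is exactly why treating such programs as persons amenable to reward fails.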
Another related strategy is to find the man behind the curtain. Even if we’re not willing to treat the entity itself as an artificial person, perhaps there’s a real person pulling the levers somewhere. Sometimes it’s plausible, as in the Sarbanes-Oxley requirement that CEOs certify corporate financial statements. Sometimes it’s wishful thinking, as in the belief that Baron Rothschild and the Bavarian Illuminati must be secretly controlling the market. This strategy only works to the extent that someone is or could be in charge: one of the things that often seems to baffle politicians about the Internet is that there isn’t anyone with power over the whole thing.
A subtle variation on the above is to take hostages. Even if the actual leader is impossible to find or control, just grab someone the entity appears to care about and threaten them unless the entity does what you want. This used to be a major technique of international relations: it was much easier to get your hands on a few French nobles and use them as leverage than to tell France or its king directly what to do. The advantage of this one is that it can work even when the entity isn’t under anyone’s control at all: as long as its constituent parts share the motivation of not letting the hostage come to harm, they may well end up acting coherently.
When that doesn’t work, law starts turning to strategies that fight the hypothetical. Disaggregation treats the entity as though it doesn’t exist — i.e., has no collective properties. Instead, it identifies individual members and deals with their actions in isolation. This approach sounds myopic, but it’s frequently required by a legal system committed to something like methodological individualism. Rather than dealing with the mob as a whole, the police can simply arrest any person they see breaking a window. Rather than figuring out what Wikipedia is or how it works, copyright owners can simply sue anyone who uploads infringing material. Sometimes disaggregation even works.
Even more aggressively, law can try destroying the entity itself. Disperse the mob, cancel a company’s charter, or conquer a nation and dissolve its government while absorbing its people. These moves have in common their attempt to stamp out the complex dynamics that give rise to emergent behavior: smithereens can, after all, be much easier to deal with. Julian Assange’s political theory actually operates along these lines: by making it harder for them to communicate in private, he hopes to keep governmental conspiracies from developing entity-level capabilities. For computers, there’s a particularly easy entity-destroying step: the off switch. Destruction is recommended only for bathwater that does not contain babies.
When law is feeling especially ambitious, it sometimes tries dictating the internal rules that govern the entity’s behavior. Central planning is an attempt to take control of the capriciousness of the market by rewiring its feedback loops. (On this theme, I can’t recommend Spufford’s quasi-novel Red Plenty highly enough.) Behavior-modifying drugs take the complex system that is an individual and try to change how it works. Less directly, elections and constitutions try to give nations healthy internal mechanisms.
And finally, sometimes law simply gives up in despair. Consider the market, a system whose vindictive and self-destructive whims law frequently regards with a kind of miserable futility. Or consider the arguments sometimes made about search engine algorithms — that their emergent complexity passeth all understanding. Sometimes these claims are used to argue that government shouldn’t regulate them, and sometimes to argue that even Google’s employees themselves don’t fully understand why the algorithm ranks certain sites the way it does.
My point in all of this is that personhood is hardly inevitable as an analytical or regulatory response to complex systems, even when they appear to function as coherent entities. For some purposes, it probably is worth thinking of a fire as a crafty malevolent person; for others, trying to dictate its internals by altering the supply of flammables in its path makes more sense. (Trying to take hostages to sway a fire is not, however, a particularly wise response.) Picking the most appropriate legal strategy for a complex system will depend on situational, context-specific factors — and upon understanding clearly the nature of the beast.
posted by Ryan Calo
Samir Chopra—whom I consider to be something of a pioneer in thinking through the philosophic and legal issues around artificial intelligence—did not much care for my initial thoughts about his and Lawrence White’s new book, A Legal Theory for Autonomous Artificial Agents. The gist of my remarks was that, while interesting and well researched, the book does not deliver on its promise of advancing “a legal theory.” Mostly what the book does (I read the book cover to cover, as you can see!) is identify new and old ways the law might treat complex software to advance various, seemingly unrelated goals. The book is largely about removing conceptual obstacles to treating software as “agreeing,” “knowing,” or “taking responsibility,” should we be inclined to do so in particular cases for independent policy reasons.
In the second chapter of the book, for instance, Chopra and White argue that treating software capable of calculating, offering, and appearing to accept terms as legal agents is not only coherent, but results in greater economic efficiency. The upshot is lesser contractual liability than if the software were treated as a mere instrument because, in instances where software makes the right kind of mistake, the entity that deployed the software—usually a sophisticated corporation—will not be held to the agreement. In the third chapter, the authors abandon economic efficiency entirely. Here the argument is that we ought to look to agency law in order to attribute more information to corporations because “[o]nly such a treatment would do justice to the reality of the increased power of the corporation, which is a direct function of the knowledge at its disposal.” In other words, by treating its software as agents rather than tools, we can either limit corporate liability for reasons of efficiency, or expand it for reasons of fairness. Read the rest of this post »
posted by Ryan Calo
A corporation, it is said, “is no fiction, no symbol, no piece of the state’s machinery, no collective name for individuals, but a living organism and a real person with a body and members and a will of its own.” A ship, described as a “mere congeries of wood and iron,” on being launched, we are told, “takes on a personality of its own, a name, volition, capacity to contract, employ agents, commit torts, sue and be sued.” Why do lawyers and judges assume thus to clothe inanimate objects and abstractions with the qualities of human beings?
The answer, in part at least, is to be found in characteristics of human thought and speech not peculiar to the legal profession. Men are not realists either in thinking or in expressing their thoughts. In both processes they use figurative terms. The sea is hungry, thunder rolls, the wind howls, the stars look down at night, time is not an abstraction, rather it is “father time” or the “grim reaper”…
Bryant Smith, Legal Personality, 37 Yale Law Journal 283, 285 (1928) Read the rest of this post »
posted by Ramesh Subramanian
Thank you, Samir Chopra and Lawrence White for writing this extremely thought-provoking book! Like Sonia Katyal, I too am particularly fascinated by the last chapter – personhood for artificial agents. The authors have done a wonderful job of explaining the legal constructs that have defined, and continue to define the notion of according legal personality to artificial agents.
The authors argue that “dependent” legal personality, which has already been accorded to entities such as corporations, temples and ships in some cases, could be easily extended to cover artificial agents. On the other hand, the argument for according “independent” legal personality to artificial agents is much more tenuous. Many (legal) arguments and theories exist which are strong impediments to according such status. The authors categorize these impediments as competencies (being sui juris, having a sensitivity to legal obligations, susceptibility to punishment, capability for contract formation, and property ownership and economic capacity) and philosophical objections (i.e. artificial agents do not possess Free Will, do not enjoy autonomy, or possess a moral sense, and do not have clearly defined identities), and then argue how they might be overcome legally.
Notwithstanding their conclusion that the courts may be unable or unwilling to take more than a piecemeal approach to extending constitutional protections to artificial agents, it seems clear to me that the accordance of legal personality – both dependent and, to a lesser extent, independent – is not too far in the future. In fact, the aftermath of Gillick v West Norfolk and Wisbech Area Health Authority has shown that various courts have gradually come to accept that dependent minors “gradually develop their mental faculties,” and thus can be entitled to make certain “decisions in the medical sphere.”
We can extend this argument to artificial agents, which are no longer just programmed expert systems, but have gradually evolved into self-correcting, learning and reasoning systems, much like children and some animals. We already know that even small children exhibit these capacities. So do chimpanzees and other primates. Stephen Wise has argued that some animals meet the “legal personhood” criteria, and should therefore be accorded rights and protections. The Nonhuman Rights Project founded by Wise is actively fighting for legal rights for non-human species. As these legal moves evolve and shape common law, the question arises: when (not if) will artificial agents develop notions of “self,” “morals” and “fairness,” and on that basis be accorded legal personhood status?
And when that situation arrives, what are the ramifications that we should further consider? I believe that three main “rights” that would have to be considered are: Reproduction, Representation, and Termination. We already know that artificial agents (and Artificial Life) can replicate themselves and “teach” the newly created agents. Self-perpetuation can also be considered to be a form of representation. We also know that under certain well defined conditions, these entities can self-destruct or cease to operate. But will these aspects gain the status of rights accorded to artificial agents?
These questions lead me to the issues which I personally find fascinating: end-of-life decisions extended to artificial agents. For instance, what would be the role of aging agents of inferior capabilities that nevertheless exist in a vast global network? What about malevolent agents? When, for instance, would it be appropriate to terminate an artificial agent? What would be the laws that would handle situations like this, and how would such laws be framed? While these questions seem far-fetched, we are already at a point where numerous viruses and “bots” pervade the global information networks, learn, perpetuate, “reason,” make decisions, and continue to extend their lives and their capacity to affect our existence as we know it. So who would be the final arbiter of end-of-life decisions in such cases? In fact, once artificial agents evolve and gain personhood rights, would it not be conceivable that we would have non-human judges in the courts?
Are these scenarios too far away for us to worry about, or close enough? I wonder…
February 14, 2012 at 6:00 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Bioethics, Civil Rights, Courts, Sociology of Law, Symposium (Autonomous Artificial Agents), Technology, Uncategorized Print This Post No Comments