posted by Frank Pasquale
A Legal Theory for Autonomous Artificial Agents offers a serious look at several legal controversies set off by the rise of bots. “Autonomy” is one of the key concepts in the work. We would not think of a simple drone programmed to fly in a straight line as an autonomous entity. On the other hand, films like Blade Runner envision humanoid robots that so closely mimic real homo sapiens that it seems churlish or cruel to dismiss their claims for respect and dignity (and perhaps even love). In between these extremes we find already well-implemented, cute automatons. As Sherry Turkle has noted, when confronted by Paro, the robotic seal, children “move from inquiries such as ‘Does it swim?’ and ‘Does it eat?’ to ‘Is it alive?’ and ‘Can it love?’”
For today’s post, I want to move to another, perhaps childish, question: can the bot speak? The question will be particularly urgent by 2020, but is relevant even now because corporate and governmental entities want to promote armies of propagandizing bots to disseminate their views and drown out opposing voices. Consider the experiment run by Tim Hwang, of the law firm Robot, Robot & Hwang, on Twitter, as explained in conversation with Bob Garfield:
GARFIELD: Earlier this year, 500 or so Twitterers received tweets from someone with the handle @JamesMTitus who posed one of several generic questions: how long do you want to live, for example, or do you have any pets? @JamesMTitus was cheerful and enthusiastic, kind of like those people who comment on the weather and then laugh heartily. Perhaps because of that good nature, or perhaps because of his inquiring spirit and interest in others, @JamesMTitus was able to strike up a fair number of continuing conversations. Only thing is, there is no @JamesMTitus. He, or it, is a bot, a software program designed to engage actual humans in social networks.
posted by Lawrence Solum
Legal Personhood for Artificial Agents?
A Legal Theory for Autonomous Artificial Agents, by Samir Chopra and Laurence F. White, raises a host of fascinating questions–some of immediate practical importance (how should contract law treat artificial agents?) and some that are still in the realm of science fiction. In the latter group is a cluster of questions about legal personhood for artificial agents that do not yet exist–agents with functional capacities that approach those of humans.
I’ve written on this question, and my essay, Legal Personhood for Artificial Intelligence, suggests that legal personhood should and will be awarded to artificial intelligences with the functional capacities of other legal persons. But legal personhood does not necessarily imply the full panoply of rights we assign to human persons. Current doctrine may afford free speech rights to corporations–but we can certainly imagine the opposite rule. If artificial agents are awarded legal personhood, they might be given some rights–to own property, to sue and be sued–but denied others. Artificial agents might be denied freedom of speech. And like corporations, but unlike natural persons, they might be denied the protection of the 13th Amendment: legal persons can be owned by natural persons.
Can we imagine a (perhaps far distant) future in which artificial agents possess a set of capacities and characteristics that would lead us to grant them the full set of rights associated with human personhood?
Rather than tackling this question directly, I will use a thought experiment developed by the philosopher David Chalmers (who uses it to tackle a very different set of issues in the philosophy of mind). For some background, you can check out this wikipedia entry, this entry in the Stanford Encyclopedia of Philosophy, and this web page created by Chalmers.
Meet the Zombies
Zombies look like you and me, and indeed, from our vantage point they are indistinguishable from human persons. But there is one, very important difference: Zombies lack “consciousness.” Zombie neurons fire just like ours. Zombies laugh at jokes, go to work, write screenplays (unless they are on strike), get into fights, have sex, and go to Milk and Honey for drinks. Just like us. But zombies do not have a conscious experience of finding jokes funny. No awareness that work is boring. No phenomenological correlate of their writer’s block. No inner sensation of anger. No feelings of pleasure. No impaired consciousness from inebriation. Following the philosophers, let us call these missing elements qualia. Zombies have no qualia.
Let us imagine a world in which there are both humans and Zombies. Of course, if the Zombies were exactly like us, we wouldn’t know they exist. So let us suppose that there is some subtle characteristic that allows us to recognize the Zombies. How would we treat them? What legal rights would (and should) they have?
Equal Rights for Zombies
Zombies would, of course, demand the rights of legal personhood. (Remember that their behavior is identical to ours!) Imagine a world in which the Zombies demanded full equality with humans. They might argue that such equality is guaranteed by the Equal Protection Clause, or they might propose an Equal Zombie Rights Amendment. Because Zombies behave just like humans, they would no more be satisfied with less than full equality than would we. They would engage in political action to campaign for legal equality. They would make speeches, hold demonstrations, organize strikes and boycotts, and even resort to violence. (Humans do all these things.) If Zombies were sufficiently numerous, it seems likely that the reality of human-Zombie relations would result in full legal equality for Zombies. Either Zombies would be recognized as constitutional persons, or the Equal Zombie Rights Amendment would become law. Antidiscrimination ordinances would forbid discrimination against Zombies in housing, employment, and other important contexts. One imagines that full social integration might never be accomplished—some humans might be polite to Zombies in public contexts but shun them as friends.
But Should They Have Equal Rights?
Zombies could be given equal rights, and we can imagine scenarios where it seems likely that they would be given such rights. But should they have equal rights? I would like to suggest that the answer to this question is far from obvious. We might try answering this question by resorting to our deepest beliefs about morality. Are Zombies Kantian rational beings? Would a utilitarian argue that Zombies lack moral standing because they have no conscious experiences of pleasure and pain? Zombies would share human DNA: does that make them human? And whether they are human or not, are they persons?
One problem with thinking about equal rights for Zombies is that our moral intuitions, beliefs, and judgments have been shaped by a world in which humans are the only creatures with all of the capacities we associate with personhood. Animals may experience pleasure and pain, and some higher mammals have the capacity to communicate in limited ways. But there are no nonhuman creatures with the full set of capacities that normally developed human persons possess. A world with Zombies would be a different moral universe–and it isn’t clear what our moral intuitions would be in such a universe.
Back to Artificial Agents
Just as we can conceive of a possible world inhabited by both humans and Zombies, we can imagine a future in which artificial agents (or robots or androids) have all the capacities we associate with human persons. And so we can imagine a world in which we would grant them the full panoply of rights that we grant human persons because it would serve our own interests (the interests of human persons). The truly hard question is whether we might come to believe that we should grant artificial agents the full rights of human personhood because we are morally obliged to do so. We don’t yet live with artificial agents with functional capacities that approach or exceed those of human persons. We don’t have the emotional responses and cultural sensibilities that would develop in a world with such agents. And so, we don’t know what we should think about personhood for artificial agents.
posted by Samir Chopra
I am grateful to Concurring Opinions for hosting this online symposium on my book A Legal Theory for Autonomous Artificial Agents. There has already been some discussion here; I’m hoping that once the book has been read and its actual arguments engaged with, we can have a more substantive discussion. (I notice that James Grimmelmann and Sonia Katyal have already posted very thoughtful responses; I intend to respond to those in separate posts later.)
Last week, I spoke on the book at Bard College, to a mixed audience of philosophy, computer science, and cognitive science faculty and students. The question-and-answer session was quite lively and our conversations continued over dinner later. Some of the questions that were directed at me are quite familiar to me by now: Why make any change in the legal status of artificial agents? That is, why elevate them from non-entities in the ontology of the law to the status of legal agents, or possibly even beyond? How can an artificial agent, which lacks the supposedly distinctively human characteristics of <insert consciousness, free-will, rationality, autonomy, subjectivity, phenomenal experience here> ever be considered an “agent” or a “person”? Aren’t you abusing language when you say that a program or a robot can be attributed knowledge? How can those kinds of things ever “know” anything? Who is doing the knowing?
I’ll be addressing questions like these and others during this online symposium; for the time being, I’d like to make a couple of general remarks.
The modest changes in legal doctrine proposed in our book are largely driven by two considerations.
First, existing legal doctrine in a couple of domains, most especially contracting (which kicks off our discussion and serves as the foundation for the eventual development of the book’s thesis), is placed under considerable strain by its current treatment of highly sophisticated artificial agents. We could maintain current contracting doctrine as is (i.e., merely tweak it to accommodate artificial agents without changing their status vis-a-vis contracting), but we would run the risk of imposing implausible readings of contract theories in doing so. This might be seen as a reasonable price to pay so that we can maintain our intuitions about the kinds of beings some of us take artificial agents to be. I’d suggest this kind of retention of intuitions becomes increasingly untenable when we see the disparateness of the entities placed in the same legal category. (Are autonomous robots really just the same as tools like hammers?) Furthermore, as we argue in Chapters 1 and 2, there is a perfectly coherent, philosophically and legally defensible path we can take toward considering such artificial agents as legal agents (perhaps without legal personality at first). The argument in Chapter 1 lays out a prima facie case for considering them legal agents; that in Chapter 2 suggests they be considered legal agents for the purpose of contracting. As we say in Chapter 2, “The most cogent reason for adopting the agency law approach to artificial agents in the context of the contracting problem is to allow the law to distinguish in a principled way between those contracts entered into by an artificial agent that should bind a principal and those that should not.”
Which brings me to my second point. A change in a legal doctrine can bring about better outcomes. One of the crucial arguments in our Chapter 2 (one I really hope readers engage with) is an assessment of the economic dimension of contracting by artificial agents considered as legal agents. I share the skepticism of those in the legal academy who hold that economic analysis of law should not drive all doctrinal change, but in this case, I’d suggest the risk allocation does work out better. As we note, “Arguably, agency law principles in the context of contracting are economically efficient in the sense of correctly allocating the risk of erroneous agent behavior on the least-cost avoider (Rasmusen 2004, 369). Therefore, the case for the application of agency doctrine to artificial agents in the contractual context is strengthened if we can show similar considerations apply in the case of artificial agents as do in the case of human agents, so that similar rules of apportioning liability between the principal and the third party should also apply.”
And I think we do.
An even stronger argument can be made when it comes to privacy. In Chapter 3, the dismissal of the Google defense (“if humans don’t read your email, your privacy is not violated”) is enabled precisely by treating artificial agents as legal agents. (This follows on the heels of an analysis of knowledge attribution to artificial agents, so that they can be considered legal agents for the purpose of knowledge attribution.)
Much more on this in the next few days.
February 14, 2012 at 12:08 am | Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents | Posted in: Articles and Books, Contract Law & Beyond, Symposium (Autonomous Artificial Agents)
posted by Frank Pasquale
As noted last week, we are hosting an online symposium on A Legal Theory for Autonomous Artificial Agents, by Samir Chopra and Laurence White. Chopra will be reading posts and responding here, too. There is an introduction to the book (and some opening comments) at last week’s post. We look forward to hosting the discussion!
posted by James Grimmelmann
The basic question LTAAA asks—how law should deal with artificially intelligent computer systems (for different values of “intelligent”)—can be understood as an instance of a more general question: how law should deal with complex systems. Software is complex and hard to get right, often behaves in surprising ways, and is frequently valuable because of those surprises. It displays, in other words, emergent complexity. That suggests looking for analogies to other systems that also display emergent complexity, and Chopra and White unpack the parallel to corporate personhood at length.
One reason that this approach is especially fruitful, I think, is that an important first wave of cases about computer software involved its internal use by corporations. So, for example, there’s Pompeii Estates v. Consolidated Edison, which I use in my casebook for its invocation of a kind of “the computer did it” defense. Con Ed lost: it’s no better an argument that the negligent decision to turn off the plaintiff’s power came from a computer than “Bob the lineman cut off your power, not Con Ed” would be. Asking why and when law will hold Con Ed as a whole liable requires a discussion about attributing particular qualities to it—philosophically, that discussion is a great bridge to asking when law will attribute the same qualities to Con Ed’s computer system.
But corporations are hardly the only kind of complex system law must grapple with. Another interesting analogy is nations. In one sense, they’re just collections of people whose exact composition changes over time. Like corporations, they have governance mechanisms that are supposed to determine who speaks for them and how, but those mechanisms are subject to a lot more play and ambiguity. “Not in our name” is a compelling slogan because it captures this sense that the entity can be said to do things that aren’t done by its members and to believe things that they don’t.
Mobs display a similar kind of emergent purpose through even less explicit and well-understood coordination mechanisms. They’re concentrated in time and space, but it’s hard to pin down any other constitutive relations. Those tipping points, when a mob decides to turn violent, or to turn tail, or to take some other seemingly coordinated action, need not emerge from any deliberative or authoritative process that can easily be identified.
In like fashion, Wikipedia is an immensely complicated scrum. Its relatively simple software combines with a baroque social complexity to produce a curious beast: slow and lumbering and oafish in some respects, but remarkably agile and intelligent in others. And while “the market” may be a social abstraction, it certainly does things. A few years ago, it decided, fairly quickly, that it didn’t like residential mortgages all that much—an awful lot of people were affected by that decision. The “invisible hand” metaphor personifies it, as does a lot of econ-speak: these are attempts to turn this complex system into a tractable entity that can be reasoned about, and reasoned with.
As a final example of complex systems that law chooses to reify, consider people. What is consciousness? No one knows, and it seems unlikely that anyone can know. Our thoughts, plans, and actions emerge from a complex neurological soup, and we interact with groups in complex social ways (see above). And yet law retains a near-absolute commitment to holding people accountable, rather than amygdalas. By taking an intentional stance towards agents, Chopra and White recognize that law sweeps all of these issues under the carpet, and ask when it becomes plausible to sweep those issues under the carpet for artificial agents, as well.
posted by Frank Pasquale
On February 14-16, we will host an online symposium on A Legal Theory for Autonomous Artificial Agents, by Samir Chopra and Laurence White. Given the great discussions at our previous symposia on Tim Wu’s Master Switch and Jonathan Zittrain’s Future of the Internet, I’m sure this one will be a treat. Participants will include Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian and Harry Surden. Chopra will be reading their posts and responding here, too. I discussed the book with Chopra and Grimmelmann in Brooklyn a few months ago, and I believe the audience found the many present and future scenarios it raises fascinating. (If you’re interested in Google’s autonomous cars, drones, robots, or even the annoying little Microsoft paperclip guy, you’ll find something intriguing in the book.)
There is an introduction to the book below the fold. (Chapter 2 of the book was published in the Illinois Journal of Law, Technology and Policy, and can be found online at SSRN). We look forward to hosting the discussion!
February 8, 2012 at 10:43 am | Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents | Posted in: Contract Law & Beyond, Criminal Law, Current Events, Cyberlaw, Social Network Websites, Symposium (Autonomous Artificial Agents), Technology, Tort Law