Tagged: A Legal Theory for Autonomous Artificial Agents

LTAAA Symposium: Campaign 2020’s Bots United

A Legal Theory for Autonomous Artificial Agents offers a serious look at several legal controversies set off by the rise of bots. “Autonomy” is one of the key concepts in the work. We would not think of a simple drone programmed to fly in a straight line as an autonomous entity. On the other hand, films like Blade Runner envision humanoid robots that so closely mimic real homo sapiens that it seems churlish or cruel to dismiss their claims for respect and dignity (and perhaps even love). In between these extremes we find already well-implemented, cute automatons. As Sherry Turkle has noted, when confronted by the robotic seal Paro, children “move from inquiries such as ‘Does it swim?’ and ‘Does it eat?’ to ‘Is it alive?’ and ‘Can it love?’”

For today’s post, I want to move to another, perhaps childish, question: can the bot speak? The question will be particularly urgent by 2020, but it is relevant even now, because corporate and governmental entities want to deploy armies of propagandizing bots to disseminate their views and drown out opposing voices. Consider the experiment run on Twitter by Tim Hwang, of the law firm Robot, Robot & Hwang, as he explained in conversation with Bob Garfield:

GARFIELD: Earlier this year, 500 or so Twitterers received tweets from someone with the handle @JamesMTitus who posed one of several generic questions: How long do you want to live to, for example, or do you have any pets? @JamesMTitus was cheerful and enthusiastic, kind of like those people who comment on the weather and then laugh heartily. Perhaps because of that good nature or perhaps because of his inquiring spirit and interest in others, @JamesMTitus was able to strike up a fair number of continuing conversations. Only thing is, there is no @JamesMTitus. He, or it, is a bot, a software program designed to engage actual humans in social networks.
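It takes remarkably little machinery to do what @JamesMTitus did. Here is a minimal sketch of the pattern (hypothetical code, mine rather than Hwang’s, with the network calls stubbed out as prints): pick a target, open with a canned generic question, and answer whatever comes back with canned enthusiasm, without parsing a word of it.

```python
import random

# Canned generic openers of the @JamesMTitus variety.
OPENERS = [
    "How long do you want to live?",
    "Do you have any pets?",
    "What's the best thing about your city?",
]

# Cheerful all-purpose replies; note the bot never reads what was said.
REPLIES = [
    "Ha, that's great!",
    "Interesting -- tell me more!",
    "I was just thinking the same thing.",
]

def open_conversation(user: str) -> str:
    """Send a generic ice-breaker to a target user."""
    return f"@{user} {random.choice(OPENERS)}"

def respond(user: str, _their_message: str) -> str:
    """Reply with canned enthusiasm; the incoming text is ignored."""
    return f"@{user} {random.choice(REPLIES)}"

if __name__ == "__main__":
    # In a live bot these strings would be posted via a social network API.
    print(open_conversation("alice"))
    print(respond("alice", "Two cats and a very old dog."))
```

That a few dozen lines can sustain “a fair number of continuing conversations” is precisely what makes the prospect of bot armies by 2020 plausible.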

Artificial Agents, Zombies, and Legal Personhood

Legal Personhood for Artificial Agents?

A Legal Theory for Autonomous Artificial Agents, by Samir Chopra and Laurence F. White, raises a host of fascinating questions–some of immediate practical importance (how should contract law treat artificial agents?) and some that are still in the realm of science fiction.  In the latter group is a cluster of questions about legal personhood for artificial agents that do not yet exist–agents with functional capacities that approach those of humans.

I’ve written on this question, and my essay, Legal Personhood for Artificial Intelligence, suggests that legal personhood should and will be awarded to artificial intelligences with the functional capacities of other legal persons. But legal personhood does not necessarily imply the full panoply of rights we assign to human persons. Current doctrine may afford free speech rights to corporations, but we can certainly imagine the opposite rule. If artificial agents are awarded legal personhood, they might be given the rights to own property and to sue and be sued, but be denied others. Artificial agents might be denied freedom of speech. And like corporations, but unlike all natural persons, they might be denied the protection of the 13th Amendment: legal persons can be owned by natural persons.

Can we imagine a (perhaps far distant) future in which artificial agents possess a set of capacities and characteristics that would lead us to grant them the full set of rights associated with human personhood?

Rather than tackling this question directly, I will use a thought experiment developed by the philosopher David Chalmers (who uses it to tackle a very different set of issues in the philosophy of mind). For some background, you can check out this Wikipedia entry, this entry in the Stanford Encyclopedia of Philosophy, and this web page created by Chalmers.

Meet the Zombies

Zombies look like you and me, and indeed, from our vantage point they are indistinguishable from human persons. But there is one, very important difference: Zombies lack “consciousness.” Zombie neurons fire just like ours. Zombies laugh at jokes, go to work, write screenplays (unless they are on strike), get into fights, have sex, and go to Milk and Honey for drinks. Just like us. But zombies do not have a conscious experience of finding jokes funny. No awareness that work is boring. No phenomenological correlate of their writer’s block. No inner sensation of anger. No feelings of pleasure. No impaired consciousness from inebriation. Following the philosophers, let us call these missing elements qualia. Zombies have no qualia.

Let us imagine a world in which there are both humans and Zombies.  Of course, if the Zombies were exactly like us, we wouldn’t know they exist.  So let us suppose that there is some subtle characteristic that allows us to recognize the Zombies.  How would we treat them?  What legal rights would (and should) they have?

Equal Rights for Zombies

Zombies would, of course, demand the rights of legal personhood. (Remember that their behavior is identical to ours!) Imagine a world in which the Zombies demanded full equality with humans. They might argue that such equality is guaranteed by the Equal Protection Clause, or they might propose an Equal Zombie Rights Amendment. Because Zombies behave just like humans, they would no more be satisfied with less than full equality than would we. They would engage in political action to campaign for legal equality. They would make speeches, hold demonstrations, organize strikes and boycotts, and even resort to violence. (Humans do all these things.) If Zombies were sufficiently numerous, it seems likely that the reality of human-Zombie relations would result in full legal equality for Zombies: either Zombies would be recognized as constitutional persons, or the Equal Zombie Rights Amendment would become law. Antidiscrimination ordinances would forbid discrimination against Zombies in housing, employment, and other important contexts. One imagines that full social integration might never be accomplished—some humans might be polite to Zombies in public contexts but shun them as friends.

But Should They Have Equal Rights?

Zombies could be given equal rights, and we can imagine scenarios where it seems likely that they would be given such rights.  But should they have equal rights?  I would like to suggest that the answer to this question is far from obvious.  We might try answering this question by resorting to our deepest beliefs about morality.  Are Zombies Kantian rational beings?  Would a utilitarian argue that Zombies lack moral standing because they have no conscious experiences of pleasure and pain?  Zombies would share human DNA: does that make them human?  And whether they are human or not, are they persons?

One problem with thinking about equal rights for Zombies is that our moral intuitions, beliefs, and judgments have been shaped by a world in which humans are the only creatures with all of the capacities we associate with personhood. Animals may experience pleasure and pain, and some higher mammals have the capacity to communicate in limited ways. But there are no nonhuman creatures with the full set of capacities that normally developed human persons possess. A world with Zombies would be a different moral universe–and it isn’t clear what our moral intuitions would be in such a universe.

Back to Artificial Agents

Just as we can conceive of a possible world inhabited by both humans and Zombies, we can imagine a future in which artificial agents (or robots or androids) have all the capacities we associate with human persons. And so we can imagine a world in which we would grant them the full panoply of rights that we grant human persons, because it would serve our own interests (the interests of human persons). The truly hard question is whether we might come to believe that we should grant artificial agents the full rights of human personhood because we are morally obliged to do so. We don’t yet live with artificial agents whose functional capacities approach or exceed those of human persons. We don’t have the emotional responses and cultural sensibilities that would develop in a world with such agents. And so, we don’t know what we should think about personhood for artificial agents.

Artificial Agents and the Law: Some Preliminary Considerations

I am grateful to Concurring Opinions for hosting this online symposium on my book A Legal Theory for Autonomous Artificial Agents. There has already been some discussion here; I’m hoping that once the book has been read and its actual arguments engaged with, we can have a more substantive discussion. (I notice that James Grimmelmann and Sonia Katyal have already posted very thoughtful responses; I intend to respond to those in separate posts later.)

Last week, I spoke on the book at Bard College, to a mixed audience of philosophy, computer science, and cognitive science faculty and students. The question-and-answer session was quite lively and our conversations continued over dinner later. Some of the questions that were directed at me are quite familiar to me by now: Why make any change in the legal status of artificial agents? That is, why elevate them from non-entities in the ontology of the law to the status of legal agents, or possibly even beyond? How can an artificial agent, which lacks the supposedly distinctively-human characteristics of <insert consciousness, free-will, rationality, autonomy, subjectivity, phenomenal experience here>, ever be considered an “agent” or a “person”? Aren’t you abusing language when you say that a program or a robot can be attributed knowledge? How can those kinds of things ever “know” anything? Who is doing the knowing?

I’ll be addressing questions like these and others during this online symposium; for the time being, I’d like to make a couple of general remarks.

The modest changes in legal doctrine proposed in our book are largely driven by two considerations.

First, existing legal doctrine in a couple of domains is placed under considerable strain by its current treatment of highly sophisticated artificial agents. This is most especially true of contracting, which kicks off our discussion and serves as the foundation for the eventual development of the book’s thesis. We could maintain current contracting doctrine as is (i.e., merely tweak it to accommodate artificial agents without changing their status vis-a-vis contracting), but in doing so we would run the risk of imposing implausible readings of contract theories. This might be seen as a reasonable price to pay so that we can maintain our intuitions about the kinds of beings some of us take artificial agents to be. I’d suggest that this retention of intuitions becomes increasingly untenable when we see how disparate the entities placed in the same legal category are. (Are autonomous robots really just the same as tools like hammers?) Furthermore, as we argue in Chapters 1 and 2, there is a perfectly coherent path, both philosophically and legally, toward considering such artificial agents as legal agents (perhaps without legal personality at first). Chapter 1 lays out a prima facie argument for considering them legal agents; Chapter 2 argues that they be so considered for the purpose of contracting. As we put it in Chapter 2: “The most cogent reason for adopting the agency law approach to artificial agents in the context of the contracting problem is to allow the law to distinguish in a principled way between those contracts entered into by an artificial agent that should bind a principal and those that should not.”

Which brings me to my second point: a change in legal doctrine can bring about better outcomes. One of the crucial arguments in our Chapter 2 (one I really hope readers engage with) is an assessment of the economics of contracting by artificial agents considered as legal agents. I share the skepticism of those in the legal academy who hold that economic analysis of law should not drive all doctrinal change, but in this case, I’d suggest the risk allocation does work out better. As we note, “Arguably, agency law principles in the context of contracting are economically efficient in the sense of correctly allocating the risk of erroneous agent behavior on the least-cost avoider (Rasmusen 2004, 369). Therefore, the case for the application of agency doctrine to artificial agents in the contractual context is strengthened if we can show similar considerations apply in the case of artificial agents as do in the case of human agents, so that similar rules of apportioning liability between the principal and the third party should also apply.”

And I think we do.
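
To see the least-cost-avoider point in miniature, consider a toy calculation (the notation and numbers are mine, not the book’s). Suppose an artificial agent will occasionally enter an erroneous contract with expected loss L; the principal could prevent the error at cost c_P (say, by testing or constraining the agent), while the third party could avoid it at cost c_T (say, by independently verifying the agent’s authority). The efficient rule places the risk on whichever party avoids it more cheaply:

\[
L = \$1{,}000, \quad c_P = \$50, \quad c_T = \$200 \;\Longrightarrow\; \text{bind the principal, the least-cost avoider.}
\]

On these stipulated numbers, a rule binding the principal induces $50 of precaution rather than $200; that is the sense in which agency doctrine, carried over to artificial agents, “correctly allocates” the risk.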

An even stronger argument can be made when it comes to privacy. In Chapter 3, the dismissal of the Google defense (“if humans don’t read your email, your privacy is not violated”) is enabled precisely by treating artificial agents as legal agents. (This follows on the heels of an analysis of knowledge attribution to artificial agents, so that they can be considered legal agents for the purpose of knowledge attribution.)
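
To make concrete what is being attributed here, consider a toy sketch (mine, not the book’s, and certainly not Google’s actual system) of an agent that selects ads from the contents of email no human ever reads. Whatever “knowing” the contents amounts to, the agent is the only candidate doing it:

```python
# Hypothetical ad-targeting agent: it acts on what a message says,
# unattended, with no human ever seeing the text.
KEYWORD_ADS = {"mortgage": "RefinanceNow", "vacation": "CheapFlights"}

def scan_and_target(email_body: str) -> list[str]:
    """Return ads chosen from the email's contents -- action premised
    on (something like) knowledge of what the email says."""
    words = email_body.lower().split()
    return [ad for keyword, ad in KEYWORD_ADS.items() if keyword in words]

if __name__ == "__main__":
    print(scan_and_target("Thinking about a vacation once the mortgage closes"))
    # -> ['RefinanceNow', 'CheapFlights']
```

If the agent’s ad selection counts as acting on the email’s contents, the Google defense loses its footing: someone, or something, did read the mail.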

Much more on this in the next few days.

LTAAA Symposium: Complex Systems and Law

The basic question LTAAA asks—how law should deal with artificially intelligent computer systems (for different values of “intelligent”)—can be understood as an instance of a more general question—how law should deal with complex systems? Software is complex and hard to get right, often behaves in surprising ways, and is frequently valuable because of those surprises. It displays, in other words, emergent complexity. That suggests looking for analogies to other systems that also display emergent complexity, and Chopra and White unpack the parallel to corporate personhood at length.

One reason that this approach is especially fruitful, I think, is that an important first wave of cases about computer software involved its internal use by corporations. So, for example, there’s Pompeii Estates v. Consolidated Edison, which I use in my casebook for its invocation of a kind of “the computer did it” defense. Con Ed lost: it’s not a good argument that the negligent decision to turn off the plaintiff’s power came from a computer, any more than “Bob the lineman cut off your power, not Con Ed” would be. Asking why and when law will hold Con Ed as a whole liable requires a discussion about attributing particular qualities to it—philosophically, that discussion is a great bridge to asking when law will attribute the same qualities to Con Ed’s computer system.

But corporations are hardly the only kind of complex system law must grapple with. Another interesting analogy is nations. In one sense, they’re just collections of people whose exact composition changes over time. Like corporations, they have governance mechanisms that are supposed to determine who speaks for them and how, but those mechanisms are subject to a lot more play and ambiguity. “Not in our name” is a compelling slogan because it captures this sense that the entity can be said to do things that aren’t done by its members and to believe things that they don’t.

Mobs display a similar kind of emergent purpose through even less explicit and well-understood coordination mechanisms. They’re concentrated in time and space, but it’s hard to pin down any other constitutive relations. Those tipping points, when a mob decides to turn violent, or to turn tail, or to take some other seemingly coordinated action, need not emerge from any deliberative or authoritative process that can easily be identified.

In like fashion, Wikipedia is an immensely complicated scrum. Its relatively simple software combines with a baroque social complexity to produce a curious beast: slow and lumbering and oafish in some respects, but remarkably agile and intelligent in others. And while “the market” may be a social abstraction, it certainly does things. A few years ago, it decided, fairly quickly, that it didn’t like residential mortgages all that much—an awful lot of people were affected by that decision. The “invisible hand” metaphor personifies it, as does a lot of econ-speak: these are attempts to turn this complex system into a tractable entity that can be reasoned about, and reasoned with.

As a final example of complex systems that law chooses to reify, consider people. What is consciousness? No one knows, and it seems unlikely that anyone can know. Our thoughts, plans, and actions emerge from a complex neurological soup, and we interact with groups in complex social ways (see above). And yet law retains a near-absolute commitment to holding people accountable, rather than amygdalas. By taking an intentional stance towards agents, Chopra and White recognize that law sweeps all of these issues under the carpet, and ask when it becomes plausible to sweep those issues under the carpet for artificial agents, as well.

Personhood for Artificial Agents?

I am simply delighted to be taking part in this symposium, and extend a great deal of thanks to Frank Pasquale, the editors at Concurring Opinions—and of course, most importantly, to Samir Chopra and Laurence White for writing such an excellent, thought-provoking piece of work. I enjoyed this book immensely and consider it required reading for anyone interested in thinking through how artificial agents can and should be regulated.

While there is much in this book that deserves greater analysis and discussion from the public, I have chosen to focus my thoughts on the last chapter, Personhood for Artificial Agents. This chapter, I think, is rightly described as the “legal culmination” of their analysis of artificial agents, and encapsulates some of the most pressing and interesting questions that the authors raise on how artificial agents have been modeled and adapted for the age of information.

The questions that they raise ultimately center, however, on whether artificial agents might be transformed by the legal extension of personhood, or whether the legal theory of personhood might be transformed by the inclusion of artificial agents. On the former question, the authors embark on a fascinating discussion of the legal theory of personhood. They begin by surveying some of the historical and theoretical underpinnings of the idea of legal personhood—noting, for example, that whether to consider artificial agents legal persons is “a matter of decision rather than discovery.” The authors strenuously argue, employing traditional approaches to personhood, that artificial agents deserve the fictive status of legal persons because it represents the logical outcome of their increased level of responsibility in this day and age.

Normally, as the authors (and the Restatement (Third) of Agency) point out, to be characterized as a principal or an agent, an entity must be a person, with the capacity to hold legal rights and to be the object of legal duties. But there are exceptions to this general rule. Children and some adults are given the status of legal personhood even though they may lack central qualities that other legal persons possess. Moreover, Chopra and White point to the example of the business corporation, a variety of other government and quasi-government entities, temples, and even ships, which are treated as legal persons under admiralty law. Each of these can be construed as a legal person, yet one still dependent on other legal persons to represent it.

“If legal systems can accord dependent legal personality to children, adults who are not of sound mind, ships, temples, and even idols,” the authors write, “there is nothing to prevent the legal system from according this form of legal personality to artificial agents.” In fact, the authors suggest, there may be significant benefits to doing so. One possible benefit might be the standardization of e-commerce transactions; another, the facilitation of delegated responsibility for automated decision-making in the context of administrative law. Chopra and White also point to the potential benefit of agents managing simple trusts, to reduce administrative costs. The authors further suggest that by construing artificial agents as legal persons, they might be seen as data processors or data controllers, and not just tools or instrumentalities, for the purposes of the EU Data Protection Directive.

But there is more. The authors also argue, in a fascinating discussion, for the potential recharacterization of artificial agents as independent legal personalities as well, suggesting that such agents possess some intellectual capacity, rationality, comprehension, sensitivity to legal obligations and punishment, and the ability to form contracts, to control money and own property, and to pay compensation to others. Although the authors admit there are strong philosophical objections to the idea of extending legal personhood, those objections, in their eyes, fall short. For them, artificial agents have some degree of free will, in the sense that they can reason about the past and modify their behavior for the future. They may also possess degrees of rationality, and even display some degree of morality and consciousness (defined as the ability to summarize their actions in a unitary narrative and to use that narrative to determine their future behavior). In much of their discussion, the authors confidently suggest that technology has advanced to the point where these objections are more than just philosophical navel-gazing—they may instead be outdated, given the range of possibilities for future technologies.

I found this chapter particularly thought-provoking, and it led me to consider some of the significant pragmatic benefits of extending personhood to artificial agents. Here, it might be useful to consider the contemporary context, and to ask whether there are other periods in which the law has extended similar recognition to inanimate entities, and why.

Consider corporate personhood. One of the standard defenses for extending legal personhood to corporations (as I understand it) had to do with the economic conditions of the time, as well as the proclivities of the justices who crafted the doctrine. Some have argued that corporate personhood was created by a court interested in investing corporations with the formidable ability to challenge federal and state regulations. Another approach suggests the exact opposite: that the theory was created to support the corporation’s role in integrating with those regulations. In any event, irrespective of which side one picks, it remains notable that the doctrine was created at a time when the American economy was in flux: corporations had become economically powerful, and yet the shareholders who owned them were not liable for any of the corporation’s misdeeds. Thus, one might argue that the doctrine was created to remedy some of these issues. Still another wrinkle is an approach, favored in more contemporary periods, that considers the corporation to be nothing more than a “nexus” of contracts between individuals, which shapes today’s approaches to corporate personhood.

Yet, in each of these approaches and theories—all of which are covered exhaustively by the legal literature on the corporate form—we see a clear picture that justifies the extension of corporate personhood. The unavailability of a path to liability, for example, and the existence of powerful group dynamics might arguably justify the doctrine. But similarly powerful justifications are missing from Chopra and White’s eloquent formulation. Personifying artificial agents might lead to more standardization or lower administrative costs, but one would need to see more discussion of why that is a more appropriate remedy than others that raise fewer philosophical objections.

A related issue that the chapter brings up for me involves a fundamental question of innovation: one might argue that the doctrine of corporate personhood is motivated by efficiency; and more efficiency may lead to greater investment in innovation and growth. But if we were to invest artificial agents with personality, and make them susceptible to all sorts of legal responsibilities and duties, would this hamper the very sort of innovation that gave them birth? Given Chopra and White’s impressive commitments to technology and social welfare, it may be an issue worth pondering.

Perhaps worth considering in a sequel?

Symposium Next Week on “A Legal Theory for Autonomous Artificial Agents”

On February 14-16, we will host an online symposium on A Legal Theory for Autonomous Artificial Agents, by Samir Chopra and Laurence White. Given the great discussions at our previous symposiums for Tim Wu’s Master Switch and Jonathan Zittrain’s Future of the Internet, I’m sure this one will be a treat. Participants will include Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian, and Harry Surden. Chopra will be reading their posts and responding here, too. I discussed the book with Chopra and Grimmelmann in Brooklyn a few months ago, and I believe the audience found fascinating the many present and future scenarios raised in it. (If you’re interested in Google’s autonomous cars, drones, robots, or even the annoying little Microsoft paperclip guy, you’ll find something intriguing in the book.)

There is an introduction to the book below the fold.  (Chapter 2 of the book was published in the Illinois Journal of Law, Technology and Policy, and can be found online at SSRN).  We look forward to hosting the discussion!
