LTAAA Symposium: Complexity, Intentionality, and Artificial Agents
posted by Samir Chopra
I would like to respond to a series of related posts made by Ken Anderson, Giovanni Sartor, Lawrence Solum, and James Grimmelmann during the LTAAA symposium. In doing so, I will touch on topics that recurred throughout the debate here: the intentional stance, complexity, legal fictions (even zombies!) and the law. My remarks here will also respond to the very substantive, engaged comments made by Patrick O’Donnell and AJ Sutter to my responses over the weekend. (I have made some responses to Patrick and AJ in the comments spaces where their remarks were originally made.)
Giovanni’s post was very useful, I think, in explaining the plausibility and indispensability of the intentional stance. We use the intentional stance all the time; indeed, we would use it with all sorts of beings if we could, but we find other modes of description and interpretation work better for them. For psychological beings like humans it works exceptionally well; so well, in fact, that we often disdain lower-level neuroscientific descriptions in particular domains (like courts of law). It infects our language so systematically and richly that I have no idea how we would function without it (I am reminded of what a colleague of mine said when talking about the Churchlands’ eliminativist hypothesis: try ordering a hot dog while trying to be an eliminativist!). Our use of it in the book was driven by the methodological consideration that it is what we will find ourselves relying on increasingly in dealing with artificial agents.
In his post Ken Anderson asked, “Is the position taken by the book finally one that either reduces the intention to the sum of behaviors, or else suggests that for the purposes for which we create – “endow,” more precisely – artificial agents, behavior is enough, without it being under any kind of description?” I think the short answer is that while we suggested the intentional stance as an interpretive strategy for dealing with artificial agents (as a way of dealing with their complexity), we also pushed the idea that the distinction commonly made between real and ersatz intentionality is not as sacrosanct as we might take it to be; that our view of ourselves as the sole possessors of real intentionality should change as a result of our thinking about artificial agents; and, most ambitiously, that the appearance-reality distinction in intentionality is not viable (the intentional stance is not just a façon de parler, as Patrick O’Donnell suggested in his comment).
The intentional stance is writ large in the law’s practices; I think this is why James Grimmelmann’s post, in many ways, gets to the heart of the matter when it comes to artificial agents: the law’s strategic response to artificial agents will often be a function of how complex an entity it perceives the artificial agent to be. From our perspective this becomes the question of how complex these agents will need to become, or how complex we will perceive them to be, before we find the language of the intentional stance indispensable as a means of doing justice to their richly inter-related responses to us; before the law starts to address their complexity as a unitary entity and responds, perhaps, by granting them a change in status. It is because artificial agents are complex and interestingly different and competent that we even think of them as posing challenges for our legal system. Challenges so severe, in fact, that Ian Kerr felt compelled to say that an old legal apparatus just wouldn’t work. (Notice the polar extremes: keep things just as they are versus radical change is required; something is clearly afoot that has occasioned such divergent responses.)
This is why I found it peculiar that Ryan Calo, after asking, “Is an autonomous robot like a hammer?” answered, “I don’t know.” Well, in one sense, I don’t know either. But I think I can tickle my intuitions by asking myself some questions: Would NASA send a hammer to explore the surface of Mars? Could a hammer drive the streets of Mountain View well enough to provoke the scholars at the Stanford Center for Internet and Society into organizing a seminar dedicated to exploring the legal implications of hammers capable of driving? Can hammers drive the roads well enough for some people to wish they would replace some human-driven cars on the road? Would hammers be used to sniff out survivors from the rubble of earthquake-devastated buildings? Would they be used to defuse bombs? Perhaps answering some of these questions might prompt us to think about whether a hammer is like an autonomous robot or whether it’s more like a rolling pin. In the chapter on tort liability we split up liability schemes for artificial agents into two broad headings (James Grimmelmann read a draft of this chapter and suggested we dice up our treatment to reflect this kind of division): artificial agents understood as tools or instrumentalities, or artificial agents understood as agents of varying levels of autonomy. Under the former schemes we could talk about product liability; under the latter we could draw all sorts of analogies with diverse bodies of case law: are artificial agents like children? Are they like pets? Are they like animals confined to enclosures that could do harm if released? Thinking about these analogies might help us figure out how to fit the artificial agent into our legal frameworks.
Many legal constructs we are familiar with are responses to human complexity and an entire legal and moral vocabulary has developed as a result. As my response to Harry Surden’s excellent post indicated, it might be that we find this vocabulary so useful for pragmatic purposes that even if empirical research were to dispel some of this complexity, we might still want to hold on to it, because it lets us achieve ends—perhaps legal, perhaps moral—nearer and dearer to us. Conversely, our interactions with artificial agents are fraught with an epistemic asymmetry: we know a great deal about their innards; we know, despite the protestations of our best neuroscience, very little about ourselves. This familiarity, as we note in the book, can breed ill-directed contempt (“it’s just ones and zeroes”); it can cause us to ignore the significant ways in which the functionality of the artificial agent causes our legal doctrines and categories severe stress.
Fundamentally, we are creatures whose knowledge of the existence of other minds is doomed never to rise above the level of a particularly good abduction, a wonderful explanation that seems to do justice to the rich level of apparent intersubjective agreement that we appear to possess in many crucial areas. From our first-person vantage point, an ‘I’ looks out, sees other beings possessing a range of external responses that correlate systematically with his own external modes of interaction, which are cued to his own internal states, and posits other ‘I’s as an explanation. The sneaking suspicion has never left us that we could engage in such communication with other beings who had no such internal lives as ours (this is the intuition at the heart of Solum’s post on zombies, and it has been around ever since Putnam’s “Robots: Machines or Artificially Created Life?”).
The privileging of our inner spaces, our inner selves, the first-person-subjective point of view, runs the risk of making us an “autistic” species, locked away in our own subjectivities, unable to consider, or even to want to consider, the possibility of other selves. If we define intelligence or personhood as being like us in all the relevantly human ways, then we will have preserved a special status for ourselves, but it will be a Pyrrhic victory, one obtained by merely defining away all competitors and sitting rather comfortably with our carbon-centric chauvinism. I think some of the unease occasioned by the idea that artificial agents could be legal persons stems from the worry that in granting them that status, we might somehow be acknowledging that humans are more like machines than we are willing to admit. But admitting artificial agents as legal persons does not mean that we can now treat humans like machines. And more to the point, a glance at the history of how the law has handled the question of legal persons should convince us that ‘legal person’ and ‘person’ are distinct, and we can keep them that way long after artificial agents have become legal persons.
Returning to the intentional stance, and to Solum’s post on zombies, I think some intuitions can be tickled by a little thought experiment. Let us cast aside robots and artificial agents for a moment. What would we do when extraterrestrials alight on this planet of ours and say, “Take us to your Supreme Court Justices; we have a Personhood Petition to submit”? How would practitioners of the law go about evaluating their claims? Would they say, “Stand here, advance no further; I see no evidence of carbon-based life, no evidence of human methods of cognition used to accomplish these stupendous engineering tasks of constructing spacecraft that have brought you thus far, no internal evidence of human emotions in the letters of longing you write to your fellow creatures left back home on the Planet of Aspirational Personhood. We are a species of being committed to our uniqueness in the natural order, to the singularity we represent”?
Is that what they would say, or would they start functioning like diligent field anthropologists, looking for some external behavioral evidence that they could systematically correlate with their pronouncements, and on finding that it was like ours on the surface, even if not in the interior, start thinking about whether they would be willing to file an amicus brief on their behalf? Would our lawyers assess the status of these beings in our social orderings and, on seeing they filled many important executive roles, that people had formed relationships with them, think about evaluating their application seriously?
What if these creatures’ innards were so mysterious that our best science gave us no handle on what their interiors represented or how they functioned? What if they made us rethink our notion of always looking for law-like correlations of outer with inner, and revealed that to be an old reductionist dogma? What if we came to realize the wisdom of the adage that the imputation of reasons is the best way to make sense of our ETs’ behavior? Would we even then reject their claims to personhood because we were so invested in maintaining a special status for ourselves? Kavod habriyot indeed; I want to note my worry that kavod habriyot might do double duty in masking human chauvinism.
Philosophers have often, through the history of philosophical speculation, acknowledged the possibility that our elaborate constructions of ourselves as freely-acting, freely-choosing, rational, autonomous beings were a happy and convenient “fiction” (there’s that word again). The most sustained dismissal of these happy reassurances to ourselves, of course, takes place in Nietzsche. I am not going to attempt anything like that here; once done by that fellow, there’s no point in trying to follow up. But Nietzsche also would have told us that these are fictions we live by, ones we need; they play a rich and sustained role in the “economy of life.” A world in which our fellow human beings were not considered freely acting human beings would be an intolerable one. Our social orderings would collapse; the ends we had settled on would not be attainable. Someone will now wail, “Are you suggesting free will is just a useful fiction?” Yes, but there is no need to be so scared of fictions. Much more is fiction than we imagine; our picture of ourselves is one. But it is one we live by.
February 20, 2012 at 4:32 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Articles and Books, Cyberlaw, Legal Theory, Symposium (Autonomous Artificial Agents), Technology