Artificial Agents, Zombies, and Legal Personhood

13 Responses

  1. Brett Bellmore says:

    Setting aside the curious question of how you determine that this human-shaped, human-acting entity isn’t, despite all its protestations, conscious (which is a really important question!): we’d better give them equal rights, or the government will just declare anyone it doesn’t like a “zombie” and ignore their complaints as mere mechanistic noises, not actual signs of outrage.

    You can see hints of this when people of some political persuasions, confronted with evidence that most folks don’t actually agree with them, start pulling out concepts like “false consciousness” and “preference falsification”. What are these, if not claims that you’re something of a zombie?

  2. James Grimmelmann says:

    I agree that the zombie thought experiment is illuminating, but I’d draw somewhat different conclusions from it. First, two preliminaries:

    One, these zombies are a philosophical inverse of artificial intelligences. We can tell that zombies lack qualia but they’re otherwise indistinguishable from humans in any way that has independent significance, whereas we can tell that AIs are significantly different from humans but lack the means to say with certainty whether they have qualia or not.

    Two, zombies raise an extremely difficult epistemological question: how is it that we can know that humans have qualia and zombies don’t? “Having qualia” is a characteristic that by its very nature is impossible to attribute or deny to an agent. You finesse the question by assuming there is some subtle physical characteristic that’s indicative of having qualia. But how could we possibly ever justify the belief that this “subtle” characteristic is indicative of qualia? That would seem to depend on some especially strong form of a priori knowledge, as it’s neither capable of being empirically demonstrated nor of being deduced by reason. To the extent that any person overcomes the skeptical, solipsistic argument that only she has qualia, it appears to rest on some kind of judgment about the attributes of a category (“humans”), the members of which share significant (non-subtle) physical characteristics with her.

    This suggests to me that the zombie hypothetical in fact teaches us that if we wish to deny full legal rights to AIs, it must be on the basis of some significant physical characteristics that distinguish them from humans. Either those characteristics are operating directly (e.g., because AIs can multiply indefinitely, equal voting rights would be ill-advised) or they’re operating indirectly, by providing reasons to believe that the AIs in question lack qualia.

    At some point, the latter argument necessarily founders on the unobservability of qualia. With some kinds of physical differences that lead to obvious cognitive limits, it seems plausible (if not rigorously provable to a skeptic) to say that some alleged “agents” lack them: rocks and pocket calculators. But with AIs of the capabilities this book worries about, that argument ultimately seems to rest on a notion of biological human exceptionalism: only humans can have qualia, because how could anything else be like us? This, to me, has always been the real force of the Turing Test: it substitutes a linguistic, behavioral question for the question of consciousness itself, because the former is answerable and the latter is not.

  3. Lawrence Solum says:

    I agree with almost all of James Grimmelmann’s comment. In particular, I agree that the Turing Test provides the model of a test for the functional capacities of artificial intelligences. And I certainly agree that my hypothetical, which simply assumes a possible world in which we can distinguish Zombies from humans, requires that there be some mechanism that does not exist in the actual world. But this does not entail that the world of my thought experiment is not possible or conceivable. And it does not entail that the knowledge that we would have of Zombies would be a priori; here I assume that Grimmelmann is using “a priori” in the standard philosophical sense. The world that I posit is one in which we have a qualia detector; we know the detector is accurate because it works on normal humans when they are in states unaccompanied by consciousness and hence qualia (certain sorts of sleep states or coma states). That is simply a feature of the thought experiment, and I do not claim that the possible world of the thought experiment is nomologically accessible from the actual world. So knowledge is both possible and a posteriori in the world of the thought experiment.

  4. This is of course right, and my comment above is fighting the hypothetical. Leaving open the question of the accessibility of this possible world answers my principal objection to it.

  5. Larry (some of what follows might be better addressed to Chalmers and our two authors),

    Fortunately, thought experiments may testify to our powers of imagination and creativity, but they do not (as utopias do not) necessarily provide us with evidence of a probable or even possible future. Perhaps I have limited powers of imagination, but I cannot imagine a future wherein zombies, sans consciousness (and intentionality…), are “laugh[ing] at jokes, go[ing] to work, writ[ing] screenplays (unless they are on strike), get[ting] into fights, hav[ing] sex, and go[ing] to Milk and Honey for drinks.” I can imagine this as science fiction, but not as part of a possible future that comes to fruition, for it ever remains fiction: the above actions and activities are indissolubly and unavoidably examples of human behavior, instances of “truly free acts,” and it is human attributes or properties that make them quintessentially human actions (save, perhaps, ‘having sex,’ unless by that we intend ‘making love’).

    To make sense of such behavior, one needs human (contextualizing) narratives: the aforementioned actions and activities cannot take place without a sense of past, present, and future and the narrative settings (or ‘scripts’), as it were, that accord them meaningful “sense” and reference, that make sense of them AS going to work, writing screenplays, etc. These human narratives and conceptions include the sense that certain events or actions are MY actions, as Raymond Tallis has explained. And “if it is not an illusion that one is a self, then it cannot be an illusion that one is an agent; for the same thing that gives rise to the sense of being a self, a subject, is precisely that which gives rise to being an agent, as opposed to merely being dissolved in the ocean of causality.” Our identity and embodied sense of self are precisely what permit our notion of freedom or agency, something by definition a zombie must lack; a zombie thus cannot, truly, laugh at jokes, go to work, write screenplays, and so on, as humans do, such actions intrinsically and unavoidably entailing a sense of self and agency conspicuously absent from zombies.

    In short, as Tallis argues, “the fact that one is a self and the fact that one is an agent are inseparable,” and only human agency lies behind our capacity to laugh at a joke, to go to work, to write screenplays, and to make sense of the meaning of doing so. Only a human being can imagine why, or why not, a joke may be funny, or why I chose to go to work rather than lie on the couch watching “reality” television, or why work is called work and not play, or why work may be easy or difficult, boring or creative, just a job or something quite satisfying, and so on. Only human beings can imagine what it is like to have “writer’s block,” or for our screenplay to fall on deaf ears or be subject to copyright. Only a human being can imagine a traffic accident that thwarts the desire to get to Milk and Honey before the doctor’s appointment.

    The human spatio-temporal and conceptual realms that constitute OUR world are not to be found in the world of technology, no matter how “high” or “intelligent” we understand it to be. If, as Tallis (after Heidegger) describes, the nonhuman animal realm is thus “world-poor,” the technological realm by definition is “world-empty.” Robots and androids lack the twin capacities of intentionality relative to both indexical and deindexical (propositional, and perhaps non-propositional) awareness, the latter allowing for our world to be untethered to any “particular location, or indeed to any location.” Thus our intentional states are not confined to materiality or material objects, “or clearly delineated values of space-time.” The constraints on HUMAN possibility are not material; the bounds of our world “cannot be specified by any description of an array of material objects.” Let’s invoke a more modest thought experiment from Tallis to illustrate some of the points above:

    “[Imagine] my going to London for a meeting at the Royal College of Physicians. Each of the vast number of movements comprising my journey to London, my journeying within London, what I do when I get to my destination, and my return from London, makes sense only with regard to my overarching purpose (which may be very ill-defined and is certainly highly abstract—e.g., ‘improving Registrar training’ or ‘adding to the number of brownie points on my CV’) of the meeting in London. The particular movements I make—turning the key in a door at a particular time, giving instruction to the cab driver, turning over pages of documents I don’t particularly want to read on the train, taking a particular route through the streets, hovering outside the building because I’m early—are occasioned by this overarching abstract purpose which has tentacular roots into the cumulative subsoil of the self; they are requisitioned for, and would not have occurred without, such a purpose. [….] Truly free acts would not have occurred without their being intelligible to the actor.”

    Zombies can only be loci where things happen, loci of happenings, and thus only the objects of OUR intentions. It is only if we imagine our science fiction zombies to be evidence of a POSSIBLE future that “we can imagine a future in which artificial agents (or robots or androids) have all the capacities we associate with human persons.” As that particular thought experiment (however provocative) is more imaginative than realistic, descriptively speaking, we cannot even imagine a FUTURE “in which artificial agents (or robots or androids) have all the capacities we associate with human persons”; we can simply imagine it, and that is all. It remains utterly fanciful, a flight of fancy full stop; it is not imaginatively or prospectively any possible part of A or OUR future. Robots and androids, by definition, cannot now or ever have the “fundamental intuition that the world can be grasped and changed through myself as agent,” an intuition that eventually “gives rise to, or creates the context for the emergence of, specific ideas that may be true or false,” a “point-intuition” that permits awareness, a capacity robots and androids can never have; thus they cannot have our awareness that we are in the natural world but not, in some sense or senses, “entirely of it” (this is the ‘existential intuition’ that historically gave rise to and now makes meaningful our concepts and conceptions of embodiment, individuation, and personal identity).

    It is thus not a question of waiting for a world to arrive in which we are interacting with androids and robots possessed of functional capacities that approach or exceed those of human persons. For a world in which androids and robots possess the functional capacities of human beings cannot exist, by virtue of what makes us distinctively human, and androids and robots mere technologies, in this world and any future world. The “existential intuition” that is uniquely human is what gives rise to and makes meaningful our conceptions of embodiment, individuation, personal identity, and agency (interestingly, even ‘illusions’ can extend our freedom, as when, for example, they become self-fulfilling). The putative agency of robots and androids exists by virtue of what is entirely outside them; external influence engulfs their entire being, as it were. Their “power to do otherwise” will always be determined externally, by human beings: we do and will set the limits on, and circumscribe, the exercise of any such power, as robots and androids are “destined” (‘determined’) to do what we program them to do; they will never be able to “freely” compose or write their own programs. They lack the power to freely choose to do otherwise (if memory serves me correctly, the HAL of 2001: A Space Odyssey notwithstanding). Human agency belongs to the world of possibilities, while robots and androids belong to the world of actualities that such possibilities may realize or fulfill. And even if our sense of self is, for instance, partly illusory, it is only among human beings that such an illusion can be self-fulfilling or reality-producing.

    As I wrote elsewhere, and without going into details or the myriad possible arguments, I think we find it tempting, or at least easier, to imagine extending our moral and psychological concepts and categories (concepts and categories intrinsic to morality, ontology, if not metaphysics, and psychology, including consciousness, intentionality, autonomy, moral agency) beyond the human and nonhuman animal world into the domain of AI technology if our understanding of the mind (including consciousness and intentionality) is beholden to metaphors, models, or pictures that are currently fashionable in some quarters of philosophy, cognitive science, and psychology. One such model comes courtesy of “cognitive naturalism,” an “interdisciplinary amalgam of psychology, artificial intelligence, neuroscience, and linguistics,” the central hypothesis of which is “that thought can be understood in terms of computational procedures on mental representations,” dubbed by the philosopher Paul Thagard as CRUM, for Computational-Representational Understanding of Mind. On this model, mental representations are like (or virtually identical to) data structures and the mind’s putative “computational procedures” are algorithms, and thus “thinking” is tantamount to running programs.

    The current and fairly uncritical fascination with the neurosciences, evolutionary psychology, and reductionist theories in philosophy of mind together contribute to an intellectual climate and disciplinary inquiries that directly or implicitly sanction or legitimate the legal endeavor to ascribe legal personhood (which is parasitic on moral autonomy and ethical agency) to technological programs and devices like robots and androids, the latter being part of the order of mere “things” or “substances,” while our notion of the agent-self is not of a thing or substance. Our intentions may assume form as visible incarnations or instantiations, but they “are not themselves visible or located in space.” Our capacity for “reason-giving” and reason-occasioned behavior is something other than can be captured or explained by some complex of external, mechanical, or impersonal causes (Tallis has enumerated many of the more obvious differences between reasons and causes); hence, for example, the constrained indeterminacy of reason-prompted action. Models of “computerized consciousness,” for example, make it tempting to elide the metaphysical, ontological, psychological, and moral boundaries between the human world and the technological world such that we seriously entertain the plausibility of “autonomous artificial agents” (what might be called ‘Ellul’s nightmare,’ after the well-known argument of Jacques Ellul).
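
    To make concrete the CRUM picture sketched above, here is a minimal toy sketch, in Python, of what “‘thinking’ is tantamount to running programs” amounts to: “mental representations” as data structures and the mind’s “computational procedures” as an algorithm run over them. The names, types, and rules below are purely illustrative assumptions, drawn neither from Thagard nor from any actual cognitive architecture:

    ```python
    # A toy rendering of the CRUM metaphor criticized above: beliefs as
    # data structures, inference as an algorithm applied to them.
    # Everything here is an illustrative assumption, not a real cognitive model.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Proposition:
        """A 'mental representation' reduced to a mere data structure."""
        subject: str
        predicate: str

    def infer(beliefs, rules):
        """'Thinking' as a computational procedure: apply rules to a fixpoint."""
        derived = set(beliefs)
        changed = True
        while changed:
            changed = False
            for p in list(derived):
                consequent = rules.get(p.predicate)
                if consequent is not None:
                    new = Proposition(p.subject, consequent)
                    if new not in derived:
                        derived.add(new)
                        changed = True
        return derived

    beliefs = {Proposition("socrates", "is_human")}
    rules = {"is_human": "is_mortal"}  # rule: whatever is human is mortal
    print(infer(beliefs, rules))  # derives Proposition('socrates', 'is_mortal')
    ```

    Whatever such a loop computes, nothing in it is conscious of, or about, anything; that, on the view defended here, is precisely what the reductionist picture papers over.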

    I’m not claiming that these new technologies don’t raise novel moral and legal problems (and I’m just starting to read the book by Chopra and White and sense there’s much to learn from their argument with respect to specific cases, even if we don’t share certain presuppositions or assumptions supporting the larger argument or ‘picture’) for which we may need to craft a fairly new conceptual (including legal) vocabulary. But such an enterprise would necessarily eschew simply importing existing moral and psychological principles, predicates, and concepts (as presuppositions, assumptions, or axioms) into the world of technology and law. And such an enterprise will have to avoid, at the very least, the siren call of mind-brain reductionism. In other words, consciousness, intentionality, and normativity are decisive (i.e., basic or fundamental) properties or features or characteristics of our mental life which rule out the plausibility of such reductionist or eliminativist “hypotheses.” (In Daniel Robinson’s words: ‘It cannot even be said that they [i.e., emergentism, epiphenomenalism, and supervenience] are working hypotheses, because a working hypothesis is one that will rise or fall on the basis of relevant evidence, and there is no “evidence” as such that could tell for or against “hypotheses” of this sort.’) And without such reductionism in the philosophy of mind, the prospect of legal personhood for “autonomous artificial agents” looks considerably less plausible and not at all persuasive.

    I realize this is rather verbose; yet, for better or worse, it represents only a small fraction of what I hope to go into in more detail at a later date.

    All good wishes,
    Patrick

  6. Lawrence Solum says:

    Patrick, as always, thank you so much for your thoughtful and helpful comments! I completely agree that the Zombies thought experiment could not take place in a possible world that is close to our own. To use the jargon, the world of the thought experiment is neither nomologically nor historically accessible. But I think it is nonetheless illuminating, for two reasons. First, it helps us see that “missing something” arguments might not have the practical bite that we assume. And second, it helps us see how radically different a world would be with autonomous agents whose functional capacities approach or exceed those of actual human persons.

  7. Brett Bellmore says:

    Probably the best argument for giving ‘zombies’ full rights, in the context of *this* world, is that a few years after we decide whether advanced artificial intelligences get treated as our equals, advanced artificial intelligences will be deciding whether to treat us as *their* equals. Given that machines ‘evolve’ ever so much faster than humans, that moment when we and they are remotely equal will be short indeed.

    We want them thinking fondly of us once our fate is in their hands.

  8. Patrick, I can’t possibly respond to your post with anything like the detail or care you’ve put into it. But I also think it shouldn’t pass without response.

    The question I’d put to you is on what possible basis you can support the claim that androids and robots cannot have the “fundamental intuition that the world can be grasped and changed through myself as agent.” You say that it’s “by definition.” But what definition is that? That they’re composed primarily of silicon rather than carbon? That they were manufactured rather than born? By your own argument, no external set of facts about their physical existence can speak to “what makes us distinctively human.” The actual problem of how to treat animals, androids, or aliens cannot be answered by appeal to a definition of what they are, because any definition necessarily leaves open the question of whether the alleged agent before us is actually a member of the class whose characteristics have been defined. As best I can tell, you have simply posited that they lack consciousness, reflexivity, and intentionality. But any actual androids we encounter will no more be subject to such positing than any actual other people we encounter. The connection between existence and experience remains deeply obscure, but if we grant it in the case of humans notwithstanding the obscurity, why can we not grant it in the case of androids notwithstanding the same obscurity — for it is the same obscurity.

  9. James, I was rather embarrassed by the lack of “detail and care” I put into my response (hurriedly composed so as to get to my chores in a timely fashion), so I’m happy to learn you thought otherwise!

    I’ll attempt to address your question in full (so to speak) in a day or two, but for now let me simply state that I was referring to what makes us human, rather than otherwise (not so much a definition as such, but the sundry capacities that define what it means to be a human being), and part of this involves an appreciation that the mind or consciousness (among other things, and unlike robots or androids or information-processing systems) cannot be adequately described or defined in physicalist terms. My understanding of the nature of, in the first instance, “existential intuition” (other forms of intuition being derived from or secondary to this primary intuition) is dependent upon an account by Raymond Tallis in his book, I Am: A Philosophical Inquiry into First-Person Being (2003) (the second of a trilogy; all three volumes might be consulted). I have no problem assuming we can distinguish, relying upon various conceptual (including metaphysical), psychological, and behavioral criteria, between human and non-human animals, and these, in turn, from androids or “aliens,” for we do this in theory and praxis all the time. The boundaries for some capacities and functions may be soft and permeable in places and hard and fast elsewhere, but boundaries they remain. We may of course argue about these boundaries and the precise location or nature of the soft and hard spots, but we assume them more or less in the first place to get arguments off the ground. More anon.

  10. I look forward to it. I’m not especially troubled by boundary problems, or by the concern that the capacities that matter could be present in varying degrees. The range of human experience itself, including the arc of any given human life, already exhibits important boundary cases, but those are not an obstacle to our ability to distinguish the “human” (in the relevant moral and psychological senses) from many things that are definitely not human. My concern is rather with whether this category is closed. I’m open to the possibility that some kinds of things that are not biologically members of the species homo sapiens could share with “humans” the relevant characteristics, such that we should afford them moral, psychological, and legal agency for the same reasons we afford them to humans.

    I’m not familiar with Tallis, so I look forward to your account of his views, and of your own.

  11. And no, I don’t know what happened to “species” in that comment.

  12. James, it’s clear I can’t write anything of substance for some time, so when I do (in a week or two, I hope) I’ll post it at the Ratio Juris blog and let you know. Thanks, Patrick

  13. That’s great, and probably actually better. Given the significant amount of thought you’ve devoted to the question, a full post of its own is definitely in order. I can’t promise anything like a full reply when you post, but I do very much look forward to reading it.
