
LTAA Symposium: Response to Sutter on Artificial Agents


7 Responses

  1. Samir,

    I’ve now received your book and have just begun reading it, so I’ll refrain from any comments on the book itself, but I do want to point out that Dennett’s conception of “intentionality” is fairly idiosyncratic, if not implausible (with regard to its philosophical history and more conventional use in philosophy), and liable to any number of objections. I would therefore simply ask that you and your co-author consider (if you’ve not already) some of the fairly vigorous critiques of his account insofar as you rely on or invoke his conception of intentionality or intentional systems.

    In particular, his notion of “intentional systems,” as formulated to encompass, for example, cells, molecules, parts of the brain, thermostats, and computers, is rather implausible absent a wholly “persuasive” (or unduly stipulative) definition or a wildly idiosyncratic conception of intentionality. As M.R. Bennett (a neuroscientist) and P.M.S. Hacker (a philosopher) have written,[1] one cannot intelligibly ascribe intentionality to such motley phenomena, for the proper bearers of intentionality are “a subclass of psychological attributes and only animals, and fairly sophisticated animals at that, are the only appropriate subjects of such attributes. We cannot attribute belief, fear, hope, suspicion, etc. to the aforementioned items (without, that is, indulging in crass anthropomorphism).” Furthermore, “to ascribe pain, liking, disliking, perceiving, misperceiving, anger, fear, joy, knowledge, belief, memory, imagination, desire, intention, and so on, to living beings, in particular to human beings, is not to adopt an interpretative stance.” There are several manifest difficulties with the characterization of the “intentional stance” as “a mode of interpreting entities AS IF these were rational agents” (as a ‘heuristic overlay’) with beliefs, desires, and other (intentional) mental states. As Bennett and Hacker (B & H) point out, “we do NOT typically treat animals as if they were rational agents—since we know perfectly well that they are not. But we do ascribe a wide range of perceptual, affective, cognitive and volitional attributes to animals in a perfectly literal sense. Being a rational agent is not a precondition for the applicability of psychological attributes to a creature.” And we certainly don’t adopt an interpretative (intentional) stance toward our own wants, likings, and so forth; rather, we give unmediated expression to our pain, regret, pleasure, hopes and fears (thus these are not ‘heuristic overlays’ or ‘theoretical posits’ of any sort). Consider too the following illustration from B & H:

    “We might be inclined to say that the computer ‘will not take your knight because it knows that there is a line of ensuing play that would lead to its losing its rook, and it does not want that to happen.’ But what does this amount to? It is no more than a façon de parler. We know that the computer has been designed to make moves that will (probably) lead to the defeat of whomever plays with it—and there is no such thing as the computer’s wanting or knowing anything. And in order to predict its moves, we need not absurdly ascribe knowledge or wants to it, but need only understand the goals of its program and programmer (viz. to make a (mindless) chess-playing machine). For design is one form of teleology, and teleology is a basis for prediction.”

    And while you note that you share Andrew’s skepticism about neuroscientific reductionism, Dennett himself exhibits and exemplifies that very thing in his computational theory of consciousness (including the claim that the mind is identical to the neural activity of the brain) and when he asserts, for instance, that the brain “gathers information, anticipates things, interprets the information it receives, arrives at conclusions, etc.” [the mereological fallacy]. The brain, as Hacker and Bennett make pellucid, “is not a possible subject of beliefs and desires; there is no such thing as a brain acting on beliefs and desires, and there is nothing that the brain does that can be predicted on the basis of its beliefs and desires.” Dennett’s argument, after all, was intended, in part if not in whole, to amount to a research methodology to “help[] neuroscientists to explain the neural foundations of human powers.” In short, “no well-confirmed empirical theory in neuroscience has emerged from Dennett’s explanations, for ascribing ‘sort of psychological properties’ to part of the brain does not EXPLAIN anything.” It is hardly surprising that Dennett denies the notion of qualia, or any significant or explanatory role for subjective experiences and first-person phenomena, in the endeavor to scientifically or “naturalistically” describe and “explain” the mind and consciousness. For Dennett, the beliefs and desires of folk psychology are mere imaginary entities, not that different from the fictional status of “selves.”[2]

    At a later date I hope to address the importance of, and the reasons for, coming to as much clarity as possible regarding what it means to be a human animal as distinct from a non-human animal (allowing for some overlap, of course, in virtue of the notion of ‘animal’), as well as the relevant differences between sentient creatures and non-sentient entities. This would include an elaboration of what science can and cannot tell us about human nature. In so doing, I hope to demonstrate why we might consider it at once metaphysically, ontologically, psychologically, and morally incoherent to speak of “robots’ moral senses,” metaphorically or literally (inasmuch as corporations are composed of individuals, that is an entirely different matter). For now I would ask interested readers to look at the many recent books by Raymond Tallis (especially his trilogy and his latest book) for a taste of the direction that I find compelling and persuasive. Consciousness (as perceiving, knowing, awareness, etc.), for example, is very different in essence and function from the highly specified frames of reference used with computational devices, the former possessing, in Tallis’s words, “an openness, a boundless availability to what, unscheduledly, happens,” that differs in kind from “programmed, rule-governed responsiveness.” Moreover,

    “The multiplication of rules will not solve [what Dennett defines as] the frame problem except for local AI applications that come nowhere near the global scope of consciousness. The explicit rules that may shape consciousness arise out of a background of explicitness; or the soil out of which rules grow, the solution out of which they crystallize, is a continuum of explicitness, a field of explicitness. The computer has only discrete countable rules, not the continuum of explicitness, this ‘rule mass,’ this boundless, ruly world.”

    Dennett’s reductionism may not be of the crudest sort, but it is no less reductionist, as evidenced in his belief that the mind “is our way of experiencing the machinery of the brain.”[3]

    [1] M.R. Bennett and P.M.S. Hacker, Philosophical Foundations of Neuroscience (Malden, MA: Blackwell, 2003). The first appendix is devoted entirely to these and other topics in several of Dennett’s well-known books. Dennett replies to this critique and B & H respond in turn in Bennett, Dennett, Hacker, and Searle (with Daniel Robinson), Neuroscience and Philosophy: Brain, Mind, and Language (New York: Columbia University Press, 2007).
    [2] As Tallis explains, Dennett’s “’narrative centre of gravity’ actually looks more difficult than the thing it is replacing. After all, narrative is a higher-order activity of a self; and the intuition of a centre of gravity of a larger number of independent narratives seems to be an even higher-order activity. [….] [This is a] particularly striking example of ‘the fallacy of misplaced consciousness.’ When materialists deny consciousness in the places where it is normally thought to be, it has the habit of appearing in an even more complex form where it shouldn’t be.”
    [3] Raymond Tallis, The Explicit Animal: A Defence of Human Consciousness (New York: St. Martin’s Press, 1999 ed.).

  2. erratum (first para.): “do want to point out”

  3. A.J. Sutter says:

    Thanks, Samir, for your response, and to Patrick for his helpful remarks about intentionality, and especially Dennett’s view of it.

    As the current vassal of two cats, and having shared my home with other pets most of my life, I don’t intend at all to endorse the Cartesian view of animals (nor, for that matter, the Levinasian anthropocentric view of them). But obviously it’s not necessary to ascribe intentionality, moral sense or other human-like attributes to automata in order to treat animals better.

    Concerning the intentional stance toward corporations, again I think you’re approaching the issue too much from the viewpoint of legal fictions and the categories of academic philosophy and not enough from how humans actually regard corporations. For example, in the management literature, the moral sense you impute to the corporation as a “black-box” sort of entity is usually understood in terms of a culture shared among the humans in the company. And no matter what categories are used in “the law” to speak of a corporation’s rights or responsibilities, I suggest that those legal solutions are tolerated politically (when they are) because those outside the corporation understand that there are humans making the decisions and having responsibility for its “machinic-human assemblages.” That’s why the conclusion of the syllogism described in my earlier post is a non sequitur.

    I also questioned whether our reliance on those assemblages is really so desirable, and whether its spread should be encouraged. I look forward to your post about respect for humanity, and will save some of my other comments on your posts to date until I’ve read that one.

  4. A.J. Sutter says:

    One more comment apropos of this post, concerning the language of “agency”:

    That the word “agent” is full of connotations of personhood was precisely my point, rather than something occasioning surprise. The metaphors we choose frame the way in which we think about things. For example, as I discuss at length in a Japanese book, the positive connotations of the word “growth” have led us to believe that economic growth is a good thing, without stopping to think clearly about what it’s done for us lately. We might regard it differently if we called it, say, “swelling” instead. (In fact it might not be a bad idea to switch: the same technological utopianism that motivates much of the AI project is one reason we’ve come to expect a perpetual geometric or exponential increase of GDP — something that would be a monstrosity if it were somatic growth.) Similarly, the use of the word “agent” as we deliberate on the legal personhood issue makes it perhaps too easy to come to the conclusions set forth in your book, even though, as other posters have pointed out, those conclusions might be unnecessary doctrinally.

    For the reasons Patrick mentions, among others, it’s not so inevitable as you claim, citing Dennett, that we’ll come to ascribe intentionality to “automated mediation modules” or some more aptly named contraption. But certainly there’s an irony here, too: our potted histories of science tell us that it was “primitive” peoples who believed that rocks, rivers, the sea and sky had “spirits,” and that it was some sort of “advance” for science to have dispelled these “superstitions.” Insofar as you’re suggesting that we should be more flexible about this, you may be right; I’m not one to deny all immanence, for example. But my belief in a divine immanence differs very much from ascribing intentionality to each separate pebble or wave for the sake of convenience in making predictions, both in the number of agents involved and in the motivations. When folks say “G-d works in mysterious ways,” it’s not because He’s so easy to predict.

  5. Samir Chopra says:

    Patrick, AJ: Thank you both for your comments. I intend to reply soon, either here, or in a summary final post I am preparing (it appears to touch on many of the themes you both raise here). I will post the link here in any case so that this conversation can be viewed together.

    I thank you both for such thought-provoking conversation!

  6. Samir Chopra says:

    Patrick:

    1. The idiosyncrasy of a philosophical position is no argument against it.

    2. We did not take on all objections to Dennett’s theory because our intention was to suggest a methodological strategy that the law already implicitly seemed to be using. The debate surrounding the theory is gigantic indeed, but it shows the same pattern that I notice in your response: it privileges the first-person perspective that by definition only we have, and disdains third-person perspectives and strategies.

    3. Dennett does not suggest we use the intentional stance for the entities you mention; he only points out that our use of it is almost a reflex, and that we drop it because we notice the design and physical stances are available. This defuses part of the Bennett-Hacker critique, because Dennett himself says that only some kinds of creatures can prompt us to rely on the intentional stance to the exclusion of the other stances.

    4. Given that the existence of other minds is an abduction at best, I fail to see how our treatment of others does not involve interpretation. Furthermore, the intentional stance is meant to provide a third-person perspective; it is not meant to provide interpretations of ourselves, though I suggest there is a certain amount of self-construction and narrative-construction that we indulge in all the time.

    5. My adoption of Dennett’s theory of the intentional stance does not commit me to a computational model of mind, so Tallis’s critique of that model fails to find traction here. I personally think Tallis is on the mark (I share many of his skepticisms) insofar as I think the vocabularies we have chosen for ourselves are indispensable and we will not let go of them so easily. For what it is worth, I think the correct focus is not on the brain, but on the being, the entity, in its locus of interest, in its web of complex relationships. We are actually far more in agreement than you might imagine.

  7. Samir Chopra says:

    AJ:

    1. We might not need to do all those things you mention to treat animals better. But using a language inflected with psychological attributes is a key factor in our doing so.

    2. The reductionism employed toward a corporation could be turned back toward humans as well, to dismiss the unitary notion of a human agent. We wouldn’t do it, because we have an inner life and a first-person perspective. But from a third-person perspective? From the perspective of an extra-terrestrial? How we slice up the world, the ontological chunking we carry out, is very much driven by our interests and our pragmatic concerns. This is why the law can treat corporations as unitary entities, and why our language can rely on their being so, while all the while we tell ourselves stories about how, when a corporation ‘acts’, ‘it’s just humans doing the acting’. Why aren’t all our actions ‘just’ C-fibers firing?

    3. To address your second comment: locating agency in this world is also a pragmatic concern, and one driven by epistemic considerations. We ascribe plenty of agency to humans, and find plenty of unitary causes where, more accurately, a multiplicity should be indicted. Consider the notion of an ‘author’, for instance; we pick out one entity from the bewildering array of forces that actually brought a book about. If we want to use ‘agents’ only for things that have beliefs and desires like ours, then certainly AAs are some distance from that, but they are taking actions, and they aren’t inert. From one perspective, our agency vanishes too; we are merely instantiations of physical laws.

    Coming to epistemic considerations: in the time of the ancients you allude to, agency was seen everywhere. We retreated from those ascriptions as our knowledge of the natural world increased. But we are still mysteries to ourselves, and so we continue to ascribe agency to ourselves. AAs aren’t obscure and complex enough to be treated as agents yet. There is certainly irony here: our best sciences tell us we need to diminish our sense of ourselves, and the growing complexity of AAs might suggest the same.
