LTAAA Symposium: Response to Surden on Artificial Agents’ Cognitive Capacities
posted by Samir Chopra
I want to thank Harry Surden for his rich, technically informed response to A Legal Theory for Autonomous Artificial Agents, and especially for seizing on an important distinction we make early in the book when we say:
There are two views of the goals of artificial intelligence. From an engineering perspective, as Marvin Minsky noted, it is the “science of making machines do things that would require intelligence if done by men” (Minsky 1969, v). From a cognitive science perspective, it is to design and build systems that work the way the human mind does (Shanahan 1997, xix). In the former perspective, artificial intelligence is deemed successful along a performative dimension; in the latter, along a theoretical one. The latter embodies Giambattista Vico’s perspective of verum et factum convertuntur, “the true and the made are…convertible” (Vico 2000); in such a view, artificial intelligence would be reckoned the laboratory that validates our best science of the human mind. This perspective sometimes shades into the claim artificial intelligence’s success lies in the replication of human capacities such as emotions, the sensations of taste and self-consciousness. Here, artificial intelligence is conceived of as building artificial persons, not just designing systems that are “intelligent.”
It is this latter conception of AI, as committed to building ‘artificial persons’, that quite clearly causes much of the angst that LTAAA’s claims seem to occasion. And even though I have sought to separate the notion of ‘person’ from that of ‘legal person’, it seems that some conflation of the two has persisted in our discussions thus far.
I’ve personally never understood why artificial intelligence was taken to be, or ever took itself to be, dedicated to the task of replicating human capacities, faithfully attempting to build “artificial persons” or “artificial humans”. This always seemed like such a boring, pointlessly limited task. Sure, the pursuit of cognitive science is entirely justified; the greater the understanding we have of our own minds, the better we will be able to understand our place in nature. But as for replicating and mimicking them faithfully: why bother with the ersatz when we have the real? We already have a perfectly good way to make humans or persons, and it is way more fun than doing mechanical engineering or writing code. The real action, it seems to me, lies in seeing how we could replicate our so-called intellectual capacities without particular regard for the method of implementation; if the best method of implementation happened to map well onto what seemed like the human mind’s way of doing it, that would be an added bonus. The multiple-realizability of our supposedly unique cognitive abilities would do wonders to displace our sense of uniqueness, acknowledge the possibility of other modes of existence, and re-invoke a sense of wonder about the elaborate tales we tell ourselves about our intentionality, consciousness, autonomy, or freedom of will.
Having said this, I can now turn to responding to Harry’s excellent post.
[E]mbedded in many existing legal doctrines are underlying assumptions about cognition and intentionality that are implicit and are so basic that they are often not articulated.
This is indeed true, and I hear the note of caution Harry wants to sound about changes in legal doctrine that take themselves to be responding to human-like capacities in artificial agents when those capacities are only clever ‘simulations’. But it is also worth acknowledging that many of our practices in dealing with other humans rest on assumptions about cognition and intentionality that, frankly, are little more than admissions of our ignorance about details (an idea implicit in James Grimmelmann’s excellent post on law’s response to complexity), and that neuroscientific investigations might force us to reconsider (the pre-conscious encoding of decisions, for instance). As we noted in the concluding chapter, we might reject the conclusions of these neuroscientific investigations precisely because we want to preserve our legal and moral vocabularies. Then, I think, we can see the influence running the other way: it is the legal and moral picture that also drives our conceptions and knowledge of ourselves, not just the reverse.
Harry’s post also makes us come face to face with the fact that our knowledge of our own cognitive abilities remains remarkably obscure. Notice that when we describe human capabilities, as contrasted with those of artificial agents, we often retreat into obscurity and the use of terms we accept uncritically. Notice, for instance, Harry’s description of human facility in translation as arising from a kind of “profound understanding with the underlying ‘meaning’ of the translated sentences”. What is this ‘profound understanding’ we speak of? It turns out that when we want to cash out the meaning of this term, we seek refuge again in complex, inter-related displays of understanding: he showed me he understood the book by writing about it; he showed me he understood the language because he did what I asked him to do; he understands the language because he affirms certain implications and rejects others.
And what are “meanings”? I’m glad Harry put “meaning” in quotes. Are there meanings hung up in a museum, as in the picture Quine described and rejected as “uncritical semantics” in Ontological Relativity? Or do I simply show, by my usage and deployment of a language within a particular language-using community, that I understand the meanings of its sentences, as Wittgenstein suggested in the Philosophical Investigations? If an artificial agent is so proficient, why deny it the capacity for understanding meanings? Why isn’t understanding the meaning of a sentence itself understood as a multiply-realizable capacity?
To repeat and sum up: we might find that our cognitive abilities are realizable in a variety of physical substrates, by a variety of implementation schemes; the language of our legal and moral systems often reflects assumptions about human capacities that, on closer inspection, are shrouded in obscurity; and we retain such language and such assumptions because of overriding social objectives that might cause us to disdain the neuroscientific vocabulary in preference to the extant legal and moral vocabulary.
So Harry is right that we should not understand the multiple-realizability of human cognitive skills as a purely technical issue. But I want to suggest that the lens should be turned back on humans as well, and on the often uncritical assumption that we possess unique, non-replicable qualities; we should think more about how such grants of uniqueness underwrite important methods of self-conception, which are then written into the law.