Author: Samir Chopra

LTAAA Symposium Wrap-up

I want to wrap up discussion in this wonderful online symposium on A Legal Theory for Autonomous Artificial Agents that Frank Pasquale and the folks at Concurring Opinions put together. I appreciate you letting me hijack your space for a week! Obviously, this symposium would not have been possible without its participants: Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian, and Harry Surden. I thank them all for their responses. You’ve all made me think very hard about the book’s arguments (I hope to continue these conversations over at my blog at samirchopra.com and on my Twitter feed at @EyeOnThePitch). As I indicated to Frank by email, I’d need to write a second book in order to do justice to them. I don’t want to waffle on too long, so let me just quote from the book to make clear what our position is with regard to artificial agents and their future legal status:

LTAAA Symposium: Complexity, Intentionality, and Artificial Agents

I would like to respond to a series of related posts made by Ken Anderson, Giovanni Sartor, Lawrence Solum, and James Grimmelmann during the LTAAA symposium. In doing so, I will touch on topics that came up many times in the debate here: the intentional stance, complexity, legal fictions (even zombies!), and the law. My remarks here will also respond to the very substantive, engaged comments made by Patrick O’Donnell and AJ Sutter to my responses over the weekend. (I have made some responses to Patrick and AJ in the comment spaces where their remarks were originally made.)

LTAAA Symposium: Response to Pagallo on Legal Personhood

Ugo Pagallo, with whom I had a very helpful email exchange a few months ago, has written a very useful response to A Legal Theory for Autonomous Artificial Agents. I find it useful because I think that on each of his four allegedly critical points, we are in greater agreement than Ugo imagines.

LTAAA Symposium: Response to Surden on Artificial Agents’ Cognitive Capacities

I want to thank Harry Surden for his rich, technically informed response to A Legal Theory for Autonomous Artificial Agents, and especially for seizing on an important distinction we make early in the book when we say:

There are two views of the goals of artificial intelligence. From an engineering perspective, as Marvin Minsky noted, it is the “science of making machines do things that would require intelligence if done by men” (Minsky 1969, v). From a cognitive science perspective, it is to design and build systems that work the way the human mind does (Shanahan 1997, xix). In the former perspective, artificial intelligence is deemed successful along a performative dimension; in the latter, along a theoretical one. The latter embodies Giambattista Vico’s perspective of verum et factum convertuntur, “the true and the made are…convertible” (Vico 2000); in such a view, artificial intelligence would be reckoned the laboratory that validates our best science of the human mind. This perspective sometimes shades into the claim artificial intelligence’s success lies in the replication of human capacities such as emotions, the sensations of taste and self-consciousness. Here, artificial intelligence is conceived of as building artificial persons, not just designing systems that are “intelligent.”

It is pretty clear that the latter conception of AI, as committed to building ‘artificial persons’, is what causes much of the angst that LTAAA’s claims seem to occasion. And even though I have sought to separate the notion of ‘person’ from that of ‘legal person’, it seems that some conflation has continued to occur in our discussions thus far.

I’ve personally never understood why artificial intelligence was taken to be, or ever took itself to be, dedicated to the task of replicating human capacities, faithfully attempting to build “artificial persons” or “artificial humans”. This always seemed like such a boring, pointlessly limited task. Sure, the pursuit of cognitive science is entirely justified; the greater the understanding we have of our own minds, the better we will be able to understand our place in nature. But as for replicating and mimicking them faithfully: why bother with the ersatz when we have the real? We already have a perfectly good way to make humans or persons, and it is way more fun than doing mechanical engineering or writing code. The real action, it seems to me, lies in seeing how we could replicate our so-called intellectual capacities without particular regard for the method of implementation; if the best method of implementation happened to be one that mapped well onto what seemed like the human mind’s way of doing it, then that would be an added bonus. The multiple realizability of our supposedly unique cognitive abilities would do wonders to displace our sense of uniqueness, acknowledge the possibility of other modes of existence, and re-invoke a sense of wonder about the elaborate tales we tell ourselves about our intentionality, consciousness, autonomy, or freedom of will.
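
Since multiple realizability is doing real work in this line of thought, a toy computational illustration might help. This is my own minimal sketch, not anything from the book: one and the same capacity, individuated purely by its performance, realized by two mechanically unrelated implementations.

```python
# A toy illustration of multiple realizability: the same capacity (sorting)
# realized by two entirely different mechanisms. If we individuate the
# capacity by what it does rather than by how it is implemented, both
# realizations count equally as having it.

def sort_by_comparison(xs: list[int]) -> list[int]:
    """Insertion sort: a step-by-step, comparison-driven mechanism."""
    result: list[int] = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

def sort_by_counting(xs: list[int]) -> list[int]:
    """Counting sort: a tallying mechanism with no pairwise comparisons."""
    if not xs:
        return []
    lo, hi = min(xs), max(xs)
    counts = [0] * (hi - lo + 1)
    for x in xs:
        counts[x - lo] += 1
    return [lo + i for i, c in enumerate(counts) for _ in range(c)]

# Behaviorally indistinguishable, mechanically unrelated:
assert sort_by_comparison([3, 1, 2]) == sort_by_counting([3, 1, 2]) == [1, 2, 3]
```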

Having said this, I can now turn to responding to Harry’s excellent post.

LTAAA Symposium: Response to Matwyshyn on Artificial Agents and Contracting

Andrea Matwyshyn’s reading of the agency analysis of contracting (offered in A Legal Theory for Autonomous Artificial Agents and also available at SSRN) is very rigorous and raises some very interesting questions. I thank her for her careful and attentive reading of the analysis and will do my best to respond to her concerns here. The doctrinal challenges that Andrea raises are serious and substantive ones for the extension and viability of our doctrine. As I note below, accommodating some of her concerns is the perfect next step.

At the outset, I should state what some of our motivations were for adopting agency doctrine for artificial agents in contracting scenarios (these helped inform the economic incentivizing argument for maintaining some separation between artificial agents and their creators or their deployers).

First,

[A]pplying agency doctrine to artificial agents would permit the legal system to distinguish clearly between the operator of the agent, i.e., the person making the technical arrangements for the agent’s operations, and the user of the agent, i.e., the principal on whose behalf the agent is operating in relation to a particular transaction.

Second,

Embracing agency doctrine would also allow a clear distinction to be drawn between the authority of the agent to bind the principal and the instructions given to the agent by its operator.

Third, there was an implicit, unstated economic incentive. (The first two distinctions are sketched in toy code just below.)
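
To make those two distinctions concrete, here is a minimal sketch in code. It is a toy model I am offering for this symposium, not the book’s doctrine; every class, name, and parameter in it is hypothetical.

```python
# A toy model (mine, not the book's) of two distinctions from agency
# doctrine: operator vs. user (principal), and the agent's authority to
# bind the principal vs. the operator's technical instructions.

from dataclasses import dataclass, field

@dataclass
class ArtificialAgent:
    operator: str                 # who makes the technical arrangements
    instructions: list[str] = field(default_factory=list)  # set by operator

@dataclass
class Transaction:
    description: str
    amount: float

@dataclass
class AgencyRelation:
    agent: ArtificialAgent
    principal: str                # the user on whose behalf the agent acts
    authority_limit: float        # scope of authority granted by the principal

    def binds_principal(self, t: Transaction) -> bool:
        # Only transactions within the agent's authority bind the principal,
        # whatever the operator's technical instructions happen to be.
        return t.amount <= self.authority_limit

shopbot = ArtificialAgent(operator="HostCo", instructions=["retry on timeout"])
relation = AgencyRelation(agent=shopbot, principal="Alice", authority_limit=100.0)

print(relation.binds_principal(Transaction("buy a book", 30.0)))    # True
print(relation.binds_principal(Transaction("buy a laptop", 900.0))) # False
```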

LTAAA Symposium: Response to Sutter on Artificial Agents

I’d like to thank Andrew Sutter for his largely critical, but very thought-provoking, response to A Legal Theory for Autonomous Artificial Agents. In responding to Andrew I will often touch on themes that I might have already tackled; I hope this repetition comes across as emphasis rather than redundancy. I am also concentrating on the broader themes in Andrew’s post as opposed to the specific doctrinal concerns (like service-of-process or registration); my attitude in these matters is that the law will find a way if it can discern the broad outlines of a desirable solution just ahead. Service-of-process seemed intractable for anonymous bloggers, but it was solved somehow.

LTAAA Symposium: Artificial Agents and the Law of Agency

I am gratified that Deborah DeMott, whose work on agency doctrines was so influential in our writing, has written such an engaged (and, if I may say so, positive) response to our attempt, in A Legal Theory for Autonomous Artificial Agents, to co-opt common law agency doctrine for use with artificial agents. We did so knowing the fit would be neither exact nor precise, and certainly would not mesh with all established intuitions.

LTAAA Symposium: Legal Personhood for Artificial Agents

In this post, I’d like to make some brief remarks on the question of legal personhood for artificial agents, and in so doing, offer a response to Sonia Katyal’s and Ramesh Subramanian’s thoughtful posts on A Legal Theory for Autonomous Artificial Agents. I’d like to thank Sonia for making me think more about the history of personhood jurisprudence, and Ramesh for prompting me to think more about the aftermath of granting legal personhood, especially the issues of “Reproduction, Representation, and Termination” (and for alerting me to Gillick v West Norfolk and Wisbech Area Health Authority).

I have to admit that I don’t have, as yet, any clearly formed thoughts on the issues Ramesh raises. This is not because they won’t be real issues down the line; indeed, I think automated judging is more than just a gleam in the eye of those folks who attend ICAIL conferences. Rather, those issues will perhaps snap into sharper focus once artificial agents acquire more functionality, become more ubiquitous, and, more interestingly, come to occupy roles formerly occupied by humans. Then, I think, we will have a clearer idea of how to frame those questions more precisely with respect to a particular artificial agent and a particular factual scenario.

Artificial Agents and the Law: Some Preliminary Considerations

I am grateful to Concurring Opinions for hosting this online symposium on my book A Legal Theory for Autonomous Artificial Agents. There has already been some discussion here; I’m hoping that once the book has been read and its actual arguments engaged with, we can have a more substantive discussion. (I notice that James Grimmelmann and Sonia Katyal have already posted very thoughtful responses; I intend to respond to those in separate posts later.)

Last week, I spoke on the book at Bard College, to a mixed audience of philosophy, computer science, and cognitive science faculty and students. The question-and-answer session was quite lively, and our conversations continued over dinner later. Some of the questions directed at me are quite familiar by now: Why make any change in the legal status of artificial agents? That is, why elevate them from non-entities in the ontology of the law to the status of legal agents, or possibly even beyond? How can an artificial agent, which lacks the supposedly distinctively-human characteristics of <insert consciousness, free-will, rationality, autonomy, subjectivity, phenomenal experience here> ever be considered an “agent” or a “person”? Aren’t you abusing language when you say that a program or a robot can be attributed knowledge? How can those kinds of things ever “know” anything? Who is doing the knowing?

I’ll be addressing questions like these and others during this online symposium; for the time being, I’d like to make a couple of general remarks.

The modest changes in legal doctrine proposed in our book are largely driven by two considerations.

First, existing legal doctrine in a couple of domains, most especially contracting, which kicks off our discussion and serves as the foundation for the eventual development of the book’s thesis, is placed under considerable strain by its current treatment of highly sophisticated artificial agents. We could maintain current contracting doctrines as is (i.e., merely tweak them to accommodate artificial agents without changing their status vis-a-vis contracting), but we would run the risk of imposing implausible readings of contract theories in doing so. This might be seen as a reasonable price to pay so that we can maintain our intuitions about the kinds of beings some of us take artificial agents to be. I’d suggest that this retention of intuitions becomes increasingly untenable when we see the disparateness of the entities placed in the same legal category. (Are autonomous robots really just the same as tools like hammers?) Furthermore, as we argue in Chapters 1 and 2, there is a perfectly coherent path, philosophically and legally, by which we can start to consider such artificial agents as legal agents (perhaps without legal personality at first). The argument in Chapter 1 lays out a prima facie case for considering them legal agents; the argument in Chapter 2 suggests that they be considered legal agents for the purpose of contracting. As we say in Chapter 2, “The most cogent reason for adopting the agency law approach to artificial agents in the context of the contracting problem is to allow the law to distinguish in a principled way between those contracts entered into by an artificial agent that should bind a principal and those that should not.”

Which brings me to my second point: a change in legal doctrine can bring about better outcomes. One of the crucial arguments in our Chapter 2 (one I really hope readers engage with) is an assessment, along the economic dimension, of contracting by artificial agents considered as legal agents. I share the skepticism of those in the legal academy who hold that economic analysis of law should not drive all doctrinal change, but in this case, I’d suggest, the risk allocation does work out better. As we note, “Arguably, agency law principles in the context of contracting are economically efficient in the sense of correctly allocating the risk of erroneous agent behavior on the least-cost avoider (Rasmusen 2004, 369). Therefore, the case for the application of agency doctrine to artificial agents in the contractual context is strengthened if we can show similar considerations apply in the case of artificial agents as do in the case of human agents, so that similar rules of apportioning liability between the principal and the third party should also apply.”

And I think we do.
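
To see the least-cost-avoider logic in miniature, consider a back-of-the-envelope sketch. The numbers and party labels below are invented purely for illustration; they come from neither the book nor Rasmusen.

```python
# A hypothetical illustration of least-cost-avoider risk allocation:
# liability should fall on whoever can prevent the error most cheaply.

expected_loss = 1000.0        # expected harm from an erroneous contract
precaution_cost = {
    "principal": 50.0,        # e.g., tighter authority limits on the agent
    "third_party": 400.0,     # e.g., manually verifying every automated order
}

# A party held liable takes precautions whenever precaution is cheaper
# than the expected loss it would otherwise bear.
for party, cost in precaution_cost.items():
    takes_precaution = cost < expected_loss
    social_cost = cost if takes_precaution else expected_loss
    print(f"liability on {party}: social cost = {social_cost}")

# liability on principal: social cost = 50.0   (the least-cost avoider)
# liability on third_party: social cost = 400.0
```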

An even stronger argument can be made when it comes to privacy. In Chapter 3, the dismissal of the Google defense (“if humans don’t read your email, your privacy is not violated”) is enabled precisely by treating artificial agents as legal agents. (This follows on the heels of an analysis of knowledge attribution to artificial agents, so that they can be considered legal agents for the purpose of knowledge attribution.)
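
For concreteness, here is a schematic sketch of the factual setup the Google defense trades on: content accessed and acted upon by a program alone, with no human reading involved. The function and its trigger words are entirely my own invention, not the book’s analysis or any real system’s behavior.

```python
# A hypothetical sketch of automated email scanning: the program selects
# ads from message content without any human ever reading the message.
# The doctrinal question is whether this access counts as "knowledge".

def scan_email(body: str) -> list[str]:
    ad_triggers = {"vacation": "travel deals", "mortgage": "refinance offers"}
    return [ad for word, ad in ad_triggers.items() if word in body.lower()]

ads = scan_email("Thinking about a vacation next month...")
print(ads)  # ['travel deals'], selected with no human reader in the loop
```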

Much more on this in the next few days.

Money Talks Symposium: Equality of Communicative Opportunity

In my earlier post, I suggested that Deborah Hellman is right to focus on the relationship between rights and resources.  “Money is not speech,” but money can buy resources that are necessary to enable speech.  In this post, I would like to address another deep question raised by Hellman.

To raise this question, I want to distinguish between “freedom of speech” as legal doctrine and what I will call “freedom of expression,” referring now to a principle of political morality which may (or may not) match the existing positive law.

The ability to exercise a right can require resources, but resources may be scarce, both in the economist’s sense that they have a price and in the more informal sense that there may be a limited amount of a given resource. I believe that we have two intuitions about the freedom of expression. The first intuition is that realizing the freedom of expression is inconsistent with limits on the amount of expression. If we are engaged in a discussion, and I have another point to make, the freedom of expression requires that I be allowed to make it. The second intuition is that realizing the freedom of expression requires equality of communicative opportunity. If we are engaged in a discussion, and you are allowed to make a point, then I must be allowed an equal opportunity to respond. That means that it is inconsistent with the freedom of expression to have a set of legal rules that give some speakers more communicative opportunities and other speakers fewer such opportunities.

What happens when we combine these two intuitions (“unlimited speech” and “equality of communicative opportunity”) with the facts that speech requires resources and that resources are scarce? In the words of Johnny Mercer, “something’s gotta give, something’s gotta give, something’s gotta give.”

Given scarce resources, unlimited speech for some can interfere with equality of communicative opportunity. And this tension is particularly noticeable in the context of political speech. Reaching a mass audience is a resource-intensive enterprise.
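
A toy arithmetic example may make the collision vivid; the numbers are invented purely for illustration.

```python
# A schematic illustration: with a scarce communicative resource, unlimited
# speech for some is incompatible with equal communicative opportunity.

total_airtime = 100                      # scarce resource: units of airtime
purchases = {"A": 80, "B": 15, "C": 5}   # what each speaker can afford to buy
assert sum(purchases.values()) == total_airtime

equal_opportunity = len(set(purchases.values())) == 1
print(f"equal communicative opportunity: {equal_opportunity}")  # False

# An equal split restores equality, but only by capping A's speech well
# below what A could otherwise buy: something's gotta give.
equal_share = total_airtime // len(purchases)
print(f"equal share per speaker: {equal_share}")                # 33
```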

Deborah Hellman’s position is that “adequacy” is the key, so she formulates the principle that restrictions on speech are permissible only if government provides “an adequate alternative method of distribution.” Perhaps, but I am not sure that our understanding of freedom of expression as a principle of political morality is captured by the notion of “adequacy.” Suppose that I have been given an “adequate” opportunity to speak, but there is more that I want to say. I have additional points to make and additional audiences to reach. Does the limitation to adequate speech cohere with our intuition that the freedom of expression is violated by limits on the amount of expression? And suppose that I am given an “adequate” opportunity to speak, but someone else is given a greater opportunity? Isn’t equality of communicative opportunity required, irrespective of the “adequacy” of the opportunity I am provided?

In the context of campaign finance regulation, unlimited speech and equality of communicative opportunity are on a collision course–especially in the context of an economic system that permits pervasive inequalities in the distribution of resources.  A given legal regime can favor either unlimited speech or equality of communicative opportunity, but no regime can simultaneously achieve both.  I think this is the reason that Mike Seidman emphasizes that for him, Hellman’s “article raises very deep questions about whether a regime of civil liberties is really possible without significant reallocation of economic resources.”

I agree with Seidman: these are deep questions, and I am skeptical about the possibility that the notion of “adequacy” can provide a deeply satisfying answer.