
Tagged: artificial agents


LTAAA Symposium Wrap-up

I want to wrap up discussion in this wonderful online symposium on A Legal Theory for Autonomous Artificial Agents that Frank Pasquale and the folks at Concurring Opinions put together. I appreciate you letting me hijack your space for a week! Obviously, this symposium would not have been possible without its participants: Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian, and Harry Surden. I thank them all for their responses. You’ve all made me think very hard about the book’s arguments (I hope to continue these conversations over at my blog at samirchopra.com and on my Twitter feed at @EyeOnThePitch). As I indicated to Frank by email, I’d need to write a second book in order to do justice to them. I don’t want to waffle on too long, so let me just quote from the book to make clear our position with regard to artificial agents and their future legal status:


LTAAA Symposium: Complexity, Intentionality, and Artificial Agents

I would like to respond to a series of related posts made by Ken Anderson, Giovanni Sartor, Lawrence Solum, and James Grimmelmann during the LTAAA symposium. In doing so, I will touch on topics that came up many times in the debate here: the intentional stance, complexity, legal fictions (even zombies!), and the law. My remarks here will also respond to the very substantive, engaged comments made by Patrick O’Donnell and AJ Sutter to my responses over the weekend. (I have made some responses to Patrick and AJ in the comment spaces where their remarks were originally made.)


LTAAA Symposium: Response to Pagallo on Legal Personhood

Ugo Pagallo, with whom I had a very useful email exchange a few months ago, has written a very thoughtful response to A Legal Theory for Autonomous Artificial Agents. I find it useful because I think that on each of his four allegedly critical points, we are in greater agreement than Ugo imagines.

LTAAA Symposium: Response to Surden on Artificial Agents’ Cognitive Capacities

I want to thank Harry Surden for his rich, technically informed response to A Legal Theory for Autonomous Artificial Agents, and, importantly, for seizing on an important distinction we make early in the book when we say:

There are two views of the goals of artificial intelligence. From an engineering perspective, as Marvin Minsky noted, it is the “science of making machines do things that would require intelligence if done by men” (Minsky 1969, v). From a cognitive science perspective, it is to design and build systems that work the way the human mind does (Shanahan 1997, xix). In the former perspective, artificial intelligence is deemed successful along a performative dimension; in the latter, along a theoretical one. The latter embodies Giambattista Vico’s perspective of verum et factum convertuntur, “the true and the made are…convertible” (Vico 2000); in such a view, artificial intelligence would be reckoned the laboratory that validates our best science of the human mind. This perspective sometimes shades into the claim artificial intelligence’s success lies in the replication of human capacities such as emotions, the sensations of taste and self-consciousness. Here, artificial intelligence is conceived of as building artificial persons, not just designing systems that are “intelligent.”

The latter conception of AI, as committed to building ‘artificial persons’, is pretty clearly what causes much of the angst that LTAAA’s claims seem to occasion. And even though I have sought to separate the notion of ‘person’ from that of ‘legal person’, it seems that some conflation has continued to occur in our discussions thus far.

I’ve personally never understood why artificial intelligence was taken to be, or ever took itself to be, dedicated to the task of replicating human capacities, faithfully attempting to build “artificial persons” or “artificial humans”. This always seemed like such a boring, pointlessly limited task. Sure, the pursuit of cognitive science is entirely justified; the greater the understanding we have of our own minds, the better we will be able to understand our place in nature. But as for replicating and mimicking them faithfully: why bother with the ersatz when we have the real? We already have a perfectly good way to make humans or persons, and it is way more fun than doing mechanical engineering or writing code. The real action, it seems to me, lay in the business of seeing how we could replicate our so-called intellectual capacities without particular regard for the method of implementation; if the best method of implementation happened to be one that mapped well onto what seemed like the human mind’s way of doing things, then that would be an added bonus. The multiple realizability of our supposedly unique cognitive abilities would do wonders to displace our sense of uniqueness, acknowledge the possibility of other modes of existence, and re-invoke a sense of wonder about the elaborate tales we tell ourselves about our intentionality, consciousness, autonomy, or freedom of will.

Having said this, I can now turn to responding to Harry’s excellent post.

LTAAA Symposium: Response to Matwyshyn on Artificial Agents and Contracting

Andrea Matwyshyn’s reading of the agency analysis of contracting (offered in A Legal Theory for Autonomous Artificial Agents and also available at SSRN) is very rigorous and raises some very interesting questions. I thank her for her careful and attentive reading of the analysis and will do my best to respond to her concerns here. The doctrinal challenges that Andrea raises are serious and substantive for the extension and viability of our doctrinal approach. As I note below, accommodating some of her concerns is the perfect next step.

At the outset, I should state some of our motivations for adopting agency doctrine for artificial agents in contracting scenarios (these helped inform the economic incentive argument for maintaining some separation between artificial agents and their creators or deployers).

First,

[A]pplying agency doctrine to artificial agents would permit the legal system to distinguish clearly between the operator of the agent, i.e., the person making the technical arrangements for the agent’s operations, and the user of the agent, i.e., the principal on whose behalf the agent is operating in relation to a particular transaction.

Second,

Embracing agency doctrine would also allow a clear distinction to be drawn between the authority of the agent to bind the principal and the instructions given to the agent by its operator.

Third, there was an implicit, unstated economic incentive.


LTAAA Symposium: Response to Sutter on Artificial Agents

I’d like to thank Andrew Sutter for his largely critical, but very thought-provoking, response to A Legal Theory for Autonomous Artificial Agents. In responding to Andrew I will often touch on themes that I might have already tackled; I hope this repetition comes across as emphasis rather than redundancy. I am also concentrating on responding to broader themes in Andrew’s post rather than to specific doctrinal concerns (like service of process or registration; my attitude in these matters is that the law will find a way if it can discern the broad outlines of a desirable solution just ahead; service of process seemed intractable for anonymous bloggers, but it was solved somehow).

LTAAA Symposium: Artificial Agents and the Law of Agency

I am gratified that Deborah DeMott, whose work on agency doctrines was so influential in our writing, has written such an engaged (and, if I may say so, positive) response to our attempt, in A Legal Theory for Autonomous Artificial Agents, to co-opt common law agency doctrine for use with artificial agents. We did so knowing the fit would be neither exact nor precise, and certainly would not mesh with all established intuitions.

LTAAA Symposium: Legal Personhood for Artificial Agents

In this post, I’d like to make some brief remarks on the question of legal personhood for artificial agents, and in so doing, offer a response to Sonia Katyal’s and Ramesh Subramanian’s thoughtful posts on A Legal Theory for Autonomous Artificial Agents. I’d like to thank Sonia for making me think more about the history of personhood jurisprudence, and Ramesh for prompting me to think more about the aftermath of granting legal personhood, especially the issues of “Reproduction, Representation, and Termination” (and for alerting me to Gillick v West Norfolk and Wisbech Area Health Authority).

I have to admit that I don’t yet have any clearly formed thoughts on the issues Ramesh raises. This is not because they won’t be real issues down the line; indeed, I think automated judging is more than just a gleam in the eye of those folks who attend ICAIL conferences. Rather, those issues will perhaps snap into sharper focus once artificial agents acquire more functionality, become more ubiquitous, and, more interestingly, come to occupy roles formerly occupied by humans. We will then have a clearer idea of how to frame those questions more precisely with respect to a particular artificial agent and a particular factual scenario.

An ‘Ethical Turing Test’ for Autonomous Artificial Agents?

 

My first encounters with the legal issues of autonomous artificial agents came a few years ago in the international law of autonomous lethal weapons systems. In an email exchange with an eminent computer scientist working on the problem of engineering systems that could follow the fundamental laws of war, I expressed some doubt that it would be quite so easy as all that to come up with algorithms that could, in effect, “do Kant” (in giving effect to the categorical legal imperative not to target civilians). Or, even more problematically, “do Bentham and Mill” (in providing a proportionality calculus of civilian harm set against military necessity). Indeed (I noted primly, clutching my Liberal Arts degree firmly in the Temple of STEM), we humans didn’t have an agreed-upon way of addressing the proportionality calculus ourselves, given that it seemed to invoke incomparable and incommensurable values. So how was the robot going to do what we couldn’t?

The engineer’s answer was simultaneously cheering and alarming, but mostly insouciant: ‘I don’t have to solve the philosophical problems. My machine programming just has to do on average as well as, or slightly better than, human soldiers do.’ Which, in effect, sets up what we might call an “ethical Turing Test” for the ideal autonomous artificial agent. “Ethics for Robot Soldiers,” as Matthew Waxman and I are calling it in a new project on autonomous robotic weapons. If, in practice, we can’t tell which is the human and which is the machine in matters of ethical decision-making, then it turns out not to matter how we get to that point. Getting there means, in this case, not so much human versus machine, but instead behaviorism versus intentionality.

It is on account of reflections on the autonomous robot soldiers of the (possible) future that I so eagerly read Samir Chopra and Laurence White’s book. It does not disappoint. It is the only general theory of what might emerge across multiple areas of law over the next few decades. Still more importantly, in my view, it is the only account on offer that finds the sweet spot between sci-fi speculation so rampant that it merely assumes away the problems by making artificial agents into human beings, on the one hand, and an analysis so granular that it offers only a collection of discrete legal problems rather than a theory of agents and agency, on the other. It accomplishes all this splendidly.

But it is precisely because the text finds that sweet spot that I have a nagging question, one that is perhaps answered in the book but which I simply didn’t adequately understand. Let me put it directly, as a way of understanding the book’s fundamental frame. In the struggle between behaviorism and the “intentional stance” that runs throughout the book, particularly in its encounters with the law of agency as found in the Restatement, I was not sure where the argument finally comes down regarding the status of intentionality. At some points, intentionality did seem to be an irreducible aspect of certain behaviors, such as human relationships, insofar as those behaviors could only be such under an intentional description. At other points, it seemed as though intentionality was an irreducible aspect of human behavior, even though the artificial agent might still pass the Turing Test on a purely behavioral basis and be indistinguishable from the human.

At still other points, I took it that intentionality was no longer an ontological status but something closer to an “organizational heuristic” for how human beings direct themselves toward particular goals: a human methodology, true, but merely one way of going about means-to-ends behaviors, in which an artificial agent might accomplish the task quite differently. And in that case, I had a further question as to whether the underlying view of the “formation of judgment” was one that assumed the model of “supply ends, I’ll supply means”, or whether, instead, it held, at least as far as human judgment goes, that the formation of judgment does not cleanly separate ends and means in this way. It seemed to matter, at least for the conceptualization of how the artificial agent made its judgments, and of what those judgments would finally consist.

It is entirely possible that I have not understood something fundamental in the book, and the answer to what “intention” means in the text is actually quite plain. But this question, in relation to behaviorism and the artificial agent, is what I have found hardest to grasp. I suppose this is particularly so when, for good reasons, the book is mostly about behavior, not intention. The reason I find the question important is that many of the crucial relationships (and also judgments, per the worry above) that might be permitted, or ascribed, to artificial agents depend upon a certain relation: that of a fiduciary, for example, with all the peculiar “relational” positioning implied in that special form of agency.

Does being a fiduciary, then, at least in the strong sense of exercising discretion, imply relationships that only exist under a certain intention? Or relationships that might be said to exist only under a certain affect, love, for example? And does it finally matter? Or is the position taken by the book finally one that either reduces intention to the sum of behaviors, or else suggests that for the purposes for which we create (“endow,” more precisely) artificial agents, behavior is enough, without its being under any kind of description? I apologize for being overly abstract and obscure here. Reduced to the most basic: what is the status, on this general theory, of intention? And with that question, let me say again: outstanding book; congratulations!



Robots in the Castle

In thinking about what Samir and Laurence offer us in their new book, A Legal Theory for Autonomous Artificial Agents, I am reminded of the old Gothic castle described in Blackstone’s Commentaries, whose “magnificent and venerable” spaces had been badly neglected and whose “inferior apartments” had been retrofitted “for a modern inhabitant”.

Feel me, here: I am not dissing the book but, rather, sympathizing about law’s sometimes feeble ability to adapt to modern times and its need to erect what Blackstone described as a mass of legal “fictions and circuities”, leaving the law, not unlike the stairways in its castle, “winding and difficult.”

Understanding this predicament all too well, I am not surprised to see Ryan Calo’s disappointment in light of the title and description of the book, which also seemed to me to promise something much more than a mere retrofitting of the castle, offering up instead a legal theory aimed at resurrecting the magnificent and venerable halls of a jurisprudence unmuddled by these strange new entities in a realm no longer populated exclusively by human agents.

Samir and Laurence know full well that I am totally on board in thinking that the law of agency has plenty to offer to the legal assessment of the operations of artificial entities. I first wrote about this in 1999, when Canada’s Uniform Law Commission asked me to determine whether computers could enter into contracts that no human had reviewed or, for that matter, even knew existed. In my report, later republished as an article called “Spirits in the Material World,” I proposed a model based on the law of agency as a preferable approach to the one in place at the time (and still in place), which merely treats machine systems as an extension of the human beings utilizing them.

At the time, I believed the law of agency held much promise for software bots and robots. The “slave morality” programmed into these automatic beasts seemed in line with that imagined in the brutal jus civile of ancient Rome, itself programmed in a manner that would allow brutish Roman slaves to interact in commerce with Roman citizens despite having no legal status. The Roman system had no problem with these non-status entities implicating their owners. After all: Qui facit per alium facit per se (a fancy Latin phrase designating the Roman law fiction that treats one who acts through another as having himself so acted). What a brilliant way to get around capacity and status issues! And the modern law of agency, as it subsequently developed, offers up fairly nuanced notions, like the “authority” concept, that can also be used to limit the responsibility of the person who acts through an (artificial) other.

The book does a great job at carrying out the analysis in various domains and, much to my delight, extends the theory to a range of situations beyond contracting bots.

In my view, the genius of agency law as a means of resurrecting the castle is that it can recognize and respond to the machine system without having to worry about, or even entertain, the possibility that the machine is a person. (For that reason, I would have left out the chapter on personhood, proposals for which I think have been the central reason why this relatively longstanding set of issues has yet to be taken seriously by those who have not taken the blue pill.) Agency law permits us to simply treat the bot like the child who lacks the capacity to contract but still manages to generate an enforceable reliance interest in some third party when making a deal while purporting to act on the authority of a parent.

But in my view (I thought it then and I think it still), using agency rules to solve the contracting problem is still little more than scaffolding used to retrofit the castle. As my fave American jurist, Lon Fuller, might have described it, the need to treat bots and robots as though they were legal agents in and of itself represents the pathology of law:

“When all goes well and the established legal rules encompass neatly the social life they are intended to regulate, there is little occasion for fictions. There is also little occasion for philosophizing, for the law then proceeds with a transparent simplicity suggesting no need for reflective scrutiny. Only in illness, we are told, does the body reveal its complexity. Only when legal reasoning falters and reaches out clumsily for help do we recognize what a complex undertaking the law is.”

The legal theories of both Blackstone and Fuller tell me that there is good reason to be sympathetic to the metaphors and legal fictions that Samir and Laurence offer us, even if they are piecemeal. To be clear: although the “legal fiction” label is sometimes pejorative, I am not using it in that sense. Rather, I am suggesting that the approach in the book resembles a commonly used juridical device of extremely high value. Legal fictions of this sort exhibit what Fuller recognized as an “exploratory” function; they allow a kind of intellectual experimentation that will help us inch towards a well-entrenched legal theory.

Exploring the limits of the agency rules may indeed solve a number of doctrinal problems associated with artificial entities.

But (here I need a new emoticon that expresses that the following remark is offered in the spirit of sincerity and kindness) to pretend that the theory offered in this book does more than it does, or to try to defend its approach as a cogent, viable, and doctrinally satisfying unified field theory of robotics, risks missing all sorts of important potential issues and outcomes, and may thwart a broader multi-pronged analysis that is crucial to getting things right.

I take it that Samir is saying in his replies to Ryan that he in fact holds no such pretense and that he does not claim to have all of the answers. But that, in my view, was not Ryan’s point at all.

My take-away from that exchange, and from my own reflections on the book, is that it will also be very important to consider various automation scenarios where agency is not the right model and to ask ourselves why it is not. This is something I have not yet investigated or thought about very deeply. Still, I am willing to bet a large pizza (at the winner’s choice of location) that there are at least as many robo-scenarios where thinking of the machine entity as an artificial agent in the legal sense does more harm than good. If this is correct, agency law may offer some doctrinal solutions (as my previous work suggests), but that doesn’t in and of itself provide us with a legal theory of artificial agents.

When asked to predict the path of cyberlaw in 1995, Larry Lessig very modestly said that if he had to carve the meaning of the First Amendment into silicon, he was certain that he would get it fundamentally wrong. There hadn’t been enough time for the culture of the medium to evolve to be sure of the right answers. And for that very reason, he saw the slow and steady march of the common law as the best possible antidote.

I applaud the bravery of Chopra and White in their attempt to cull a legal theory for bots, robots, and the like. But I share Ryan’s concerns about the shortcomings in the theory of artificial agents as offered. And in addressing his concerns, rather than calling Ryan’s own choice of intellectual metaphors “silly” or “inappropriate,” it might be more valuable to start thinking about scenarios in which the agency analysis offered falls short or is inapplicable, and about what other models we might also consider and for what situations.

I surely do not fault the authors for failing to come up with the unified field theory of robotics (we can save that for Michael Froomkin’s upcoming conference in Miami!), but I would like us to think also about what the law of agency cannot tell us about the range of legal and ethical implications that will arise from the social implementation of automation, robotics, and artificial intelligence across various sectors.