Tagged: artificial agents

Computable Contracts Explained – Part 1

I had the occasion to teach “Computable Contracts” to the Stanford Class on Legal Informatics recently.  Although I have written about computable contracts here, I thought I’d explain the concept in a more accessible form.

I. Overview: What is a Computable Contract?

What is a Computable Contract?   In brief, a computable contract is a contract that a computer can “understand.” In some instances, computable contracting enables a computer to automatically assess whether the terms of a contract have been met.

How can computers understand contracts?  Here is the short answer (a more in-depth explanation appears below).  First, the concept of a computer “understanding” a contract is largely a metaphor.  The computer does not understand the contract at the deep conceptual or symbolic level that a literate person does; it “understands” it only in a more limited sense.  Contracting parties express their contract in the language of computers – data – which allows the computer to reliably identify the contract’s components and subjects.  The parties also provide the computer with a series of rules that allow it to react in a sensible way that is consistent with the underlying meaning of the contractual promises.
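To make the “data plus rules” idea concrete, here is a minimal, purely hypothetical sketch in Python. The field names and the rule are invented for illustration and are not drawn from any actual contracting system; they simply show contract terms living in a structured record, with a rule telling the computer how to react when the settlement date arrives.

```python
from datetime import date

# Hypothetical contract terms expressed as data rather than prose.
# All field names are invented for illustration.
terms = {
    "ticker": "AAPL",
    "quantity": 100,
    "price_per_share": 400.00,
    "settlement_date": date(2015, 1, 10),
}

# A rule the parties supply alongside the data: how the computer should
# react so that its behavior stays consistent with the underlying promise.
def action_for(terms, today):
    if today >= terms["settlement_date"]:
        amount = terms["quantity"] * terms["price_per_share"]
        return f"Deliver {terms['quantity']} {terms['ticker']} shares against payment of {amount:.2f} USD"
    return "No action due yet"

print(action_for(terms, date(2015, 1, 10)))
# Deliver 100 AAPL shares against payment of 40000.00 USD
```

Because every term occupies a labeled field, the program can identify the subject matter and the obligations without parsing any natural language.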

Aren’t contracts complex, abstract, and executed in environments of legal and factual uncertainty?  Some are, but some aren’t. The short answer here is that the contracts that are made computable don’t involve the abstract, difficult or relatively uncertain legal topics that tend to occupy lawyers.  Rather (for the moment at least), computers are typically given contract terms and conditions with relatively well-defined subjects and determinable criteria that tend not to involve significant legal or factual uncertainty in the average case.

For this reason, there are limits to computable contracts: only small subsets of contracting scenarios can be made computable.  However, it turns out that these contexts are economically significant. Not all contracts can be made computable, but importantly, some can.

Importance of Computable Contracts 

There are a few reasons to pay attention to computable contracts.   For one, they have been quietly appearing in many industries, from finance to e-commerce.  Over the past 10 years, for instance, many modern contracts to purchase financial instruments (e.g. equities or derivatives) have transformed from traditional contracts to electronic, “data-oriented” computable contracts.   Were you to examine a typical contract to purchase a standardized financial instrument these days, you would find that it looked more like a computer database record (i.e. computer data), and less like lawyerly writing in a Microsoft Word document.

Computable contracts also have new properties that traditional, English-language, paper contracts do not have.  I will describe this in more depth in the next post, but in short, computable contracts can serve as inputs to other computer systems.  These other systems can take computable contracts and do useful analysis not readily done with traditional contracts. For instance, a risk management system at a financial firm can take computable contracts as direct inputs for analysis, because, unlike traditional English contracts, computable contracts are data objects themselves.
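As a toy illustration of that point (the record fields and the aggregation below are invented for this sketch, not taken from any actual risk system), an analysis routine can sum notional exposure directly over a set of contract records:

```python
from collections import defaultdict

def notional_exposure_by_ticker(contracts):
    """Sum notional value (quantity * price) of purchase contracts per ticker."""
    exposure = defaultdict(float)
    for contract in contracts:
        exposure[contract["ticker"]] += (
            contract["quantity"] * contract["price_per_share"]
        )
    return dict(exposure)

# Two contract records fed straight into the analysis, no text parsing needed.
contracts = [
    {"ticker": "AAPL", "quantity": 100, "price_per_share": 400.00},
    {"ticker": "AAPL", "quantity": 50, "price_per_share": 410.00},
]
print(notional_exposure_by_ticker(contracts))  # {'AAPL': 60500.0}
```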

II. Computable Contracts in More Detail

With that brief overview in hand, the next few parts discuss computable contracts in more detail.

A. What is a Computable Contract?

To understand computable contracts, it is helpful to start with a simple definition of a contract generally. 

A contract (roughly speaking) is a promise to do something in the future, usually according to some specified terms or conditions, with legal consequences if the promise is not performed.   For example, “I promise to sell you 100 shares of Apple stock for $400 per share on January 10, 2015.”

A computable contract is a contract that has been deliberately expressed by the contracting parties in such a way that a computer can:

1) understand what the contract is about;

2) determine whether or not the contract’s promises have been complied with (in some cases), as the sketch below illustrates.
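As a minimal, purely hypothetical sketch of that second capability (the contract record, the delivery event, and every field name are invented here rather than drawn from any real system), a compliance check can compare the promised terms against what was actually observed:

```python
from datetime import date

# Hypothetical contract record and an observed delivery event.
contract = {
    "ticker": "AAPL",
    "quantity": 100,
    "price_per_share": 400.00,
    "settlement_date": date(2015, 1, 10),
}

delivery = {
    "ticker": "AAPL",
    "shares_delivered": 100,
    "price_paid_per_share": 400.00,
    "delivered_on": date(2015, 1, 10),
}

def promise_performed(contract, delivery):
    """Return True if the observed delivery satisfies the contract's terms."""
    return (
        delivery["ticker"] == contract["ticker"]
        and delivery["shares_delivered"] >= contract["quantity"]
        and delivery["price_paid_per_share"] == contract["price_per_share"]
        and delivery["delivered_on"] <= contract["settlement_date"]
    )

print(promise_performed(contract, delivery))  # True
```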

How can a computer “understand” a contract, and how can compliance with legal obligations be “computed” electronically?

To comprehend this, it is crucial to first appreciate the particular problems that computable contracts were developed to address.


LTAAA Symposium Wrap-up

I want to wrap up discussion in this wonderful online symposium on A Legal Theory for Autonomous Artificial Agents that Frank Pasquale and the folks at Concurring Opinions put together. I appreciate you letting me hijack your space for a week! Obviously, this symposium would not have been possible without its participants – Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian and Harry Surden – and I thank them all for their responses. You’ve all made me think very hard about the book’s arguments (I hope to continue these conversations over at my blog at samirchopra.com and on my Twitter feed at @EyeOnThePitch). As I indicated to Frank by email, I’d need to write a second book in order to do justice to them. I don’t want to waffle on too long, so let me just quote from the book to make clear what our position is with regard to artificial agents and their future legal status:


LTAAA Symposium: Complexity, Intentionality, and Artificial Agents

I would like to respond to a series of related posts made by Ken Anderson, Giovanni Sartor, Lawrence Solum, and James Grimmelmann during the LTAAA symposium. In doing so, I will touch on topics that came up many times in the debate here: the intentional stance, complexity, legal fictions (even zombies!) and the law. My remarks here will also respond to the very substantive, engaged comments made by Patrick O’Donnell and AJ Sutter to my responses over the weekend. (I have made some responses to Patrick and AJ in the comment spaces where their remarks were originally made).


LTAAA Symposium: Response to Pagallo on Legal Personhood

Ugo Pagallo, with whom I had a very productive email exchange a few months ago, has written a very useful response to A Legal Theory for Autonomous Artificial Agents.  I find it useful because I think that, on each of his four allegedly critical points, we are in greater agreement than Ugo imagines.

LTAAA Symposium: Response to Surden on Artificial Agents’ Cognitive Capacities

I want to thank Harry Surden for his rich, technically-informed response  to A Legal Theory for Autonomous Artificial Agents, and importantly, for seizing on an important distinction we make early in the book when we say:

There are two views of the goals of artificial intelligence. From an engineering perspective, as Marvin Minsky noted, it is the “science of making machines do things that would require intelligence if done by men” (Minsky 1969, v). From a cognitive science perspective, it is to design and build systems that work the way the human mind does (Shanahan 1997, xix). In the former perspective, artificial intelligence is deemed successful along a performative dimension; in the latter, along a theoretical one. The latter embodies Giambattista Vico’s perspective of verum et factum convertuntur, “the true and the made are…convertible” (Vico 2000); in such a view, artificial intelligence would be reckoned the laboratory that validates our best science of the human mind. This perspective sometimes shades into the claim artificial intelligence’s success lies in the replication of human capacities such as emotions, the sensations of taste and self-consciousness. Here, artificial intelligence is conceived of as building artificial persons, not just designing systems that are “intelligent.”

The latter conception of AI as being committed to building ‘artificial persons’ is what, it is pretty clear, causes much of the angst that LTAAA’s claims seem to occasion. And even though I have sought to separate the notion of ‘person’ from ‘legal person’, it seems that some conflation has continued to occur in our discussions thus far.

I’ve personally never understood why artificial intelligence was taken to be, or ever took itself to be, dedicated to the task of replicating human capacities, faithfully attempting to build “artificial persons” or “artificial humans”. This always seemed like such a boring, pointlessly limited task. Sure, the pursuit of cognitive science is entirely justified; the greater the understanding we have of our own minds, the better we will be able to understand our place in nature. But as for replicating and mimicking them faithfully: Why bother with the ersatz when we have the real? We already have a perfectly good way to make humans or persons and it is way more fun than doing mechanical engineering or writing code. The real action, it seems to me, lies in the business of seeing how we could replicate our so-called intellectual capacities without particular regard for the method of implementation; if the best method of implementation happened to be one that mapped on well to what seemed like the human mind’s way of doing it, then that would be an added bonus. The multiple-realizability of our supposedly unique cognitive abilities would do wonders to displace our sense of uniqueness, acknowledge the possibility of other modes of existence, and re-invoke the sense of wonder about the elaborate tales we tell ourselves about our intentionality, consciousness, autonomy or freedom of will.

Having said this, I can now turn to responding to Harry’s excellent post.

LTAAA Symposium: Response to Matwyshyn on Artificial Agents and Contracting

Andrea Matwyshyn’s reading of the agency analysis of contracting (offered in A Legal Theory for Autonomous Artificial Agents and also available at SSRN) is very rigorous and raises some very interesting questions. I thank her for her careful and attentive reading of the analysis and will try to do my best to respond to her concerns here. The doctrinal challenges that Andrea raises are serious and substantive for the extension and viability of our doctrine. As I note below, accommodating some of her concerns is the perfect next step.

At the outset, I should state what some of our motivations were for adopting agency doctrine for artificial agents in contracting scenarios (these helped inform the economic incentivizing argument for maintaining some separation between artificial agents and their creators or their deployers).

First,

[A]pplying agency doctrine to artificial agents would permit the legal system to distinguish clearly between the operator of the agent, i.e., the person making the technical arrangements for the agent’s operations, and the user of the agent, i.e., the principal on whose behalf the agent is operating in relation to a particular transaction.

Second,

Embracing agency doctrine would also allow a clear distinction to be drawn between the authority of the agent to bind the principal and the instructions given to the agent by its operator.

Third, an implicit, unstated economic incentive.


LTAAA Symposium: Response to Sutter on Artificial Agents

I’d like to thank Andrew Sutter for his largely critical, but very thought-provoking, response to A Legal Theory for Autonomous Artificial Agents. In responding to Andrew I will often touch on themes that I might have already tackled. I hope this repetition comes across as emphasis, rather than as redundancy. I’m also concentrating on responding to broader themes in Andrew’s post as opposed to the specific doctrinal concerns (like service-of-process or registration; my attitude in these matters is that the law will find a way if it can discern the broad outlines of a desirable solution just ahead; service-of-process seemed intractable for anonymous bloggers but it was solved somehow).

LTAAA Symposium: Artificial Agents and the Law of Agency

I am gratified that Deborah DeMott, whose work on agency doctrines was so influential in our writing, has written such an engaged (and, if I may say so, positive) response to our attempt, in A Legal Theory for Autonomous Artificial Agents, to co-opt the common law agency doctrine for use with artificial agents. We did so knowing the fit would be neither exact nor precise, and certainly would not mesh with all established intuitions.

LTAAA Symposium: Legal Personhood for Artificial Agents

In this post, I’d like to make some brief remarks on the question of legal personhood for artificial agents, and in so doing, offer a response to Sonia Katyal’s and Ramesh Subramanian’s thoughtful posts on A Legal Theory for Autonomous Artificial Agents. I’d like to thank Sonia for making me think more about the history of personhood jurisprudence, and Ramesh for prompting me to think more about the aftermath of granting legal personhood, especially the issues of “Reproduction, Representation, and Termination” (and for alerting me to Gillick v West Norfolk and Wisbech Area Health Authority).

I have to admit that I don’t have, as yet, any clearly formed thoughts on the issues Ramesh raises. This is not because they won’t be real issues down the line; indeed, I think automated judging is more than just a gleam in the eye of those folks that attend ICAIL conferences. Rather, I think those issues will snap into sharper focus once artificial agents acquire more functionality, become more ubiquitous, and, more interestingly, come to occupy roles formerly occupied by humans. I think that, at that point, we will have a clearer idea of how to frame those questions more precisely with respect to a particular artificial agent and a particular factual scenario.

An ‘Ethical Turing Test’ for Autonomous Artificial Agents?


My first encounters with legal issues of autonomous artificial agents came a few years ago in the international law of autonomous lethal weapons systems. In an email exchange with an eminent computer scientist working on the problems of engineering systems that could follow the fundamental laws of war, I expressed some doubt that it would be quite so easy as all that to come up with algorithms that could, in effect, “do Kant” (in giving effect to the categorical legal imperative not to target civilians).  Or, even more problematically, “do Bentham and Mill” (in providing a proportionality calculus of civilian harm set against military necessity). Indeed (I noted primly, clutching my Liberal Arts degree firmly in the Temple of STEM), we humans didn’t have an agreed-upon way of addressing the proportionality calculus ourselves, given that it seemed to invoke incomparable and incommensurable values.  So how was the robot going to do what we couldn’t?

The engineer’s answer was simultaneously cheering and alarming, but mostly insouciant: ‘I don’t have to solve the philosophical problems. My machine programming just has to do on average as well or slightly better than human soldiers do.’  Which, in effect, sets up what we might call an “ethical Turing Test” for the ideal autonomous artificial agent.  “Ethics for Robot Soldiers,” as Matthew Waxman and I are calling it in a new project on autonomous robotic weapons.  If, in practice, we can’t tell which is the human and which is the machine in matters of ethical decision-making, then it turns out not to matter how we get to that point. Getting there means, in this case, not so much human versus machine, but instead behaviorism versus intentionality.

It is on account of reflections on autonomous robot soldiers of the (possible) future that I so eagerly read Samir Chopra and Laurence White’s book. It does not disappoint.  It is the only general theory of what might emerge across multiple areas of law over the next few decades. Still more importantly in my view, it is the only account on offer that manages to find the sweet spot between sci-fi speculation so rampant that it merely assumes away the problems by making artificial agents into human beings, on the one hand, and an account so granular that it offers only a collection of discrete legal problems rather than a theory of agents and agency, on the other. It accomplishes all this splendidly.

But it is precisely because the text finds that sweet spot that I have a nagging question – one that is perhaps answered in the book but which I simply didn’t adequately understand. But let me put it directly, as a way of understanding the book’s fundamental frame. In the struggle between behaviorism and the “intentional stance” that runs throughout the book, but particularly in its encounters with the law of agency, and particularly as found in the Restatement, I was not sure where the argument finally comes down regarding the status of intentionality. At some points, intentionality did seem to be an irreducible aspect of certain behaviors, insofar as those behaviors could only be such under an intentional description, such as human relationships. But at other times it seemed as though intentionality was an irreducible aspect of human behavior – even though the artificial agent might still pass the Turing Test on a purely behavioral basis and be indistinguishable from the human.

At still other points, I thought I was to understand that intentionality was no longer an ontological status, but something closer to an “organizational heuristic” for how human beings direct themselves toward particular goals – a human methodology, true, but merely one way of going about means-to-ends behaviors, in which an artificial agent might accomplish the task quite differently.  And in that case, I had a further question as to whether the underlying view of the “formation of judgment” was one that assumed the model of “supply ends, I’ll supply means” – or whether, instead, it held, at least as far as human judgment goes, a view that the formation of judgment does not cleanly separate them in this way.  It seemed to matter, at least for the conceptualization of how the artificial agent made its judgments, and in what they would finally consist.

It is entirely possible that I have not understood something fundamental in the book, and that the answer to what “intention” means in the text is actually quite plain. But this question, in relation to behaviorism and the artificial agent, is what I have found hardest to grasp. I suppose this is particularly so when, for good reasons, the book is mostly about behavior, not intention. The reason I find the question important is that it seems to me that many of the crucial relationships (and also judgments, per the worry above) that might be permitted, or ascribed, to artificial agents depend upon a certain relation – that of a fiduciary, for example, with all the peculiar “relational” positioning that is implied in that special form of agency.

Does being a fiduciary, then, at least in the strong sense of exercising discretion, imply relationships that only exist under a certain intention? Or relationships that might be said to exist only under a certain affect – love, for example? And does it finally matter? Or is the position taken by the book finally one that either reduces the intention to the sum of behaviors, or else suggests that for the purposes for which we create – “endow,” more precisely – artificial agents, behavior is enough, without it being under any kind of description? I apologize for being overly abstract and obscure here. Reduced to the most basic: what is the status, on this general theory, of intention?  And with that question, let me say again: Outstanding book; congratulations!