
Category: Symposium (Autonomous Artificial Agents)


Autonomous Artificial Agents: Contracting or Expanding?

Is this the book to separate the legal issues of “autonomous artificial agents” from the more controversial questions of whether code or silicon can function as “people”? The one that can stick to the practical issues of contract formation, tort liability and the like, without blurring the boundaries between legal personhood and personhood in a fuller sense?

I think this was the intention of the authors (C&W). And I certainly agree with other participants in the forum that they’ve done a wonderful job of identifying and analyzing many key legal and philosophical issues in this field; no doubt the book will be framing the debate about autonomous artificial “agents” (AAAs) for years to come. But the style of C&W’s argument and the philosophical positions they take may make it hard to warm up to some of their analysis and recommendations unless you’re happy to take a rather expansive view of the capabilities of artificial intelligence — such as imputing a moral consciousness to programs and robots. And even if you’re happy to do so, what about everyone else? I’ll explain below the fold.


Autonomous Agents and Extension of Law: Policymakers Should be Aware of Technical Nuances

This post expands upon a theme from Samir Chopra and Lawrence White’s excellent and thought-provoking book, A Legal Theory for Autonomous Artificial Agents. One question pervades the text: to what extent should lawmakers import or extend existing legal frameworks to cover the activities of autonomous (or partially autonomous) computer systems and machines? These are legal frameworks that were originally created to regulate human actors. For example, the authors query whether the doctrines and principles of agency law can be mapped onto actions carried out by automated systems on behalf of their users. As the book notes, autonomous systems are already an integral part of existing commercial areas (e.g. finance) and may be poised to emerge in others over the next few decades (e.g. autonomous, self-driving automobiles). However, it is helpful to expand further upon one dimension raised by the text: the relationship between the technology underlying autonomous agents and the activity or results produced by that technology.

Two Views of Artificial Intelligence

The emergence of partially autonomous systems – computer programs (or machines) carrying out activities at least partially in a self-directed way, on behalf of their users – is closely aligned with the field of Artificial Intelligence (AI) and developments therein. (AI is a sub-discipline of computer science.) What is the goal of AI research? There is probably no universally agreed-upon answer to this question, as there have been a range of approaches and criteria for what counts as a successful advance in the field. However, some AI researchers have helpfully clarified two dimensions along which we can think about AI developments. Consider a spectrum of possible criteria under which one might label a system a “successful” AI product:

View 1) We might consider a system to be artificially intelligent only if it produces “intelligent” results based upon processes that model, approach or replicate the high-level cognitive abilities or abstract reasoning skills of humans; or

View 2) We might evaluate a system primarily based upon the quality of the output it produces – if it produces results that humans would consider accurate and helpful – even if those results or output came about through processes that do not necessarily model, approach, or resemble actual human cognition, understanding, or reasoning.

We can understand the first view as being concerned with creating systems that replicate, to some degree, something approaching human thinking and understanding, whereas the second is more concerned with producing results or output from computer agents that would be considered “intelligent” and useful, even if produced by systems that likely do not approach human cognitive processes. (Russell and Norvig, Artificial Intelligence: A Modern Approach, 3rd ed., 2009, pp. 1-5.) These views represent poles on a spectrum, and many actual positions fall in between. However, this distinction is more than philosophical. It bears on whether it makes sense to extend existing legal doctrines to cover the activities of artificial agents. Let us consider each view briefly in turn, along with some possible implications for law.

View 1 – Artificial Intelligence as Replicating Some or All Human Cognition

The first characterization – that computer systems will be successful within AI when they produce activities resulting from processes approaching the high-level cognitive abilities of humans – is an expansive and perhaps more ambitious characterization of the goals of AI. It also seems to be the one most closely associated with the view of AI research in the public imagination. In popular culture, artificially intelligent systems replicate and instantiate – to varying degrees – the thinking faculties of humans (e.g. the ability to engage in abstract thought, carry on an intelligent conversation, or understand or philosophize about concepts at a depth associated with intelligence). I raise this variant primarily to note that, despite (what I believe is) a common lay view of the state of the research, this “strong” vision of AI is not something that has been realized (or is necessarily near realization) in the existing state-of-the-art systems that are considered successful products of AI research. As I will suggest shortly, this nuance may not be something within the awareness of the lawmakers and judges who will be the arbiters of decisions concerning systems that are labeled artificially intelligent. Although AI research has not yet produced artificial human-level cognition, that does not mean that AI research has been unsuccessful. Quite to the contrary – over the last 20 years AI research has produced a series of more limited, but spectacularly successful, systems as judged by the second view.

View 2 – “Intelligent” Results (Even if Produced by Non-Cognitive Processes)

The second characterization of AI is perhaps more modest, and can be considered more “results oriented.” This view considers a computer system (or machine) to be a success within artificial intelligence based upon whether it produces output or activities that people would agree (colloquially speaking) are “good” and “accurate” and “look intelligent.” In other words, a useful AI system in this view is characterized by results or output that are likely to approach or exceed what would have been produced by a human performing the same task. Under this view, if the system or machine produces useful, human-like results, it is a successful AI machine – irrespective of whether those results were produced by a computer-based process instantiating or resembling human cognition, intelligence or abstract reasoning.

In this second view, AI “success” is measured based upon whether the autonomous system produces “intelligent” (or useful) output or results. We can use what would be considered “intelligent” conduct by a similarly situated human as a comparator. If a modern autopilot system is capable of landing airplanes in difficult conditions (such as thick fog) at a success rate that meets or exceeds that of human pilots under similar conditions, we might label it a successful AI system under this second approach. This would be the case even if we all agreed that the autonomous autopilot system did not have a meaningful understanding of the concepts of “airplanes,” “runways,” or “airports.” Similarly, we might label IBM’s Jeopardy-playing “Watson” computer system a successful AI system, since it was capable of producing highly accurate answers to a surprisingly wide and difficult range of questions – the same answers that strong human Jeopardy champions would have produced. However, there is no suggestion that Watson’s results were the product of the same high-level cognitive understanding and processes that likely animated the answers of human champions like Ken Jennings. Rather, Watson’s accurate output came from techniques such as highly sophisticated statistical machine-learning algorithms that were able to quickly rank possible candidate answers through immense parallel processing of large amounts of existing written documents that happened to contain a great deal of knowledge about the world.
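To make the comparator idea concrete, here is a minimal sketch (my own illustration, not from the book) of a “view 2”-style evaluation: the system is judged purely on the results it produces, measured against a human baseline on the same task, with no claim about understanding. The trial data and the landing scenario are invented.

```python
# Hedged sketch of results-oriented ("view 2") evaluation, with invented data.
# 1 = successful landing in thick fog, 0 = failed or aborted approach.
autopilot_trials = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
human_pilot_trials = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]

def success_rate(outcomes):
    """Fraction of attempts that succeeded."""
    return sum(outcomes) / len(outcomes)

autopilot_rate = success_rate(autopilot_trials)
human_rate = success_rate(human_pilot_trials)

# Under view 2 the system "counts" as successful AI if its output meets or
# exceeds the human comparator; nothing is claimed about how it got there.
print(f"autopilot: {autopilot_rate:.0%}, human pilots: {human_rate:.0%}")
print("successful under view 2?", autopilot_rate >= human_rate)
```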

Machine-Translation: Automated Translation as an Example

To understand this distinction between AI views rooted in computer-based cognition and those rooted in “intelligent” or accurate results, it is helpful to examine the history of computer-based language translation (e.g. English to French). Translation (at least superficially) appears to be a task deeply connected to the human understanding of the meaning of language, and the conscious replication of that meaning in the target language. Early approaches to machine translation followed this cue, and sought to convey to the computer system aspects – like the rules of grammar in both languages, and the pairing of words with the same meanings in both languages – that might mimic the internal structures undergirding human cognition and translation. However, this meaning- and rules-based approach to translation proved limited, and surprised researchers by producing rather poor results from rules of word matching and syntactical construction. Such systems had difficulty determining whether the word “plant” in English should be translated to the equivalent of “houseplant” or “manufacturing plant” in French. Further efforts attempted to “teach” the computer rules about how to understand and make more accurate distinctions for ambiguously situated words, but still did not produce marked improvements in translation quality.
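As a rough illustration of why the rules-and-dictionary approach struggled, consider the following toy translator (my own sketch, not any historical system): it pairs English words with French equivalents and translates word by word, and it simply has no principled way to pick a sense for an ambiguous word like “plant.”

```python
# Toy sketch of a dictionary/rules-based translator (illustrative only).
lexicon = {
    "the": "le",
    "plant": ["plante", "usine"],  # houseplant vs. manufacturing plant
    "is": "est",
    "green": "verte",
}

def translate(sentence):
    out = []
    for word in sentence.lower().split():
        entry = lexicon.get(word, word)
        if isinstance(entry, list):
            # No reliable rule for choosing the sense: early systems tried
            # hand-written disambiguation rules here, with limited success.
            entry = entry[0]
        out.append(entry)
    return " ".join(out)

# Word-for-word output, with sense and agreement errors a human would avoid.
print(translate("The plant is green"))
```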

Machine Learning Algorithms: Using Statistics to Produce Surprisingly Good Translations

However, over the last 10-15 years a markedly different approach to computer translation emerged – made famous by Google and others. This approach was not primarily based upon top-down communication of knowledge to a computer system (e.g. language pairings and rules of meaning). Rather, many of the successful translation techniques developed were largely statistical in nature, relying on machine-learning algorithms to scour large amounts of data and create a complex representation of correlations between languages. Google Translate – and other similar statistical approaches – works in part by leveraging vast amounts of data that have previously been translated by humans. For example, the United Nations and the European Union frequently translate official documents into multiple languages using professional translators. This “corpus” of millions of paired, translated documents has become publicly available to researchers in electronic form over the last 20 years. Systems such as Google Translate process vast numbers of documents and leverage these paired translations to create statistical models that are able to produce surprisingly accurate translations, using probabilities, for arbitrary new texts.
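A very rough sketch of the statistical idea follows (the corpus below is invented, and real systems use millions of aligned sentence pairs plus far more sophisticated alignment and language models): count how often each candidate French word appears in sentences that human translators paired with the English word, and prefer the more frequent candidate.

```python
# Hedged sketch of corpus-based translation choice (not Google's actual method).
from collections import Counter

parallel_corpus = [  # tiny invented stand-in for UN/EU-style paired documents
    ("the plant needs water and light", "la plante a besoin d'eau et de lumière"),
    ("the plant closed and workers went on strike", "l'usine a fermé et les ouvriers ont fait grève"),
    ("the chemical plant produces fertilizer", "l'usine chimique produit de l'engrais"),
]

def candidate_counts(english_word, candidates):
    """Count how often each candidate appears in French sentences paired with the word."""
    counts = Counter({c: 0 for c in candidates})
    for en, fr in parallel_corpus:
        if english_word in en.split():
            for c in candidates:
                if c in fr:
                    counts[c] += 1
    return counts

# Real systems also condition on surrounding context rather than using global counts.
counts = candidate_counts("plant", ["plante", "usine"])
print(counts.most_common())
```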

Machine-Learning Models: Producing “intelligent”, highly useful results 

The important point is that these statistical and probability-based machine-learning models (often combined with logical, knowledge-based rules about the world) often produce high-quality and effective results (not yet on par with nuanced human translators), without any assertion that the computers have a profound understanding of the underlying “meaning” of the translated sentences, or that they employ processes whose analytical abilities approach human-level cognition (i.e. view 1). (It is important to note that the machine-learning translation approach does not achieve translation on its own, but “leverages” previous human cognition through the efforts of the original UN translators who made the paired translations.) Thus, for certain limited tasks, these systems have shown that it is possible for contemporary autonomous agents to produce “intelligent” results without relying upon what we would consider processes approaching human-level cognition.

Distinguishing “intelligent results” from actions produced via cognitive intelligence

The reason to flag this distinction is that such successful AI systems (as judged by their results) will pose a challenge to the task of importing and extending existing legal doctrinal frameworks (which were mostly designed to regulate people) into the domain of autonomous computer agents. Existing “view 2” systems that produce surprisingly sophisticated, useful, and accurate results without approaching human cognition are the basis of many products now emerging from earlier AI research, and are becoming integrated (or are poised to become integrated) into everyday life. These include IBM’s Watson, Apple’s Siri, Google Search and – perhaps in the next decade or two – Stanford’s/Google’s autonomous self-driving cars and autonomous music-composing software. These systems often use statistics to leverage existing, implicit human knowledge. Since these systems produce output or activities that in some cases appear to approach or exceed human performance on particular tasks, and since the results they autonomously produce are often surprisingly sophisticated and seemingly intelligent, such “results-oriented,” task-specific (e.g. driving, answering questions, landing planes) systems seem to be the near-term path of much AI research.

However, the fact that these intelligent-seeming results do not come from systems approaching human cognition is a nuance that should not be lost on policymakers (and judges) seeking to develop doctrine in the area of autonomous agents. Much – perhaps most – of law is designed and intended to regulate the behavior of humans (or organizations run by humans). Thus, embedded in many existing legal doctrines are underlying assumptions about cognition and intentionality that are so basic they are often not articulated. The implicitness of such assumptions makes them easy to overlook.

Given current trends, many contemporary (and likely future) AI systems that will be integrated into society (and therefore more likely to be the subject of legal regulation) will use algorithmic techniques focused upon producing “useful results” (view 2), rather than aiming at replicating human-level cognition, self-reflection, and abstraction (view 1). If lawmakers merely follow the verbiage (e.g. a system that has been labeled “artificially intelligent” did X or resulted in Y) and employ only a superficial understanding of AI research, without more closely understanding these technical nuances, there is a risk of conflation in extending existing legal doctrines to cover “intelligent-seeming” autonomous results. For example, the book’s authors explore the concept of imposing fiduciary duties on autonomous systems in some circumstances. But it will take a careful judge or lawmaker to distinguish existing fiduciary/agency doctrines with embedded (and often unarticulated) assumptions of human-level intentionality among agents (e.g. self-dealing) from those that may be more functional in nature (e.g. duties to invest trust funds). In other words, an in-depth understanding of the technology underlying particular autonomous agents should not be viewed as a merely technical issue. Rather, it is a serious consideration that lawmakers should understand in some detail before deciding to extend existing legal doctrine, or create new doctrine, to cover situations involving autonomous agents.


Reflections on Autonomous Artificial Agents and the law of agency

Many thanks to the organizers for asking me to comment on Samir Chopra and Lawrence White’s book, A Legal Theory for Autonomous Artificial Agents. I enjoyed thinking about the issues the book raises. My focus as a reader was the common law of agency. I served as the Reporter for the American Law Institute’s Restatement (Third) of Agency (2006), to which the book frequently refers. My immersion in agency law of course shapes my reading.

One concern I had as I read through the book is its possible conflation of two different kinds of claims: (1) the status of an autonomous artificial agent (hereinafter “AAA”) under present law; and (2) how the law should or could change in response to AAAs. At points I wondered whether the book implicitly flirted with a romance of the “ideal legal agent” (p. 23), an alluring prospect because “incapable of the kinds of misconduct associated with human agents.” As a scholar of the law, I was struck that the authors explicitly rejected the possibility that AAAs might best be termed “constructive” agents (p. 24) and that they do not explicitly engage with the large literature on fictions in the law. For one might read the book as an intriguing exercise in thinking “as if,” or as an extended construction of a metaphor or an analogy.

I’ll turn first to points concerning claims in category (1). The book might have benefited from a more robust account early on of the requisites for a relationship of common-law agency. Although agency does not require a contract between principal and agent, agency is a relationship grounded in mutual consent. The book appears to discard consent as a requirement on p. 18, but mutual consent underpins much that follows in the specifics of agency doctrine. Indeed, the law recognizes non-consensual relationships in which one person has power to represent another and take action affecting the represented person’s legal position – such as the designation by statute of a secretary of state as an agent for service of process – but these relationships are not within the ambit of common-law agency. Consent, a concept that carries operative significance in many bodies of law, could be defined as an uncoerced expression of a person’s will. Thus, including AAAs within the ambit of present-day agency law requires that they be persons that can meaningfully be said to have wills capable of uncoerced expression. Late in the book (p. 175) an AAA may be “said to possess” free will, but many assumptions precede this claim. Separately, AAAs are said to have duties to their principals early in the book (p. 21), but meaningful liabilities follow only much later, once AAAs hold dependent or independent legal personality.

To be sure, parallels can be drawn between AAAs and agents as the law conventionally understands them; one might delegate tasks to an AAA just as one might delegate tasks to a human agent or a legal-person agent such as a corporation (p.23). But the fact that task delegation is possible does not establish agency. I delegate the task of mobility to my car, much as the dog-owner in the Restatement illustration delegates to his trained pet the task of fetching beer1 (p. 55), but the fact of delegation does not itself make either my agent. I think this is so even if the car is equipped with computer-enabled functions and the dog can learn from experience (perhaps that beer has a delectable taste).

Chapter 2 on contracting problems, somewhat to my surprise, does not deal with the fundamental challenge of accommodating agency relationships within conventional accounts of how contractual obligations are formed. Just as it is difficult to understand how a contract could be formed via an AAA when the parties’ intentions are not referable to a particular offer and acceptance (p. 34), so it seems to be a broader predicament how a principal could be bound by a contract entered into by an agent when the principal was unaware of the specifics of the offer or acceptance. How could the principal be bound when the principal has not consented to the particular transaction? Agency resolves this predicament not by demanding transaction-by-transaction assent from the principal, but by characterizing the principal’s conferral of authority on the agent as an advance expression of the principal’s willingness to be bound, which thereafter lurks in the background of the agent’s dealings with third parties. (Or appears so to do, when the principal can be bound only on the basis of the agent’s apparent authority.) For a fuller account, see Gerard McMeel, Philosophical foundations of the law of agency, 116 L.Q.R. 387 (2000).

I turn now to (2), and how the law could or should change in response to AAAs. The book details many of the specifics. In particular, the questions in chapter 3 about attribution of knowledge are thought-provoking and novel. It’s not clear to me, though, why or how ascribing legal personality to AAAs would be a good solution. More generally, at points (explicitly on p. 43) the book may assume that increasing the usage of AAAs is self-evidently attractive, and thus that legal rules should be modified “to limit the potential moral and/or legal responsibility of principals for agents’ behavior.” Why this should be so is not clear. On this point, the history and pragmatics of ascribing legal personality to corporations could be informative. More pragmatically, would legal change through legislation be preferable to change through case-by-case litigation?

Of course, there’s much more I could say and much more in the book I admire. My reservations aside, it’s gratifying that agency law has found a lively and ingenious audience!

_________

1   The hypothetical facts underlying this illustration were shared with me as present “in a real case” but diligent research never located a citation.


Autonomous artefacts and the intentional stance

The book “A Legal Theory for Autonomous Artificial Agents” by Samir Chopra and Laurence White provides a very comprehensive and well-written account of a challenging issue, namely, how the law should address the creation and deployment of intelligent artefacts capable of goal-oriented action and social interaction. No comparable work is available today, and thus I think that this is a very valuable contribution to the interface of ICT (and in particular AI) and law.

As some commentators have already observed, the title words “A legal theory” may be a bit misleading, since one does not find in the book a new approach to legal theory inspired by artificial agents, but rather a theoretically grounded analysis of the legal implications of this new socio-technological phenomenon. However, the book shows awareness of legal theory, and various legal-theoretical themes are competently discussed in it.

The fundamental idea developed in the first chapters is that when interacting with such artificial agents we need to adopt the intentional stance, and understand their behaviour as resulting from the agents’ beliefs and goals. Often, indeed, there is no other strategy available to us: we have no power, no ability and in any case no time to examine the internal structure and functioning of such artificial entities. The only chance we have to make sense of their behaviour is to assume that they tend to achieve their goals on the basis of information they collect and process, namely, that they are endowed with a certain kind and extent of theoretical and practical rationality: they can track the relevant aspects of their (physical or virtual) environment, and adopt plans of action for achieving their goals in that environment.

As an example quite remote from the domain considered by the authors of the book, consider an autopilot system for an aircraft. The system has a complex goal to achieve (bring the airplane to its destination, safely, on time, consuming as little fuel as possible), collects through various sensors information about the environment (altitude, wind speed, expected weather conditions, obstacles on the ground, incoming aircraft, etc.) and about the airplane itself (available fuel, temperature, etc.), draws theoretical conclusions (the distance still to be covered, the speed needed to reach the destination on time, the expected evolution of the weather, etc.) and makes choices on various matters (speed, path, etc.) on this basis. Moreover, it receives and sends messages concerning the performance of its task, interacting with pilots, with air-traffic systems, and with other manned and unmanned aircraft. Clearly, the pilot has little idea of the internal structure of the autopilot (he or she probably has only a vague idea of the autopilot’s architecture, does not know what procedures are included in its software, let alone the instructions composing each such procedure) and has no direct access to the information being collected by automatic sensors and processed by the system. The only way to sensibly understand what the autopilot is doing, and the messages it is sending, is indeed to assume that it is performing a cognitive, goal-directed activity, namely, adopting actions on the basis of its goals and its representations of the context of its action, as well as communicating what it assumes to hold in its environment (what it believes), the objectives it is currently pursuing (its goals) and what it is going to do next (its intentions or commitments). As autopilot systems become more and more sophisticated (approaching the HAL of 2001: A Space Odyssey), take on new functions (such as controlling distances, avoiding collisions, governing take-off and landing) and use an increasing amount of information, their autonomy increases, as do their communication capacities. Thus it becomes more natural and useful (inevitable, I would say) to adopt the intentional stance toward them.
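The point can be made schematically in code. The sketch below is my own illustration (the sensor names, thresholds and rules are invented): from the outside, the pilot sees only reported beliefs, goals and intentions, which is exactly the vocabulary of the intentional stance.

```python
# Hedged sketch of a goal-directed agent as seen "from the outside".
class Autopilot:
    def __init__(self, destination_km):
        self.goals = {"destination_km": destination_km, "arrive_on_time": True}
        self.beliefs = {}  # what the system takes to hold in its environment

    def sense(self, readings):
        # Update beliefs from sensors (altitude, wind, fuel, traffic, ...).
        self.beliefs.update(readings)

    def decide(self):
        # Choose an action on the basis of goals and current beliefs.
        if self.beliefs.get("fuel_kg", float("inf")) < 2000:
            return "divert to nearest suitable airport"
        if self.beliefs.get("headwind_kts", 0) > 40:
            return "increase thrust and request lower altitude"
        return "hold current speed and heading"

    def report(self):
        # The outward message is all the pilot normally sees.
        return f"intention: {self.decide()} | beliefs: {self.beliefs}"

ap = Autopilot(destination_km=750)
ap.sense({"headwind_kts": 55, "fuel_kg": 6400})
print(ap.report())
```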

I have myself addressed the need to adopt the intentional stance toward certain artificial entities (in Cognitive Automata and the Law), where the intentional stance was discussed to some extent, and the legal relevance of Daniel Dennett’s distinction between the physical, design and intentional stances was considered. An aspect I considered there, which is not addressed in the book (though it is quite significant for legal theory), is whether the cognitive states we attribute to an artificial entity exist only in the eye of the observer, according to a behaviouristic approach to intentionality (only the behaviour of a system verifies or falsifies any assertions concerning its intentional states, regardless of the system’s internal conditions), or whether such cognitive states also concern specific internal features of the entity to which they are attributed. I have sided with the second approach, on the basis of a functional understanding of mental states. For instance, a belief may be viewed as an internal state that co-varies with an environmental condition, in such a way that the co-variation enables appropriate reactions to that condition. Having such a realist approach to the cognitive states of artificial agents enables us to distinguish ontologically cases where agents have a cognitive state from cases where they only appear to have it (a distinction which is different from the issue of what evidence may justifiably support such conclusions, and what behaviour justifies one’s reliance on the existence of certain mental states). This is not usually relevant in private law, and in particular with regard to contracts (we are entitled to assume that people have the mental states they appear to have, for the sake of reliance, regardless of whether they really have such states), but may be significant in some contexts, such as criminal law or even some parts of civil liability (intentional torts).
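A minimal sketch of this functional reading of “belief” (the obstacle example is my own illustration, not from the text): the belief is an inspectable internal state that co-varies with an environmental condition and mediates the appropriate reaction, rather than being merely an ascription made by an observer of the behaviour.

```python
# Hedged sketch: belief as an internal state co-varying with the environment.
class SimpleAgent:
    def __init__(self):
        self.believes_obstacle_ahead = False  # internal state, not mere behaviour

    def perceive(self, distance_m):
        # The internal state tracks (co-varies with) the environmental condition.
        self.believes_obstacle_ahead = distance_m < 5.0

    def act(self):
        # The state enables the appropriate reaction to that condition.
        return "brake" if self.believes_obstacle_ahead else "cruise"

agent = SimpleAgent()
agent.perceive(distance_m=3.2)
# Realist/functional view: the belief is this internal state, which exists whether
# or not anyone observes the behaviour; behaviourism looks only at act().
print(agent.believes_obstacle_ahead, agent.act())
```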

Another idea I find useful for distinguishing agents from mere tools is the idea of cognitive delegation (also discussed in the above contribution). While we can delegate various simple tasks to our tools (e.g. we use a spreadsheet for making calculations or a thermometer for measuring temperature), we can delegate only to agents tasks that pertain to the deployment of practical cognition (determining what to do, given certain goals, in a certain environment). It is because agents engage in practical cognition, as they have been required to do, that we can (and should) understand their actions according to the intentional stance.

In conclusion, not only do I fully agree with the book’s idea of adopting the intentional stance with regard to artificial agents, but I think that this idea should be further developed, and that this may lead to a better understanding of how the law takes into account both human and artificial minds. I think that this may indeed be the way in which the book can most contribute to legal theory.


Speaking of Automated Systems

Thanks so much to everyone participating in the LTAAA symposium: what a terrific discussion. Given my work on Technological Due Process, I could not help but think about the troubled public benefits system in Colorado known as CBMS. Ever since 2004, the system has been riddled with delays, faulty law embedded in code, and system crashes. As the Denver Post reports, the state has a $44 million contract with Deloitte consultants to overhaul the system; its initial installation cost $223 million with other private contractors. CBMS is a mess, with thousands of overpayments, underpayments, delayed benefits, faulty notices, and erroneous eligibility determinations. And worse. In the summer of 2009, 9-year-old Zumante Lucero died after a pharmacy — relying upon the CBMS system — wouldn’t fill his asthma prescription despite proof that the family qualified for Medicaid help. In February 2011, CBMS failed eight different tests in a federal review, with auditors pointing to new “serious” problems while saying past failures are “nearly the same” despite five years of fixes. The federal Centers for Medicare and Medicaid Services (CMS), which provides billions of dollars each year for state medical aid, said Colorado risks losing federal money for programs if it doesn’t make the changes called for by the audit. All of this brings to mind the question whether a legal theory of automated personhood moves the ball forward. Does it help us sort through the mess of opacity, insufficient notice, and troubling and likely unintended delegation of lawmaking to computer programmers? Something for me to chew on as the discussion proceeds.



Our Bots, Ourselves

In an extremely forward-looking and thought-provoking book, Samir Chopra and Lawrence F. White rekindle important legal questions with respect to autonomous artificial agents, or bots. It was a pleasure to engage with the questions that the authors raise in A Legal Theory for Autonomous Artificial Agents, and the book is a valuable scholarly contribution. In particular, because of my own research interests, Chapter 2, Artificial Agents and Contracts, was of special interest to me.

In Chapter 2, the authors apply the agency theory that they advocate in Chapter 1 to the context of contracts.  They challenge the view that bots are “mere tools” used for extension of the self by contracting parties.[1]    In doing so, they assert differences between “closed” and “open”[2] systems and various theoretical types of bots, arguing that parties who use bots as part of contracting should be protected from contract liability in some cases of bot error or malfunction.   From my reading, they argue in favor of using principles of agency law to replace some traditional contract law constructs when bots are involved in contracts.

Their argument is nuanced and thoughtful from an economic and agency law perspective.  In the comments that follow, I raise five sets of questions for thought, admittedly from the perspective of my own research on contract law, consumer privacy in technology contexts, and information security law.

1. Private ordering and accepting responsibility for imprudent technology risks.   The authors are concerned with providing better liability protection to contracting parties who use bots.  They assert that “[a] strict liability principle [which views bots as mere tools or means of communication] may not be fair to those operators and users who do not anticipate the contractual behavior of the agent in all possible circumstances and cannot reasonably be said to be consenting to every contract entered into by the agent.”[3]   As I was reading this chapter, I pondered whether bots do indeed warrant special contract law rules.  How is a failure to anticipate the erratic behavior of a potentially poorly-coded bot not simply one of numerous categories of business risk that parties may fail to foresee?   Applying a contract law perspective, one might argue that the authors’ approach usurps for law what should be left to private ordering and risk management.  No one forces a party to use a bot in contracting; perhaps choosing to do so is simply an information risk that should be planned around with insurance?[4]

2. Traditional contract law breach and damages analysis and the expectations of the harmed party.  The authors opt away from a discussion of traditional breach analysis and damages remedies when addressing bot failures.  Instead, they apply a tort-like calculation of a lowest cost avoider principle, which they argue “correctly allocate[s] the risk of specification or induction errors to the principle/operator and of malfunction to the principle or user, depending on which was the least–cost avoider.”[5]  However, should we perhaps temper this analysis by recognizing that contract law as embodied by the UCC and caselaw is not concerned solely or even primarily with efficiency in contractual relationships?  How does the authors’ efficiency analysis square with traditional consideration sufficiency (versus adequacy) analysis, where courts enforce contracts with bad deal terms regularly, choosing not to question the choices of the parties?   A harmed consumer who was not using a bot in a contract pitted against a sophisticated company using a poorly-coded bot (because it chose to hire a bargain programmer) may indeed have inefficient needs, but is not the consumer the party more in need of the court’s protection as a matter of equity?[6]

The authors note that, for example, prices quoted by a bot are akin to pricing details provided by a third party – a scenario that they assert may make it unfair to bind the bot-using party to the terms of a contract executed by his bot when he does not pre-approve each individual deal. “In many realistic settings involving medium–complexity agents, such as modern shopping websites, the principal cannot be said to have a pre-existing ‘intention’ in respect of the particular contract that is ‘communicated’ to the user.”[7] Again, to what extent are such bot dynamics truly unforeseeable? Can it be argued that coding up your bot to offer very specific deal terms when a consumer clicks on something constitutes an indication of actual knowledge and intention similar to a price list? Is a coding error with a wrong price not simply akin to a mismarked price tag in real space? But even assuming that we agree with the argument that coding up a bot is a relinquishment of control to a third party of sorts, how would the bot dynamics at issue differ from those in real-space contracts where prices are specified using a variable third-party index or where performance details are left variable – dynamics that have been found unproblematic in real-space contract cases? [8]

3.  The bot problems that currently exist in contract law.  The authors take us through two cases with respect to bots – eBay v. Bidder’s Edge and Register.com v. Verio – analyzing them primarily through the lens of tort, particularly trespass to chattels.  I found myself wondering about the authors’ agency analysis in the contract-driven bot cases where the trespass to chattels line of argument was deemed unpersuasive.  For example, how would the authors’ agency analysis apply in the context of the two Ticketmaster v. Tickets.com bot cases, particularly the second, where the trespass to chattels claim was dismissed and the contract count was the only count to survive summary judgment?   Also, I would be curious to hear more about the extrapolation of their agency approach to the current wave of bot cases that blend contract formation questions with allegations of computer intrusion, such as Facebook v. Power Ventures and United States v. Lowson.

4.  Duties of information security.  Turning to information security, the authors point out that a party may try to hack a bot used by the other party in order to gain a contracting advantage.[9]  While this is a valid computer intrusion concern, another pressing contract concern is that a malicious third party (who is not one of the parties seeking to be in privity) will choose to hack the bot to steal money on an ongoing basis.  If the bot is vulnerable to easy attack because of information security deficits in its coding, should the party using it get a free pass for its failure to exercise due care in information security?  Is it fair to impose information security losses on the other contracting party, who was prudent enough not to use a vulnerable bot in contracting?   Would a straightforward ‘your vulnerability, your responsibility’ approach create better incentives for close monitoring and better information security practices, a goal already recognized by Congress as a social good?

5. The broader implications for “b0rked” code.  The separateness of bots from their creators came across to me as an underlying premise for the entirety of the authors’ conversation.   For example, the authors reference situations where the bot autonomously “misrepresents” information that its wielding party would not approve.[10]   Is it not perhaps more accurate to say that the bot contains programming bugs its wielding party failed to catch and rectify?   Is not a bot simply lines of code written by a human (who may or may not be skilled in a particular code language) that will always be full of errors (because a human authored it)?   Is perhaps the appropriate goal not to protect bots but to incentivize bot creators to make fewer errors and rectify errors once they are found after “shipping” the code?

The authors argue that holding a contracting party accountable for bot malfunction is “unjust”[11] in some circumstances.    Is this consonant with the contract law approach that drafting errors and ambiguities are construed against the drafter?[12]  Is the author/operator of the error-ridden code considered the drafter here?  How is choosing a bad programmer to build your flawed bot different from choosing a bad lawyer to draft the flawed language of your contract?

The analogy of a bot to a rogue beer-fetching dog I found to be a particularly apt one.[13]  Many scholars would argue that, much like having a pet, using a code-based creation such as a bot in contracting is a choice and an assumption of responsibility.  Both dogs and bots are things that are optional and limited in their capacities: we choose to unleash them on the rest of the world.   If a dog or a bot causes harm, even when the owner has not expressly directed it to do so, isn’t it always the owner’s failure to supervise that is to blame?  I fear that comparing a bot to a human of any sort – slave, child, employee – at the current juncture for purposes of crafting law may be premature.  No machine is capable of replicating human behavior smoothly at present.  Will one arrive in the future?  Yes, it is likely.  However, I fear that aggressive untethering of the legal responsibility of the coder from her coded creation may send us down an undesirable path of uncompensated consumer harms in our march toward our brave new cyborg world.[14]

The book’s purposes are ambitious, and I truly enjoyed pondering the questions it raises.  I thank the organizers for allowing me to participate in this symposium.

 


[1] p. 36

[2] pp. 31-32

[3] pp. 35-36

[4] The authors appear to argue from the perspective that encouraging the use of bots in contracting is a good thing and, as such, merits special legal protection.  While it is clear that digital contracts and physical-space contracts are to be afforded legal parity, is it indeed clear that our legislatures and courts have decided to encourage parties to use bots instead of humans in contracting?  Perhaps encouraging the use of more humans in operations and contracting is instead the preferable policy goal and the one that warrants the more protective legal regime?

[5] p. 48

[6] Indeed, the consumer protection analysis that is omnipresent in contract law does not seem to be a dominant thread in the authors’ analysis.  When a sophisticated company using a bot is contracting with a consumer, the power imbalance that already exists between these parties – a traditional concern of contract law – is exacerbated by the presence of the bot, and arguably favors protecting the consumer more aggressively in any technology malfunction related to the formation of the contract.

[7] p. 36

[8] See, e.g., UCC §2-305; Eastern Air Lines, Inc. v. Gulf Oil Corporation, 415 F. Supp. 429 (1975).

[9] The situation where a party seeks to gain an advantage in a contracting relationship by hacking the other party’s bot, I would argue, is not primarily a contract law question. This is arguably an active computer intrusion best left for analysis under the Computer Fraud and Abuse Act.

[10] p. 50

[11] p. 55

[12] I have argued that it is the responsibility of businesses who use code to interact with consumers and other entities to warn about, protect, and repair the unsafe code environments to which they subject others.

[13] p. 55

[14] As upcoming work with my coauthor Miranda Mowbray  will explain, the most sophisticated Twitter bots have now become quite good at approximating the speech patterns of humans, and humans seem to like interacting with them; however, even they eventually give themselves away as mere code-based creations.  When a Cylon-like code creation finally arrives, it may be nothing like what we expect it to be.


LTAAA Symposium: How Law Responds to Complex Systems

In my first post on A Legal Theory for Autonomous Artificial Agents, I discussed some of the different kinds of complex systems law deals with. I’d like to continue by considering some of the different ways law deals with them.

Chopra and White focus on personhood: treating the entity as a single coherent “thing.” The success of this approach depends not just on the entity’s being amenable to reason, reward, and punishment, but also on it actually cohering as an entity. Officers’ control over corporations is directed to producing just such a coherence, which is a good reason that personhood seems to fit. But other complex systems aren’t so amenable to being treated as a single entity. You can’t punish the market as a whole; if a mob is a person, it’s not one you can reason with. In college, I made this mistake for a term project: we tried to “reward” programs that share resources nicely with each other by giving them more time to execute. Of course, the programs were blithely ignorant of how we were trying to motivate them: there was no feedback loop we could latch on to.
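For what it’s worth, the failure mode of that term project can be shown in a few lines. The sketch below is a loose reconstruction (the programs, numbers and scheduler rule are invented): the scheduler dutifully adjusts each program’s time slice, but neither program ever reads its allotment, so the “reward” cannot influence behaviour.

```python
# Hedged sketch: rewarding resource-sharing programs that never see the reward.
def greedy(available):
    return available            # grabs everything it can

def polite(available):
    return available // 2       # leaves half for others

programs = {"greedy": greedy, "polite": polite}
time_slice = {"greedy": 10, "polite": 10}

for _round in range(3):
    for name, prog in programs.items():
        used = prog(100)
        # Scheduler's "reward": nicer behaviour earns a bigger future slice.
        time_slice[name] += 5 if used <= 50 else -5

# No feedback loop: the programs never consult time_slice, so nothing changes.
print(time_slice)
```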

Another related strategy is to find the man behind the curtain. Even if we’re not willing to treat the entity itself as an artificial person, perhaps there’s a real person pulling the levers somewhere. Sometimes it’s plausible, as in the Sarbanes-Oxley requirement that CEOs certify corporate financial statements. Sometimes it’s wishful thinking, as in the belief that Baron Rothschild and the Bavarian Illuminati must be secretly controlling the market. This strategy only works to the extent that someone is or could be in charge: one of the things that often seems to baffle politicians about the Internet is that there isn’t anyone with power over the whole thing.

A subtle variation on the above is to take hostages. Even if the actual leader is impossible to find or control, just grab someone the entity appears to care about and threaten them unless the entity does what you want. This used to be a major technique of international relations: it was much easier to get your hands on a few French nobles and use them as leverage than to tell France or its king directly what to do. The advantage of this one is that it can work even when the entity isn’t under anyone’s control at all: as long as its constituent parts share the motivation of not letting the hostage come to harm, they may well end up acting coherently.

When that doesn’t work, law starts turning to strategies that fight the hypothetical. Disaggregation treats the entity as though it doesn’t exist — i.e., as though it has no collective properties. Instead, it identifies individual members and deals with their actions in isolation. This approach sounds myopic, but it’s frequently required by a legal system committed to something like methodological individualism. Rather than dealing with the mob as a whole, the police can simply arrest any person they see breaking a window. Rather than figuring out what Wikipedia is or how it works, copyright owners can simply sue anyone who uploads infringing material. Sometimes disaggregation even works.

Even more aggressively, law can try destroying the entity itself. Disperse the mob, cancel a company’s charter, or conquer a nation and dissolve its government while absorbing its people. These moves have in common their attempt to stamp out the complex dynamics that give rise to emergent behavior: smithereens can, after all, be much easier to deal with. Julian Assange’s political theory actually operates along these lines: by making it harder for them to communicate in private, he hopes to keep governmental conspiracies from developing entity-level capabilities. For computers, there’s a particularly easy entity-destroying step: the off switch. Destruction is recommended only for bathwater that does not contain babies.

When law is feeling especially ambitious, it sometimes tries dictating the internal rules that govern the entity’s behavior. Central planning is an attempt to take control of the capriciousness of the market by rewiring its feedback loops. (On this theme, I can’t recommend Spufford’s quasi-novel Red Plenty highly enough.) Behavior-modifying drugs take the complex system that is an individual and try to change how it works. Less directly, elections and constitutions try to give nations healthy internal mechanisms.

And finally, sometimes law simply gives up in despair. Consider the market, a system whose vindictive and self-destructive whims law frequently regards with a kind of miserable futility. Or consider the arguments sometimes made about search engine algorithms — that their emergent complexity passeth all understanding. Sometimes these claims are used to argue that government shouldn’t regulate them, and sometimes to argue that even Google’s employees themselves don’t fully understand why the algorithm ranks certain sites the way it does.

My point in all of this is that personhood is hardly inevitable as an analytical or regulatory response to complex systems, even when they appear to function as coherent entities. For some purposes, it probably is worth thinking of a fire as a crafty malevolent person; for others, trying to dictate its internals by altering the supply of flammables in its path makes more sense. (Trying to take hostages to sway a fire is not, however, a particularly wise response.) Picking the most appropriate legal strategy for a complex system will depend on situational, context-specific factors — and upon understanding clearly the nature of the beast.


Camel, Weasel, Whale

Samir Chopra—whom I consider to be something of a pioneer in thinking through the philosophic and legal issues around artificial intelligence—did not much care for my initial thoughts about his and Lawrence White’s new book, A Legal Theory for Autonomous Artificial Agents. The gist of my remarks was that, while interesting and well researched, the book does not deliver on its promise of advancing “a legal theory.” Mostly what the book does (I read the book cover to cover, as you can see!) is identify new and old ways the law might treat complex software to advance various, seemingly unrelated goals. The book is largely about removing conceptual obstacles to treating software as “agreeing,” “knowing,” or “taking responsibility,” should we be inclined to do so in particular cases for independent policy reasons.

In the second chapter of the book, for instance, Chopra and White argue that treating software capable of calculating, offering, and appearing to accept terms as legal agents is not only coherent, but results in greater economic efficiency. The upshot is lesser contractual liability than treating the software as a mere instrument because, in instances where software makes the right kind of mistake, the entity that deployed the software—usually a sophisticated corporation—will not be held to the agreement. In the third chapter, the authors abandon economic efficiency entirely. Here the argument is that we ought to look to agency law in order to attribute more information to corporations because “[o]nly such a treatment would do justice to the reality of the increased power of the corporation, which is a direct function of the knowledge at its disposal.” In other words, by treating its software as agents rather than tools, we can either limit corporate liability for reasons of efficiency, or expand it for reasons of fairness.


The Law Of The Fire

A corporation, it is said, “is no fiction, no symbol, no piece of the state’s machinery, no collective name for individuals, but a living organism and a real person with a body and members and a will of its own.” A ship, described as a “mere congeries of wood and iron,” on being launched, we are told, takes on “a personality of its own, a name, volition, capacity to contract, employ agents, commit torts, sue and be sued.” Why do lawyers and judges assume thus to clothe inanimate objects and abstractions with the qualities of human beings?

The answer, in part at least, is to be found in characteristics of human thought and speech not peculiar to the legal profession. Men are not realists either in thinking or in expressing their thoughts. In both processes they use figurative terms. The sea is hungry, thunder rolls, the wind howls, the stars look down at night, time is not an abstraction, rather it is “father time” or the “grim reaper”…

Bryant Smith, Legal Personality, 37 Yale Law Journal 283, 285 (1928)


Personhood to artificial agents: Some ramifications

Thank you, Samir Chopra and Lawrence White, for writing this extremely thought-provoking book! Like Sonia Katyal, I too am particularly fascinated by the last chapter – personhood for artificial agents. The authors have done a wonderful job of explaining the legal constructs that have defined, and continue to define, the notion of according legal personality to artificial agents.

The authors argue that “dependent” legal personality, which has already been accorded to entities such as corporations, temples and ships in some cases, could be easily extended to cover artificial agents. On the other hand, the argument for according “independent” legal personality to artificial agents is much more tenuous. Many (legal) arguments and theories exist which are strong impediments to according such status. The authors categorize these impediments as competencies (being sui juris, having a sensitivity to legal obligations, susceptibility to punishment, capability for contract formation, and property ownership and economic capacity) and philosophical objections (i.e. that artificial agents do not possess free will, do not enjoy autonomy, do not possess a moral sense, and do not have clearly defined identities), and then argue how these impediments might be overcome legally.

Notwithstanding their conclusion that the courts may be unable or unwilling to take more than a piecemeal approach to extending constitutional protections to artificial agents, it seems clear to me that the accordance of legal personality – both dependent and, to a lesser extent, independent – is not too far in the future. In fact, the aftermath of Gillick v West Norfolk and Wisbech Area Health Authority has shown that various courts have gradually come to accept that dependent minors “gradually develop their mental faculties,” and thus can be entitled to make certain “decisions in the medical sphere.”

We can extend this argument to artificial agents, which are no longer just programmed expert systems but have gradually evolved into self-correcting, learning and reasoning systems, much like children and some animals. We already know that even small children exhibit these capacities. So do chimpanzees and other primates. Stephen Wise has argued that some animals meet the “legal personhood” criteria, and should therefore be accorded rights and protections. The Nonhuman Rights Project founded by Wise is actively fighting for legal rights for non-human species. As these legal moves evolve and shape common law, the question arises as to when (not if) artificial agents will develop notions of “self,” “morals” and “fairness,” and on that basis be accorded legal personhood status.

And when that situation arrives, what are the ramifications that we should further consider? I believe that the three main “rights” that would have to be considered are reproduction, representation, and termination. We already know that artificial agents (and Artificial Life) can replicate themselves and “teach” the newly created agents. Self-perpetuation can also be considered a form of representation. We also know that, under certain well-defined conditions, these entities can self-destruct or cease to operate. But will these aspects gain the status of rights accorded to artificial agents?

These questions lead me to the issues which I personally find fascinating: end-of-life decisions extended to artificial agents. For instance, what would be the role of aging agents of inferior capabilities that nevertheless exist in a vast global network?  What about malevolent agents? When, for instance, would it be appropriate to terminate an artificial agent?  What would be the laws that would handle situations like this, and how would such laws be framed? While these questions seem far-fetched, we are already at a point where numerous viruses and “bots” pervade the global information networks, learn, perpetuate, “reason,” make decisions, and continue to extend their lives and their capacity to affect our existence as we know it. So who would be the final arbiter of end-of-life decisions in such cases? In fact, once artificial agents evolve and gain personhood rights, would it not be conceivable that we would have non-human judges in the courts?

Are these scenarios too far away for us to worry about, or close enough? I wonder…

-Ramesh Subramanian