

Robots in the Castle

In thinking about what Samir and Lawrence offer us in their new book, A Legal Theory for Autonomous Artificial Agents, I am reminded of the old Gothic castle described in Blackstone’s Commentaries, whose “magnificent and venerable” spaces had been badly neglected and whose “inferior apartments” had been retro-fitted “for a modern inhabitant”.

Feel me, here, I am not dissing the book but, rather, sympathizing about law’s sometimes feeble ability to adapt to modern times and its need to erect what Blackstone described as a mass of legal “fictions and circuities”, leaving the law not unlike the stairways in its castle—“winding and difficult.”

Understanding this predicament all too well, I am not surprised to see Ryan Calo’s disappointment in light of the title and description of the book, which seemed to me also to promise something much more than a mere retrofitting of the castle—offering up instead a legal theory aimed at resurrecting the magnificent and venerable halls of a jurisprudence unmuddled by these strange new entities in a realm no longer populated exclusively by human agents.

Samir and Lawrence know full well that I am totally on board in thinking that the law of agency has plenty to offer to the legal assessment of the operations of artificial entities. I first wrote about this in 1999, when Canada’s Uniform Law Commission asked me to determine whether computers could enter into contracts which no human had reviewed or, for that matter, even knew existed. In my report, later republished as an article called “Spirits in the Material World,” I proposed a model based on the law of agency as a preferable approach to the one in place at the time (and still), which merely treats machine systems as an extension of the human beings utilizing them.

At the time, I believed the law of agency held much promise for software bots and robots. The “slave morality” programmed into these automatic beasts seemed in line with that imagined in the brutal jus civile of ancient Rome, itself programmed in a manner that would allow brutish Roman slaves to interact in commerce with Roman citizens despite having no legal status. The Roman system had no problem with these non-status entities implicating their owners. After all: Qui facit per alium facit per se (a fancy Latin phrase designating the Roman law fiction that treats one who acts through another as having himself so acted). What a brilliant way to get around capacity and status issues! And the modern law of agency, as it subsequently developed, offers up fairly nuanced notions like the “authority” concept that can also be used to limit the responsibility of the person who acts through an (artificial) other.

The book does a great job at carrying out the analysis in various domains and, much to my delight, extends the theory to a range of situations beyond contracting bots.

In my view, the genius of agency law as a means of resurrecting the castle is that it can recognize and respond to the machine system without having to worry about or even entertain the possibility that the machine is a person. (For that reason, I would have left out the chapter on personhood, proposals for which I think have been the central reason why this relatively longstanding set of issues has yet to be taken seriously by those who have not taken the blue pill.) Agency law permits us to simply treat the bot like the child who lacks the capacity to contract but still manages to generate an enforceable reliance interest in some third party when making a deal while purporting to act on the authority of a parent.

But in my view—I thought it then and I think it still—using agency rules to solve the contracting problem is still little more than scaffolding used to retrofit the castle. As my fave American jurist, Lon Fuller, might have described it, the need to treat bots and robots as though they were legal agents in and of itself represents the pathology of law:

“When all goes well and the established legal rules encompass neatly the social life they are intended to regulate, there is little occasion for fictions. There is also little occasion for philosophizing, for the law then proceeds with a transparent simplicity suggesting no need for reflective scrutiny. Only in illness, we are told, does the body reveal its complexity. Only when legal reasoning falters and reaches out clumsily for help do we recognize what a complex undertaking the law is.”

The legal theories of both Blackstone and Fuller tell me that there is good reason to be sympathetic to the metaphors and legal fictions that Samir and Lawrence offer us—even if they are piecemeal. To be clear: although the “legal fiction” label is sometimes pejorative, I am not using it in that sense. Rather, I am suggesting that the approach in the book resembles a commonly used juridical device of extremely high value. Legal fictions of this sort exhibit what Fuller recognized as an “exploratory” function; they allow a kind of intellectual experimentation that will help us inch towards a well-entrenched legal theory.

Exploring the limits of the agency rules may indeed solve a number of doctrinal problems associated with artificial entities.

But (here I need a new emoticon that expresses that the following remark is offered in the spirit of sincerity and kindness) to pretend that the theory offered in this book does more than it does or to try to defend its approach as a cogent, viable, and doctrinally satisfying unified field theory of robotics risks missing all sorts of important potential issues and outcomes and may thwart a broader multi-pronged analysis that is crucial to getting things right.

I take it that Samir is saying in his replies to Ryan that he in fact holds no such pretense and that he does not claim to have all of the answers. But that, in my view, was not Ryan’s point at all.

My take-away from that exchange, and from my own reflections on the book, is that it will also be very important to consider various automation scenarios where agency is not the right model and ask ourselves why it is not. This is something I have not yet investigated or thought about very deeply. Still, I am willing to bet a large pizza (at the winner’s choice of location) that there are at least as many robo-scenarios where thinking of the machine entity as an artificial agent in the legal sense does more harm than good. If this is correct, agency law may offer some doctrinal solutions (as my previous work suggests), but that doesn’t in and of itself provide us with a legal theory of artificial agents.

When asked to predict the path of cyberlaw in 1995, Larry Lessig very modestly said that if he had to carve the meaning of the 1st Amendment into silicon, he was certain that he would get it fundamentally wrong. There hadn’t been enough time for the culture of the medium to evolve to be sure of right answers. And for that very reason, he saw the slow and steady march of common law as the best possible antidote.

I applaud the bravery of Chopra and White in their attempt to cull a legal theory for bots, robots and the like. But I share Ryan’s concerns about the shortcomings in the theory of artificial agents as offered. And in addressing his concerns, rather than calling Ryan’s own choice of intellectual metaphors “silly” or “inappropriate,” it might be more valuable to start thinking about scenarios in which the agency analysis offered falls short or is inapplicable and what other models we also might consider and for what situations.

I surely do not fault the authors for failing to come up with the unified field theory of robotics—we can save that for Michael Froomkin’s upcoming conference in Miami!!!—but I would like us to think also about what the law of agency cannot tell us about a range of legal and ethical implications that will arise from the social implementation of automation, robotics, and artificial intelligence across various sectors.


Autonomous Artificial Agents: Contracting or Expanding?

Is this the book to separate the legal issues of “autonomous artificial agents” from the more controversial questions of whether code or silicon can function as “people”? The one that can stick to the practical issues of contract formation, tort liability and the like, without blurring the boundaries between legal personhood and personhood in a fuller sense?

I think this was the intention of the authors (C&W). And I certainly agree with other participants in the forum that they’ve done a wonderful job of identifying and analyzing many key legal and philosophical issues in this field; no doubt the book will be framing the debate about autonomous artificial “agents” (AAAs) for years to come. But the style of C&W’s argument and the philosophical positions they take may make it hard to warm up to some of their analysis and recommendations unless you’re happy to take a rather expansive view of the capabilities of artificial intelligence — such as imputing a moral consciousness to programs and robots. And even if you’re happy to do so, what about everyone else? I’ll explain below the fold.


Reflections on Autonomous Artificial Agents and the law of agency

Many thanks to the organizers for asking me to comment on Samir Chopra and Lawrence White’s book, A Legal Theory for Autonomous Artificial Agents. I enjoyed thinking about the issues the book raises. My focus as a reader was the common law of agency. I served as the Reporter for the American Law Institute’s Restatement (Third) of Agency (2006), to which the book frequently refers. My immersion in agency law of course shapes my reading.

One concern I had as I read through the book is its possible conflation of two different kinds of claims: (1) the status of an autonomous artificial agent (hereinafter “AAA”) under present law; and (2) how the law should or could change in response to AAAs. At points I wondered whether the book implicitly flirted with a romance of the “ideal legal agent” (p. 23), an alluring prospect because “incapable of the kinds of misconduct associated with human agents.” As a scholar of the law, I was struck that the authors explicitly rejected the possibility that AAAs might best be termed “constructive” agents (p. 24) and that they do not explicitly engage with the large literature on fictions in the law. For one might read the book as an intriguing exercise in thinking “as if,” or as an extended construction of a metaphor or an analogy.

I’ll turn first to points concerning claims in category (1). The book might have benefited from a more robust account early on of the requisites for a relationship of common-law agency. Although agency does not require a contract between principal and agent, agency is a relationship grounded in mutual consent. It appears the book may discard consent as a requirement on p. 18, but mutual consent underpins much that follows in the specifics of agency doctrine. Indeed, the law recognizes non-consensual relationships in which one person has power to represent another and take action affecting the represented person’s legal position (such as the designation by statute of a secretary of state as an agent for service of process), but these relationships are not within the ambit of common-law agency. Consent, a concept that carries operative significance in many bodies of law, could be defined as an uncoerced expression of a person’s will. Thus, including AAAs within the ambit of present-day agency law requires that they be persons that can meaningfully be said to have wills capable of uncoerced expression. Late in the book (p. 175) an AAA may be “said to possess” free will, but many assumptions precede this claim. Separately, AAAs are said to have duties to their principals early in the book (p. 21), but meaningful liabilities follow only much later, once AAAs hold dependent or independent legal personality.

To be sure, parallels can be drawn between AAAs and agents as the law conventionally understands them; one might delegate tasks to an AAA just as one might delegate tasks to a human agent or a legal-person agent such as a corporation (p. 23). But the fact that task delegation is possible does not establish agency. I delegate the task of mobility to my car, much as the dog-owner in the Restatement illustration delegates to his trained pet the task of fetching beer[1] (p. 55), but the fact of delegation does not itself make either my agent. I think this is so even if the car is equipped with computer-enabled functions and the dog can learn from experience (perhaps that beer has a delectable taste).

Chapter 2 on contracting problems, somewhat to my surprise, does not deal with the fundamental challenge of accommodating agency relationships within conventional accounts of how contractual obligations are formed. Just as it is difficult to understand how a contract could be formed via an AAA when the parties’ intentions are not referable to a particular offer and acceptance (p. 34), so it seems a broader predicament how a principal could be bound by a contract entered into by an agent when the principal was unaware of the specifics of the offer or acceptance. How could the principal be bound when the principal has not consented to the particular transaction? Agency resolves this predicament not by demanding transaction-by-transaction assent from the principal, but by characterizing the principal’s conferral of authority on the agent as an advance expression of the principal’s willingness to be bound, which thereafter lurks in the background of the agent’s dealings with third parties. (Or appears so to do, when the principal can be bound only on the basis of the agent’s apparent authority.) For a fuller account, see Gerard McMeel, Philosophical Foundations of the Law of Agency, 116 L.Q.R. 387 (2000).

I turn now to (2), and how the law could or should change in response to AAAs. The book details many specifics here. In particular, the questions in chapter 3 about attribution of knowledge are thought-provoking and novel. It’s not clear to me, though, why or how ascribing legal personality to AAAs would be a good solution. More generally, at points (explicitly on p. 43) the book may assume that increasing the usage of AAAs is self-evidently attractive, and thus that legal rules should be modified “to limit the potential moral and/or legal responsibility of principals for agents’ behavior.” Why this should be so is not clear. On this point, the history and pragmatics of ascribing legal personality to corporations could be informative. More pragmatically, would legal change through legislation be preferable to change through case-by-case litigation?

Of course, there’s much more I could say and much more in the book I admire. My reservations aside, it’s gratifying that agency law has found a lively and ingenious audience!

_________

[1]   The hypothetical facts underlying this illustration were shared with me as having occurred “in a real case,” but diligent research never located a citation.


Autonomous artefacts and the intentional stance

The book “A Legal Theory for Autonomous Artificial Agents” by Samir Chopra and Laurence White provides a very comprehensive and well-written account of a challenging issue, namely, how the law should address the creation and deployment of intelligent artefacts capable of goal-oriented action and social interaction. No comparable work is available today, and thus I think that this is a very valuable contribution at the interface of ICT (and in particular AI) and law.

As some commentators have already observed, the title words “A legal theory” may be a bit misleading, since one does not find in the book a new approach to legal theory inspired by artificial agents, but rather a theoretically grounded analysis of the legal implications of this new socio-technological phenomenon. However, the book shows an awareness of legal theory, and various legal-theoretical themes are competently discussed in it.

The fundamental idea developed in the first chapters is that when interacting with such artificial agents we need to adopt the intentional stance, and understand their behaviour as resulting from the agents’ beliefs and goals. Often indeed there is no other strategy available to us: we have no power, no ability, and in any case no time to examine the internal structure and functioning of such artificial entities. The only chance we have to make sense of their behaviour is to assume that they tend to achieve their goals on the basis of the information they collect and process, namely, that they are endowed with a certain kind and extent of theoretical and practical rationality: they can track the relevant aspects of their (physical or virtual) environment, and adopt plans of action for achieving their goals in such an environment.

As an example quite remote from the domain considered by the authors of the book, consider an autopilot system for an aircraft. The system has a complex goal to achieve (bring the airplane to its destination, safely, on time, consuming as little fuel as possible), collects through various sensors information from the environment (altitude, wind speed, expected weather conditions, obstacles on the ground and incoming aircraft, etc.) and from the airplane itself (available fuel, temperature, etc.), draws theoretical conclusions (the distance still to be covered, the speed needed for getting to the destination in time, the expected evolution of the weather, etc.), and makes choices on various matters (speed, path, etc.) on this basis. Moreover, it receives and sends messages concerning the performance of its task, interacting with pilots, with air traffic systems, and with other manned and unmanned aircraft. Clearly, the pilot has little idea of the internal structure of the autopilot (probably he or she has only a vague idea of the autopilot’s architecture, and does not even know what procedures are included in its software, let alone the instructions composing each such procedure) and has no direct access to the information being collected by automatic sensors and processed by the system. The only way to sensibly understand what the autopilot is doing, and the messages it is sending, is indeed to assume that it is performing a cognitive goal-directed activity, namely, adopting actions on the basis of its goals and its representations of the context of its action, as well as communicating what it assumes to hold in its environment (what it believes), the objectives it is currently pursuing (its goals), and what it is going to do next (its intentions or commitments). As autopilot systems become more and more sophisticated (approaching the HAL of 2001: A Space Odyssey), take on new functions (such as controlling distances, avoiding collisions, governing take-off and landing), and use an increasing amount of information, their autonomy increases, as does their communication capacity. Thus it becomes more natural and useful (inevitable, I would say) to adopt the intentional stance toward them.
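To make the intentional-stance reading of the autopilot concrete, here is a minimal sketch, in Python, of the belief/goal/intention loop that an observer implicitly ascribes to such a system. It is not drawn from the book; the class name, method names, and the altitude-holding logic are all invented for illustration, and a real autopilot would of course be far more complex.

```python
# A hypothetical, minimal illustration (not from the book): the kind of
# belief/goal/intention loop the intentional stance ascribes to an autopilot.
# All names and the altitude-holding logic are invented for this sketch.

from dataclasses import dataclass, field


@dataclass
class Autopilot:
    goal_altitude_m: float                        # the goal it pursues
    beliefs: dict = field(default_factory=dict)   # what it takes to hold in its environment

    def sense(self, readings: dict) -> None:
        """Update beliefs from (simulated) sensor readings."""
        self.beliefs.update(readings)

    def deliberate(self) -> str:
        """Form an intention (next action) from the goal and current beliefs."""
        altitude = self.beliefs.get("altitude_m", 0.0)
        if altitude < self.goal_altitude_m - 50:
            return "climb"
        if altitude > self.goal_altitude_m + 50:
            return "descend"
        return "hold"

    def report(self) -> str:
        """Communicate beliefs, goal and intention, as an observer would hear them."""
        return (f"believes altitude={self.beliefs.get('altitude_m')} m, "
                f"pursues {self.goal_altitude_m} m, intends to {self.deliberate()}")


# An observer who sees only readings in and messages out can still predict the
# system's behaviour by ascribing to it the goal and beliefs modelled above.
ap = Autopilot(goal_altitude_m=10000.0)
ap.sense({"altitude_m": 9200.0, "wind_speed_kt": 30.0})
print(ap.report())   # -> believes altitude=9200.0 m, pursues 10000.0 m, intends to climb
```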

I have myself addressed the need to adopt the intentional stance toward certain artificial entities (in Cognitive Automata and the Law), where the intentional stance was discussed to some extent, and the legal relevance of Daniel Dennett’s distinction between the physical, design and intentional stances was considered. An aspect I considered there, which is not addressed in the book (though it is quite significant for legal theory), is whether the cognitive states we attribute to an artificial entity exist only in the eye of the observer, according to a behaviouristic approach to intentionality (only the behaviour of a system verifies or falsifies any assertions concerning its intentional states, regardless of the system’s internal conditions), or whether such cognitive states also concern specific internal features of the entity to which they are attributed. I have sided with the second approach, on the basis of a functional understanding of mental states. For instance, a belief may be viewed as an internal state that co-varies with environmental conditions, in such a way that the co-variation enables appropriate reactions to such conditions. Having such a realist approach to the cognitive states of artificial agents enables us to distinguish ontologically the cases in which agents have a cognitive state from the cases in which they only appear to have it (a distinction which is different from the issue of what evidence may justifiably support such conclusions, and what behaviour justifies one’s reliance on the existence of certain mental states). This is not usually relevant in private law, and in particular with regard to contracts (we are entitled to assume that people have the mental states they appear to have, for the sake of reliance, regardless of whether they really have such states), but it may be significant in some contexts, such as criminal law or even some parts of civil liability (intentional torts).
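As a toy way of putting that point, here is a small sketch of my own (not from the book, and with invented names) of the difference between a belief understood functionally, as an internal state that co-varies with the environment, and a state that only appears belief-like to an outside observer.

```python
# Hypothetical sketch: a "realist" belief is an internal state that co-varies
# with the environment; a merely apparent belief is not. Both classes answer
# the same question, so a purely behaviouristic observer may not tell them
# apart over a short run of interactions.

class RealistThermostat:
    def __init__(self):
        self._believed_temp = None          # internal state that tracks the world

    def sense(self, actual_temp: float) -> None:
        self._believed_temp = actual_temp   # co-variation with environmental conditions

    def says_it_is_cold(self) -> bool:
        return self._believed_temp is not None and self._believed_temp < 15.0


class ApparentThermostat:
    def sense(self, actual_temp: float) -> None:
        pass                                # ignores the environment entirely

    def says_it_is_cold(self) -> bool:
        return True                         # canned answer that merely looks belief-like


real, fake = RealistThermostat(), ApparentThermostat()
for t in (10.0, 20.0):
    real.sense(t)
    fake.sense(t)
    print(t, real.says_it_is_cold(), fake.says_it_is_cold())
# At 10.0 both say "cold"; at 20.0 only the realist's answer changes with the world.
```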

Another idea I find useful for distinguishing agents from mere tools is the idea of cognitive delegation (also discussed in the above contribution). While we can delegate various simple tasks to our tools (e.g. we use a spreadsheet for making calculations or a thermometer for measuring temperature), we can delegate only to agents tasks that pertain to the deployment of practical cognition (determining what to do, given certain goals, in a certain environment). It is because agents engage in practical cognition, as they have been required to do, that we can (and should) understand their action according to the intentional stance.

In conclusion, not only do I fully agree with the book’s idea of adopting the intentional stance with regard to artificial agents, but I think that this idea should be further developed, and that this may lead to a better understanding of how the law takes into account both human and artificial minds. I think that this may indeed be the way in which the book can most contribute to legal theory.


Speaking of Automated Systems

Thanks so much to everyone participating in the LTAAA symposium: what a terrific discussion.  Given my work on Technological Due Process, I could not help but think about the troubled public benefits system in Colorado known as CBMS.  Ever since 2004, the system has been riddled with delays, faulty law embedded in code, and system crashes.  As the Denver Post reports, the state has a $44 million contract with Deloitte consultants to overhaul the system–its initial installation cost $223 million with other private contractors.  CBMS is a mess, with thousands of overpayments, underpayments, delayed benefits, faulty notices, and erroneous eligibility determinations.  And worse.  In the summer of 2009, 9-year-old Zumante Lucero died after a pharmacy — depending upon the CBMS system — wouldn’t fill his asthma prescription despite proof the family qualified for Medicaid help.  In February 2011, CBMS failed eight different tests in a federal review, with auditors pointing to new “serious” problems while saying past failures are “nearly the same” despite five years of fixes.  The federal Centers for Medicare and Medicaid Services (CMS), which provides billions of dollars each year for state medical aid, said Colorado risks losing federal money for programs if it doesn’t make changes from the audit.  All of this brings to mind the question whether a legal theory of automated personhood moves this ball forward.  Does it help us sort through the mess of opacity, insufficient notice, and troubling and likely unintended delegation of lawmaking to computer programmers?  Something for me to chew on as the discussion proceeds.



Our Bots, Ourselves

In an extremely forward-looking and thought-provoking book, Samir Chopra and Lawrence F. White rekindle important legal questions with respect to autonomous artificial agents, or bots.  It was a pleasure to engage with the questions that the authors raise in A Legal Theory for Autonomous Artificial Agents, and the book is a valuable scholarly contribution.  In particular, because of my own research interests, Chapter 2, Artificial Agents and Contracts, was of special interest to me.

In Chapter 2, the authors apply the agency theory that they advocate in Chapter 1 to the context of contracts.  They challenge the view that bots are “mere tools” used for extension of the self by contracting parties.[1]    In doing so, they assert differences between “closed” and “open”[2] systems and various theoretical types of bots, arguing that parties who use bots as part of contracting should be protected from contract liability in some cases of bot error or malfunction.   From my reading, they argue in favor of using principles of agency law to replace some traditional contract law constructs when bots are involved in contracts.

Their argument is nuanced and thoughtful from an economic and agency law perspective.  In the comments that follow, I raise five sets of questions for thought, admittedly from the perspective of my own research on contract law, consumer privacy in technology contexts, and information security law.

1. Private ordering and accepting responsibility for imprudent technology risks.   The authors are concerned with providing better liability protection to contracting parties who use bots.  They assert that “[a] strict liability principle [which views bots as mere tools or means of communication] may not be fair to those operators and users who do not anticipate the contractual behavior of the agent in all possible circumstances and cannot reasonably be said to be consenting to every contract entered into by the agent.”[3]   As I was reading this chapter, I pondered whether bots do indeed warrant special contract law rules.  How is a failure to anticipate the erratic behavior of a potentially poorly-coded bot not simply one of numerous categories of business risk that parties may fail to foresee?   Applying a contract law perspective, one might argue that the authors’ approach usurps for law what should be left to private ordering and risk management.  No one forces a party to use a bot in contracting; perhaps choosing to do so is simply an information risk that should be planned around with insurance?[4]

2. Traditional contract law breach and damages analysis and the expectations of the harmed party.  The authors opt away from a discussion of traditional breach analysis and damages remedies when addressing bot failures.  Instead, they apply a tort-like calculation of a lowest cost avoider principle, which they argue “correctly allocate[s] the risk of specification or induction errors to the principal/operator and of malfunction to the principal or user, depending on which was the least-cost avoider.”[5]  However, should we perhaps temper this analysis by recognizing that contract law as embodied by the UCC and caselaw is not concerned solely or even primarily with efficiency in contractual relationships?  How does the authors’ efficiency analysis square with traditional consideration sufficiency (versus adequacy) analysis, where courts regularly enforce contracts with bad deal terms, choosing not to question the choices of the parties?   A harmed consumer who was not using a bot in a contract, pitted against a sophisticated company using a poorly-coded bot (because it chose to hire a bargain programmer), may indeed have inefficient needs, but is not the consumer the party more in need of the court’s protection as a matter of equity?[6]

The authors note that, for example, prices quoted by a bot are akin to pricing details provided by a third party – a scenario that they assert may make it unfair to bind the bot-using party to the terms of a contract executed by his bot when he does not pre-approve each individual deal.  “In many realistic settings involving medium-complexity agents, such as modern shopping websites, the principal cannot be said to have a pre-existing “intention” in respect of the particular contract that is “communicated” to the user.”[7]  Again, to what extent are such bot dynamics truly unforeseeable?  Can it be argued that coding up your bot to offer very specific deal terms when a consumer clicks on something constitutes an indication of actual knowledge and intention similar to a price list?  Is a coding error with a wrong price not simply akin to a mismarked price tag in real space?  But even assuming that we agree with the argument that coding up a bot is a relinquishment of control to a third party of sorts, how would the bot dynamics at issue differ from those in real space contracts where prices are specified using a variable third-party index or where performance details are left variable – dynamics that have been found unproblematic in real space contract cases?[8]

3.  The bot problems that currently exist in contract law.  The authors take us through two cases with respect to bots, eBay v. Bidder’s Edge and Register.com v. Verio, arguing them primarily through the lens of tort, particularly trespass to chattels.    I found myself wondering about the authors’ agency analysis in the contract-driven bot cases where the trespass to chattels line of argument was deemed unpersuasive.  For example, how would the authors’ agency analysis apply in the context of the two Ticketmaster v. Tickets.com bot cases, particularly the second, where the trespass to chattels claim was dismissed and the contract count was the only count to survive summary judgment?   Also, I would be curious to hear more about the extrapolation of their agency approach to the current wave of bot cases that blend contract formation questions with allegations of computer intrusion, such as Facebook v. Power Ventures and United States v. Lowson.

4.  Duties of information security.  Turning to information security, the authors point out that a party may try to hack a bot used by the other party in order to gain contracting advantage.[9]  While this is a valid computer intrusion concern, another pressing contract concern is that a malicious third party (who is not one of the parties seeking to be in privity) will choose to hack the bot to steal money on an ongoing basis.  If the bot is vulnerable to easy attack because of information security deficits in its coding, should the party using it get a free pass for its failure to exercise due care in information security?  Is it fair to impose information security losses on the other contracting party who was prudent enough not to use a vulnerable bot in contracting?   Would a straightforward ‘your vulnerability, your responsibility’ approach create better incentives for close monitoring and better information security practices, a goal already recognized by Congress as a social good?

5. The broader implications for “b0rked” code.  The separateness of bots from their creators came across to me as an underlying premise for the entirety of the authors’ conversation.   For example, the authors reference situations where the bot autonomously “misrepresents” information that its wielding party would not approve.[10]   Is it not perhaps more accurate to say that the bot contains programming bugs its wielding party failed to catch and rectify?   Is not a bot simply lines of code written by a human (who may or may not be skilled in a particular code language) that will always be full of errors (because a human authored it)?   Is perhaps the appropriate goal not to protect bots but to incentivize bot creators to make fewer errors and rectify errors once they are found after “shipping” the code?

The authors argue that holding a contracting party accountable for bot malfunction is “unjust”[11] in some circumstances.    Is this consonant with the contract law approach that drafting errors and ambiguities are construed against the drafter?[12]  Is the author/operator of the error-ridden code considered the drafter here?  How is choosing a bad programmer to build your flawed bot different from choosing a bad lawyer to draft the flawed language of your contract?

The analogy of a bot to a rogue beer-fetching dog I found to be a particularly apt one.[13]  Many scholars would argue that, much like having a pet, using a code-based creation such as a bot in contracting is a choice and an assumption of responsibility.  Both dogs and bots are things that are optional and limited in their capacities: we choose to unleash them on the rest of the world.   If a dog or a bot causes harm, even when the owner has not expressly directed it to do so, isn’t it always the owner’s failure to supervise that is to blame?  I fear that comparing a bot to a human of any sort – slave, child, employee – at the current juncture for purposes of crafting law may be premature.  No machine is capable of replicating human behavior smoothly at present.  Will one arrive in the future?  Yes, it is likely.  However, I fear that aggressive untethering of the legal responsibility of the coder from her coded creation may send us down an undesirable path of uncompensated consumer harms in our march toward our brave new cyborg world.[14]

The book’s purposes are ambitious, and I truly enjoyed pondering the questions it raises.  I thank the organizers for allowing me to participate in this symposium.

 


[1] p. 36

[2] p. 31-32

[3] p. 35-6

[4] The authors appear to argue from the perspective that encouraging the use of bots in contracting is a good thing and, as such, merits special legal protection.   While it is clear that digital contracts and physical space contracts are to be afforded legal parity, is it indeed clear that our legislatures and courts have decided to encourage parties to use bots instead of humans in contracting?  Perhaps encouraging the use of more humans in operations and contracting is instead the preferable policy goal and the one that warrants the more protective legal regime?

[5] p. 48

[6] Indeed, the consumer protection analysis that is omnipresent in contract law does not seem to be a dominant thread in the authors’ analysis.  When a sophisticated company using a bot is contracting with a consumer, the power imbalance that already exists between these parties – a traditional concern of contract law – is exacerbated by the presence of the bot and arguably favors protecting the consumer more aggressively in any technology malfunction related to the formation of the contract.

[7] p. 36

[8] See, e.g., UCC §2-305; Eastern Air Lines, Inc. v. Gulf Oil Corporation, 415 F. Supp. 429 (1975).

[9] The situation where a party seeks to gain advantage in a contracting relationship by hacking the other party’s bot, I would argue, is not primarily a contract law question. This is arguably an active computer intrusion best left for analysis under the Computer Fraud and Abuse Act.

[10] p. 50

[11] p.55

[12] I have argued that it is the responsibility of businesses who use code to interact with consumers and other entities to warn of, protect against, and repair the unsafe code environments to which they subject others.

[13] p. 55

[14] As upcoming work with my coauthor Miranda Mowbray  will explain, the most sophisticated Twitter bots have now become quite good at approximating the speech patterns of humans, and humans seem to like interacting with them; however, even they eventually give themselves away as mere code-based creations.  When a Cylon-like code creation finally arrives, it may be nothing like what we expect it to be.


LTAAA Symposium: How Law Responds to Complex Systems

In my first post on A Legal Theory for Autonomous Artificial Agents, I discussed some of the different kinds of complex systems law deals with. I’d like to continue by considering some of the different ways law deals with them.

Chopra and White focus on personhood: treating the entity as a single coherent “thing.” The success of this approach depends not just on the entity’s being amenable to reason, reward, and punishment, but also on it actually cohering as an entity. Officers’ control over corporations is directed to producing just such a coherence, which is a good reason that personhood seems to fit. But other complex systems aren’t so amenable to being treated as a single entity. You can’t punish the market as a whole; if a mob is a person, it’s not one you can reason with. In college, I made this mistake for a term project: we tried to “reward” programs that share resources nicely with each other by giving them more time to execute. Of course, the programs were blithely ignorant of how we were trying to motivate them: there was no feedback loop we could latch on to.

Another related strategy is to find the man behind the curtain. Even if we’re not willing to treat the entity itself as an artificial person, perhaps there’s a real person pulling the levers somewhere. Sometimes it’s plausible, as in the Sarbanes-Oxley requirement that CEOs certify corporate financial statements. Sometimes it’s wishful thinking, as in the belief that Baron Rothschild and the Bavarian Illuminati must be secretly controlling the market. This strategy only works to the extent that someone is or could be in charge: one of the things that often seems to baffle politicians about the Internet is that there isn’t anyone with power over the whole thing.

A subtle variation on the above is to take hostages. Even if the actual leader is impossible to find or control, just grab someone the entity appears to care about and threaten them unless the entity does what you want. This used to be a major technique of international relations: it was much easier to get your hands on a few French nobles and use them as leverage than to tell France or its king directly what to do. The advantage of this one is that it can work even when the entity isn’t under anyone’s control at all: as long as its constituent parts share the motivation of not letting the hostage come to harm, they may well end up acting coherently.

When that doesn’t work, law starts turning to strategies that fight the hypothetical. Disaggregation treats the entity as though it doesn’t exist — i.e., has no collective properties. Instead, it identifies individual members and deals with their actions in isolation. This approach sounds myopic, but it’s frequently required by a legal system committed to something like methodological individualism. Rather than dealing with the mob as a whole, the police can simply arrest any person they see breaking a window. Rather than figuring out what Wikipedia is or how it works, copyright owners can simply sue anyone who uploads infringing material. Sometimes disaggregation even works.

Even more aggressively, law can try destroying the entity itself. Disperse the mob, cancel a company’s charter, or conquer a nation and dissolve its government while absorbing its people. These moves have in common their attempt to stamp out the complex dynamics that give rise to emergent behavior: smithereens can, after all, be much easier to deal with. Julian Assange’s political theory actually operates along these lines: by making it harder for them to communicate in private, he hopes to keep governmental conspiracies from developing entity-level capabilities. For computers, there’s a particularly easy entity-destroying step: the off switch. Destruction is recommended only for bathwater that does not contain babies.

When law is feeling especially ambitious, it sometimes tries dictating the internal rules that govern the entity’s behavior. Central planning is an attempt to take control of the capriciousness of the market by rewiring its feedback loops. (On this theme, I can’t recommend Spufford’s quasi-novel Red Plenty highly enough.) Behavior-modifying drugs take the complex system that is an individual and try to change how it works. Less directly, elections and constitutions try to give nations healthy internal mechanisms.

And finally, sometimes law simply gives up in despair. Consider the market, a system whose vindictive and self-destructive whims law frequently regards with a kind of miserable futility. Or consider the arguments sometimes made about search engine algorithms — that their emergent complexity passeth all understanding. Sometimes these claims are used to argue that government shouldn’t regulate them, and sometimes to argue that even Google’s employees themselves don’t fully understand why the algorithm ranks certain sites the way it does.

My point in all of this is that personhood is hardly inevitable as an analytical or regulatory response to complex systems, even when they appear to function as coherent entities. For some purposes, it probably is worth thinking of a fire as a crafty malevolent person; for others, trying to dictate its internals by altering the supply of flammables in its path makes more sense. (Trying to take hostages to sway a fire is not, however, a particularly wise response.) Picking the most appropriate legal strategy for a complex system will depend on situational, context-specific factors — and upon understanding clearly the nature of the beast.


Camel, Weasel, Whale

Samir Chopra—whom I consider to be something of a pioneer in thinking through the philosophic and legal issues around artificial intelligence—did not much care for my initial thoughts about his and Lawrence White’s new book, A Legal Theory for Autonomous Artificial Agents. The gist of my remarks was that, while interesting and well researched, the book does not deliver on its promise of advancing “a legal theory.” Mostly what the book does (I read the book cover to cover, as you can see!) is identify new and old ways the law might treat complex software to advance various, seemingly unrelated goals. The book is largely about removing conceptual obstacles to treating software as “agreeing,” “knowing,” or “taking responsibility,” should we be inclined to do so in particular cases for independent policy reasons.

In the second chapter of the book, for instance, Chopra and White argue that treating software capable of calculating, offering, and appearing to accept terms as legal agents is not only coherent, but results in greater economic efficiency. The upshot is lesser contractual liability than treating the software as a mere instrument because, in instances where software makes the right kind of mistake, the entity that deployed the software—usually a sophisticated corporation—will not be held to the agreement. In the third chapter, the authors abandon economic efficiency entirely. Here the argument is that we ought to look to agency law in order to attribute more information to corporations because “[o]nly such a treatment would do justice to the reality of the increased power of the corporation, which is a direct function of the knowledge at its disposal.” In other words, by treating its software as agents rather than tools, we can either limit corporate liability for reasons of efficiency, or expand it for reasons of fairness.


The Law Of The Fire

A corporation, it is said, “is no fiction, no symbol, no piece of the state’s machinery, no collective name for individuals, but a living organism and a real person with a body and members and a will of its own.” A ship, described as a “mere congeries of wood and iron,” on being launched, we are told, takes on a personality of its own, a name, volition, capacity to contract, employ agents, commit torts, sue and be sued. Why do lawyers and judges assume thus to clothe inanimate objects and abstractions with the qualities of human beings?

The answer, in part at least, is to be found in characteristics of human thought and speech not peculiar to the legal profession. Men are not realists either in thinking or in expressing their thoughts. In both processes they use figurative terms. The sea is hungry, thunder rolls, the wind howls, the stars look down at night, time is not an abstraction, rather it is “father time” or the “grim reaper”…

Bryant Smith, Legal Personality, 37 Yale Law Journal 283, 285 (1928)


Personhood to artificial agents: Some ramifications

Thank you, Samir Chopra and Lawrence White, for writing this extremely thought-provoking book! Like Sonia Katyal, I too am particularly fascinated by the last chapter, on personhood for artificial agents. The authors have done a wonderful job of explaining the legal constructs that have defined, and continue to define, the notion of according legal personality to artificial agents.

The authors argue that “dependent” legal personality, which has already been accorded to entities such as corporations, temples and ships in some cases, could be easily extended to cover artificial agents. On the other hand,  the argument for according  “independent” legal personality to artificial agents is much more tenuous. Many (legal) arguments and theories exist which are strong  impediments to according such status. The authors categorize these impediments as competencies (being sui juris, having a sensitivity to legal obligations, susceptibility to punishment, capability for contract formation, and property ownership and economic capacity) and philosophical objections (i.e. artificial agents do not possess Free Will, do not enjoy autonomy, or possess a moral sense, and  do not have clearly defined identities), and then argue how they might be overcome legally.

Notwithstanding their conclusion that the courts may be unable or unwilling to take more than a piecemeal approach to extending constitutional protections to artificial agents, it seems clear to me that the accordance of legal personality, both dependent and, to a lesser extent, independent, is not too far into the future. In fact, the aftermath of Gillick v West Norfolk and Wisbech Area Health Authority has shown that various courts have gradually come to accept that dependent minors “gradually develop their mental faculties,” and thus can be entitled to make certain “decisions in the medical sphere.”

We can extend this argument to artificial agents, which are no longer just programmed expert systems but have gradually evolved into self-correcting, learning, and reasoning systems, much like children and some animals. We already know that even small children exhibit these capacities. So do chimpanzees and other primates. Stephen Wise has argued that some animals meet the “legal personhood” criteria, and should therefore be accorded rights and protections. The Nonhuman Rights Project founded by Wise is actively fighting for legal rights for non-human species. As these legal moves evolve and shape common law, the question arises as to when (not if) artificial agents will develop notions of “self,” “morals,” and “fairness,” and on that basis be accorded legal personhood status.

And when that situation arrives, what are the ramifications that we should further consider? I believe the three main “rights” that would have to be considered are reproduction, representation, and termination. We already know that artificial agents (and Artificial Life) can replicate themselves and “teach” the newly created agents. Self-perpetuation can also be considered a form of representation. We also know that under certain well-defined conditions, these entities can self-destruct or cease to operate. But will these aspects gain the status of rights accorded to artificial agents?

These questions lead me to the issues which I personally find fascinating: end-of-life decisions extended to artificial agents. For instance, what would be the role of aging agents of inferior capabilities that nevertheless exist in a vast global network?  What about malevolent agents? When, for instance, would it be appropriate to terminate an artificial agent?  What would be the laws that would handle situations like this, and how would such laws be framed? While these questions seem far-fetched, we are already at a point where numerous viruses and “bots” pervade the global information networks, learn, perpetuate, “reason,” make decisions, and continue to extend their lives and their capacity to affect our existence as we know it. So who would be the final arbiter of end-of-life decisions in such cases? In fact, once artificial agents evolve and gain personhood rights, would it not be conceivable that we would have non-human judges in the courts?

Are these scenarios too far away for us to worry about, or close enough? I wonder…

-Ramesh Subramanian