Author: Harry Surden


Predicting Supreme Court Outcomes Using AI?

Is it possible to predict the outcomes of legal cases – such as Supreme Court decisions – using Artificial Intelligence (AI)? I recently had the opportunity to consider this question in a talk entitled “Machine Learning Within Law” that I gave at Stanford.

At that talk, I discussed a very interesting new paper entitled “Predicting the Behavior of the Supreme Court of the United States” by Prof. Dan Katz (Mich. State Law),  Data Scientist Michael Bommarito,  and Prof. Josh Blackman (South Texas Law).

Katz, Bommarito, and Blackman used machine-learning AI techniques to build a computer model capable of predicting the outcomes of arbitrary Supreme Court cases with an accuracy of about 70% – a strong result.  This post will discuss their approach and why it was an improvement over prior research in this area.

Quantitative Legal Prediction

The general idea behind such approaches is to use computer-based analysis of existing data (e.g. data on past Supreme Court cases) in order to predict the outcome of future legal events (e.g. pending cases).  This approach of using data to inform legal predictions (as opposed to pure lawyerly analysis) has been largely championed by Prof. Katz – something that he has dubbed “Quantitative Legal Prediction” in recent work.

Legal prediction is an important function that attorneys perform for clients. Attorneys predict all sorts of things, ranging from the likely outcome of pending cases, risk of liability, and estimates about damages, to the importance of various laws and facts to legal decision-makers.   Attorneys use a mix of legal training, problem-solving, analysis, experience, analogical reasoning, common sense, intuition and other higher order cognitive skills to engage in sophisticated, informed assessments of likely outcomes.

By contrast, the quantitative approach takes a different tack: using advanced algorithms to analyze data and produce data-driven predictions of legal outcomes (instead of, or in addition to, traditional legal analysis).  These data-driven predictions can provide additional information to support attorney analysis.

Predictive Analytics: Finding Useful Patterns in Data

Outside of law, predictive analytics has been widely applied to produce automated predictions in multiple contexts.  Real-world examples of predictive analytics include: the automated product recommendations made by Amazon.com, the movie recommendations made by Netflix, and the search terms automatically suggested by Google.

Scanning Data for Patterns that Are Predictive of Future Outcomes

In general, predictive analytics approaches use advanced computer algorithms to scan large amounts of data to detect patterns.  These patterns can often be used to make intelligent, useful predictions about never-before-seen future data.  Many of these approaches employ “Machine Learning” techniques to engage in prediction. (I have written about some of the ways that machine-learning based analytical approaches are starting to be used within law and the legal system here.)
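To make this concrete, here is a minimal sketch of that general workflow in Python, using the scikit-learn library. The case features, their encoding, and the outcome labels are hypothetical stand-ins invented for illustration – the actual Katz, Bommarito, and Blackman model uses a much richer feature set and methodology.

```python
# Toy sketch of quantitative legal prediction (not the authors' actual model).
# The features and labels below are hypothetical stand-ins for real case data.
from sklearn.ensemble import RandomForestClassifier

# Each row encodes one hypothetical case:
# [issue_area_code, lower_court_ruling, term_year]
past_cases = [
    [2, 0, 2010],
    [5, 1, 2011],
    [2, 1, 2012],
    [7, 0, 2012],
]
outcomes = [1, 0, 1, 1]  # 1 = Court reverses the lower court, 0 = affirms

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_cases, outcomes)  # learn patterns from past cases

pending_case = [[5, 0, 2013]]
print(model.predict(pending_case))        # predicted outcome for the new case
print(model.predict_proba(pending_case))  # estimated outcome probabilities
```

The same basic pattern – fit a model on historical examples, then query it on a pending case – underlies the paper’s far more sophisticated approach.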


Computable Contracts Explained – Part II

This is the second part of a series explaining “computable contracts.”  For more about what a computable contract is, please see the first part here.

Overview

In the last post, I defined computable contracts as contracts designed so that a computer can (metaphorically) “understand” what is being promised and under what terms and conditions.

We can think of a computable contract as a partial workaround to a particular technological limitation: computers cannot reliably understand “traditional” English-language contracts.

The goal of this second part is to explain the intuition behind how an ordinary contract can become a computable contract.

Three Steps to Computable Contracting

There are three steps to creating a computable contract:

1) Data-Oriented Contracting

2) Semantic Contract Terms

3) Automated Assessment of Contract Terms

I will discuss each of these steps in turn.

 

Step 1 – Data-Oriented Contracting

Recall the primary problem of Part I – computers cannot understand traditional contracts because they are written in languages like English. Computers are not nearly as good as people at understanding documents expressed in “natural” written language (i.e. the “Natural Language Processing Problem”).

What is the solution?  Write the contract in the language of computers, not in the language of people.

This is known as data-oriented contracting (my terminology).  In a data-oriented contract, the contracting parties deliberately express contract terms, conditions and promises, not in English, but as data – the language of computers.

This partially gets around the natural language processing problem – because we are not asking the computer to understand English language contracts.  Rather, the parties are deliberately expressing their contract in a format amenable to computers – structured data.

Example of Data-Oriented Contracting

What does it mean to express a contract term as computer data?

Let’s continue with our earlier example: an option contract with an expiration date. Let’s imagine that one contracting party has the option to purchase 100 shares of Apple stock for $400 per share from the other, and that this option to buy the Apple stock expires on January 15, 2015.

Recall that one of the impediments to computers understanding traditional contracts was the flexibility of natural languages.  A party crafting this provision in English could express the idea of an expiration date in innumerable ways.  One might write,  “This contract expires on January 15, 2015”, or “This contract is no longer valid after January 15, 2015”, or “The expiration date of this option is January 15, 2015.”

These are all reasonably equivalent formulations of the same concept – an expiration date. People are very adept at understanding such linguistic variations.  But computers, not so much.

A Data-Oriented Term Equivalent?

Is there a way that we can express essentially the same information, but also make it reliably extractable by a computer?   In some cases, the answer is yes.   The parties simply need to express their contract term (the option expiration date) as highly structured computer data.

For instance, the equivalent of an “expiration provision”, expressed as structured data, might look something like this:

<Option_Expiration_Date :  01-15-2015>

The parties made this contract term readable by a computer by agreeing to always express the concept of an “expiration date” in one specific, rigid, highly structured way (as opposed to the linguistic flexibility of natural languages like English).

If contracting parties can agree to express data in such a standard way, a computer can be given a set of rules by which it can reliably extract contract data such as expiration dates, buyers, sellers, etc.
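As an illustration, here is a minimal sketch in Python of the kind of rule a computer could be given. It parses the angle-bracket format shown above (that format is simply the hypothetical convention from this example) and then performs a simple automated assessment of the term.

```python
# Minimal sketch: extract a structured contract term and assess it.
# Assumes the parties agreed on the <Field : Value> convention shown above.
import re
from datetime import date

term = "<Option_Expiration_Date :  01-15-2015>"

# One rigid parsing rule suffices, because the agreed format never varies.
match = re.match(r"<Option_Expiration_Date\s*:\s*(\d{2})-(\d{2})-(\d{4})>", term)
month, day, year = (int(g) for g in match.groups())
expiration = date(year, month, day)

# Automated assessment: has the option expired as of today?
if date.today() > expiration:
    print("Option has expired")
else:
    print("Option is still exercisable")
```

Note how little “understanding” the computer needs here: the reliability comes entirely from the parties’ agreement to a fixed format.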

Endowing Data with Legal Meaning

You might wonder: how does a piece of computer data like <Option_Expiration_Date : 01-15-2015> acquire the legally significant meaning necessary for contracting?

There are essentially two ways that this happens.  First, the contracting parties might get together ahead of time and agree that computer data in the format “<Option_Expiration_Date : Date>” should always be interpreted as “the option contract will expire after the date listed.”

Alternatively, the parties might agree to adhere to a pre-existing data standard in which contract terms have been previously well defined. Standardization groups often design standards and protocols that others can adhere to.  For example, many modern, electronically-traded financial contracts are expressed as data according to the predefined FIX protocol and other data standards.

Pretty much any computer data format or language can be used for this purpose, as long as it is structured (has a well-defined, consistent format).  For example, others have written about using the structured data format XML for this purpose.

Note that data-oriented contracting is not always conducted completely as computer data, but rather can involve a mix of “traditional” English contracts and data-oriented contracts.  Data-oriented contracting is sometimes built upon traditional, English-language “master agreements,” which serve as the foundation for subsequent electronic, data-oriented contracting.

In sum, the first step to creating a computable contract is data-oriented contracting.  This means that contracting parties express some or all of their contract terms as data (the language of computers), rather than in legal-English (the language of people).

 

Step 2 – Semantic Contract Terms

We just discussed how people come to understand the meaning of contract terms expressed as data.   How do computers come to understand the meaning of such contract terms? The second step to creating computable contracts is to create “Semantic Contract Terms.”


Computable Contracts Explained – Part 1

I had the occasion to teach “Computable Contracts” to the Stanford Class on Legal Informatics recently.  Although I have written about computable contracts here, I thought I’d explain the concept in a more accessible form.

I. Overview: What is a Computable Contract?

What is a Computable Contract?   In brief, a computable contract is a contract that a computer can “understand.” In some instances, computable contracting enables a computer to automatically assess whether the terms of a contract have been met.

How can computers understand contracts?  Here is the short answer (a more in-depth explanation appears below).  First, the concept of a computer “understanding” a contract is largely a metaphor.   The computer does not understand the contract at the same deep conceptual or symbolic level as a literate person; rather, it “understands” it in a more limited sense.  Contracting parties express their contract in the language of computers – data – which allows the computer to reliably identify the contract components and subjects.  The parties also provide the computer with a series of rules that allow the computer to react in a sensible way that is consistent with the underlying meaning of the contractual promises.

Aren’t contracts complex, abstract, and executed in environments of legal and factual uncertainty?  Some are, but some aren’t. The short answer here is that the contracts that are made computable don’t involve the abstract, difficult or relatively uncertain legal topics that tend to occupy lawyers.  Rather (for the moment at least), computers are typically given contract terms and conditions with relatively well-defined subjects and determinable criteria that tend not to involve significant legal or factual uncertainty in the average case.

For this reason, there are limits to computable contracts: only small subsets of contracting scenarios can be made computable.  However, it turns out that these contexts are economically significant. Not all contracts can be made computable, but importantly, some can.

Importance of Computable Contracts 

There are a few reasons to pay attention to computable contracts.   For one, they have been quietly appearing in many industries, from finance to e-commerce.  Over the past 10 years, for instance, many modern contracts to purchase financial instruments (e.g. equities or derivatives) have transformed from traditional contracts to electronic, “data-oriented” computable contracts.   Were you to examine a typical contract to purchase a standardized financial instrument these days, you would find that it looked more like a computer database record (i.e. computer data), and less like lawyerly writing in a Microsoft Word document.

Computable contracts also have new properties that traditional, English-language, paper contracts do not have.  I will describe this in more depth in the next post, but in short, computable contracts can serve as inputs to other computer systems.  These other systems can take computable contracts and do useful analysis not readily done with traditional contracts. For instance, a risk management system at a financial firm can take computable contracts as direct inputs for analysis, because, unlike traditional English contracts, computable contracts are data objects themselves.
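As a rough sketch of this last point: once contracts are data objects, downstream systems can consume them directly. The record fields and the exposure calculation below are hypothetical illustrations, not any particular firm’s schema, but they show the kind of aggregate analysis that becomes trivial once contracts are data rather than prose.

```python
# Toy sketch: computable contracts as direct inputs to another system.
# The field names and exposure formula are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class OptionContract:
    buyer: str
    seller: str
    ticker: str
    shares: int
    strike_price: float
    expiration: str  # "YYYY-MM-DD"

portfolio = [
    OptionContract("FirmA", "FirmB", "AAPL", 100, 400.0, "2015-01-15"),
    OptionContract("FirmA", "FirmC", "AAPL", 50, 410.0, "2015-02-20"),
]

# Because each contract is structured data, a risk system can aggregate
# across the whole portfolio in one pass - no human reading required.
total_exposure = sum(c.shares * c.strike_price for c in portfolio)
print(f"Total notional exposure: ${total_exposure:,.2f}")
```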

II. Computable Contracts in More Detail

Having had a brief overview of computable contracts, the next few parts will discuss computable contracts in more detail.

A. What is a Computable Contract?

To understand computable contracts, it is helpful to start with a simple definition of a contract generally. 

A contract (roughly speaking) is a promise to do something in the future, usually according to some specified terms or conditions, with legal consequences if the promise is not performed.   For example, “I promise to sell you 100 shares of Apple stock for $400 per share on January 10, 2015.”

A computable contract is a contract that has been deliberately expressed by the contracting parties in such a way that a computer can:

1) understand what the contract is about;

2) determine whether or not the contract’s promises have been complied with (in some cases).

How can a computer “understand” a contract, and how can compliance with legal obligations be “computed” electronically?

To comprehend this, it is crucial to first appreciate the particular problems that computable contracts were developed to address.


Supreme Court Gives Patent Law New Bite (Definiteness)

I want to thank Danielle Citron and the other folks at Concurring Opinions for inviting me to blog.  As Danielle mentioned in her introduction, I am a law professor at the University of Colorado Law School focused on technology and law.  (More info about me is here: http://harrysurden.com; Twitter: @HarrySurden).

Patent Law’s Definiteness Requirement Has New Bite

The Supreme Court may have shaken up patent law quite a bit with its recent opinion in the Nautilus v. Biosig case (June 2, 2014).

At issue was patent law’s “definiteness” requirement, which is related to patent boundaries. As I (and others) have argued, uncertainty about patent boundaries (due to vague, broad, and ambiguous claim language) and lack of notice as to the bounds of patent rights are major problems in patent law.

I will briefly explain patent law’s definiteness requirement, and then how the Supreme Court’s new definiteness standard may prove to be a significant change in patent law. In short – many patent claims – particularly those with vague or ambiguous language – may now be vulnerable to invalidity attacks following the Supreme Court’s new standard.

Patent Claims: Words Describing Inventions

In order to understand “definiteness”, it’s important to start with some patent law basics.  Patent law gives the patent holder exclusive rights over inventions – the right to prevent others from making, selling, or using a patented invention.  How do we know what inventions are covered by a particular patent?  They are described in the patent claims. 

Notably, patent claims describe the inventions that they cover using (primarily) words.

For instance, in the Supreme Court case at issue, the patent holder – Biosig – patented an invention – a heart-rate monitor.  Their patent used the following claim language to delineate their invention:

“I claim a heart rate monitor for use in association with exercise apparatus comprising…

a live electrode

and a first common electrode mounted on said first half

in spaced relationship with each other…”


So basically, the invention claimed was the kind of heart rate monitor that you might find on a treadmill.   The portion of the claim above described one part of the overall invention – two electrodes separated by some amount of space.  Presumably the exercising person holds on to these electrodes as she exercises, and the device reads the heart rate.

(Note: only a small part of the patent claim is shown – the actual claim is much longer.)

Patent Infringement: Comparing Words to Physical Products

So what is the relationship between the words of a patent claim and patent infringement?

In a typical patent infringement lawsuit, the patent holder alleges that the defendant is making or selling some product or process (here a product) that is covered by the language of a patent claim (the “accused product”).  To determine literal patent infringement, we compare the words of the patent claim to the defendant’s product, to see if the defendant’s product corresponds to what is delineated in the plaintiff’s patent claims.

For instance, in this case, Biosig alleged that Nautilus was selling a competing, infringing heart-rate monitor.  Literal patent infringement would be determined by comparing the words of Biosig’s patent claim (e.g. “a heart rate monitor with a live electrode…”) to a physical object – the competing heart-rate monitor product that Nautilus was selling (e.g. does Nautilus’ heart-rate monitor have a part that can be considered a “live electrode”?).

Literal patent infringement is determined by systematically marching through each element (or described part) in Biosig’s patent claim, and comparing it to Nautilus’s competing product. If Nautilus’ competing product has every one of the “elements” (or parts) listed in Biosig’s patent claim, then Nautilus’s product would literally infringe Biosig’s patent claim.
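To make the element-by-element logic concrete, here is a toy sketch in Python. The claim elements and product features are simplified, hypothetical stand-ins – real infringement analysis turns on legal claim construction, not string matching – but the all-elements structure of the test is exactly this.

```python
# Toy model of literal infringement: every element of the patent claim
# must be present in the accused product. The elements and features here
# are hypothetical stand-ins; real analysis requires claim construction.
claim_elements = {"live electrode", "common electrode", "spaced relationship"}

accused_product_features = {
    "live electrode",
    "common electrode",
    "spaced relationship",
    "display screen",  # extra features do not avoid infringement
}

# Literal infringement only if ALL claim elements appear in the product.
literally_infringes = claim_elements.issubset(accused_product_features)
print(literally_infringes)  # True: every claimed element is present
```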

If patent infringement is found, a patent holder can receive damages or in some cases, use the power of the court  to prevent the competitor from selling the product through an injunction.

Patent Claims – A Delicate Balance with Words

Writing patent claims involves a delicate balance.  On the one hand, a patent claim must be written in broad enough language that such a patent claim will cover competitors’ future products.

Why?  Well, imagine that Biosig had written their patent claim narrowly.  This would mean that in place of the broad language actually used (e.g. “electrodes in a spaced relationship”), Biosig had instead described the particular characteristics of the heart-rate monitor product that Biosig sold.  For instance, if Biosig’s heart-rate monitor product had two electrodes that were located exactly 4 inches apart, Biosig could have written their patent claim with language saying, “We claim a heart rate monitor with two electrodes exactly 4 inches apart,” rather than the general language they actually used – two electrodes separated by a “spaced relationship.”

However, had Biosig written such a narrow patent, it might not be commercially valuable.  Competing makers of heart rate monitors such as Nautilus could easily change their products to “invent around” the claim so as not to infringe. A competitor might be able to avoid literally infringing by creating a heart-rate monitor with electrodes that were 8 inches apart.  For literal infringement purposes, a device with electrodes 8 inches apart would not literally infringe a patent that claims electrodes “exactly 4 inches apart.”

From a patent holder’s perspective, it is not ideal to write a patent claim too narrowly, because for a patent to be valuable, it has to be broad enough to cover the future products of your competitors in such a way that they can’t easily “invent around” and avoid infringement.  A patent claim is only as valuable (trolls aside) as the products or processes that fall under the patent claim words.  If you have a patent, but its claims do not cover any actual products or processes in the world because it is written too narrowly, it will not be commercially valuable.

Thus, general or abstract words (like “spaced relationship”) are often beneficial for patent holders, because they are often linguistically flexible enough to cover more variations of competitors’ future products.

Patent Uncertainty – Bad for Competitors (and the Public)

By contrast, general, broad, or abstract claim words are often not good for competitors (or the public generally).  Patent claims delineate the boundaries or “metes-and-bounds” of patent legal rights.  Other firms would like to know where their competitors’ patent rights begin and end.  This is so that they can estimate their risk of patent liability, know when to license, and in some cases, make products that avoid infringing their competitors’ patents.

However, when patent claim words are abstract, or highly uncertain, or have multiple plausible interpretations, firms cannot easily determine where their competitor’s patent rights end, and where they have the freedom to operate.  This can create a zone of uncertainty around research and development generally in certain areas of invention, perhaps reducing overall inventive activity for the public.


Autonomous Agents and Extension of Law: Policymakers Should be Aware of Technical Nuances

This post expands upon a theme from Samir Chopra and Lawrence White’s excellent and thought-provoking book – A Legal Theory for Autonomous Artificial Agents.  One question pervades the text: to what extent should lawmakers import or extend existing legal frameworks to cover the activities of autonomous (or partially autonomous) computer systems and machines?   These are legal frameworks that were originally created to regulate human actors.  For example, the authors query whether the doctrines and principles of agency law can be mapped onto actions carried out by automated systems on behalf of their users.  As the book notes, autonomous systems are already an integral part of existing commercial areas (e.g. finance) and may be poised to emerge in others over the next few decades (e.g. autonomous, self-driving automobiles). However, it is helpful to further expand upon one dimension raised by the text: the relationship between the technology underlying autonomous agents, and the activity or results produced by that technology.

Two Views of Artificial Intelligence

The emergence of partially autonomous systems – computer programs (or machines) carrying out activities at least partially in a self-directed way, on behalf of their users – is closely aligned with the field of Artificial Intelligence (AI) and developments therein. (AI is a sub-discipline of computer science.) What is the goal of AI research? There is probably no universally agreed-upon answer to this question, as there has been a range of approaches and criteria for systems considered to be successful advances in the field. However, some AI researchers have helpfully clarified two dimensions along which we can think about AI developments. Consider a spectrum of possible criteria under which one might label a system to be a “successful” AI product:

View 1) We might consider a system to be artificially intelligent only if it produces “intelligent” results based upon processes that model, approach, or replicate the high-level cognitive abilities or abstract reasoning skills of humans; or

View 2) We might instead evaluate a system primarily based upon the quality of the output it produces – if it produces results that humans would consider accurate and helpful – even if those results came about through processes that do not necessarily model, approach, or resemble actual human cognition, understanding, or reasoning.

We can understand the first view as being concerned with creating systems that replicate, to some degree, something approaching human thinking and understanding, whereas the second is more concerned with producing results or output from computer agents that would be considered “intelligent” and useful, even if produced from systems that likely do not approach human cognitive processes. (Russell and Norvig, Artificial Intelligence: A Modern Approach (3d ed. 2009), 1–5). These views represent poles on a spectrum, and many actual positions fall in between. However, this distinction is more than philosophical.  It has implications for the sensibility of extending existing legal doctrines to cover the activities of artificial agents. Let us consider each view briefly in turn, and some possible implications for law.

View 1 – Artificial Intelligence as Replicating Some or All Human Cognition

The first characterization – that computer systems will be successful within AI when they produce activities resulting from processes approaching the high-level cognitive abilities of humans – is an expansive and perhaps more ambitious characterization of the goals of AI. It also seems to be the one most closely associated with the view of AI research in the public imagination. In popular culture, artificially intelligent systems replicate and instantiate – to varying degrees – the thinking faculties of humans (e.g. the ability to engage in abstract thought, carry on an intelligent conversation, or understand or philosophize concerning concepts at a depth associated with intelligence). I raise this variant primarily to note that, despite (what I believe is) a common lay view of the state of the research, this “strong” vision of AI is not something that has been realized (or is necessarily near realization) in the existing state-of-the-art systems that are considered successful products of AI research. As I will suggest shortly, this nuance may not be something within the awareness of lawmakers and judges who will be the arbiters of decisions concerning systems that are labeled artificially intelligent.  Although AI research has not yet produced artificial human-level cognition, that does not mean that AI research has been unsuccessful.  Quite to the contrary – over the last 20 years AI research has produced a series of more limited, but spectacularly successful, systems as judged by the second view.

View 2 – “Intelligent” Results (Even if Produced by Non-Cognitive Processes)

The second characterization of AI is perhaps more modest, and can be considered more “results oriented.”  This view considers a computer system (or machine) to be a success within artificial intelligence based upon whether it produces output or activities that people would agree (colloquially speaking) are “good,” “accurate,” and “look intelligent.”  In other words, a useful AI system in this view is characterized by results or output that are likely to approach or exceed what would have been produced by a human performing the same task.  Under this view, if the system or machine produces useful, human-like results, it is a successful AI machine – irrespective of whether those results were produced by a computer-based process instantiating or resembling human cognition, intelligence, or abstract reasoning.

In this second view, AI “success” is measured based upon whether the autonomous system produces “intelligent” (or useful) output or results.  We can use what would be considered “intelligent” conduct of a similarly situated human as a comparator. If a modern autopilot system is capable of landing airplanes in difficult conditions (such as thick fog) at a success rate that meets or exceeds human pilots under similar conditions, we might label it a successful AI system under this second approach. This would be the case even if we all agreed that the autonomous autopilot system did not have a meaningful understanding of the concepts of “airplanes,” “runways,” or “airports.” Similarly, we might label IBM’s Jeopardy-playing “Watson” computer system a successful AI system, since it was capable of producing highly accurate answers to a surprisingly wide and difficult range of questions – the same answers that strong human Jeopardy champions would have produced. However, there is no suggestion that Watson’s results came from the same high-level cognitive understanding and processes that likely animated the results of human champions like Ken Jennings. Rather, Watson’s accurate output came from techniques such as highly sophisticated statistical machine-learning algorithms that were able to quickly rank possible candidate answers through immense parallel processing of large amounts of existing written documents that happened to contain a great deal of knowledge about the world.

Machine-Translation: Automated Translation as an Example

To understand this distinction between AI views rooted in computer-based cognition and those rooted in “intelligent” or accurate results, it is helpful to examine the history of computer-based language translation (e.g. English to French). Translation (at least superficially) appears to be a task deeply connected to the human understanding of the meaning of language, and the conscious replication of that meaning in the target language. Early approaches to machine translation followed this cue, and sought to convey to the computer system aspects – like the rules of grammar in both languages, and the pairing of words with the same meanings in both languages – that might mimic the internal structures undergirding human cognition and translation. However, this meaning- and rules-based approach to translation proved limited, and surprised researchers by producing somewhat poor results based upon the rules of matching and syntactical construction. Such systems had difficulty determining whether the word “plant” in English should be translated to the equivalent of “houseplant” or “manufacturing plant” in French. Further efforts attempted to “teach” the computer rules about how to understand and make more accurate distinctions for ambiguously situated words, but still did not produce marked improvements in translation quality.

Machine Learning Algorithms: Using Statistics to Produce Surprisingly Good Translations

However, over the last 10-15 years, a markedly different approach to computer translation emerged – made famous by Google and others. This approach was not primarily based upon top-down communication of knowledge to a computer system (e.g. language pairings and rules of meaning). Rather, many of the successful translation techniques developed were largely statistical in nature, relying on machine-learning algorithms to scour large amounts of data and create a complex representation of correlations between languages. Google Translate – and other similar statistical approaches – work in part by leveraging vast amounts of data that have previously been translated by humans. For example, the United Nations and the European Union frequently translate official documents into multiple languages using professional translators. This “corpus” of millions of paired, translated documents became publicly available electronically over the last 20 years to researchers. Systems such as Google Translate are able to process vast numbers of documents and leverage these paired translations to create statistical models that are able to produce surprisingly accurate translation results – using probabilities – for arbitrary new texts.
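As a rough sketch of the statistical intuition (a drastic simplification of such systems, using a made-up three-sentence corpus): simple co-occurrence counts over paired sentences can, by themselves, begin to suggest likely word translations, with no grammar rules or word meanings supplied at all.

```python
# Toy sketch of the statistical-MT intuition: count how often words
# co-occur across a (made-up) parallel corpus. Real systems use far more
# sophisticated alignment and language models over millions of documents.
from collections import Counter, defaultdict

parallel_corpus = [
    ("the plant grows", "la plante pousse"),
    ("the plant closed", "l'usine a ferme"),
    ("the green plant", "la plante verte"),
]

cooccurrence = defaultdict(Counter)
for english, french in parallel_corpus:
    for e_word in english.split():
        for f_word in french.split():
            cooccurrence[e_word][f_word] += 1

# The French words that most often co-occur with "plant" are the best
# first statistical guesses (real models also filter out common words).
print(cooccurrence["plant"].most_common(3))
```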

Machine-Learning Models: Producing “intelligent”, highly useful results 

The important point is that these statistical and probability-based machine-learning models (often combined with logical, knowledge-based rules about the world) often produce high-quality and effective results (not quite on par with nuanced human translators at this point), without any assertion that the computers are engaging in profound understanding of the underlying “meaning” of the translated sentences, or employing processes whose analytical abilities approach human-level cognition (e.g. View 1). (It is important to note that the machine-learning translation approach does not achieve translation on its own, but “leverages” previous human cognition through the efforts of the original UN translators who made the paired translations.)  Thus, for certain, limited tasks, these systems have shown that it is possible for contemporary autonomous agents to produce “intelligent” results without relying upon what we would consider processes approaching human-level cognition.

Distinguishing “intelligent results” and actions produced via cognitive intelligence

The reason to flag this distinction is that such successful AI systems (as judged by their results) will pose a challenge to the task of importing and extending existing legal doctrinal frameworks (which were mostly designed to regulate people) into the domain of autonomous computer agents.  Existing “View 2” systems that produce surprisingly sophisticated, useful, and accurate results without approaching human cognition are the basis of many products now emerging from earlier AI research and are becoming integrated (or are poised to become integrated) into everyday life.  These include IBM’s Watson, Apple’s Siri, Google Search – and in perhaps the next decade or two – Stanford’s/Google’s autonomous self-driving cars and autonomous music-composing software.  These systems often use statistics to leverage existing, implicit human knowledge.  Since these systems produce output or activities that in some cases appear to approach or exceed humans in particular tasks, and the results that are autonomously produced are often surprisingly sophisticated and seemingly intelligent, such “results-oriented,” task-specific (e.g. driving, answering questions, landing planes) systems seem to be the near path of much AI research.

However, the fact that these intelligent-seeming results do not come from systems approaching human cognition is a nuance that should not be lost on policymakers (and judges) seeking to develop doctrine in the area of autonomous agents. Much – perhaps most – of law is designed and intended to regulate the behavior of humans (or organizations run by humans).  Thus, embedded in many existing legal doctrines are underlying assumptions about cognition and intentionality that are so basic that they are often never articulated.  The implicitness of such assumptions may make them easy to overlook.

Given current trends, many contemporary (and likely future) AI systems that will be integrated into society (and therefore more likely the subject of legal regulation) will use algorithmic techniques focused upon producing “useful results” (View 2), rather than aiming at replicating human-level cognition, self-reflection, and abstraction (View 1).  If lawmakers merely follow the verbiage (e.g. a system that has been labeled “artificially intelligent” did X or resulted in Y) and employ only a superficial understanding of AI research, without more closely understanding these technical nuances, there is the possibility of conflation in extending existing legal doctrines to circumstances based upon “intelligent-seeming” autonomous results.   For example, the book’s authors explore the concept of requiring fiduciary duties on the part of autonomous systems in some circumstances. But it will take a careful judge or lawmaker to distinguish existing fiduciary/agency doctrines with embedded (and often unarticulated) assumptions of human-level intentionality among agents (e.g. self-dealing) from those that may be more functional in nature (e.g. duties to invest trust funds). In other words, an in-depth understanding of the technology underlying particular autonomous agents should not be viewed as a merely technical issue.   Rather, it is a serious consideration that lawmakers should understand in some detail in any decision to extend or create new legal doctrine from our existing framework to cover situations involving autonomous agents.