posted by Ryan Calo
What do I and the Yale Law Journal have in common? Not much. Except that we both recently blogged about artificial intelligence on Concurring Opinions. My post was about the capacity of so-called weak artificial intelligence to affect society in the near term. The Yale Law Journal just posted an interesting thought piece by Yale Law student Betsy Cooper on whether Watson—IBM’s artificial intelligence system that recently bested two Jeopardy champions—could beat the courts (well, new textualists) at their own game. Her answer: yes, with qualifications. Thanks to Concurring Opinions, by the way, for inviting me to blog for a second month.
Cooper’s online essay has much to recommend it. She captures both new textualism and Watson’s decision-engine succinctly and well. She argues convincingly that Watson-like analysis of legal text has certain advantages over analysis by humans.1 And she concludes, reasonably enough, that the limitations of artificial intelligence mean that the tool is perhaps better deployed as an aid to decision-making than as a full solution.
Still, you could walk away from the essay thinking that the application of artificial intelligence to judging is much less messy than it actually is. For instance: we tend to give undue weight to computer-generated results. Peter Singer recounts how a soldier once took out a commercial jetliner that a computer weapons system had misidentified as a military aircraft. So I confess I read Cooper’s essay looking in vain for a reference to the work of Robert Cover, something about legal interpretation “tak[ing] place in a field of pain and death,” or how “a judge articulates her understanding of a text, and as a result, somebody loses his freedom, his property, his children, even his life.”
Another interesting angle—perhaps beyond the scope of Cooper’s essay—is whether norms might ultimately be analyzed right alongside textual meaning. It is not as though you cannot express norms in words. This question is distinct, by the way, from the question of whether artificial intelligence will ever “get” human values in some deep sense; it does not need to in order to point to the right result. But if normative principles can indeed be converted into inputs, then perhaps Ronald Dworkin’s famous thought experiment could become a reality. Not Watson, but Hercules.2
Please read Cooper’s fine essay for yourself. This is an emerging area of study, and she is asking, in my view, many of the right questions. As my one-year-old son is fond of saying: “more!”
Next up: should the law require that food made from bugs come with warnings?
1. For instance, like Elizabeth Joh, Cooper notes that at least automated errors are not the product of bias.
2. For a rigorous look at what laws can and cannot be rendered machine-readable, see new work by Harry Surden.