Artificial Clerks

8 Responses

  1. PrometheeFeu says:

    “For instance: we tend to give undue weight to computer-generated results.”

    There is a solution to this: Teach people computer programming. I can tell you that once you know what goes on inside the sausage factory, you start giving a lot less weight to computer-generated results.

  2. Orin Kerr says:

    I worry that Cooper’s paper may be premised on a misunderstanding of textualism. As I see it, textualists don’t think the meaning of statutes can be discerned helpfully from mathematics or from large datasets. Rather, they think the meaning of statutes must be discerned only from the very human process of reading text. Put another way, there is nothing mechanical about textualism; it just wants the human process of divining meaning to focus on the actual words and structure of the statute and not other sources.

    If I’m right about that, then I’m not sure how having a computer crunch numbers would help. The computer can only crunch the numbers that are inputted through the arbitrary choice of a human being in charge of gathering inputs, and then run them through an algorithm designed by a human being. I don’t see how any of that can somehow tell you how a typical person would read a particular piece of text better than an actual person can.

    Cooper’s comparison to Jeopardy seems inapt to me because Jeopardy questions have true and false answers. The existence of true and false answers allows the designers of Watson to design the software and the database to answer correctly the highest percentage of known past questions. With the input and algorithm of Watson tweaked accordingly, there’s a good chance Watson can successfully answer new questions going forward.

    But how do you do that for textualism, given that the “correct” readings of statutes are always subject to debate in every nontrivial case? How do you determine what the input database should be, and how do you tweak the algorithm, without knowing whether any output from Watson can ever be right or wrong?

  3. Daniel Katz says:

    While I really appreciate the spirit of this article, I have to say that the question posed by the author is not actually the critical one.

    The question of our times (with the well-documented problems in the legal employment market) is what Soft to Medium AI means for the market for legal services.

    For those who are interested you can see the balance of my comments here –

    http://computationallegalstudies.com/2011/09/08/judges-in-jeopardy-actually-it-is-lawyers-in-jeopardy/

  4. Ryan Calo says:

    Orin,

    Thanks for your note. I am likely a bad proxy for the author here but I don’t think the technical challenge is always as difficult as you assume.

    Let’s say we ask Watson whether the phrase “cruel and unusual” encompasses the act of placing a person in solitary confinement for a week. At first blush, the inquiry seems awfully subjective—a classic “matter of interpretation.” But Watson could look for patterns of activity in court decisions or newspaper stories that tend to accompany the adjective “cruel.” Watson could also assess how “unusual” the action is by detecting the frequency with which the event has occurred in the past.

    Or take Cooper’s example of the “ordinary meaning” of the phrase “carries a firearm.” Watson can interpret “ordinary” to mean “most frequent” and then set about determining how often courts or others have used the word “carry” in reference to an object actually found in a vehicle, rather than on the owner’s person.
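    The frequency approach described above can be sketched in a few lines of code. This is a toy illustration only: the five sentences, the word lists, and the `classify` helper are all invented for the example, standing in for what would really be a query over a large corpus of court opinions or news text.

    ```python
    import re

    # Hypothetical mini-corpus standing in for court opinions / news stories.
    # A real study would query a large legal corpus, not five made-up sentences.
    corpus = [
        "The defendant carried a firearm in the glove compartment of his car.",
        "She was carrying a firearm in a holster on her hip.",
        "Officers found he carries a firearm in the trunk of the vehicle.",
        "He carried a firearm tucked into his waistband.",
        "The suspect carried a firearm on his person at all times.",
    ]

    CARRY = re.compile(r"\bcarr(?:y|ies|ied|ying)\b", re.IGNORECASE)
    VEHICLE = {"car", "trunk", "vehicle", "glove", "truck"}   # invented cue words
    PERSON = {"hip", "waistband", "person", "holster", "pocket"}

    def classify(sentence):
        """Label a 'carry a firearm' sentence as vehicle- or person-context."""
        if not CARRY.search(sentence):
            return None
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        if words & VEHICLE:
            return "vehicle"
        if words & PERSON:
            return "person"
        return "unclear"

    counts = {"vehicle": 0, "person": 0, "unclear": 0}
    for s in corpus:
        label = classify(s)
        if label:
            counts[label] += 1

    # The more frequent context is a crude proxy for the "ordinary meaning".
    print(counts)
    ```

    Even this crude tally shows the shape of the argument: "ordinary" becomes an empirical claim about relative frequency, which a machine can count, rather than a judgment call.
    
    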

    The trick, I think, is posing the question in a way the computer is capable of answering by leveraging external facts about the world, including the actions and words of real people. (This is also the way the Netflix algorithm recommends movies to me without any deep understanding of my tastes.) Where, as is often the case, the question cannot be posed in a form an algorithm can shed light on, Watson probably should not be consulted. I don’t see Cooper arguing otherwise.

    Ryan

  5. Ryan Calo says:

    Daniel,

    Interesting post. Thanks for sharing it.

    Ryan

  6. Orin Kerr says:

    Ryan writes:

    *******
    Watson could look for patterns of activity in court decisions or newspaper stories that tend to accompany the adjective “cruel.” Watson could also assess how “unusual” the action is by detecting the frequency with which the event has occurred in the past.
    *******

    Isn’t this quite easy to calculate without Watson? It just requires a Westlaw search. And yet no one bothers to even conduct the Westlaw search, because the answers a Westlaw search would reveal are at worst just noise and at best legally irrelevant.

    Ryan next says:

    *******
    Or take Cooper’s example of the “ordinary meaning” of the phrase “carries a firearm.” Watson can interpret “ordinary” to mean “most frequent” and then set about determining how often courts or others have used the word “carry” in reference to an object actually found in a vehicle, rather than on the owner’s person.
    *******

    Again, isn’t this easy to do with a Westlaw search? No one does it because the results would be at best noise and at worst legally irrelevant.

    To be clear, I’m not saying that all things involving computers are always useless. Westlaw is great, and I’ve used it to conduct empirical studies about decided cases and the like. I like to think the empirical analysis I did using Westlaw was rather useful. But the key question is whether Watson can generate some useful insight that a Westlaw query can’t, and at this point I can’t imagine what that useful insight would be.

  7. Ryan Calo says:

    Thanks, Orin. I blame my poor hypothetical for failing to spark your imagination.

    I will say that computers are really good at spotting patterns where people see only noise or irrelevance. That is how MIT researchers were able to write a program that can guess a person’s sexual orientation by looking at a list of his Facebook friends. It is also part of the reason why computer algorithms do the majority of stock trading today (the other being far greater speed).

    Where, as often, questions of interpretation turn on facts about the world, sophisticated platforms like Watson capable of making sense of enormous data sets could very well have a role. Please don’t let my clumsy examples suggest otherwise.

  8. Brett Bellmore says:

    It seems to me that one of the huge advantages of computer-aided judging is that you could conceivably allow people to run things past the system in advance. Including the legislators…

    It would be nice if interpreting the law to avoid bad results didn’t require deliberately attributing non-obvious meanings to words and phrases. We OUGHT to be writing the laws so that they could be mechanically applied without horrible results.
