The Law Of The Fire
posted by Ryan Calo
A corporation, it is said, “is no fiction, no symbol, no piece of the state’s machinery, no collective name for individuals, but a living organism and a real person with a body and members and a will of its own.” A ship, described as a “mere congeries of wood and iron,” on being launched, we are told, “takes on a personality of its own, a name, volition, capacity to contract, employ agents, commit torts, sue and be sued.” Why do lawyers and judges assume thus to clothe inanimate objects and abstractions with the qualities of human beings?
The answer, in part at least, is to be found in characteristics of human thought and speech not peculiar to the legal profession. Men are not realists either in thinking or in expressing their thoughts. In both processes they use figurative terms. The sea is hungry, thunder rolls, the wind howls, the stars look down at night, time is not an abstraction, rather it is “father time” or the “grim reaper”…
Bryant Smith, Legal Personality, 37 Yale Law Journal 283, 285 (1928)
What are the qualities of artificial agents that make them different from the howling wind, the rolling thunder, the staring stars? In A Legal Theory for Autonomous Artificial Agents, Samir Chopra and Laurence White adopt an “intentional stance” toward certain categories of software. According to the authors, “an artificial agent could, and should, be understood as acting for reasons that are the causes of its actions, if such an understanding leads to the best interpretation and prediction of its behavior.” The titular “theory”—the one that “underpins the book”—is that “an agency law approach to artificial agents is cogent, viable, and doctrinally satisfying.” The authors retain this commitment right through the final chapter on personhood: “The best legal argument for denying or granting artificial agents legal personality will be pragmatic rather than conceptual.”
Interesting stuff. But what is the “best interpretation”? What counts as “doctrinally satisfying”? Say I think the state ought to punish a man who sets a fire causing the death of another, even where it cannot be established that the man’s action was willful and malicious. I check the books; there is no crime of negligent arson in my jurisdiction.
And yet… fire has a life and purpose all its own. Predicting fire’s complex behavior means thinking about fire as acting not blindly, but for reasons. “It’s a living thing, Brian,” Robert De Niro’s character tells William Baldwin’s in the 1991 film Backdraft. “It breathes, it eats, and it hates. The only way to beat it is to think like it. To know that this flame will spread this way across the door and up across the ceiling, not because of the physics of flammable liquids, but because it wants to.”
In other words, say I take an intentional stance toward fire. Perhaps I am now free to conclude on this basis that the person who started the fire—unlawfully, yet without an intent to kill anyone—is, in fact, the fire’s guardian or accomplice.
Or say I agree with Matthew Tokson that a user has not shared his email with Google for purposes of the third party doctrine merely because company software automatically filters spam or targets ads. Chopra and White believe that treating this software as an agent of Google will better protect privacy. But were a court to hold that Google’s algorithms “know” my emails the way a human employee would—and, accordingly, that I no longer have a reasonable expectation of privacy under United States v. Miller because I happen to run the spell checker—I might be very dissatisfied indeed.
Chopra asks in his opening post: “Are autonomous robots really just the same as tools like hammers?” I don’t know. That’s the work of a legal theory for autonomous agents. Such a theory requires a full set of criteria for when a mere congeries of code becomes an agent. And it needs some yardstick by which to assess pragmatic or functional goals. Meanwhile, neither the criteria nor the yardstick can rest too heavily on those “characteristics of human thought” that lead people to anthropomorphize the sea, thunder, wind, the stars, and so on.
Don’t get me wrong. There is a lot to like in this book and I recommend it to anyone interested in liability for complex software. I hope it gets the conversation going at the upcoming We Robot conference in Miami. But I’m not convinced that the book advances a theory.