Author Archive for james-grimmelmann
posted by James Grimmelmann
In my Jotwell review of Coding Freedom, I commented that “Coleman’s portrait of how hackers become full-fledged members of Debian is eerily like legal education.”
[T]he hackers who are trained in it go through a prescribed course of study in legal texts, practice applying legal rules to new facts, learn about legal drafting, interpretation, and compliance, and cultivate an ethical and public-spirited professional identity. There is even a written examination at the end.
This is legal learning without law school. Coleman’s hackers are domain-specific experts in the body of law that bears on their work. It should be a warning sign that a group of smart and motivated lay professionals took a hard look at the law, realized that it mattered intensely to them, and responded not by consulting lawyers or going to law school but by building their own parallel legal education system. That choice is an indictment of the services lawyers provide and of the relevance of the learning law schools offer. A group of amateurs teaching each other did what we weren’t.
Their success is an opportunity as well as a challenge. The inner sanctums of the law, it turns out, are more accessible to the laity than sometimes assumed. One response to the legal services crisis would be to give more people the legal knowledge and tools to solve some of their own legal problems. The client who can’t afford a lawyer’s services can still usually afford her own. More legal training for non-lawyers might or might not make a dent in law schools’ budget gaps. But it is almost certainly the right thing to do, even if it reduces the demand for lawyers’ services among the public. There is no good reason why law schools can impart legal knowledge only by way of lawyers and not directly.
Hacker education, however, also shows why lawyers and the traditional missions of law schools are not going away. Law is a blend of logic and argument, a baseball game that depends on persuading the umpire to change the rules mid-pitch. Hacker legal education, with its roots in programming, is strong on formal precision and textual exegesis. But it is notably light on legal realism: coping with the open texture of the law and sorting persuasive from ineffective arguments. The legal system is not a supercomputer that can be caught in a paradox. The professional formation of lawyers is absent in hacker education, because theirs is a different profession.
Legal academics also play a striking role in hacker legal education. Richard Stallman was of course the driving personality behind free software. But Columbia’s Eben Moglen had an absolutely crucial role in
crafting the closest thing the free software movement has to a constitution: the GNU GPL. And Coleman documents the role that Larry Lessig’s consciousness-raising activism played in politicizing hackers about copyright policy. They, and other professors who have helped the free software community engage with the law, like Pamela Samuelson, in turn, drew heavily on the legal scholarly tradition even as they translated it into more practical terms. The freedom to focus on self-chosen projects of long-term importance to society is a right and responsibility of the legal academic. Even if not all of us have used it as effectively as these three, it remains our job to try.
posted by James Grimmelmann
Wired’s Kevin Poulsen has a great story whose title tells it all: Use a Software Bug to Win Video Poker? That’s a Federal Hacking Case. Two alleged video-poker cheats, John Kane and Andre Nestor, are being prosecuted under the Computer Fraud and Abuse Act, 18 U.S.C. § 1030. Theirs is a hard case, and it is hard in a way that illustrates why all CFAA cases are hard.
posted by James Grimmelmann
In my first post on A Legal Theory for Autonomous Artificial Agents, I discussed some of the different kinds of complex systems law deals with. I’d like to continue by considering some of the different ways law deals with them.
Chopra and White focus on personhood: treating the entity as a single coherent “thing.” The success of this approach depends not just on the entity’s being amenable to reason, reward, and punishment, but also on it actually cohering as an entity. Officers’ control over corporations is directed to producing just such a coherence, which is a good reason that personhood seems to fit. But other complex systems aren’t so amenable to being treated as a single entity. You can’t punish the market as a whole; if a mob is a person, it’s not one you can reason with. In college, I made this mistake for a term project: we tried to “reward” programs that share resources nicely with each other by giving them more time to execute. Of course, the programs were blithely ignorant of how we were trying to motivate them: there was no feedback loop we could latch on to.
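The missing feedback loop is easy to see in miniature. Here is a hypothetical sketch (not the actual term project) of the incentive scheme described above: a scheduler hands out execution time in proportion to how “nicely” each program shared resources, but because the programs never consult their allocation, the reward cannot change their behavior.

```python
def allocate_time(programs, total_ticks=100):
    """Split total_ticks among programs in proportion to each one's
    'niceness' score from the previous round -- the intended reward."""
    total_niceness = sum(p["niceness"] for p in programs) or 1
    for p in programs:
        p["ticks"] = total_ticks * p["niceness"] // total_niceness
    return programs

# Two programs with hard-coded behavior: neither ever reads its own
# "ticks" value, so the reward has no way to influence what it does.
programs = [
    {"name": "sharer", "niceness": 3, "ticks": 0},
    {"name": "hog",    "niceness": 1, "ticks": 0},
]
allocate_time(programs)
print([(p["name"], p["ticks"]) for p in programs])
# The sharer gets 75 ticks and the hog 25 -- but both keep doing
# exactly what they were already doing.
```

The punishment-and-reward strategy presupposes an entity that can perceive the incentive and adjust; these programs, like the market or the mob, lack the loop that personhood assumes.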
Another related strategy is to find the man behind the curtain. Even if we’re not willing to treat the entity itself as an artificial person, perhaps there’s a real person pulling the levers somewhere. Sometimes it’s plausible, as in the Sarbanes-Oxley requirement that CEOs certify corporate financial statements. Sometimes it’s wishful thinking, as in the belief that Baron Rothschild and the Bavarian Illuminati must be secretly controlling the market. This strategy only works to the extent that someone is or could be in charge: one of the things that often seems to baffle politicians about the Internet is that there isn’t anyone with power over the whole thing.
A subtle variation on the above is to take hostages. Even if the actual leader is impossible to find or control, just grab someone the entity appears to care about and threaten them unless the entity does what you want. This used to be a major technique of international relations: it was much easier to get your hands on a few French nobles and use them as leverage than to tell France or its king directly what to do. The advantage of this one is that it can work even when the entity isn’t under anyone’s control at all: as long as its constituent parts share the motivation of not letting the hostage come to harm, they may well end up acting coherently.
When that doesn’t work, law starts turning to strategies that fight the hypothetical. Disaggregation treats the entity as though it doesn’t exist — i.e., has no collective properties. Instead, it identifies individual members and deals with their actions in isolation. This approach sounds myopic, but it’s frequently required by a legal system committed to something like methodological individualism. Rather than dealing with the mob as a whole, the police can simply arrest any person they see breaking a window. Rather than figuring out what Wikipedia is or how it works, copyright owners can simply sue anyone who uploads infringing material. Sometimes disaggregation even works.
Even more aggressively, law can try destroying the entity itself. Disperse the mob, cancel a company’s charter, or conquer a nation and dissolve its government while absorbing its people. These moves have in common their attempt to stamp out the complex dynamics that give rise to emergent behavior: smithereens can, after all, be much easier to deal with. Julian Assange’s political theory actually operates along these lines: by making it harder for them to communicate in private, he hopes to keep governmental conspiracies from developing entity-level capabilities. For computers, there’s a particularly easy entity-destroying step: the off switch. Destruction is recommended only for bathwater that does not contain babies.
When law is feeling especially ambitious, it sometimes tries dictating the internal rules that govern the entity’s behavior. Central planning is an attempt to take control of the capriciousness of the market by rewiring its feedback loops. (On this theme, I can’t recommend Spufford’s quasi-novel Red Plenty highly enough.) Behavior-modifying drugs take the complex system that is an individual and try to change how it works. Less directly, elections and constitutions try to give nations healthy internal mechanisms.
And finally, sometimes law simply gives up in despair. Consider the market, a system whose vindictive and self-destructive whims law frequently regards with a kind of miserable futility. Or consider the arguments sometimes made about search engine algorithms — that their emergent complexity passeth all understanding. Sometimes these claims are used to argue that government shouldn’t regulate them, and sometimes to argue that even Google’s employees themselves don’t fully understand why the algorithm ranks certain sites the way it does.
My point in all of this is that personhood is hardly inevitable as an analytical or regulatory response to complex systems, even when they appear to function as coherent entities. For some purposes, it probably is worth thinking of a fire as a crafty malevolent person; for others, trying to dictate its internals by altering the supply of flammables in its path makes more sense. (Trying to take hostages to sway a fire is not, however, a particularly wise response.) Picking the most appropriate legal strategy for a complex system will depend on situational, context-specific factors — and upon understanding clearly the nature of the beast.
posted by James Grimmelmann
The basic question LTAAA asks—how law should deal with artificially intelligent computer systems (for different values of “intelligent”)—can be understood as an instance of a more general question: how should law deal with complex systems? Software is complex and hard to get right, often behaves in surprising ways, and is frequently valuable because of those surprises. It displays, in other words, emergent complexity. That suggests looking for analogies to other systems that also display emergent complexity, and Chopra and White unpack the parallel to corporate personhood at length.
One reason that this approach is especially fruitful, I think, is that an important first wave of cases about computer software involved its internal use by corporations. So, for example, there’s Pompeii Estates v. Consolidated Edison, which I use in my casebook for its invocation of a kind of “the computer did it” defense. Con Ed lost: It’s not a good argument that the negligent decision to turn off the plaintiff’s power came from a computer, any more than “Bob the lineman cut off your power, not Con Ed” would be. Asking why and when law will hold Con Ed as a whole liable requires a discussion about attributing particular qualities to it—philosophically, that discussion is a great bridge to asking when law will attribute the same qualities to Con Ed’s computer system.
But corporations are hardly the only kind of complex system law must grapple with. Another interesting analogy is nations. In one sense, they’re just collections of people whose exact composition changes over time. Like corporations, they have governance mechanisms that are supposed to determine who speaks for them and how, but those mechanisms are subject to a lot more play and ambiguity. “Not in our name” is a compelling slogan because it captures this sense that the entity can be said to do things that aren’t done by its members and to believe things that they don’t.
Mobs display a similar kind of emergent purpose through even less explicit and well-understood coordination mechanisms. They’re concentrated in time and space, but it’s hard to pin down any other constitutive relations. Those tipping points, when a mob decides to turn violent, or to turn tail, or to take some other seemingly coordinated action, need not emerge from any deliberative or authoritative process that can easily be identified.
In like fashion, Wikipedia is an immensely complicated scrum. Its relatively simple software combines with a baroque social complexity to produce a curious beast: slow and lumbering and oafish in some respects, but remarkably agile and intelligent in others. And while “the market” may be a social abstraction, it certainly does things. A few years ago, it decided, fairly quickly, that it didn’t like residential mortgages all that much—an awful lot of people were affected by that decision. The “invisible hand” metaphor personifies it, as does a lot of econ-speak: these are attempts to turn this complex system into a tractable entity that can be reasoned about, and reasoned with.
As a final example of complex systems that law chooses to reify, consider people. What is consciousness? No one knows, and it seems unlikely that anyone can know. Our thoughts, plans, and actions emerge from a complex neurological soup, and we interact with groups in complex social ways (see above). And yet law retains a near-absolute commitment to holding people accountable, rather than amygdalas. By taking an intentional stance towards agents, Chopra and White recognize that law sweeps all of these issues under the carpet, and ask when it becomes plausible to sweep those issues under the carpet for artificial agents, as well.
posted by James Grimmelmann
I particularly liked two things about The Master Switch. The first is that Wu’s history of information networks in the 20th century (though sadly not before) has a meaningful theory of corporate ideology, and uses it effectively. The book opens with a 1916 banquet in Washington, D.C. honoring Theodore Vail and the Bell system. The highlight of the evening was a mildly absurd demo: a phone call to General Pershing in El Paso:
“Hello, General Pershing!”
“Hello, Mr. Carty.”
“How’s everything on the border?”
“All’s quiet on the border.”
“Did you realize you were talking with eight hundred people?”
“No, I did not,” answered General Pershing. “If I had known it, I might have thought of something worthwhile to say.”
It’s a great scene, and it captures the spirit of a particular company and a moment in history. The Bell system was as Establishment as you can get; the event was shot through with patriotic symbolism. The tech demos were gifts from a benevolent, stabilizing, centralizing AT&T to the American people, with Vail both basking in accomplishment and promising the future.
Wu’s point, here as throughout the book, is that you can’t understand AT&T, or its economic and social impact, or the way it shaped and struggled with the legal system, without appreciating the way it saw itself and the world. Plenty of writers have described the endless [back-and-forth](http://www.inforules.com/) between the forces of openness and the forces of closure. Wu’s history shows, repeatedly, how the different companies taking part in the struggle justified themselves — and how those essentially ideological justifications in turn frequently drove key corporate decisions.
The Master Switch doesn’t assert, as too many people who should know better do, that corporations simply act in the interests of their shareholders. Nor is this a work of hagiography or demonization; one does not walk away with the impression that Theodore Vail built the Bell system with his bare hands. Instead, it gives examples of companies so in thrall to a vision of their inevitable triumph or their social role that they dove headlong off a marketplace or regulatory cliff — and also examples of executives who won their companies’, their industries’, and their regulators’ support only through the subtle arts of persuasion.
Wu’s discussion of the Hush-a-Phone brings out the way in which AT&T’s “One System, One Policy, Universal Service” philosophy drove it into a legal fight it would have been better off ignoring. And who helped Hush-a-Phone poke the first, critical hole in AT&T’s policy against foreign attachments? Leo Beranek and J.C.R. Licklider, major figures in the development of the Internet. In another example, after successfully shaking off Edison’s control of film patents, the Independent movie companies fractured. Some of them were thrilled to entrench themselves as a new cartel controlling distribution; others much less so. Wu’s portraits of monopolists, insurgents, and particularly of insurgents-turned-monopolists illustrate the power of a compelling vision of how information can or should be distributed to shape, and sometimes to warp, the design of information empires.
posted by James Grimmelmann
When The Future of the Internet was published, I knew immediately it was a big deal. Paul Ohm had very much the same thought. And so we got together, called ourselves an institute, and jointly wrote a book review, which we titled “Dr. Generative Or: How I Learned to Stop Worrying and Love the iPhone.” I wish I could link to it, but it’s not quite out yet–it went to the Maryland Law Review’s publishers about a month ago, and isn’t back yet. In its place, though, I thought I’d run down the main points Paul and I make in our review.
The book’s great contribution, the reason it will stay on shelves as long as we Internet academics still believe in printed books, can be boiled down to one word: “generativity.” In the Lessig/Reidenberg/Kapor tradition of thinking about computer code as a kind of regulation, one of the central questions has always been which features of the Internet’s architecture make it THE INTERNET, and thus worth caring about. People have proposed a lot of different virtues. “Openness,” as Adam discusses below, is a disconcertingly capacious and imprecise term. But most of the more concrete alternatives–“end-to-end”-ianness, “neutrality,” “layering,” “standardization,” “decentralization,” “tinkerability,” “free-as-in-freedom” software, and the “commons”–turn out to be near misses. They focus too narrowly on one part of a much bigger puzzle. For example, as Laura’s work demonstrates, even though standardization makes the Internet possible, it can also be a tool of political control and repression.
In contrast, Paul and I call generativity “the right theory.” The Internet’s capacity to support large and unanticipated creativity and innovation on a wide variety of levels is remarkable. Focusing on generativity allows us to sum up, in one simple concept, what makes the Internet distinctive, and distinctively valuable. That alone is a serious achievement. One can dispute–as this symposium is already showing–perhaps everything else in the book. But there really is no arguing with the theory of generativity itself.
That said, however, Paul and I express somewhat more skepticism about some of Zittrain’s applications of generativity. Our problem with the book–or, really, our reason to look forward to the sequel–is that only in a few places does the carefully worked out theory really make contact with his practical recommendations. The final third of the book consists of some very clever case studies and proposals, but there’s something of a missing link: the proposals don’t always clearly follow from the theory of generativity.
Our central example, and the backbone of our review, is Zittrain’s discussion of the iPhone. It, and other “tethered appliances,” feature what he calls “contingent generativity”: they can be programmed and extended for now, but Apple can always pull the plug on anything it doesn’t like. He’s afraid of that future–but the reasons he gives to worry about it aren’t really concerns about generativity as such. They implicate other values, like free speech and individual autonomy, and one must do more work than Zittrain has to link these values up with generativity. Indeed, it’s easy to make arguments that the iPhone and iPad have been massive improvements for generativity; recall Apple’s ad campaign that other phones have “the kinda sorta looks like the Internet” but the iPhone has “the Internet” itself.
Whether this and similar compromises–such as Google’s ability to turn off its cloud, or Wikipedia’s ability to revert your edits and ban your IP block–are worthwhile restrictions or not has to come from a richer, multivalued theory. That is, we think Zittrain has really and truly pinned down the fundamental architectural virtue of the Internet, but only just started on the long road of harnessing that theory to give advice for practical policy problems. In The Fourth Quadrant, Zittrain has started in on that important work–and we hope it’s a down payment on that sequel.
posted by James Grimmelmann
I’d like to take up Orin Kerr’s question: what do we gain from using a “civil rights” frame that ordinary tort and criminal law frames don’t provide? As I suggested in my earlier post, I think the answer is closely linked to the dead bodies–that is, to the factual specifics of the kinds of harassment Cyber Civil Rights discusses.
posted by James Grimmelmann
For me, the most important part of Danielle Citron’s paper is right there in the title: the way she frames online harassment specifically as a civil rights problem. It’s one of those moves that’s so seemingly simple that the reader may be tempted to say yeah, yeah, so what? But then Citron shows what, directly and carefully. Online harassment isn’t just about individual bullies and victims–though it’s about that, too. It’s also about pervasive patterns of abuse, directed at vulnerable groups, that effectively deprive them of the ability to participate in important social institutions.
Another commentator at this symposium, Ann Bartow, has argued that some legal scholarship has “too much doctrine, and not enough dead bodies.” Cyber Civil Rights has plenty of dead bodies, especially the virtual effigies of women targeted by anonymous individuals–or worse, anonymous mobs–for online abuse. The paper opens with the story of Kathy Sierra, threatened with rape and strangulation, including the delightful comment, “The only thing Kathy has to offer me is that noose in her neck size.” The footnotes of the first part of Cyber Civil Rights give a grim tour through some of online harassment’s greatest and most appalling hits.
Then–and this is the point of Bartow’s argument that scholars need to be willing to point out where the bodies are buried–Citron uses these unsettling stories to make a familiar doctrinal story strange. In the Internet law world, we’re accustomed to talking about harassment as an issue that combines two of our favorite Internet hobbyhorses: anonymity and Section 230’s immunity for intermediaries. The result is that many serious, important debates about responses to harassment have run into the well-worn ruts of very old arguments (on Internet time, that is) about the legal standard for unmasking anonymous individuals online and about how much to make intermediaries liable for harmful content.
Shifting from there to a civil rights frame, however, allows Citron to point out important but often-ignored features of harassment online, ones that suggest different doctrinal moves. Civil rights discourse helps us see the victims of harassment as members of a consistently subordinated group, rather than as just unlucky individuals. It helps us see the mob dynamics at work in these simulacra of lynchings, rather than thinking about each insult in isolation. It reminds us that there’s a long tradition of using law creatively to prevent personal bias from becoming societal discrimination.
Indeed, when you go back to the online harassment cases after reading Cyber Civil Rights, it’s striking how many of them are really civil rights cases. True, few of them buy into that frame, and few have provided much redress for victims, but they’re directly engaged with classic civil rights issues. Take Noah v. AOL, a 2003 case dismissing on section 230 grounds a lawsuit against AOL for doing nothing about anti-Muslim comments in its chat rooms like “well allah can suck my dick you peice of ass” and “SMELLY TOWEL HEADS,” or, more recently, the Craigslist and Roommates.com cases about discriminatory online housing ads. The law in these cases is all about the ins and outs of interpreting section 230, but the facts are all about religious intolerance and racial segregation. Cyber Civil Rights suggests that when we think about cross-cutting issues in Internet law–such as anonymity or intermediary liability–we might do well to pause before diving into the technical specifics of the communications at stake and instead ask, “Why do we want to know?”
posted by James Grimmelmann
Steven Teles’s The Rise of the Conservative Legal Movement features a clever cover design. It shows a white man in a suit, wearing a maroon Federalist Society tie, and holding a book instantly recognizable as a legal text, whose title is Teles’s subtitle: “The Battle for Control of the Law.”
So here’s the lazy Saturday question. The book is instantly recognizable as a legal text because it uses the instantly recognizable trade dress of the Aspen series of casebooks. It has the same red cover, the same pair of black boxes, the same golden stripes (one above the boxes, five between, and four below), and the same golden lettering. The typeface and layout of the text are admittedly different: Teles’s book uses a sans-serif face, which any self-respecting conservative would disdain as a modernist liberal fad. Also, the upper box, where the authors’ names go on an Aspen casebook, is empty. The Aspen/Wolters Kluwer names and logos don’t appear in the image. Does or should Aspen have any right to object to the use of its trade dress in this manner?
posted by James Grimmelmann
Cato the Younger made his name by tirelessly advertising his high personal morals. His public career as a senator and tribune of the Roman republic was distinguished by a use of obstructionist tactics whose mixture of pig-headed stubbornness and improvisation may sound humorously familiar to modern Congress-watchers. Here are some highlights, based on Adrian Goldsworthy’s fantastic biography of Julius Caesar, Cato’s principal political enemy.
Cato was a filibusterer par excellence. The rules of debate in the Roman senate forbade cutting off a speaker. When asked his opinion on an issue he opposed, Cato gave it, and gave it, and gave it, talking all day, without notes, until the Senate would adjourn with the issue unresolved. Most notably, in 59 BC, Cato so infuriated Caesar by trying to run out the clock on a land-reform bill that Caesar simply ordered him jailed. It was a (short-lived) political triumph for Cato: one senator walked out, saying he’d rather join Cato in prison than remain with Caesar, and Cato was rapidly released.
In 62 BC, while serving as tribune, Cato used his veto powers to block another bill being proposed by Quintus Metellus Nepos with Caesar’s support. As Goldsworthy describes it:
Nepos ordered a clerk to read the bill aloud. Cato used his veto to forbid this, and when Nepos himself took up the document and started to read, he snatched it from his hands. Knowing the text by heart, [Nepos] then began to recite it, until [Cato’s ally] Thermius slapped his hand over his mouth to stop him.
A riot ensued, again embarrassing Cato’s political enemies. But not everything he did to embarrass Caesar worked out so well. During a critical debate over how to punish the Catiline conspirators, a note was brought in and handed to Caesar. Cato, who was speaking, proclaimed that it must be a secret communication from those conspirators still at large. When Cato demanded that the note be read aloud, Caesar instead passed it to Cato. It turned out to be a love letter from Cato’s half-sister.
You can keep your blood-drenched fictionalizations; good legislative floor fights are timeless.
posted by James Grimmelmann
I’ve been thinking recently about social networking services and privacy. Certainly, they raise profiling and investigation concerns that seem quite familiar from debates about ISP and search engine surveillance. I’m becoming increasingly convinced, however, that they also present some quite distinctively social privacy issues. The flow of information within a Facebook or a LiveJournal both is deeply embedded in a particular set of social relationships and also regularly defies the expectations of the participants in those relationships. Hilarity, or rather privacy trouble, regularly ensues.
One of the things I did when starting to ponder these privacy problems was to make a list of the ways in which social networking services encourage users to supply personal information. There are actually quite a few. Here’s an incomplete list:
- Explicit appeals to reciprocity: If someone tries to add you as a friend, it seems impolite to refuse.
- Implicit appeals to reciprocity: If friends have pictures on their pages, you’re spurning their social advances if you don’t have pictures on your page.
- Norming the network as “private” space: Facebook started on a college campus; people use it in ways that recreate the informality of students scribbling jokes on whiteboards posted to each other’s dorm-room doors.
- Norming the network as “safe” space: It’s hard to estimate the risk that releasing a little private information now will bite you later, so we use our peers’ actions as a heuristic to tell us whether it’s safe to speak freely here. If they share, you share.
- Creating a barter economy in personal information: By affiliating with new groups and adding more friends, you decrease the distance between you and others. That means more access: it opens up more profiles to your inspection (and vice-versa).
- Encouraging status competition: Facebook helpfully lists how many friends your friends have; can you blame Robert Scoble for wanting to have more than 5,000?
I could go on, but have you noticed the common pattern? All of these mechanisms use other people’s personal information to convince you to supply more of your own. Facebook is a privacy virus: an organism that reproduces itself within a social network by convincing infected hosts to use their own replication mechanisms to spread it to others. And the way it gets past our privacy defense mechanisms is to turn them against us: social network service interactions have almost all the indicia we look for in reassuring ourselves that we’re in a private setting, rather than out in public.
posted by James Grimmelmann
Summer means different things to different people. For academics, the end of grading means a new opportunity for pleasure reading. My beach-reading recommendation for the relaxing law professor is C.J. Sansom’s historical mysteries: Dissolution, Dark Fire, Sovereign, and Revelation.
The novels are set in the latter part of Henry VIII’s reign, after the break from Rome. It’s a world of religious reformation, enormous new wealth, painful social dislocations, and ugly corruption. The protagonist, Matthew Shardlake, is a reform-minded protestant, a skilled lawyer, and a sour-tempered “crookback.” He starts in the service of Henry’s chief minister, Thomas Cromwell, but the dark deeds he witnesses lead Shardlake to try to pull a Jack Goldsmith: to return to private life while keeping both his principles and his loyalties intact.
The first great pleasure of the Shardlake mysteries is that they do excellently what any mystery should do: take the reader inside the distinctive forms of corruption of a particular time and place. Shardlake is caught up in conspiracies that pit bad people against worse ones. There’re money and power everywhere for the taking, and Shardlake faces some especially unscrupulous attempts to seize both. Sansom is particularly good at working the old multiple-plots magic: more than one person is up to something, and part of the fun is trying to figure out which crime a given clue relates to.
Even better, though, is Sansom’s treatment of Shardlake himself. He’s a wholly credible lawyer. The cases he handles ring true with what I know of Tudor legal history (this is especially telling, because it would have been all too easy to fudge the legal details). He also solves cases like a lawyer: splitting his time between careful book research and dogged cross-examination. Shardlake isn’t a Holmesian genius; he’s just a sharp, diligent lawyer who trudges back and forth from one witness to another, looking for inconsistencies and working them relentlessly. His physical deformity also contributes to his interestingly complex personality and narrative voice: cranky, a little self-pitying, and determined to look beyond appearances. The books are bleak affairs, but reading them is an absolute joy.
posted by James Grimmelmann
In my first post about DeCSS, I gave the conventional law professor’s description of how it works, and then pointed out an obvious-in-hindsight problem with that description. In my second post, I delved (a little) deeper into the specifics of how DVDs work and showed how the explanatory hole can be plugged with some facts not normally in evidence. Along the way, we saw that the effectiveness of DVD anti-copying protections depends just as much on patent-enforced standards as it does on copyright and the DMCA.
Here are the results of some searches I ran on Lexis’s “US Law Reviews and Journals” database:
- DVD and “title key”: 2 results, neither relevant
- DVD and “disc key”: 0 results
- DVD and “disk key”: 1 result, a student note (Peter Moore, Notes & Comments: Steal This Disk: Copy Protection, Consumers’ Rights, and the Digital Millennium Copyright Act, 97 Nw. U.L. Rev. 1437 (2003)), containing the following text in a footnote: “One might wonder why a DVD burner capable of copying the disk key table could not be produced. It is likely that the owners of patents on DVDs are very careful to ensure, with licenses, that such devices are not made.”
- DVD and CSS and pressing: 34 results, only one of which distinguishes “pressing” from “burning.” That one, also by a student (Nika Aldrich, An Exploration of Rights Management Technologies Used in the Music Industry, 2007 B.C. Intell. Prop. & Tech. F. 624), points out, again in a footnote: “‘Burning’ compact discs actually requires a different technology than ‘pressing’ (replicating) discs, which is used in commercial manufacturing plants. ‘Burning’ involves putting the pits and lands on the disc by burning holes in a layer of substrate with a laser. In a ‘pressed’ disc the pits and lands are molded into the disc.”
- DVD and CSS and (press! w/p burn!): 18 results, only one of which uses the words in this sense: the same article from the previous search.
- DVD and CSS and lead-in: 20 results, only one of which is talking about the location of CSS disc keys. That article—yet another student piece (Eric W. Young, Note: Universal City Studios Inc. v. Reimerdes: Promoting the Progress of Science and the Useful Arts by Demoting the Progress of Science and the Useful Arts?, 28 N. Ky. L. Rev. 847 (2001))—proceeds to assert: “These types of pirates do bitwise copies, which means that their pirate copies are precise duplicates of the originals, including the CSS encryption. The DVD player will notice no difference between such a copy and the original version. CSS cannot stop this kind of piracy.”
- DVD and leadin: 0 results
- DVD and DMCA: 731 results
- DeCSS: 390 results
This disproportion is not healthy. We’ve collectively spilled a lot of ink over DeCSS. One might think it worthwhile to make sure that CSS actually matters, first. It does, but that fact is not at all obvious from the conventional stories. Even the exercise I’ve gone through here is itself a fairly half-assed effort. Bruce caught an important fact I didn’t get quite right. Just in doing the research for this series of posts I’ve learned all sorts of things that seem awfully relevant to any careful analysis of the role of law in controlling the distribution of media on shiny discs, and I’ve barely even scratched the surface, so to speak.
We law professors who regularly opine on high technology are often dangerously blasé about the details of the technology we’re opining on. We get caught up in the minutiae of 1201(a)(1) versus 1201(a)(2) versus 1201(b), and we don’t pay anywhere near as much attention to the surrounding web of other kinds of IP, business arrangements, and especially technical specifications as we ought to. Consider these posts another plea for better interdisciplinarity. Our students are doing a better job of it than we are.
posted by James Grimmelmann
Yesterday, I told a simplistic story about DeCSS—indeed, the self-same simplistic story about DeCSS that I told my classes this year, and that I suspect a lot of other professors tell their classes—and asked what was wrong with it. The way I put it, if CSS really does nothing but prevent unauthorized decryption of DVDs, what’s to stop pirates from simply making copies of discs in their encrypted forms? The story simply doesn’t make sense without some additional fact.
Sarah L. (“[T]he CSS disk’s descrambling keys are in sectors that aren’t copied when you make a copy of the disk using a noncompliant player.”) and Bruce Boyden (“[T]he whole scheme depends on licensed drives, which must play by the licensing rules.”) both had important parts of the answer, but what I was looking for is that it is physically impossible to produce CSS-encoded DVDs using home equipment. Sarah’s and Bruce’s points are both true, but even taken together, they wouldn’t explain why DVD Jon or someone else similarly disinclined to care about licensing doesn’t just write a program that writes the descrambling keys to the special sectors. They don’t because they can’t.
To decrypt a CSS-encrypted DVD, you actually need two kinds of keys. One is universal but nominally secret; it’s baked into every DVD player. This is the one that DVD Jon found. The other is different for every disc. But this second key isn’t really secret; it’s written out on the disc, plain as day for anyone to see, in a special “lead-in” sector. Ordinarily, your DVD player reads the public disc key, combines it with its own secret player key, and uses the two together to decrypt the disc contents.
Here’s the twist. There are two ways to make readable DVDs, and they use completely different technology. The large-scale industrial method is to “press” the DVD: that involves encoding the data as a series of tiny three-dimensional bumps on a mold used to stamp a corresponding pattern of pits into plastic blanks, which are then given a reflective metal coating and encased in a layer of lacquer to make DVDs. This process, as you might imagine, has high fixed costs; the equipment alone will run you upwards of a million dollars. In contrast, the home method is to “burn” the DVD. Here, the blank disc comes from the factory prelacquered and containing an optically sensitive dye on the surface of the metal. Focus the right kind of laser on the dye and its transparency changes. From the perspective of the DVD player that will later read the disc’s patterns of opaque and transparent regions, the results are much the same as if the disc had pits and non-pits. Some areas reflect; others don’t. Ones and zeroes, more or less.
The trick that makes CSS “work” is that you can’t burn lead-in sectors. DVD-Rs (and DVD+Rs) come from the factory with the lead-in sectors zeroed out. Thus, a would-be pirate can easily read an entire encrypted disc, disc key and all, but can only burn back the data portion of the disc, without the disc key. The resulting disc is useless in a standard DVD player; there’s no disc key to be read, which means the player is at a loss in trying to decrypt it. While one could manufacture and distribute home-copied DVDs without having to bust CSS, those DVDs are only going to work on specially-coded software DVD players, not on the mass-produced home players most people have.
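The structural point in the last few paragraphs can be captured in a toy sketch. To be emphatic: this is not real CSS, which uses a (famously weak) 40-bit stream cipher and a whole hierarchy of player and disc keys. Here XOR stands in for the cipher, and every name (`press_disc`, `burn_copy`, `lead_in`, and so on) is an invented illustration. The only thing the sketch models is the trap itself: the disc key lives in the lead-in area, pressing can write it, burning cannot.

```python
# Toy model of the CSS scheme described above. NOT real CSS: XOR stands
# in for the cipher, and all names are illustrative inventions.

SECRET_PLAYER_KEY = 0x2A  # nominally secret, baked into every licensed player


def encrypt(data: bytes, disc_key: int) -> bytes:
    """'Encrypt' by XORing with a combination of the two keys."""
    key = disc_key ^ SECRET_PLAYER_KEY
    return bytes(b ^ key for b in data)


decrypt = encrypt  # XOR is its own inverse


def press_disc(content: bytes, disc_key: int) -> dict:
    """Industrial pressing can write the lead-in area, disc key and all."""
    return {"lead_in": disc_key, "data": encrypt(content, disc_key)}


def burn_copy(original: dict) -> dict:
    """A home burner copies the data area perfectly, but blank DVD-Rs
    ship with the lead-in zeroed out and unwritable."""
    return {"lead_in": 0, "data": original["data"]}


def play(disc: dict):
    """A licensed player combines the disc key from the lead-in with its
    own secret player key to decrypt the data area."""
    if disc["lead_in"] == 0:
        return None  # no disc key in the lead-in: decryption fails
    return decrypt(disc["data"], disc["lead_in"])


movie = b"very valuable bits"
pressed = press_disc(movie, disc_key=0x5C)

assert play(pressed) == movie            # the pressed original plays fine
assert play(burn_copy(pressed)) is None  # the bit-for-bit burn does not
```

The burned copy really is a faithful copy of everything the burner can write; it fails only because the one sector it cannot write is the one the player needs first.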
That’s why everything does in fact depend on CSS, and why DeCSS really is a big deal. It goes back to the control that the DVD cartel has over their hardware platform, specifically over the manufacturing format of blank media. And that control, in turn, is backed up by patent pools. Yes, you could in theory press (not burn) exact copies of encrypted discs, or mass-produce your own non-standard blank DVD-Rs with writable lead-in areas, but to do either, you’d need some significant (and hard-to-move) capital, which makes you vulnerable if the cartel comes after you. It’s an ingenious technologico-legal trap.
Tomorrow: Some thoughts on the implications (including responses to comments).
posted by James Grimmelmann
Like a good many law professors, I teach and write about digital rights management: the technological “locks” copyright owners use to keep people from getting at digital media without authorization. Exhibit A in any discussion of DRM is the DeCSS saga. CSS, the “Content Scramble System,” is the encryption system that keeps you, the home user, from watching DVDs without permission. The way it works is that some DVDs (the ones Hollywood cares about) come encrypted. The decryption key is stored in each and every DVD player, but manufacturers can’t get a license to make DVD players (and thereby get authorized access to the key) unless they sign an extensive license agreement with the DVD Copy Control Association. By obvious linguistic principles, DeCSS is the thing that makes CSS not do its thing. In particular, a Norwegian teen (fun fact: seven of the first ten Google hits for “Norwegian teen” are about him), frustrated at the lack of software DVD players that run on the open-source operating system Linux, wrote a program that decrypts CSS-protected DVDs. The idea is that you could then take the unencrypted version from your computer, burn it to a blank DVD, and then view the DVD on a Linux computer.
As normally told, this story illustrates all sorts of useful points. It shows how a classic DRM-based business model works: sell individual copies with DRM that keeps them from turning into lots of copies. It shows how painfully insecure such business models can be: DVD Jon was easily able to find the super-seekrit CSS decryption key in the code of a Windows DVD player (every DVD player in existence, after all, must contain a copy of the key). And it shows the might of the law descending with fury and malice in response: lawsuits under the Digital Millennium Copyright Act soon followed.
But there’s a gaping technological hole in this story. You see, CSS, as I’ve described it above, tries to block one specific attack vector: copying an encrypted DVD onto a computer and decrypting it, then using the computer’s DVD burner to make a new, unencrypted DVD version. DeCSS opens up this attack again. But why would anyone bother with this slow, clumsy way of making copies? Why not just read the encrypted contents of the DVD onto the computer, keep the bits encrypted, and burn them back onto a new DVD in exactly the same form? You wind up with a new DVD, exactly identical to the old. And, of course, thanks to the convenient fact that every DVD player in existence has a copy of the decryption key, that new DVD is playable on any DVD player in existence.
In other words, the whole fight over CSS sounds like a gigantic dust-up over nothing. Would-be pirates already have a perfectly good way of making any number of perfect copies. Worrying about DeCSS, it would seem, is like worrying about the barn’s windows when the wide-open door is just gaping at you. Hasn’t the legal system—and by extension, the legal academy—just spent who knows how many hours on a massive intellectual boondoggle?
Thus, a question for the readership. What crucial fact is missing from the story above? I’ll post the answer tomorrow, along with some pointed observations about the implications.
posted by James Grimmelmann
Last week’s release of Grand Theft Auto IV (actually somewhere between the sixth and ninth game in the series, depending on how you count) was big news in the gaming world (even if some observers questioned the suspiciously universal acclaim). Players cleared their calendars and in some cases emptied their wallets to play the latest installment in this series of open-ended games, which drop the player into a vast city of cars to steal, bystanders to gun down, insane stunt jumps to make, and real-life references to spot.
Among lawyers, the games may be best-known for the regular moral panics they induce over fears of copycat violence, and for attorney Jack Thompson’s increasingly bizarre crusade against them. We might also ask what kind of a legal world the GTA series envisions within its famously capacious in-game universe.
The series’s built-in attitude of rampant lawlessness—it’s named after a crime, after all—might suggest a kind of deliberate criminality. That’s certainly the interpretation that fuels the regular calls for the games to be banned. And yes, the plots typically chart the protagonist’s Scarface-style rise as he carries out errands both murderous and larcenous for an entertaining assortment of mob bosses. This interactive representation of lawlessness—the player playing at the role of criminal—puts the Grand Theft Auto games squarely within the tradition of deliberate shockers like Postal.
But this may be an unduly harsh take, and not just because the claim that playing violent games leads to violence in meatspace rests on some dubitable social science. San Andreas may well show us the world as Holmes’s bad man would see it, but consider the lessons he’d learn from it. Crime doesn’t always pay. In fact, offhandedly casual offenses—driving on the sidewalk to circle around traffic, say, and in the process clipping a pedestrian—can put the police on your tail. And the aggressive things you do to try and shake them often wind up making matters worse. Before you know it, you have a six-star wanted rating, they’re sending in the black helicopters, you’re crouched in a doorframe, and there’s pretty much only one way this story can end. Exaggerated though the arc may be, it does illustrate some of the vicious circles trapping the poor, the desperate, and the criminal.
Or consider the in-game depictions of the legal system itself. Get arrested by the police, and you’re back on the streets within seconds—minus some bribe money. Call it an indictment of revolving-door-prison liberalism, or call it an indictment of police more interested in protecting their turf than in doing justice or confronting Liberty City’s very real problems. The lawyers don’t come across much better: Ken Rosenberg is a paranoid cokehead who asks our hero to fix a case by intimidating jurors.
One last thought. Given the games’ increasingly humongous alternate reality, how about building in a penal code? Grand Theft Auto’s legal geekery index would soar if every unlawful act were accompanied by a statement of exactly what crime the player had just committed. “Arson in the second degree!” “Involuntary manslaughter!” “Grand theft garbage truck!” For added fun, the crimes could be correlated with a set of sentencing guidelines, so that the in-game statistics screen would tally up precisely the number of years of imprisonment the protagonist deserved.