Author Archive for paul-ohm
posted by Paul Ohm
Julie Cohen has written a great book, perhaps the most important Cyberlaw book since Code. I say this even though I recognize the many virtues of Cyberlaw books written by Jonathan Zittrain, Tim Wu, Yochai Benkler, and Barbara van Schewick, privacy books written by Dan Solove, Lior Strahilevitz, Viktor Mayer-Schönberger, and many other books published recently. But not since Code has one book challenged the way we conceptualize and try to solve technology problems as much or as well as this book does.
In this post, I want to focus on “semantic discontinuity,” the label Cohen gives to the most novel and interesting construct in the book. Semantic discontinuity is one of three “principles that should inform the design of legal and technical architectures,” along with “access to knowledge” and “operational transparency.” In her words, “semantic discontinuity is the opposite of seamlessness. . . . It is a function of interstitial complexity within . . . institutional and technical frameworks.” It serves a “vital” function, “creat[ing] space for the semantic indeterminacy that is a vital and indispensable enabler of the play of everyday practice.” (Kindle location 4288)
In other words, semantic discontinuity valorizes noise, inefficiency, constraints, and imperfections. As this list illustrates, the most striking thing about this book is the size of the herd of sacred cows it leads to the slaughter.
posted by Paul Ohm
Thanks to Danielle for inviting me to post my thoughts. I’ll try to come up with some new, original thoughts in a later post, but to start, let me offer an abridged version of what I posted yesterday on my home blog, Freedom to Tinker.
I think the Jones court reached the correct result, and I think that the three opinions represent a near-optimal result for those who want the Court to recognize how its present Fourth Amendment jurisprudence does far too little to protect privacy and limit unwarranted government power in light of recent advances in surveillance technology. This might seem counter-intuitive. I predict that many news stories about Jones will pitch it as an epic battle between Scalia’s property-centric and Alito’s privacy-centric approaches to the Fourth Amendment and quote people expressing regret that Justice Alito didn’t instead win the day. I think this would focus on the wrong thing, underplaying how the three opinions–all of them–represent a significant advance for Constitutional privacy, for several reasons:
- Justice Alito?
- Justice Scalia and Thomas showed restraint.
- Justice Sotomayor does not like the third-party doctrine.
- The wrong case for a privacy overhaul of the Fourth Amendment.
Maybe I’m not a savvy court watcher, but I did not see this coming. The fact that Justice Alito wrote such a strong privacy-centric opinion suggests that future Fourth Amendment litigants will see a well-defined path to five votes, especially since it seems like Justice Sotomayor will likely provide the fifth vote in the right future case.
The majority opinion goes out of its way to highlight that its focus on property is not meant to foreclose privacy-based analyses in the future. It uses the words “at bottom” and “at a minimum” to hammer home the idea that it is supplementing Katz, not replacing it. Maybe Justice Scalia did this to win Justice Sotomayor’s vote, but even if so, I am heartened that neither Justice Scalia nor Justice Thomas thought it necessary to write a separate concurrence arguing that Katz’s privacy focus should be replaced with a focus only on property rights.
It’s probably best here just to quote from the opinion:
More fundamentally, it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. E.g., Smith, 442 U.S., at 742; United States v. Miller, 425 U.S. 435, 443 (1976). This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks. People disclose the phone numbers that they dial or text to their cellular providers; the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers; and the books, groceries, and medications they purchase to online retailers. Perhaps, as JUSTICE ALITO notes, some people may find the “tradeoff” of privacy for convenience “worthwhile,” or come to accept this “diminution of privacy” as “inevitable,” post, at 10, and perhaps not. I for one doubt that people would accept without complaint the warrantless disclosure to the Government of a list of every Web site they had visited in the last week, or month, or year. But whatever the societal expectations, they can attain constitutionally protected status only if our Fourth Amendment jurisprudence ceases to treat secrecy as a prerequisite for privacy. I would not assume that all information voluntarily disclosed to some member of the public for a limited purpose is, for that reason alone, disentitled to Fourth Amendment protection.
Wow. And Amen. Set your stopwatches: the death watch for the third-party doctrine has finally begun.
Most importantly, I’ve had misgivings about using Jones as the vehicle for fixing what is broken with the Fourth Amendment. GPS vehicle tracking comes laden with lots of baggage–practical, jurisprudential, and atmospheric–that other actively litigated areas of modern surveillance do not. GPS vehicle tracking happens on public streets, meaning it runs into dozens of Supreme Court pronouncements about assumption of risk and voluntary disclosure. It faces two prior precedents, Karo and Knotts, that need to be distinguished or possibly overturned. It does not suffer (as far as we know) from a long history of use against innocent people, but instead seems mostly used to track fugitives and drug dealers.
For all of these reasons, even the most privacy-minded Justice is likely to recognize caveats and exceptions in crafting a new rule for GPS tracking. Imagine if Justice Sotomayor had signed Justice Alito’s opinion instead of Justice Scalia’s. We would’ve been left with a holding that allowed short-term monitoring but not long-term monitoring, without a precise delineation between the two. We would’ve been left with the possible new caveat that the rules change when the police investigate “extraordinary offenses,” also undefined. These unsatisfying, vague new rules would have had downstream negative effects on lower court opinions analyzing URL or search query monitoring, or cell phone tower monitoring, or packet sniffing.
Better that we have the big “reinventing Katz” debate in a case that isn’t so saddled with the confusions of following cars on public streets. I hope the Supreme Court next faces a surveillance technique born purely on the Internet, one in which “classic trespassory search is not involved.” If the votes hold from Jones, we might end up with what many legal scholars have urged: a retrenchment or reversal of the third-party doctrine; a Fourth Amendment jurisprudence better tailored to the rise of the Internet; and a better Constitutional balance in this country between privacy and security.
posted by Paul Ohm
Since launching the Network Neutrality debate, Tim Wu has continued to play an invaluable role, constantly reminding us that the debate is about more than just economics. Too many experts on both sides of the debate view things solely through an economic lens, which has led us to intractable differences. As I have argued elsewhere, because respected economists line up on both sides, it is very hard to tell whether mandatory network neutrality will, on net, enhance or reduce innovation.
In The Master Switch, a fascinating and important book, Wu argues powerfully that policies like net neutrality are necessary also to protect noneconomic ideals like free speech. (He highlights other benefits of neutrality, most importantly the way it helps us resist tyranny, in his chapter on AT&T’s role in the NSA wiretapping program, but he left me wanting more from this example.) Although free speech is of paramount importance, I think this book provides a welcome opportunity to focus on other noneconomic benefits and values beyond free speech that are also today at risk in the battlefields of neutrality.
posted by Paul Ohm
Now that Verizon and AT&T have pledged not to track customer web behavior without explicit consent, I feel like my work here is done. (Too bad DOJ still has yet to indict anybody for the Palin e-mail breach.)
Thanks again to Dan and the other Concurrers (?) for allowing me to visit again. There is much more I wanted to say, but I’ll save it for next time.
In the meantime, I have signed on to blog permanently over at Ed Felten’s Freedom to Tinker, so if you’re interested in tech policy, please add us to your RSS feed reader. (Although Ed introduced me over a week ago, I’ve been too busy to introduce myself to the ftt readers yet.)
I’d be interested to hear from anybody who has thoughts about the relative pros and cons of blogging on a website read mostly by non-lawyers. Although I’ll miss the deep comments section conversations about ECPA, I welcome the opportunity to speak directly to (and learn from) the computer science community reading Ed’s blog. Besides, I hope I can come back here from time to time to scratch my ECPA itch.
posted by Paul Ohm
The odds that the Feds will find the person who broke into Sarah Palin’s e-mail account are considerably better than I had thought, because someone who claims to have committed the crime has bragged about it on the infamous imageboard 4chan. (Quick CoOp aside, every day I better appreciate how the paper by new permablogger Danielle Citron–who first introduced me to 4chan–on Cyber Civil Rights will be a must-read in this day of 4chan and Jason Fortuny.) Although the posts have been deleted, Kim Zetter has reproduced them for Wired’s Threat Level blog. First, the user known as “Rubico” bragged about how he had breached the Yahoo account by providing Governor Palin’s supposedly private answers to the questions posed by Yahoo’s password recovery scheme:
it took seriously 45 mins on wikipedia and google to find the info, Birthday? 15 seconds on wikipedia, zip code? well she had always been from wasilla, and it only has 2 zip codes (thanks online postal service!)
the second was somewhat harder, the question was “where did you meet your spouse?” did some research, and apparently she had eloped with mister palin after college, if youll look on some of the screenshits that I took and other fellow anon have so graciously put on photobucket you will see the google search for “palin eloped” or some such in one of the tabs.
I found out later though more research that they met at high school, so I did variations of that, high, high school, eventually hit on “Wasilla high” I promptly changed the password to popcorn and took a cold shower…
Oh, and about Rubico’s screenshots? They apparently reveal the URL bar of Rubico’s browser, which in turn reveals that Rubico had not been browsing Yahoo directly but had instead been using an anonymizing proxy service called Ctunnel. Good idea, right? After all, Yahoo no doubt captures and preserves the IP addresses used to recover passwords. But although using Ctunnel may have been a good idea, advertising that fact in a screenshot, it turns out, was not:
Gabriel Ramuglia who operates Ctunnel, the internet anonymizing service the hacker used to post the information from Palin’s account to the 4chan forum, told Threat Level this morning that the FBI had contacted him yesterday to obtain his traffic logs. Ramuglia said he had about 80 gigabytes of logs to process and hadn’t yet looked for the information the FBI was seeking but planned to be in touch with the agents today.
Apparently, providing the screenshot in this case was a particularly dumb move. In another interview Ramuglia notes:
Usually, this sort of thing would be hard to track down because it’s Yahoo email, and a lot of people use my service for that . . . . Since they were dumb enough to post a full screenshot that showed most of the [Ctunnel.com] URL, I should be able to find that in my log.
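The search Ramuglia describes is, mechanically, just a scan of access logs for a distinctive URL fragment. A minimal sketch in Python, assuming a made-up log layout (Ctunnel’s actual log format is not public, so every filename, field, and entry below is invented):

```python
# Toy proxy-log scan: find log entries whose requested URL contains a
# distinctive fragment -- here, one recovered from the posted screenshot.
# The whitespace-separated "ip timestamp url" layout is an assumption,
# not Ctunnel's real log format.

def find_matching_entries(log_lines, url_fragment):
    """Return (client_ip, url) pairs whose URL contains url_fragment."""
    matches = []
    for line in log_lines:
        parts = line.split(None, 2)
        if len(parts) < 3:
            continue  # skip malformed lines
        client_ip, _timestamp, url = parts
        if url_fragment in url:
            matches.append((client_ip, url.strip('"')))
    return matches

# Two invented entries standing in for 80 gigabytes of real logs:
sample_log = [
    '198.51.100.7 2008-09-17T02:14:05 "http://ctunnel.com/browse.php?u=mail.yahoo.com"',
    '203.0.113.9 2008-09-17T02:15:11 "http://ctunnel.com/browse.php?u=example.com"',
]
hits = find_matching_entries(sample_log, "u=mail.yahoo.com")
```

Matching the fragment is the easy part; tying the logged IP address back to a person is where the FBI’s legal process comes in.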
There are more lessons here than are worth listing. A few, after the jump:
September 20, 2008 at 11:01 pm
posted by Paul Ohm
As has been widely reported, Sarah Palin’s Yahoo e-mail account has been breached, and its contents have been posted to Wikileaks. Gawker.com is posting excerpts from the e-mail messages, including photographs.
As usual, Orin Kerr (with some assists from his merry band of commenters) is doing a great job fleshing out the legal analysis. A crime has been committed, there can be no doubt, and Yahoo!’s lawyers will probably be kept up late tonight receiving and responding to incoming subpoenas and court orders.
I wanted to come at this story from a slightly different angle: I predict that some day we will look back on this breach as a watershed event in the history of statutory Internet privacy. As Dan and many others have noted in their articles, Congress often enacts privacy-protecting legislation only in the wake of salient, sensationalized, harmful privacy breaches. Thus, Judge Bork’s video rental records begat the Video Privacy Protection Act, and the murder of actress Rebecca Schaeffer by a stalker with DMV records led, eventually, to the Driver’s Privacy Protection Act.
Compared to these examples, the breach of Sarah Palin’s e-mail account is on a higher plane of salience and sensationalization. The most scrutinized woman in the country has dozens of her private messages pasted all over the blogs. Even if nothing is found in these messages that damages her or the campaign, and whether or not the perpetrators are caught, many will call for tougher privacy laws, and Congress and state legislatures will feel great pressure to deliver. And they won’t just be targeting the breachers–many will criticize the Gawkers and Wikileaks for helping disseminate the e-mail messages (if not the Kerrs and Ohms and Washington Posts for linking to Gawker), so expect a fierce First Amendment debate. I can even see calls to make IP addresses easier to track. Mandatory data retention, anyone?
If I am right about this, expect the E-mail Privacy Act of 2009, and expect it to be a blockbuster. If you’re an activist, government lawyer, e-mail provider, or scholar with an interest in information privacy, I advise you to start putting together your statutory wish lists.
posted by Paul Ohm
It appears there are only so many ways to use photos to illustrate tumbling stock markets, because a few moments ago, the front page of the New York Times website carried this photo from Frankfurt taken by Daniel Roland/AP as its main image:
and the Washington Post highlighted this photo of a trader in Shanghai from Reuters:
There’s something particularly Hitchcockian about the photo from Frankfurt, with the menacing line graph creeping up from behind the harried trader.
Maybe this is the start of a new meme? If you spot other “traders in anguish in front of giant, depth-of-field-blurred, plummeting line graphs,” post them here.
posted by Paul Ohm
In a prior post, I began to explain why ISPs pose the greatest threat to privacy in modern life. I argued that many ISPs are likely to begin to experiment with new, more invasive forms of surveillance relying, in part, on so-called Deep-Packet Inspection technology. I am grateful for the vigorous debate which followed in the comments, and I know my article will be much stronger once I incorporate what I have learned reading and responding to these comments.
The last post led only to the conclusion that ISPs pose a great threat to privacy, but to call this the greatest threat in society, I need to answer the question, “compared to what?” In particular, the most common response to my article I have heard is, “Doesn’t Google threaten privacy more?” In this post, let me explain why I worry more about the threat to privacy from ISPs than from Google.
posted by Paul Ohm
The September 1st issue of the New Yorker includes a fascinating article (not yet available online, but here’s the abstract) by John Colapinto about the high-tech, mini-police departments being set up by department store chains to catch shoplifters. The article, which focuses in particular on Target, veers for a brief moment into one of my areas of interest–computer forensics. Target has hired a “senior computer investigator” named Brent Pack, a former Army computer crime investigator who helped analyze the Abu Ghraib photographs. Why does Target need a computer investigator? Mr. Pack
analyzes digital storage devices seized from suspected retail-crime gangs–BlackBerrys, photo memory cards, cell phones, business servers, and desktop computers. . . . At the moment, Pack was analyzing a hard drive seized by the police in a phony-check-writing operation that had victimized Target stores. “I’m going through here and looking for any evidence of check-writing software on any of their hard drives,” he said, pointing to the computer screen, which showed a JPEG of a blank check
Is it proper for the police to delegate their forensic work to Target? The FBI agents I used to work with as a DOJ computer crimes prosecutor kept a tight leash on the data they had seized and were reluctant to share data with state and local cops, much less private parties. They justifiably worried about ensuring that non-FBI analysts were staying within the scope of the warrant, because courts have suppressed electronic evidence obtained outside of the scope of the warrant and have even thrown out all of the evidence obtained if the warrant was executed in flagrant disregard of its terms. I’m not saying that the use of a third-party forensic analyst should automatically result in a flagrant disregard ruling, but it will invite scrutiny.
And even if one can justify the use of private forensics specialists generally, shouldn’t the police refrain from giving 500 gigabytes of personal information to victims of crimes? Because victims–even corporate victims–have a strong incentive to solve the crimes committed against them, might they not feel more pressure than a cop to look beyond the scope of warrants, peering deeply into the private lives of data owners?
I am even more worried about a much more troubling possibility: Is Target seizing cellphones and laptops from suspected shoplifters? Discussing another, anonymous store, not Target, Colapinto describes how suspected shoplifters get hauled into interrogation rooms and questioned at length by former law enforcement agents. In addition to this, are store security personnel frisking suspects and seizing electronic devices? I can understand how a department store might be entitled to engage in a limited search to look for its stolen property, but does this justify the seizure, retention, and subsequent analysis of cell phones and laptops?
Reading this Article kept bringing me back to David Sklansky’s excellent article, The Private Police, 46 UCLA L. Rev. 1165 (1999) (abstract). A decade ago, Sklansky traced the rise of private police forces, focusing in particular on neighborhood patrol services starting with Pinkertonism in the 1800s. He noted that as these entities play a greater role in policing society, this might give rise to the kind of invasions the Fourth (and Fifth and Sixth) Amendments were intended to prevent. If Target is seizing cell phones from suspected thieves–and I must stress that it is not clear from this article that they are–it realizes Sklansky’s fears.
posted by Paul Ohm
I have recently posted on SSRN the article that ate my summer, The Rise and Fall of Invasive ISP Surveillance. I make many claims in this article, but the principal one, and the one I want to spend a few posts elaborating and defending, is found in the first sentence of the abstract: “Nothing in society poses as grave a threat to privacy as the Internet Service Provider (ISP).” In this first post, let me explain why ISPs pose an enormous threat to privacy:
Simply put, your ISP has the means, motive, and opportunity to scrutinize nearly every communication departing from and arriving at your Internet-connected computer:
Opportunity: Because your ISP serves as the gateway between your computer and the rest of the Internet, every e-mail message, IM, and tweet you send and receive; every web page and p2p-traded file you download; and every VoIP call you place travels first through your ISP’s routers.
Means: A decade ago, your ISP lacked the tools to efficiently analyze every communication crossing its network, because computers were relatively slow and networks were relatively fast. I use the analogy of the policeman on the side of the road, scrutinizing the passing cars. If the policeman is slow and the road is wide and full of speeding cars, the policeman won’t be able to keep up.
Over the past decade, while network bandwidth has increased, computer processing power has increased at a faster rate, and your ISP can now analyze more information, more inexpensively than before. The roads are wider today, but the policemen are smarter and more efficient. An entire industry–the deep-packet inspection industry–has arisen to provide hardware and software tools for massive, widespread, automated surveillance.
Motive: Third parties are placing pressure on ISPs to spy on users in unprecedented ways. Advertisers are willing to pay higher rates for behavioral advertising. For example, Ikea will pay more to place an ad in front of people who have been recently surfing furniture websites. To enable behavioral advertising, companies like NebuAd and Phorm have been trying to convince ISPs to collect user web-surfing data they do not collect today. Similarly, the copyrighted content industries seem willing to pay ISPs to detect, report, and possibly block the transfer of copyrighted works.
Because of these three factors, ISPs are scrutinizing more information–and different forms of information–than they ever have before. AT&T has begun to consider monitoring for copyright violations; Charter Communications signed up with NebuAd, sparking a firestorm of publicity and legislative interest which pushed Charter to abandon the deal; and a few British ISPs have begun to use Phorm’s services. I predict that these examples presage a coming storm of unprecedented, invasive ISP monitoring.
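To make the “means” concrete: the kernel of content-based inspection is trivial to sketch. The toy pass below pulls the Host header out of a plaintext HTTP request and checks it against a watch list. The domain, function names, and watch list are my inventions for illustration; real deep-packet inspection appliances reassemble TCP streams and match millions of flows per second in hardware.

```python
# Toy content inspection: extract the Host header from a plaintext HTTP
# request payload and check it against a watch list. Everything here is
# invented for illustration; it is not any vendor's actual DPI stack.

WATCH_LIST = {"furniture-example.com"}  # hypothetical behavioral-ad target

def extract_host(http_payload: bytes):
    """Return the Host header value from a raw HTTP request, or None."""
    for raw_line in http_payload.split(b"\r\n"):
        if raw_line.lower().startswith(b"host:"):
            return raw_line.split(b":", 1)[1].strip().decode("ascii", "replace")
    return None

def classify(http_payload: bytes) -> str:
    """Flag requests bound for watched domains; pass everything else."""
    return "flagged" if extract_host(http_payload) in WATCH_LIST else "pass"

request = b"GET /sofas HTTP/1.1\r\nHost: furniture-example.com\r\n\r\n"
```

An ISP running something like this at scale could hand the resulting browsing profiles to a behavioral advertiser, which is exactly the NebuAd and Phorm business model.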
In the next post, I will compare the threat to privacy from ISP monitoring to the threat from other entities, in particular, Google and Microsoft.
posted by Paul Ohm
In honor of the start of the fall semester, I wanted to share a classroom participation technique I started using last semester with encouraging results. I cold call in my classes, but I give every student the opportunity to pass three times during the semester when they don’t feel prepared. (Because of where I teach, I notice a suspicious uptick in passes on Mondays following fresh snowfall in the mountains!) As long as I’m notified of a student’s desire to pass before class begins, I won’t call on him or her.
Last semester I started giving students the option of using the reverse of a pass, which I punnily dubbed a “catch.” When a student feels especially prepared for a given class–perhaps she has had a lot of time to read the night before or maybe she has already read the case before for another class–she can put herself on call by sending me a “catch” before class begins. In return, I promise students who catch that I will not call on them for at least three subsequent classes.
Very few students caught (catched?) last semester, but on those occasions when they did, it led to some of the most productive Q&A I’ve had with students in five-plus years (including two years as an adjunct) of law teaching. The students who caught no doubt benefited by regaining some control over their fate; their classmates benefited from hearing good discussions of the day’s topics; and I gained the benefits of an on-call system without having the rest of the class skip the reading.
If you cold call already, try out this tweak this semester, and let me know how it goes.
posted by Paul Ohm
When I need to edit an article, I will sometimes park myself at a booth at the local Panera Bread, sipping the decent coffee, snacking on the beautiful (notice I didn’t say tasty) pastries, and using the free WiFi. Long ago, I noticed that Panera had made a stupid technological mistake that probably strips it of the right to manage its network lawfully.
Panera tries to extract consent from its users using what is known as a captive portal, the same method used by most hotel and airport WiFi network providers. When a Panera WiFi user first tries to connect to any website, Panera’s computers redirect her instead to its own web page with a link to its terms of service (ToS). Only when the user clicks “I agree” may she start surfing.
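The captive-portal mechanism just described boils down to a simple gate. A minimal sketch, with a hypothetical portal URL and an in-memory consent table (real portals sit in the network path and intercept DNS or HTTP; this only models the core decision):

```python
# Minimal captive-portal gate. The portal URL, function names, and the
# in-memory consent set are all hypothetical, but the core decision is
# just this: clients that have not clicked "I agree" get redirected to
# the terms-of-service page; clients that have get passed through.

TOS_URL = "http://portal.example/terms"   # hypothetical ToS page
agreed_clients: set[str] = set()          # who has clicked "I agree"

def handle_request(client_ip: str, requested_url: str) -> tuple[int, str]:
    """Return an (HTTP status, location) pair for an incoming request."""
    if client_ip in agreed_clients:
        return (200, requested_url)  # consent recorded: pass traffic through
    return (302, TOS_URL)            # otherwise: redirect to the terms page

def record_agreement(client_ip: str) -> None:
    """Record that a client clicked the portal's consent button."""
    agreed_clients.add(client_ip)
```

Notice that everything legally interesting happens in what the redirected page actually displays and what the button claims the user is agreeing to; the gate itself proves only that something was clicked.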
But if Panera ever tried to enforce its WiFi ToS–say it got caught monitoring user communications and had to defend against a wiretapping lawsuit or say it was sued for banning a user suspected of downloading porn in violation of the ToS–a court should probably hold that its ToS are unenforceable. Panera has made a simple web design mistake that introduces doubt about what terms are being agreed to by its users.
posted by Paul Ohm
Hearing Sarah Lawsky crack wise so often and so hilariously about the Internal Revenue Code during her visit made me think of a little joke I have used many times when lecturing about the Electronic Communications Privacy Act (ECPA). After warning listeners that ECPA is complex and confusing, I will often say something like, “And I challenge any tax experts in the room to go head-to-head with me in a battle for the title of ‘most confusing part of the U.S. Code.’” The comment usually inspires a few polite titters–from the kind of people who find jokes about comparative statutory complexity funny–so I keep using it.
The problem is, I have no idea whether I have a leg to stand on. Can ECPA really hold a candle to the infamous complexity of the IRC? Is there another part of the U.S. Code that makes both of these seem lucid in comparison?
This connects to James Grimmelmann’s recent series of posts about a new lawyer being a menace to his or her clients. He has been developing the point that mere book larnin’ isn’t enough to prepare a lawyer to represent a client competently, at least not in certain substantive areas, and he offers wills & trusts, bankruptcy, and copyright as examples. What makes a substantive area of law more complicated than another?
Keeping it focused on legislation, what factors conspire to make a statute complex and confusing? (And, as an aside, can a statute be complex but not confusing, or confusing but not complex?) Within my areas of expertise, here are a few factors that make ECPA complex:
- ECPA defines many terms, and it defines many of them in ways that are disconnected from ordinary meaning. (I’m looking at you, “electronic storage”!)
- ECPA (and, more generally speaking, the Wiretap Act, which predates ECPA) has many parallel definitions that Congress may not have intended to treat alike (yes, I’m talking about you two, “wire communication” and “electronic communication”).
- ECPA interacts in mysterious ways with other laws (try to figure out what “readily accessible to the general public” means!).
- ECPA is rarely litigated. Orin Kerr explains how this has made a mess of the law in Lifting the ‘Fog’ of Internet Surveillance: How a Suppression Remedy Would Change Computer Crime Law, 54 Hastings Law Journal 805 (2003).
- ECPA regulates technology, so its meaning often shifts as technology changes. This problem is exacerbated because the basic structure and essential definitions are unchanged from 1986, so a law written to regulate mainframes is today applied to Web 2.0 and cloud computing.
So to all of the tax experts out there, what makes the tax code so complicated? Do all of the factors listed above apply to the IRC as well? The IRC is much longer than ECPA, and it is supplemented with reams of CFRs and other regs, but that can’t be enough alone to earn it the title, can it?
And what say you bankruptcy and copyright experts?
And even more generally, what are the objective metrics we can use to calculate comparative statutory complexity? (Yes, I’m picturing an NCAA-style tourney bracket right now.)
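For what it’s worth, even a toy metric makes the question concrete. The sketch below counts quoted defined terms, internal cross-references, and average sentence length in a snippet of statutory text; these indicators are entirely my own invention, not a validated measure of statutory complexity, and the sample text is made up in the style of ECPA.

```python
import re

def complexity_score(statute_text: str) -> dict:
    """Crude, invented indicators of statutory complexity: how many terms
    the text defines, how many sections it cross-references, and how long
    its sentences run on average."""
    definitions = len(re.findall(r'["\u201c][^"\u201d]+["\u201d]\s+means', statute_text))
    cross_refs = len(re.findall(r'\bsection\s+\d+', statute_text, re.IGNORECASE))
    sentences = [s for s in re.split(r'[.!?]+', statute_text) if s.strip()]
    avg_words = len(statute_text.split()) / max(len(sentences), 1)
    return {"definitions": definitions,
            "cross_references": cross_refs,
            "avg_sentence_length": round(avg_words, 1)}

# A made-up snippet in the style of 18 U.S.C. definitions:
sample = ('"electronic communication" means any transfer of signs, signals, '
          'or data. See section 2510 and section 2703 of this title.')
score = complexity_score(sample)
```

Run the same function over ECPA and over a slice of the IRC and the bracket seeds itself; whether those counts track anything a lawyer would call confusion is, of course, the real question.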
posted by Paul Ohm
Thanks to Dan and company for agreeing to let me blog here again. During my stint, I promise to talk about the law (and in particular, the threat to privacy posed by Internet Service Providers) but let me warm up with some lighter, more navel-gazing fare:
I’m serving for the first time on our Appointments committee this year, which means I get to look at the FAR form database from the other end of the telescope. Rick Garnett asks about the weaknesses of the form itself, but I wanted to comment instead on the awful user interface AALS provides for those of us perusing the forms.
The FAR form database’s user interface recalls the aesthetic of most of the phishing scam websites I have ever seen. It is ugly, which itself is not much of a sin for such a utilitarian site, but it makes me wonder whether AALS is putting care into other aspects of the database, such as privacy and security. It is also very hard to use, and I will venture to guess that schools are missing some candidates they might otherwise want to interview because of the lousy interface. Here are some specific criticisms:
posted by Paul Ohm
I’ve overstayed my welcome, so I’ll be signing off with this post. Thanks to Dan and the other permabloggers for letting me participate.
Point a video camera at a television screen, aim a microphone at a speaker, or run a cable from the “line out” to the “line in” ports on the back of your computer, and you’re ready to exploit the so-called analog hole. Just press “play” on one device and “record” on the other, and you can copy a movie, television show, or song, even if the original is supposedly protected by digital rights management technology designed to prevent copying.
The analog hole–which arises from the fact that relatively-easy-to-protect digital content must be converted into harder-to-protect analog signals if we humans are to see or hear them–has given Hollywood and the recording industry a fair amount of heartache, has led them to displays of public consternation, and has even resulted in some proposed legislation.
Despite its frequent appearance in DRM debates, the analog hole is surprisingly unexplored in legal scholarship. Westlaw’s JLR database contains a mere thirty-seven articles that use the phrase, most in passing, and SSRN returns only three hits. Most of the commentary relies on an empirical assumption that has never before been rigorously tested: Exploiting the analog hole creates copies of such low quality as not to be good substitutes for the originals.
Doug Sicker, an Assistant Professor of Computer Science at my university, and Shannon Gunaji, a grad student, have tried empirically to test this assumption by conducting a series of surveys assessing, among other things, what the analog hole means for the typical music consumer. Doug asked me to help bring the early results to the legal academy, and our little article, entitled The Analog Hole and the Price of Music: An Empirical Study, has been posted to SSRN and will appear soon in the Journal of Telecommunications & High Technology Law.
Our results after the jump.
posted by Paul Ohm
Everybody knows that the Internet is teeming with super-powerful and nefarious miscreants who are almost impossible to stop and who can cause catastrophic harms. If you need proof, simply pick up any newspaper or watch any “hacker” movie. The problem is, what everybody knows is wrong. Or, at least so I argue in my most recent article, The Myth of the Superuser: Fear, Risk, and Harm Online, which I have posted to SSRN and submitted to a law review intake inbox near you. Here’s the abstract:
Fear of the powerful computer user, “the Superuser,” dominates debates about online conflict. This mythic figure is difficult to find, immune to technological constraints, and aware of legal loopholes. Policymakers, fearful of his power, too often overreact, passing overbroad, ambiguous laws intended to ensnare the Superuser, but which are used instead against inculpable, ordinary users. This response is unwarranted because the Superuser is often a marginal figure whose power has been greatly exaggerated.
The exaggerated attention to the Superuser reveals a pathological characteristic of the study of power, crime, and security online, which springs from a widely-held fear of the Internet. Building on the social science fear literature, this Article challenges the conventional wisdom and standard assumptions about the role of experts. Unlike dispassionate experts in other fields, computer experts are as susceptible as laypeople to exaggerating the power of the Superuser, in part because they have misapplied Larry Lessig’s ideas about code.
The experts in computer security and Internet law have failed to deliver us from fear, resulting in overbroad prohibitions, harms to civil liberties, wasted law enforcement resources, and misallocated economic investment. This Article urges policymakers and partisans to stop using tropes of fear; calls for better empirical work on the probability of online harm; and proposes an anti-Precautionary Principle, a presumption against new laws designed to stop the Superuser.
posted by Paul Ohm
Law Professors who write about the Internet tend to develop facts through a combination of anecdote and secondary-source research, through which information about the conduct of computer users, the network’s structure and architecture, and the effects of regulation on innovation is intuited, developed through stories, or recounted from others’ research. Although I think a lot of legal writing about the Internet is very, very good, I’ve long yearned for more “primary source” analysis.
In other words, there is room and need for Internet law scholars who write code. Although legal scholars aren’t about to break fundamental new ground in computer science, the hidden truths of the Internet don’t run very deep, and some very simple code can elicit some important results. Also, there is a growing cadre of law professors with the skills needed to do this kind of research. I am talking about a new form of empirical legal scholarship, and empiricists should embrace the perl script and network connection as parts of their toolbox, just as they adopted the linear regression a few decades ago.
I plan to talk about this more in a subsequent post or two, but for now, let me give some examples of what I’m describing. Several legal scholars (or people closely associated with legal scholarship) are pointing the way for this new category of “empirical Internet legal studies.”
- Jonathan Zittrain and Ben Edelman, curious about the nature and extent of filtering in China and Saudi Arabia, wrote a series of scripts to “tickle” web proxies in those countries to analyze the amount of filtering that occurs.
- Edelman has continued to engage in a particularly applied form of Internet research, for example see his work on spyware and adware.
- Ed Felten—granted, a computer scientist not a law professor—and his graduate students at Princeton have investigated DRM and voting machines with a policy bent and a particular focus on applied, clear results. Although the level of technical sophistication found in these studies is unlikely to be duplicated in the legal academy soon, his methods and approaches are a model for what I’m describing.
- Journalist Kevin Poulsen created scripts that searched MySpace’s user accounts for names and zip codes that matched the DOJ’s National Sex Offender Registry database, and found more than 700 likely matches.
- Finally, security researchers have set up vulnerable computers as “honeypots” or “honeynets” on the Internet, to give them a vantage point from which to study hacker behavior.
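To give a sense of just how little code some of this research requires, here is a toy sketch in Python, loosely in the spirit of Poulsen’s matching exercise. Everything here is invented for illustration: the field names, the data, and the matching rule are my own simplifications, not anything from the actual study.

```python
# Hypothetical sketch: match social-network profiles against a registry
# by comparing (name, zip code) pairs. All data below is made up.

def find_matches(profiles, registry):
    """Return the profiles whose (name, zip) pair appears in the registry."""
    # Build a set of lowercase (name, zip) keys for fast membership tests.
    registry_keys = {(r["name"].lower(), r["zip"]) for r in registry}
    return [p for p in profiles
            if (p["name"].lower(), p["zip"]) in registry_keys]

profiles = [
    {"name": "John Doe", "zip": "80302"},
    {"name": "Jane Roe", "zip": "10001"},
]
registry = [
    {"name": "john doe", "zip": "80302"},
]

matches = find_matches(profiles, registry)
print(matches)  # only the John Doe profile is a likely match
```

Real studies of this kind obviously need far more care (name variants, stale data, false positives), but the core logic is a few lines of scripting, which is the point: the barrier to entry for this kind of empirical work is low.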
What are other notable examples of EILS? Let’s keep with the grand Solovian tradition, and call this a Census. Is this sub-sub-discipline ready to take off, or should we mere lawyers leave the coding to the computer scientists?
posted by Paul Ohm
Imagine you give an exam with two questions, each supposedly worth 50% of the final grade. Imagine further you grade both questions and properly normalize the scores for each one to a 50 point scale. (I’m not so sure all professors normalize properly, but that’s a different problem.)
What do you do if the standard deviations in the two normalized grade populations vary widely? In other words, imagine that question one elicits a long, flat curve: the lowest score is much lower than the highest score, and there is a lot of variation in the scores in between, while question two elicits a compact curve with a very high peak that drops off quickly in both directions.
Is it legitimate (fair, proper) simply to add the normalized scores for questions one and two to derive the final score? Does this cause the first question to exert an unfairly disproportionate effect on the final curve? First, consider the extreme case. In a class of 50 students, every student gets a different normalized score for question one–from one to fifty points–while every student in the class gets the exact same normalized score–say 20 points–for question two. Simply adding the scores together means the final curve will match the curve for question one exactly, and question two will have been written out of the exam.
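The extreme case described above is easy to verify with a few lines of Python (a toy sketch, not anyone’s actual grading code):

```python
# The extreme case: 50 students, question one spreads normalized scores
# from 1 to 50, question two gives everyone the same 20 points.
import statistics

q1 = list(range(1, 51))   # normalized scores for question one
q2 = [20] * 50            # normalized scores for question two

combined = [a + b for a, b in zip(q1, q2)]

# Question two contributes nothing to the spread: adding a constant
# shifts every score but leaves the curve's shape untouched.
print(statistics.stdev(q2))        # 0.0
print(statistics.stdev(q1))        # ~14.58
print(statistics.stdev(combined))  # identical to question one's
```

Because the combined curve is just question one’s curve shifted by a constant, the rank order of students is determined entirely by question one, which is exactly the sense in which question two has been written out of the exam.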
posted by Paul Ohm
First, I want to thank Dan and the rest for allowing me to use a little of their space.
Among the many pleasures of teaching where I do is the opportunity to be on the sidelines for interesting debates about telecomm law and policy, thanks to the presence of scholars like Phil Weiser and Dale Hatfield (among many others). For example, for those of you who can’t get enough of the Net Neutrality debate, this weekend we’re offering two opportunities to hear more about it:
First, Micah Schwalb, a 3L and the EIC of the Journal on Telecomm and High Tech Law noticed that you could trace the history of the Net Neutrality debate by reading the Journal’s back issues and watching footage from our past Silicon Flatirons conferences. So he has put together a new website, neutralitylaw.com, that pulls all of these resources together. Here you’ll find videos of talks by Larry Lessig, Vint Cerf, and others (many of which have never been available online before now), and articles by Tim Wu, Chris Yoo, Barbara van Schewick, Phil, and more.
Second, on Sunday and Monday we are hosting our annual marquee Silicon Flatirons event, the Digital Broadband Migration conference. Every panel is stacked with interesting people, but none is as deep as the one I’m thrilled to moderate, entitled “Network Management: Beyond Net Neutrality.” The panelists include: Jerry Kang, Ed Felten, Howard Shelanski, Robert Pepper, Jim Speta, and Jon Nuechterlein. I know when I’m outclassed, so I’ll do my best to stay out of the way, but in honor of the blog, I may try to ask a question about the role of culture. If you’re anywhere near Boulder, please stop by and say hello.
And in case you can’t make it out, you’ll be able to find the video on neutralitylaw before too long. In the coming weeks, we’ll be adding many other videos from past conferences.
posted by Paul Ohm
Lately, I’ve been thinking a lot about legal and extra-legal responses to fear, so I’ve followed last week’s commentary about the Boston Mooninite scare with some interest.
The media’s influence on public fears is well documented, and it will be interesting to see how the “new media” play into or help defuse these fears. Some blogs are not handling this story well, and in particular I disagree with what many techie/lefty/civil-libertarian bloggers have had to say. Many of these bloggers are people I tend to agree with a lot of the time, which has led me to wonder why I don’t agree this time.
First, some have said that the Boston Police overreacted by shutting down parts of the city. These were kids publicizing a cartoon, after all! I admit that I’m untrained in bomb identification, but I’m guessing so are most of the other people who have commented. Why is it so hard to believe that a circuit board with batteries, wires, and a few other components (pictured above) might look like a bomb to a reasonable bomb expert? Shouldn’t Turner Broadcasting have even considered the possibility? Shouldn’t they have thought of consulting the authorities before taking three dozen of these things and attaching them to public places (including a bridge)? Is it really a surprise that the police assumed the worst?
(And yes, I know that some other cities’ police departments didn’t react this way when faced with the same devices. Less publicity has been given to the police departments that have corroborated Boston’s reaction. It proves to me only that reasonable police departments may differ.)
To their credit, some bloggers recognized that criticizing the immediate police response might reflect a hindsight bias. But convinced that something worthy of criticism or ridicule happened here, many went in search of other critiques.