Author: Paul Ohm


A Classroom Participation Technique for Cold-Callers: The “Catch”

In honor of the start of the fall semester, I wanted to share a classroom participation technique I started using last semester with encouraging results. I cold call in my classes, but I give every student the opportunity to pass three times during the semester when they don’t feel prepared. (Because of where I teach, I notice a suspicious uptick in passes on Mondays following fresh snowfall in the mountains!) As long as I’m notified of a student’s desire to pass before class begins, I won’t call on him or her.

Last semester I started giving students the option of using the reverse of a pass, which I punnily dubbed a “catch.” When a student feels especially prepared for a given class–perhaps she had plenty of time to read the night before, or maybe she has already read the cases for another class–she can put herself on call by sending me a “catch” before class begins. In return, I promise students who catch that I will not call on them for at least three subsequent classes.

Very few students caught (catched?) last semester, but on those occasions when they did, it led to some of the most productive Q&A I’ve had with students in five-plus years (including two years as an adjunct) of law teaching. The students who caught no doubt benefited by regaining some control over their fate; their classmates benefited from hearing good discussions of the days’ topics; and I gained the benefits of an on-call system without having the rest of the class skip the reading.

If you cold call already, try out this tweak this semester, and let me know how it goes.


How Not to Obtain Online Consent, or Why Panera Bread Owes Me Free Muffins


When I need to edit an article, I will sometimes park myself at a booth at the local Panera Bread, sipping the decent coffee, snacking on the beautiful (notice I didn’t say tasty) pastries, and using the free WiFi. Long ago, I noticed that Panera had made a stupid technological mistake that probably strips it of the right to manage its network lawfully.

Panera tries to extract consent from its users through what is known as a captive portal, the same method used by most hotel and airport WiFi network providers. When a Panera WiFi user first tries to connect to any website, Panera’s computers redirect her instead to its own web page with a link to its terms of service (ToS). Only when the user clicks “I agree” may she start surfing.
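
For readers who have never watched one from the inside, here is a minimal sketch of the redirect-until-agreed logic a captive portal implements. It is a toy, not Panera’s actual system: real portals intercept traffic at the network layer rather than running a single web server, and every name and path in the sketch is hypothetical.

```python
# A toy captive portal: redirect every request from an unrecognized client
# to a consent page until she clicks "I agree." Hypothetical names throughout;
# real gateways do this at the network layer, not in one web server.
from http.server import BaseHTTPRequestHandler, HTTPServer

AGREED = set()  # client IPs that have clicked "I agree"

class PortalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client = self.client_address[0]
        if self.path == "/terms":
            self._page(b"<pre>... the terms of service text ...</pre>")
        elif self.path == "/agree":
            AGREED.add(client)           # record the click as consent
            self.send_response(302)      # and send the user on her way
            self.send_header("Location", "/")
            self.end_headers()
        elif client not in AGREED:
            # The virtual stop sign: whatever page the user asked for,
            # serve the consent page with a link to the ToS instead.
            self._page(b'<a href="/terms">Terms of Service</a> '
                       b'<a href="/agree">I agree</a>')
        else:
            self._page(b"You may now start surfing.")

    def _page(self, body):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), PortalHandler).serve_forever()
```

Notice where a design mistake can creep in: everything turns on what that terms-of-service link actually points to.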

Compared to some of the other methods Internet providers use for attempting to obtain consent, a captive portal deserves some praise. It is much more likely to be noticed and read than a ToS or privacy policy link buried on a home page (or, as the case may be, not even on the home page). It is better than the paper privacy policies my credit card companies send with their monthly bills, usually along with a half-dozen ads. Unlike either of these methods, a captive portal acts like a virtual stop sign–until you click “I agree,” you can go no further. (Of course, calling even a captive portal meaningful consent seems to stretch things if the ToS offered are dozens of pages long.)

But if Panera ever tried to enforce its WiFi ToS–say it got caught monitoring user communications and had to defend against a wiretapping lawsuit, or say it was sued for banning a user suspected of downloading porn in violation of the ToS–a court should probably hold that its ToS are unenforceable. Panera has made a simple web design mistake that introduces doubt about which terms its users are actually agreeing to.



Which is More Confusing: ECPA or the Tax Code?

Hearing Sarah Lawsky crack wise so often and so hilariously about the Internal Revenue Code during her visit made me think of a little joke I have used many times when lecturing about the Electronic Communications Privacy Act (ECPA). After warning listeners that ECPA is complex and confusing, I will often say something like, “And I challenge any tax experts in the room to go head-to-head with me in a battle for the title of ‘most confusing part of the U.S. Code.’” The comment usually inspires a few polite titters–from the kind of people who find jokes about comparative statutory complexity funny–so I keep using it.

The problem is, I have no idea whether I have a leg to stand on. Can ECPA really hold a candle to the infamous complexity of the IRC? Is there another part of the U.S. Code that makes both of these seem lucid in comparison?

This connects to James Grimmelmann’s recent series of posts about a new lawyer being a menace to his or her clients. He has been developing the point that mere book larnin’ isn’t enough to prepare a lawyer to represent a client competently, at least not in certain substantive areas, and he offers wills & trusts, bankruptcy, and copyright as examples. What makes a substantive area of law more complicated than another?

Keeping it focused on legislation, what factors conspire to make a statute complex and confusing? (And, as an aside, can a statute be complex but not confusing, or confusing but not complex?) Within my areas of expertise, here are a few factors that make ECPA complex:

  1. ECPA defines many terms, and it defines many terms in ways that are disconnected from ordinary meaning. (I’m looking at you, “electronic storage”!)
  2. ECPA (and, more generally, the Wiretap Act, which predates ECPA) has many parallel definitions that Congress may not have intended to be treated alike (yes, I’m talking about you two, “wire communication” and “electronic communication”).
  3. ECPA interacts in mysterious ways with other laws (try to figure out what “readily accessible to the general public” means!)
  4. ECPA is rarely litigated. Orin Kerr explains how this has made a mess of the law in Lifting the ‘Fog’ of Internet Surveillance: How a Suppression Remedy Would Change Computer Crime Law, 54 Hastings Law Journal 805 (2003).
  5. ECPA regulates technology, so its meaning often shifts as technology changes. This problem is exacerbated because the basic structure and essential definitions are unchanged from 1986, so a law written to regulate mainframes is today applied to Web 2.0 and cloud computing.

So to all of the tax experts out there, what makes the tax code so complicated? Do all of the factors listed above apply to the IRC as well? The IRC is much longer than ECPA, and it is supplemented with reams of CFRs and other regs, but that can’t be enough alone to earn it the title, can it?

And what say you bankruptcy and copyright experts?

And even more generally, what are the objective metrics we can use to calculate comparative statutory complexity? (Yes, I’m picturing an NCAA-style tourney bracket right now.)
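
To make that last question concrete, here is one toy candidate metric, sketched in Python. Everything about it is my own assumption: the choice of proxies (defined terms, internal cross-references, sentence length), the patterns, and the sample text are all hypothetical, and I make no claim that these proxies capture what actually makes a statute confusing.

```python
# A toy "statutory complexity" profile: count definitional phrases and
# internal cross-references, and measure average sentence length, in the
# raw text of a statute. The proxies and patterns are illustrative only.
import re

def complexity_profile(statute_text):
    # Split on sentence-ending punctuation; statutes lean heavily on semicolons.
    sentences = [s for s in re.split(r"[.;]", statute_text) if s.strip()]
    words = statute_text.split()
    return {
        # Phrases like: the term "X" means ... usually mark a statutory definition.
        "defined_terms": len(re.findall(r"the term [\"\u201c]", statute_text, re.I)),
        # References to other sections force the reader to jump around the code.
        "cross_references": len(re.findall(r"\bsection \d+", statute_text, re.I)),
        "avg_words_per_sentence": round(len(words) / max(len(sentences), 1), 1),
    }

# A made-up snippet, standing in for the text of a real statute.
sample = (
    'The term "widget" means any device described in section 101; '
    "a widget does not include any device excluded under section 102."
)
print(complexity_profile(sample))
# {'defined_terms': 1, 'cross_references': 2, 'avg_words_per_sentence': 10.5}
```

By these crude lights one could score ECPA against the IRC and seed the bracket; whether the numbers would track anyone’s felt sense of confusion is exactly the open question.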


AALS FAR Form Database or Elaborate Phishing Scam?


Thanks to Dan and company for agreeing to let me blog here again. During my stint, I promise to talk about the law (and in particular, the threat to privacy posed by Internet Service Providers) but let me warm up with some lighter, more navel-gazing fare:

I’m serving for the first time on our Appointments committee this year, which means I get to look at the FAR form database from the other end of the microscope. Rick Garnett asks about the weaknesses of the form itself, but I wanted to comment instead on the awful user interface AALS provides for those of us perusing the forms.

The FAR form database’s user interface recalls the aesthetic of most of the phishing scam websites I have ever seen. It is ugly, which itself is not much of a sin for such a utilitarian site, but it makes me wonder whether AALS is putting care into other aspects of the database, such as privacy and security. It is also very hard to use, and I will venture to guess that schools are missing some candidates they might otherwise want to interview because of the lousy interface. Here are some specific criticisms:



What’s the Analog Hole Worth? Twenty-Four Cents

I’ve overstayed my welcome, so I’ll be signing off with this post. Thanks to Dan and the other permabloggers for letting me participate.

Point a video camera at a television screen, aim a microphone at a speaker, or run a cable from the “line out” to the “line in” ports on the back of your computer, and you’re ready to exploit the so-called analog hole. Just press “play” on one device and “record” on the other, and you can copy a movie, television show, or song, even if the original is supposedly protected by digital rights management technology designed to prevent copying.

The analog hole–which arises from the fact that relatively-easy-to-protect digital content must be converted into harder-to-protect analog signals if we humans are to see or hear it–has given Hollywood and the recording industry a fair amount of heartache, has led them to displays of public consternation, and has even resulted in some proposed legislation.

Despite its frequent appearance in DRM debates, the analog hole is surprisingly unexplored in legal scholarship. Westlaw’s JLR database contains a mere thirty-seven articles that use the phrase, most in passing, and SSRN returns only three hits. Most of the commentary relies on an empirical assumption that has never before been rigorously tested: exploiting the analog hole creates copies of such low quality that they are not good substitutes for the originals.

Doug Sicker, an Assistant Professor of Computer Science at my university, together with Shannon Gunaji, a grad student, has tried to test this assumption empirically by conducting a series of surveys assessing, among other things, what the analog hole means for the typical music consumer. Doug asked me to help bring the early results to the legal academy, and our little article, entitled The Analog Hole and the Price of Music: An Empirical Study, has been posted to SSRN and will appear soon in the Journal of Telecommunications & High Technology Law.

Our results after the jump.



The Myth of the Superuser


Everybody knows that the Internet is teeming with super-powerful and nefarious miscreants who are almost impossible to stop and who can cause catastrophic harms. If you need proof, simply pick up any newspaper or watch any “hacker” movie. The problem is, what everybody knows is wrong. Or, at least so I argue in my most recent article, The Myth of the Superuser: Fear, Risk, and Harm Online, which I have posted to SSRN and submitted to a law review intake inbox near you. Here’s the abstract:

Fear of the powerful computer user, “the Superuser,” dominates debates about online conflict. This mythic figure is difficult to find, immune to technological constraints, and aware of legal loopholes. Policymakers, fearful of his power, too often overreact, passing overbroad, ambiguous laws intended to ensnare the Superuser, but which are used instead against inculpable, ordinary users. This response is unwarranted because the Superuser is often a marginal figure whose power has been greatly exaggerated.

The exaggerated attention to the Superuser reveals a pathological characteristic of the study of power, crime, and security online, which springs from a widely held fear of the Internet. Building on the social science fear literature, this Article challenges the conventional wisdom and standard assumptions about the role of experts. Unlike dispassionate experts in other fields, computer experts are as susceptible as laypeople to exaggerating the power of the Superuser, in part because they have misapplied Larry Lessig’s ideas about code.

The experts in computer security and Internet law have failed to deliver us from fear, resulting in overbroad prohibitions, harms to civil liberties, wasted law enforcement resources, and misallocated economic investment. This Article urges policymakers and partisans to stop using tropes of fear; calls for better empirical work on the probability of online harm; and proposes an anti-Precautionary Principle, a presumption against new laws designed to stop the Superuser.


Law Profs Who Code


Law professors who write about the Internet tend to develop facts through a combination of anecdote and secondary-source research, through which information about the conduct of computer users, the network’s structure and architecture, and the effects of regulation on innovation is intuited, developed through stories, or recounted from others’ research. Although I think a lot of legal writing about the Internet is very, very good, I’ve long yearned for more “primary source” analysis.

In other words, there is room and need for Internet law scholars who write code. Although legal scholars aren’t about to break fundamental new ground in computer science, the hidden truths of the Internet don’t run very deep, and some very simple code can elicit some important results. Also, there is a growing cadre of law professors with the skills needed to do this kind of research. I am talking about a new form of empirical legal scholarship, and empiricists should embrace the perl script and network connection as parts of their toolbox, just as they adopted the linear regression a few decades ago.

I plan to talk about this more in a subsequent post or two, but for now, let me give some examples of what I’m describing. Several legal scholars (or people closely associated with legal scholarship) are pointing the way for this new category of “empirical Internet legal studies”.

  • Jonathan Zittrain and Ben Edelman, curious about the nature and extent of filtering in China and Saudi Arabia, wrote a series of scripts to “tickle” web proxies in those countries to analyze the amount of filtering that occurs (a sketch of this kind of probe follows this list).
  • Edelman has continued to engage in a particularly applied form of Internet research, for example see his work on spyware and adware.
  • Ed Felten–granted, a computer scientist, not a law professor–and his graduate students at Princeton have investigated DRM and voting machines with a policy bent and a particular focus on applied, clear results. Although the level of technical sophistication found in these studies is unlikely to be duplicated in the legal academy soon, his methods and approaches are a model for what I’m describing.
  • Journalist Kevin Poulsen created scripts that searched MySpace’s user accounts for names and zip codes that matched the DOJ’s National Sex Offender Registry database, and found more than 700 likely matches.
  • Finally, security researchers have set up vulnerable computers as “honeypots” or “honeynets” on the Internet, to give them a vantage point from which to study hacker behavior.
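
To give a flavor of how simple this code can be, here is a hypothetical sketch, in Python rather than perl, of the sort of proxy probe described in the first bullet: fetch a list of test URLs through an open proxy inside the country under study and note which requests fail. The proxy address and URL list are placeholders I made up, not endpoints from the actual research, and a real study would need controls to separate filtering from ordinary network failure.

```python
# A hypothetical filtering probe: request test URLs through an HTTP proxy
# located in the country under study and record which ones fail. The proxy
# and URLs below are placeholders, not real research endpoints.
import urllib.request

PROXY = "203.0.113.7:3128"  # placeholder address for an in-country open proxy
TEST_URLS = [
    "http://example.com/",  # placeholder test list; a real study needs a
    "http://example.org/",  # carefully chosen sample of sensitive sites
]

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": "http://" + PROXY})
)

for url in TEST_URLS:
    try:
        with opener.open(url, timeout=10) as response:
            # Success through the proxy suggests the page is reachable there.
            print(url, "-> reachable, HTTP", response.getcode())
    except Exception as exc:
        # Timeouts, resets, or error pages may indicate filtering, though
        # they may also be ordinary network trouble.
        print(url, "-> blocked or unreachable:", exc)
```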

What are other notable examples of EILS? Let’s keep with the grand Solovian tradition, and call this a Census. Is this sub-sub-discipline ready to take off, or should we mere lawyers leave the coding to the computer scientists?


Exam Grading and Standard Deviations

Dave’s recent posts about grading have me wondering. Whenever I grade, I encounter the following mathematical choice, and I am often torn about which is the proper, fair choice to make.

Imagine you give an exam with two questions, each supposedly worth 50% of the final grade. Imagine further that you grade both questions and properly normalize the scores for each one to a 50-point scale. (I’m not so sure all professors normalize properly, but that’s a different problem.)

What do you do if the standard deviations in the two normalized grade populations vary widely? In other words, imagine that question one elicits a long, flat curve: the lowest score is much lower than the highest score, and there is a lot of variation in the scores in between, while question two elicits a compact curve with a very high peak that drops off quickly in both directions.

Is it legitimate (fair, proper) simply to add the normalized scores for questions one and two to derive the final score? Does this cause the first question to exert an unfairly disproportionate effect on the final curve? First, consider the extreme case. In a class of 50 students, every student gets a different normalized score for question one–from one to fifty points–while every student in the class gets the exact same normalized score–say 20 points–for question two. Simply adding the scores together means the final curve will match the curve for question one exactly, and question two will have been written out of the exam.
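
To make the arithmetic concrete, here is a short sketch of that extreme case, together with one standard statistical fix: converting each question’s scores to z-scores (mean zero, standard deviation one) before adding them, so that each question exerts equal pull on the final curve. The numbers mirror the hypothetical above; nothing here is a claim about how anyone actually grades.

```python
# The extreme case: fifty students, question one spread from 1 to 50,
# question two identical for everyone. Standardizing each question before
# adding is one conventional way to equalize their influence.
import statistics

q1 = [float(s) for s in range(1, 51)]  # question one: every score from 1 to 50
q2 = [20.0] * 50                       # question two: everyone earns exactly 20

def standardize(scores):
    """Rescale a question's scores to mean 0 and standard deviation 1."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    if sd == 0:
        # No variation: the question cannot distinguish any student.
        return [0.0] * len(scores)
    return [(s - mean) / sd for s in scores]

# Raw addition: the final ranking is exactly the question-one ranking, and
# question two has been written out of the exam.
raw = [a + b for a, b in zip(q1, q2)]

# Standardized addition: each question's spread gets equal weight (here the
# degenerate question two still contributes nothing, since it has no spread).
std = [a + b for a, b in zip(standardize(q1), standardize(q2))]

print("raw totals run from", min(raw), "to", max(raw))         # 21.0 to 70.0
print("spread of raw totals:", max(raw) - min(raw))            # 49.0
print("spread of std totals:", round(max(std) - min(std), 2))  # about 3.4
```

In the extreme case even the standardized sum tracks question one alone, because a question with no variance carries no information; the fix matters in the realistic case, where both questions vary but one varies much more.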



Two New Net Neutrality Resources

I first wanted to thank Dan and the rest for allowing me to use a little of their space.

Among the many pleasures of teaching where I do is the opportunity to be on the sidelines for interesting debates about telecomm law and policy, thanks to the presence of scholars like Phil Weiser and Dale Hatfield (among many others). For example, for those of you who can’t get enough of the Net Neutrality debate, this weekend we’re offering two opportunities to hear more about it:

First, Micah Schwalb, a 3L and the EIC of the Journal on Telecomm and High Tech Law, noticed that you could trace the history of the Net Neutrality debate by reading the Journal’s back issues and watching footage from our past Silicon Flatirons conferences. So he has put together a new website, neutralitylaw.com, that pulls all of these resources together. Here you’ll find videos of talks by Larry Lessig, Vint Cerf, and others (many of which have never been available online before now), and articles by Tim Wu, Chris Yoo, Barbara van Schewick, Phil, and more.

Second, on Sunday and Monday we are hosting our annual marquee Silicon Flatirons event, the Digital Broadband Migration conference. Every panel is stacked with interesting people, but none is as deep as the one I’m thrilled to moderate, entitled “Network Management: Beyond Net Neutrality.” The panelists include: Jerry Kang, Ed Felten, Howard Shelanski, Robert Pepper, Jim Speta, and Jon Nuechterlein. I know when I’m outclassed, so I’ll do my best to stay out of the way, but in honor of the blog, I may try to ask a question about the role of culture. If you’re anywhere near Boulder, please stop by and say hello.

And in case you can’t make it out, you’ll be able to find the video on neutralitylaw before too long. In the coming weeks, we’ll be adding many other videos from past conferences.


The Boston LED Party

Lately, I’ve been thinking a lot about legal and extra-legal responses to fear, so I’ve followed last week’s commentary about the Boston Mooninite scare with some interest.

The media’s influence on public fears is well documented, and it will be interesting to see how the “new media” play into or help defuse these fears. Some blogs are not handling this story well, and in particular I disagree with what many techie/lefty/civil-libertarian bloggers have had to say. Many of these bloggers are people I tend to agree with a lot of the time, which has led me to wonder why I don’t this time.

First, some have said that the Boston Police overreacted by shutting down parts of the city. These were kids publicizing a cartoon, after all! I admit that I’m untrained in bomb identification, but I’m guessing so are most of the other people who have commented. Why is it so hard to believe that a circuit board with batteries, wires, and a few other components (pictured above) might look like a bomb to a reasonable bomb expert? Shouldn’t Turner Broadcasting have even considered the possibility? Shouldn’t they have thought of consulting the authorities before taking three dozen of these things and attaching them to public places (including a bridge)? Is it really a surprise that the police assumed the worst?

(And yes, I know that some other cities’ police departments didn’t react this way when faced with the same devices. Less publicity has been given to the police departments that have corroborated Boston’s reaction. It proves to me only that reasonable police departments may differ.)

To their credit, some bloggers recognized that criticizing the immediate police response might reflect a hindsight bias. But convinced that something worthy of criticism or ridicule happened here, many went in search of other critiques.
