Author: David Robinson

CCR Symposium: Differences Among Online Entities

I’ve described my reactions to the first prong of Danielle’s proposed standard of care (IP logging) as well as the second prong (filtering). I’ll now complete the project with a brief look at the third and final prong of the proposed standard of care: differentiated expectations for different classes of online entities. Regretfully, these thoughts are composed in haste to get them in under the wire of the symposium’s conclusion.

ISPs differ from web sites in so many respects that it is probably best to deal with each in turn. Looking to the ISP side first: a regime of differentiated standards for different classes of service provider could exempt home users, public amenities, and other actors poorly positioned to log and authenticate the people to whom they provide service, and thereby deal effectively with the concerns raised in my first post. This would make the IP logging far from comprehensive, but logging by commercial ISPs could continue, as it already does, to provide useful information to law enforcement about which broadband customer originated certain traffic.
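
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of lookup such logging enables. Every field name and record below is invented for illustration; this is the general technique, not any particular ISP’s system.

```python
from datetime import datetime

# Hypothetical ISP assignment log: each record says which subscriber account
# held which IP address during which window of time. All values are invented.
ASSIGNMENT_LOG = [
    {"ip": "198.51.100.7", "account": "subscriber-1041",
     "start": datetime(2009, 4, 1, 8, 0), "end": datetime(2009, 4, 1, 20, 0)},
    {"ip": "198.51.100.7", "account": "subscriber-2267",
     "start": datetime(2009, 4, 1, 20, 0), "end": datetime(2009, 4, 2, 9, 30)},
]

def subscriber_for(ip, timestamp):
    """Return the account that held `ip` at `timestamp`, if the log has a record."""
    for record in ASSIGNMENT_LOG:
        if record["ip"] == ip and record["start"] <= timestamp < record["end"]:
            return record["account"]
    return None  # no record: e.g., traffic from an exempt provider or open hotspot

# The law-enforcement question: which broadband customer originated this traffic?
print(subscriber_for("198.51.100.7", datetime(2009, 4, 1, 22, 15)))  # subscriber-2267
```

As the sketch’s fallback case suggests, exempting home users and public amenities simply means that such queries go unanswered for traffic originating there.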

Danielle’s argument proposes new, harassment-related uses for IP information that is already logged and already routinely used in other legal contexts. This raises the question: In between the IP logging that already does occur, and the IP logging that a real-world implementation of Danielle’s proposal would wisely and reasonably not require, is there any new IP logging that the proposal would introduce for service providers? I’m not sure.

As for web sites: Large and well-established sites could be forced to filter content, clumsily and with collateral harm. They could be forced to retain logs of each visit, probably without too much added cost. But what about the periodic tendency of new web sites to become popular overnight? In some cases, the sites aren’t well engineered for their newfound popularity. In others, the very features that make the sites popular may inherently make filtering or logging difficult. Twitter may be an example of both phenomena: rather than a well-established site humming along, it has been a growing, unstable, sometimes broken site even as millions of people use it. Implementing filtering or logging requirements is difficult for any site that struggles with prior questions, like staying online while overwhelmed by user demand. And the very high volume of messages means that a tiny added cost for the posting of each new message (CPU cycles to analyze and filter, or a delay while the message waits for its turn to be added to a log) could help bring the whole system to a grinding halt.
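
To give a rough sense of the scale involved, here is a back-of-the-envelope sketch in Python. Every figure in it is invented for illustration; none is a measurement of Twitter or any other real site.

```python
# Back-of-the-envelope arithmetic for per-message overhead. All numbers invented.
messages_per_day = 20_000_000      # hypothetical volume for a busy messaging site
filter_cost_s = 0.002              # hypothetical CPU time to scan one message
log_write_cost_s = 0.001           # hypothetical time to append one log entry

added_seconds = messages_per_day * (filter_cost_s + log_write_cost_s)
print(f"Added processing: {added_seconds / 3600:.0f} CPU-hours per day")
# With these made-up numbers, roughly 17 extra CPU-hours every day -- a rounding
# error for a well-provisioned service, but a real burden for a site already
# struggling to stay online under the weight of its own growth.
```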

There’s much more to say here, and I hope, in time, to be able to develop it further. I’ll end by recording my gratitude to the organizers, my fellow participants, and our readers.

CCR Symposium: Screening Software

Yesterday, I introduced some practical considerations that suggest the regime of IP logging proposed in Cyber Civil Rights might be less effective than it sounds as a way to identify anonymous harassers. Today, I want to turn to the second of the three concrete policy elements the paper outlines, namely the use of screening software.

There are many hypothetical kinds of computer filtering software that would, if they existed, be highly valuable. For example, software that could filter out illicit copies of copyrighted works, without impinging on fair use, authorized copying, or the exchange of public domain materials, would be greeted eagerly not only by the content industry, but by ISPs. Software that could protect children from obscene materials online without collateral harm to protected expression (such as health information) would be ideal for libraries. Such software would also, as Danielle writes, be “wholly consistent with the Communications Decency Act’s objectives.” Congress has always been happy to permit the kind of well-done filtering imagined in these hypotheticals.

To her credit, Danielle does not assert that such ideal software exists today, and in that respect she stands above a long and unfortunate tradition of wishful thinkers. In fact, she acknowledges that there will be “inevitable failures of this software to screen out all offensive material.” (I imagine Danielle would also acknowledge the converse, inevitable failure to leave in all of the material that is not offensive in the salient sense.)

Is such software feasible? Danielle’s paper summarizes Susan Freiwald to the effect that “reducing defamation through technological means may be possible if companies invest in code to make it feasible.” Freiwald in the original writes: “If a legal rule demanded it, companies would likely invest in code that made it feasible” (Susan Freiwald, Comparative Institutional Analysis in Cyberspace: The Case of Intermediary Liability for Defamation, 14 Harv. J.L. & Tech. 569, 629). In other words, if the law required firms to invest in trying to solve this problem, they would invest. Freiwald, like Danielle, is apparently optimistic about the likely results of such investment. But the citation doesn’t offer authoritative grounds for optimism.

There’s no shortage of demand for the platonically ideal filtering software. And there would be plenty of privately profitable uses for it, if it did exist, as well as publicly beneficial ones. Public libraries may not provide much of a financial incentive for software development, but the content industries, as the conflicts over Digital Rights Management have repeatedly shown, certainly do. So why haven’t software companies created such software yet? One might argue that the potential market is too small, which does not strike me as plausible. Another theory would be that these firms are so ideologically committed to an unfettered Internet that they all choose, all the time, not to make these profitable investments. Yet another would be that they aren’t judging the technical risks and rewards accurately—the task is easier than they believe, or the market larger.

But the explanation that I find most persuasive may also be the simplest: The best we can hope to do, in filtering, is a crude approximation of the Platonic ideal. When software companies offer frustratingly coarse filters, and when they tell us that better ones are not feasible, they are making an admission against interest, and it deserves to be taken seriously.

It’s true that there is a moderate market for in-home filtering software directed at young children, and for some (but not most) workplace environments. These contexts share two important properties: First, the party purchasing the filtering software (parents or business owners or IT staff) does not have to live under its restrictions, and therefore may be less sensitive to the coarseness of those restrictions; and second, the harm from overblocking is low because neither young children nor employees in their work have as strong an interest in being able to send or receive free expression as the median Internet user does.

If ideal filtering were possible—if computers were, or could become, that good at evaluating human expression—then the technology would have applications far beyond the present case of preventing Internet harassment. But consider how hard it is to tell whether something counts as an instance of harassment. Lawyers and judges debate edge cases. Even an example from Danielle’s paper (suggesting that a harasser should be awarded a “Congressional medal”) could plausibly be read in its context as sarcastic reproach, rather than endorsement, of the harasser. A search for antagonizing words might catch harassers, but it would also ensnare Danielle’s paper and this symposium.
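
A minimal sketch, in Python, of the kind of keyword screen described above (the word list is invented for illustration) shows the problem: it flags writing that quotes or describes harassment just as readily as the harassment itself.

```python
# Minimal sketch of a keyword-based screen; the word list is invented for illustration.
ANTAGONIZING_WORDS = {"harass", "threat", "rape"}

def flagged(message):
    """Return True if the message contains any listed word (a crude proxy for harassment)."""
    text = message.lower()
    return any(word in text for word in ANTAGONIZING_WORDS)

# A harassing post is caught...
print(flagged("I'm going to harass her until she leaves the site"))                 # True
# ...but so is writing *about* harassment, such as a paper quoting its subjects.
print(flagged("The article documents the rape threats posted about its subjects"))  # True
```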

CCR Symposium: Practical Aspects of IP Logging

I’m honored to be taking part in the symposium. Danielle’s article illustrates an important problem and does a great job—as this ongoing symposium itself illustrates—of launching a conversation about how that problem may be addressed.

Thus far, the symposium’s discussion of “traceable anonymity” has focused on its legal and normative aspects. Danielle writes that the suitable standard of care for ISPs and web sites has three elements. In this post, I’ll review what those elements are, and discuss the first (mandatory IP address logging, which she calls “traceable anonymity”) in some detail. I’ll save the latter two elements of the proposed standard of care for subsequent posts.

Working at the boundary between policy scholarship and technical scholarship, one frequently observes a kind of “reciprocal optimism,” in which the lawyers make optimistic assumptions about how well technical solutions will work, and the technologists make optimistic assumptions about how well legal solutions will work. IP logging is, I fear, an instance of the former tendency.
