Archive for the ‘Symposium (Future of Internet)’ Category
posted by Adam Thierer
I’ve really enjoyed the back-and-forth in this symposium about the many issues raised in Jonathan Zittrain’s Future of the Net, and I appreciate that several of the contributors have been willing to address some of my concerns and criticisms in a serious way. I recognize I’m a bit of a skunk at the garden party here, so I really do appreciate being invited by the folks at Concurring Opinions to play a part in this. I don’t have much more to add beyond my previous essay, but I wanted to stress a few points and offer a challenge to those scholars and students who are currently researching these interesting issues.
As I noted in my earlier contribution, I’m very much hung up on this whole “open vs. closed” and “generative vs. sterile/tethered” definitional question. Much of the discussion about these concepts takes place at such a high level of abstraction that I get very frustrated and want to instead shift the discussion to real-world applications of these concepts. Because when we do, I believe we find that things are not so clear-cut. Again, “open” devices and platforms rarely are perfectly so; and “closed” systems aren’t usually completely clamped down. Same goes for the “generative vs. sterile/tethered” dichotomy.
That’s one reason I’ve given Jonathan such grief for making Steve Jobs and his iPhone the villain of his book, which is highlighted in the very first and last line of Future of the Net as the model of what we should hope to avoid. But is it really? Ignore the fact that there are plenty of more “open” or “generative” phones / OSs on the market. The more interesting question here is how “closed” is the iPhone really? And how does it stack up next to, say, Android, Windows Mobile, Blackberry, Palm, etc.? More importantly, how and when do we take the snapshot and measure such things?
I’ve argued that Zittrain’s major failing in FoTN—and Lessig’s in Code—comes down to a lack of appreciation for just how rapid and unpredictable the pace of change in this arena has been and will continue to be. The relentlessness and intensity of technological disruption in the digital economy is truly unprecedented. We’ve had multiple mini-industrial revolutions within the digital ecosystem over the past 15 years. I’ve referred to this optimistic counter-perspective in terms of “evolutionary dynamism” but it’s really more like revolutionary dynamism. Nothing—absolutely nothing—that was sitting on our desks in 1995 is still there today (in terms of digital hardware / software, I mean). Heck, I doubt that much of what was on our desk in 2005 is still there either—with the possible exception of some crusty desktop computers running Windows XP.
posted by Quinn Norton
The project behind writing a book like The Future of the Internet is not only admirable, it should and does inspire people to think about major philosophical and social questions about the inherent politics in technological infrastructure. The project is also hard, and likely to draw critics, both valid and not. When you set out to talk about the future of something as broad and culturally revolutionary as the internet, you can’t possibly hope to succeed, you can only hope to fail better over time.
To continue the evolving failure, I’d like to take an ecological approach to The Future of the Internet and ask: in what context do Zittrain’s points exist? We are told that more developers are writing for the iPhone and Facebook than for Linux, that iPhones dominate the landscape, that iPads might determine something of our political future. But this is only true for something that is already a walled garden: the American socio-economic middle and upper class. Beyond this barrier of perspective the landscape is very different. Is it true that people develop for the iPhone in preference to other platforms in Kenya? What about China or South Korea? Probably not, but the transnational nature of the net means we have to care about those places as well if we want to come up with a true picture of what’s going on, or going to happen.
Skype is one of the most often cited examples of an application people want to protect in the net neutrality debate. It was developed by Estonian hackers previously famous for the illegal file-sharing app Kazaa. When Kazaa came out, no analysts or tech pundits were saying “Look to Estonia to revolutionize the telecommunications debate.” But it’s obvious that Skype was informed by the peer-to-peer nature of Kazaa, and by the legal and technical troubles the Kazaa builders wrestled with. Now the walled gardens of the net have to quickly take and maintain a stance on Skype, on both a technical and a political level. What these kinds of applications ultimately demonstrate is that the next killer app has no pre-definable vector; if you lock down one part of the net, chain up one cohort, then another will be the source of disruption. To imagine that the governments of the world will somehow line up and cooperate on a net policy that universally kills this creative impulse is like waiting for a one-world government to solve the problems of climate change. Sure, it seems possible on paper, but don’t hold your breath.
Even if we could reliably regulate the internet, what is the internet? It’s a specific implementation of telecommunication infrastructure. But not terribly specific. It’s easy to say what is definitely the internet, harder to say what isn’t. Is text messaging part of the internet? My first instinct is to say no, but it’s an interface and control on many internet applications. It’s been a key part of monitoring and tightly integrated at administrative levels of the net for as long as it’s been around. So perhaps we have to allow it in the pool. What about phone calls themselves? Again, problematic, as telecom companies will sometimes use the same protocols and wires to transit calls as net traffic. African, Afghan, and Filipino programs that move banking onto cell phones show that generativity moves to the edges of the net/telecom division when you can’t access the net itself for some reason.
What is generative? This is also hard. The telecom infrastructure was built to be non-generative, non-open, and not user friendly. It was built top-down and tightly regulated. But the net was built on top of it, so it ultimately was generative despite the intentions of its builders. The net nested a bottom-up social structure in that top-down architecture. The total generativity of a system can only be determined in retrospect from how it was used, not from how it was architected. To focus only on the protocols as written to understand whether a technology will be generative is like trying to determine whether an artist has a good eye by looking at his DNA.
Generative and non-generative systems have always emerged from strange parents, and given birth to strange children. I’ve seen nothing to make me fear for the future of the net in general, though I think Facebook, Apple, and Zittrain’s points make me fear that the respectable net will be an increasingly boring place. Nevertheless they will fall in time. To keep their captive audience happy Apple has to be right all the time. The general purpose environment only has to be right once. People are not sticky, and getting less sticky by the day, and a change that captures their imagination will drag them away from a platform or a business model or a political system with scary haste. We can’t see these changes coming from looking at how things are structured to work. We have to look at the limits of how they might be messed with.
If you want to understand the future of the internet, or the future in general, you have to look past how technology is used, and see how it’s misused. Can the net go horribly wrong? Oh yes, but not only in the ways we can predict here, now. The radio was key to allied victory in WWII, and to instigating the Rwandan genocide 50 years later. Undoubtedly the net and cell phones will grow closer together, and have their moments of glory and horror in human history.
posted by Steven Bellovin
Zittrain’s book mentioned en passant that unlike the closed, proprietary services, the Internet has no authentication; he also suggests that this is tied to the alleged lack of consideration for security by the Internet’s designers. I won’t go into the latter, save to note that I regard it as a calumny; within the limits of the understanding of security 30 years ago, the designers did a pretty good job, because they felt that what was really at risk — the computers attached to the net — needed to protect themselves, and that there was nothing the network could or should do to help. This is in fact deeply related to Zittrain’s thesis about the open nature of the Internet, but I doubt I’ll have time to write that up before this symposium ends.
The question of identity, though, is more interesting; it illustrates how subtle technical design decisions can force certain policy decisions, much along the lines that Lessig set forth in Code. We must start, though, by defining “identity”. What is it, and in particular what is it in an Internet context? Let me rephrase the question: who are you? A name? A reputation? A fingerprint? Some DNA? A “soul”?
Tolkien probably expressed the dilemma best in a conversation between Frodo and Tom Bombadil in Lord of the Rings:
‘Who are you, Master?’ he asked.
‘Eh, what?’ said Tom sitting up, and his eyes glinting in the gloom. ‘Don’t you know my name yet? That’s the only answer. Tell me, who are you, alone, yourself and nameless?’
We are, in some sense, our names, with all the baggage appertaining thereto. For some web sites, you can pick an arbitrary name and no one will know or care if it’s your legal name. For other purposes, though, you’re asked to prove your identity, perhaps via the oft-requested “government-issued photo ID”. In other words, we have a second player: an authority who vouches for someone’s name. This authority has to be mutually trusted — I’m not going to prove my identity to Mafia, Inc., by giving them my social security number, birthdate, etc., and you’re not likely to believe what they say. Who is trusted will vary, depending on the circumstances; a passport issued by the government of Elbonia might be sufficient to enter the US, but MI-6 would not accept such a document even if it were in the name of James Bond. This brings up the third player: the acceptor or verifier.
When dealing with closed, proprietary networks, the vouching authority and the acceptor are one and the same. More to the point, the resources you are accessing all belong to the verifier. The Internet, though, is inherently decentralized. It is literally a “network of networks”; no one party controls them all. Furthermore, the resource of most interest — end-systems — may belong to people who don’t own any networks; they just buy connectivity from someone else. Who are the verifiers?
A biometric — fingerprints, DNA, retina prints, even “soul prints” — doesn’t help over the net. The verifier simply sees a string of bits; it has no knowledge of where they’re from. You may authenticate yourself to a local device via a biometric, but it in turn will just send bits upstream.
Because of the decentralized nature, there is no one verifying party. I somehow have to authenticate to my ISP. In dial-up days, this was done when I connected to the network; today, it’s done by physical connection (e.g., the DSL wire to your house) or at network log-in time in WiFi hotspots. My packets, though, will traverse very many networks on the way to their destination. Must each of them be a verifier? I can’t even tell a priori what networks my packets will use (see the previous discussion on interconnection agreements); I certainly don’t have business relationships with them, nor do I know whom they will consider acceptable identity vouchers.
This isn’t just a performance issue, though I should note that verifying every packet in the core of the Internet was well beyond the state of the art 30 years ago, and may still be impossible. It is an architectural limitation, stemming from the decision in the late 1970s to avoid a centrally-controlled core.
The design of the Internet dictates that you are only strongly authenticated to your local host or site. Anything beyond that is either taken on faith or is done by end-to-end authentication. That, though, is exactly how the Internet was designed to operate, and it doesn’t assume that any two parties even have the same notion of identity. My identity on my phone is a phone number; my login on my computers is “smb”; my university thinks I’m smb2132; Concurring Opinions knows me by my full name. Which is correct? Any and all — the Internet is too decentralized for any one notion of identity. Had the designers created a more centralized network, you might indeed be able to authenticate to the core. But there is no core, with all of the consequences, good and bad, that that implies.
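The end-to-end authentication Bellovin describes can be illustrated with a toy challenge-response exchange. This is a minimal sketch, not any deployed protocol: it assumes the two endpoints already share a secret established out of band, which is precisely what lets them authenticate each other without any network in between acting as a verifier.

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    """Verifier picks a fresh random nonce for each authentication attempt."""
    return os.urandom(16)

def prove(shared_key: bytes, challenge: bytes) -> bytes:
    """Prover answers with an HMAC over the verifier's nonce."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier recomputes the HMAC; only the holder of the key can match it."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The networks carrying these bytes play no role in the authentication:
# they see only opaque bits, as with the biometric example above.
key = b"secret shared out of band"
nonce = make_challenge()
assert verify(key, nonce, prove(key, nonce))            # correct key: accepted
assert not verify(key, nonce, prove(b"wrong", nonce))   # wrong key: rejected
```

Note that nothing here names the prover: the "identity" established is only "whoever holds this key," which each verifier maps to its own local notion of who you are.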
(This is my last post of the symposium. I’ll be offline for a few days; when I come back online, I may add a few comments. I’ve very much enjoyed participating.)
posted by Daithi Mac Sithigh
James Grimmelmann’s discussion of the essential theory of generativity and its value as the ‘right theory’ (as opposed to its application, which he suggests needs more discussion for FOI 2.0) is a nice link to something I’m still quite curious about. Since The Future Of The Internet came out, a diverse bunch have been responding to it, and I think those responses are worth considering in this symposium, as a way of adding some further spice to our analysis of a fine book and particularly its role in debates about theory and ideology.
This can start at quite a simple level. I smiled when, in the wonderful bookshop in the Tate Modern gallery in London, I spotted a single paperback copy of The Future Of The Internet in the ‘Critical Theory’ section, completely surrounded by the many works of Slavoj Žižek. Of course, methods of classification in libraries and bookstores can be revealing (even when everything is miscellaneous), and that’s certainly the case here. What sort of impact is Zittrain’s work having outside of cyberlaw – and what does that say about the development of cyberlaw itself? Many will know of the preface to Paul Berman’s reader on Law & Society Approaches To Cyberspace (via SSRN), where he takes a three-generations approach, suggesting that Zittrain (through the 2006 Harvard Law Review generativity article), along with Benkler and others, are a third generation combining aspects of the first (mid-90s debates about exceptionalism and cyberlibertarianism) and the second (sceptical, sober, Lessig, Reidenberg). I wonder if we can now articulate a better version of the third generation in its own right, though – and whether Zittrain himself sees it that way.
posted by Barbara van Schewick
[This is the second of two posts on Jonathan Zittrain’s book The Future of the Internet and how to stop it. The first post (on the relative importance of generative end hosts and generative network infrastructure for the Internet's overall ability to foster innovation) is here.]
In the book’s section on “The Generativity Principle and the Limits of End-to-End Neutrality,” Zittrain calls for a new “generativity principle” to address the Internet’s security problem and prevent the widespread lockdown of PCs in the aftermath of a catastrophic security attack: “Strict loyalty to end-to-end neutrality should give way to a new generativity principle, a rule that asks that any modifications to the Internet’s design or to the behavior of ISPs be made where they will do the least harm to generative possibilities.” (p. 165)
Zittrain argues that by assigning responsibility for security to the end hosts, “end-to-end theory” creates challenges for users who have little knowledge of how to best secure their computers. The existence of a large number of unsecured end hosts, in turn, may facilitate a catastrophic security attack that will have widespread and severe consequences for affected individual end users and businesses. In the aftermath of such an attack, Zittrain predicts, users may be willing to completely lock down their computers so that they can run only applications approved by a trusted third party.
Given that general-purpose end hosts controlled by users rather than by third-party gatekeepers are an important component of the mechanism that fosters application innovation in the Internet, Zittrain argues, a strict application of “end-to-end theory” may threaten the Internet’s ability to support new applications more than implementing some security functions in the network – hence the new principle.
This argument relies heavily on the assumption that “end-to-end theory” categorically prohibits the implementation of security-related functions in the core of the network. It is not entirely clear to me what Zittrain means by “end-to-end theory.” As I explain in chapter 9 of my book, Internet Architecture and Innovation (pp. 366-368), the broad version of the end-to-end arguments  (i.e., the design principle that was used to create the Internet’s original architecture) does not establish such a rule. The broad version of the end-to-end arguments provides guidelines for the allocation of individual functions between the lower layers (the core of the network) and the higher layers at the end hosts, not for security-related functions as a group.
posted by Frank Pasquale
William Gibson’s essay on “Google’s Earth” deserves to be read by anyone interested in the “future of the internet.” Gibson states that “cyberspace has everted. . . . and [c]olonized the physical”, “[m]aking Google a central and evolving structural unit not only of the architecture of cyberspace, but of the world.” He’s reminded me of James Boyle’s observation that:
Sadly for academics, the best social theorists of the information age are still science fiction writers and, in particular cyberpunks—the originators of the phrase ‘cyberspace’ and the premier fantasists of the Net. If one wants to understand the information age, this is a good place to start.
Some legal academics have taken this idea to heart; for example, Richard Posner apparently began writing Catastrophe in response to Margaret Atwood’s Oryx and Crake. With that in mind, I wanted to point to some speculative fiction that I think ought to inform our sense of “the future of the internet.”
September 8, 2010 at 3:57 pm | Posted in: Philosophy of Social Science, Privacy, Privacy (Electronic Surveillance), Sociology of Law, Symposium (Future of Internet), Technology
posted by Ryan Calo
I don’t know that generativity is a theory, strictly speaking. It’s more of a quality. (Specifically, five qualities.) The attendant theory, as I read it, is that technology exhibits these particular, highly desirable qualities as a function of specific incentives. These incentives are themselves susceptible to various forces—including, it turns out, consumer demand and citizen fear.
The law is in a position to influence this dynamic. Thus, for instance, Comcast might have a business incentive to slow down peer-to-peer traffic and refrain only due to FCC policy. Or, as Barbara van Schewick demonstrates, inter alia, in Internet Architecture and Innovation, a potential investor may lack the incentive to fund a start-up if there is a risk that the product will be blocked.
Similarly, online platforms like Facebook or Yahoo! might not facilitate communication to the same degree in the absence of Section 230 immunity for fear that they will be held responsible for the thousand flowers they let bloom. I agree with Eric Goldman’s recent essay in this regard: it is no coincidence that the big Internet players generally hail from these United States.
As van Schewick notes in her post, Zittrain is concerned primarily with yet another incentive, one perhaps less amenable to legal intervention. After all, the incentive to tether and lock down is shaped by a set of activities that are already illegal.
One issue that does not come up in The Future of the Internet (correct me if I’m wrong, Professor Zittrain) or in Internet Architecture and Innovation (correct me if I’m wrong, Professor van Schewick) is that of legal liability for that volatile thing you actually run on these generative platforms: software. That’s likely because this problem looks like it’s “solved.” A number of legal trends—aggressive interpretation of warranties, steady invocation of the economic loss doctrine, treatment of data loss as “intangible”—mean you cannot recover from Microsoft (or Dell or Intel) because Word ate your term paper. Talk about a blow to generativity if you could.
posted by Barbara van Schewick
Which factors have allowed the Internet to foster application innovation in the past, and how can we maintain the Internet’s ability to serve as an engine of innovation in the future? These questions are central to current engineering and policy debates over the future of the Internet. They are the subject of Jonathan Zittrain’s The Future of the Internet and how to stop it and of my book Internet Architecture and Innovation which was published by MIT Press last month.
As I show in Internet Architecture and Innovation, the Internet’s original architecture had two components that jointly created an economic environment that fostered application innovation:
1. A network that was able to support a wide variety of current and future applications (in particular, a network that did not need to be changed to allow a new application to run) and that did not allow network providers to discriminate among applications or classes of applications. As I show in the book, using the broad version of the end-to-end arguments (i.e., the design principle that was used to create the Internet’s original architecture)  to design the architecture of a network creates a network with these characteristics.
2. A sufficient number of general-purpose end hosts  that allowed their users to install and run any application they like.
Both are essential components of the architecture that has allowed the Internet to be what Zittrain calls “generative” – “to produce unanticipated change through unfiltered contributions from broad and varied audiences.”
In The Future of the Internet and how to stop it, Zittrain puts the spotlight on the second component: general-purpose end hosts that allow users to install and run any application they like and their importance for the generativity of the overall system.
posted by Betsy Masiello
Disclaimer: The views expressed here are mine alone and do not in any way represent those of my employer.
I appreciated Orin Kerr’s suggestion to take Adam Thierer’s seven objections to the Zittrain thesis as a starting point for further discussion. I’m particularly interested in exploring objection #2, that incentives already exist to check closed systems that negatively impact consumer welfare. In general, I agree with Adam’s assertion that these incentives exist, particularly in market economies. But I think the core value of Jonathan’s thesis is not so much an assertion that these incentives do not exist today, but rather a question as to whether we could create even more powerful ones through generative design.
The “perfect enforcement” consequences of tethered design that Jonathan explores seem to be very real, if you believe recent news about efforts in some countries to shut down services entirely if surveillance and censorship mechanisms are not put in place. As an American who has lived in the US most of her life, I can’t comment extensively on the extent to which incentives exist globally the way they do in the US. Here, I have faith that our right to free speech enshrined in the Constitution would enable a whistleblower to identify behavior of that nature. I also have faith that our competitive marketplace would lead to alternative services springing up quickly. It’s not clear to me that these and other incentives exist globally, so I’d like to broaden Adam’s point and ask how we could think about designing generative systems that would create the types of social and economic incentives required to check bad behavior on the part of powerful actors.
In 2009 Google launched a little project called Measurement Lab, an open platform of servers on which developers and researchers can deploy Internet measurement tools. It’s one example of an attempt to decentralize the power that comes with measurement and open up access to data that has otherwise been available only to a handful of backbone and last-mile providers. M-Lab, as it’s called, couldn’t have been launched by Google alone; it required collaboration between a diverse group of academic researchers, non-profit organizations, and companies, few of whom (if any!) had any direct financial interest in the project, not entirely unlike the Internet itself, if at a much smaller scale. The outcome of this project is that policymakers can have access to independent, objective data and research about Internet speed, latency, and accessibility. I think it’s fair to say that it’s a generative approach to solving the Internet’s accessibility problems.
When I think about the generativity thesis, these are the types of solutions to hard problems that come to mind. As Adam observes, Jonathan doesn’t lay out a concrete proposed solution for tackling the vast array of policy problems he identifies, but in my view the primary contribution of the work is not in a proposed solution. It is in a framework for thinking about a possible solution space.
posted by Danielle Citron
In his post, Adam Thierer presses on the question of whether we can distinguish open and closed systems. He suggests that Zittrain overstates the problem, noting that many networks and appliances combine features of generativity and tetheredness and that consumers can always choose products and networks with characteristics that they like.
To be sure, it can be difficult to identify the degree of openness/generativity of systems, but not just because appliances and networks combine them seamlessly. Confusion may arise because providers fail to articulate their positions clearly and transparently regarding certain third party activities. This surely explains some of the examples of contingent generativity that Zittrain highlights: one minute the app you wrote is there, the next it is not, or postings at the content layer appear and then are gone. In the face of vague policies, consumers may have difficulty making informed choices, especially when providers embed decisions into architecture.
Part of Zittrain’s plan to preserve innovation online is to enlist netizens to combat harmful activities that prompt providers to lock down their devices. A commitment to transparency about unacceptable third-party activities can advance that important agenda. For instance, social media providers often prohibit “hateful” speech in their Terms of Service or Community Guidelines without defining it with specificity. Without explaining the terms of, and harms to be prevented by, hate speech policies as well as the consequences of policy violations, users may lack the tools necessary to engage as responsible netizens. Some social media providers inform users when content violating their Terms of Service has been taken down, a valuable step in educating communities about the limits to openness. Users of Facebook can see, for instance, that the Kill a Jew Day group once appeared and has now been removed. This sort of transparency is a first step in an important journey of allowing consumers to make educated choices about the services/appliances/networks they use and to garner change through soft forms of regulation.
posted by Jonathan Zittrain
The Future of the Internet has a lot of worries in it about the state of cybersecurity. I’ve argued against some extremely knowledgeable people in saying that the cyberwarfare threat has not been greatly exaggerated. But there are some security fears that just don’t bother me so much.
In 1996, a physicist named Alan Sokal published an article in Social Text, a cultural studies journal. It was called “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” and as the name suggests, it’s pretty impenetrable. You can check it out here. Soon after it came out, he published an article in the now-defunct Lingua Franca, saying that the first article had been a hoax. He said he did it to see if the journal would “publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors’ ideological preconceptions.”
I remember feeling pretty sympathetic to the Social Text editors at the time — which was before I was immersed in legal academia, where most of the law reviews are run by students and don’t perform what other fields would recognize as formal peer review. Publishing an article doesn’t mean that the journal editors agree with everything it says, and no doubt the Social Text editors had little experience dealing with physics. Sure, they could have sent it to other physicists, but in the meantime they probably welcomed what looked like a rare attempt by someone from the hard sciences to communicate with an otherwise-alien audience, even if the person was deemed an apostate by his colleagues. Moreover, being of the postmodern deconstructionist bent, they gleaned a lot from the text — no doubt more than what its insincere author had put in. (As Wiki says they put it: “its status as parody does not alter, substantially, our interest in the piece, itself, as a symptomatic document.”)
I was reminded of the Sokal Affair when I read Thomas Ryan’s presentation to the 2010 Black Hat conference about one Robin Sage. This isn’t the U.S. special ops training exercise conducted each year, but rather a fake identity the author created on LinkedIn and elsewhere.
The author says he intentionally chose the photo of a young, attractive woman in order to better do what he did next: friend a bunch of security professionals on LinkedIn. He says that Robin’s success in social networking said something about the security chops of those who friended her.
I’m not so sure. He convincingly writes that her profile’s credibility could be debunked with a little Internet sleuthing, but I don’t think it’s surprising that many social network users don’t go to such lengths. Some people are picky about from whom they allow connections; others are content to accept anything that looks like it’s not a spammer — and Robin was not.
Ryan includes some snippets of messages that Robin received from her new connections. One asked her to review a paper he was writing; another complimented her on her looks; another pointed out a job opportunity. I’m not sure any of these is troublesome. Ryan figures that if the paper were shared and was pre-publication, a malevolent person behind the Robin persona could have passed it off as his or her own. That’s a bit of a reach. Yes, anything can happen, but there are risks in any communication or interaction with a stranger or mere acquaintance. Ryan says in his paper’s summary that Robin was offered “gifts, government and corporate jobs, and options to speak at a variety of security conferences.” But when that’s unpacked in the main text, it’s all very tentative — pointing out a job opportunity is not the same as offering a job, and suggesting interest in a conference is not the same as vetting the presentation should the interest be reciprocated. There’s an intriguing section of the paper about the gender dynamic — Ryan intentionally chose a young, attractive woman as Robin’s avatar, and suggests that “Whether these same reactions would have been elicited towards another male is questionable. It can be put forth that Robins appearance and gender played a key role in many people’s comfort level.”
There’s some interesting research on this sort of thing, such as a study by researchers at the University of Wisconsin in which identical resumes were sent out for academic jobs with only the names switched from one gender to the other. They found that the men were given more opportunities than their identical female counterparts. At the very least, gender comfort level can cut both ways, and Ryan’s experiment was, I think even by his own account, as casual as Alan Sokal’s with Social Text: each was meant more as a provocation than as a genuine investigation of gender bias or sloppy intellectual work, respectively.
The Robin Sage experiment — and the lessons we’re supposed to draw from it — interest me because I’m interested in the ways in which kindness among strangers can be crucial to the world being a good place to live — and the Internet functioning at all. It’s not surprising that a security professional would conduct an experiment in which people were duped into friending someone who wasn’t real and then conclude that those people were observing security practices that were too lax. But the more you think about it, the more you can think of all sorts of similar experiments: offer to help someone with his or her shopping bags, and then drop them. See someone taking a picture of his friends in a park, offer to do it so he can join the picture, and then run away with the camera. Hold a door for someone, and then hit them from behind. Should an experimenter do any of these, would the lesson be about the gullibility of the target or the cruelty of the experimenter?
To be sure, Ryan’s experiment was conducted among fellow security professionals. He notes that Robin’s fake job description indicated that she held a U.S. federal government security clearance — so other people with clearances might be misled into sharing classified information with her. But there’s no reason to think that people would spill secrets under those circumstances any more than you’d write a check for $5,000 or give your home address to a brand-new “friend” on Facebook.
The beauty of social networks like LinkedIn or Facebook is that they allow a level of connection with someone that has no easy real-world analogue. LinkedIn can be for colleagues and friends, but it also can include faraway students who want to connect with a professor they’ve never met — and maybe never will — or any number of other configurations. Just because Wikipedia allows anyone to edit most of its pages doesn’t mean that it innately and permanently trusts every edit. The system is set up to be able to revert the work of vandals, and any example of how “easy” it is to vandalize a Wikipedia page is beside the point. The idea there is that there are more people quickly responding to vandals than there are vandals — so an open system functions. Similarly, so long as we don’t share more than we mean to, the presence of strangers among our LinkedIn colleagues or even Facebook friends shouldn’t be a red flag. More might be gained from “friends we haven’t met” than lost to the occasional bad actor.
So: pleased to meet you, Thomas Ryan — if that’s who you really are. And even if it’s not. …JZ
posted by Joel Reidenberg
I would like to suggest another angle to consider in this dissection of JZ’s wonderful generative book: Do we still care about the ‘rule of law’?
The theory of generativity relies on self-governance through an open market approach and embodies an abhorrence of “governability” by states. This I find troubling. Why is governability by states so abhorrent? If we believe in the ‘rule of law,’ governability by states cannot be anathema. States, through their political and legal processes, express public values through law. Generativity has no mechanism for all of society’s stakeholders to participate in decision-making about the values embedded in technological choices. Privacy and security are good examples. Transparency may be the choice of some online participants with respect to their personal information, but that choice has important third-party implications (e.g. the consensual disclosure of a person’s DNA also reveals information about that person’s non-consenting relatives). The political and judicial processes arbitrate third-party rights and society’s reasonable expectations of privacy; by contrast, the technological development and deployment/adoption process simply imposes its determinations. With respect to security, JZ recognizes that generativity is self-destructive and looks to individual liability as the solution. Yet individuals will typically lack sufficient technical knowledge to engage in self-help. This is the classic situation in which citizens look to the state to protect the public’s welfare.
Lon Fuller, in his work The Morality of Law, argued that “laws must exist and those laws should be obeyed by all, including government officials.” The future of the internet should not grant an immunity card from accountability with respect to public values. Rejecting governability by states is, more precisely, a rejection of the rule of law. In this vein, the tethering of appliances may be a natural maturation of the internet toward acceptance and reinforcement of the ‘rule of law.’
posted by Steven Bellovin
I commented earlier that I doubted that a “banking appliance” layered on top of a generic PC would indeed be secure. Examining that statement sheds light on the limitations of computer security.
We first must define what we mean by “secure”. The usual computer science definition is the so-called “CIA triad”: Confidentiality, Integrity, and Availability. That is, private data should stay private, no unauthorized changes should be made to anything, and you should always be able to do your banking when you want to, regardless of what the malware is doing. I don’t think that appliances can do this.
It is clear from the start that overlay appliances cannot preserve availability; if nothing else, malware on the base operating system can delete any files used by the appliance. Perhaps those files are encrypted, but that doesn’t protect them from deletion or from being overwritten with garbage. This is probably a minor concern, though; empirically, there have been few recent attacks on the availability of desktop machines because it’s harder to make money that way. There have been a few, though, notably programs that encrypt people’s files but offer to decrypt them if a ransom is paid.
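Bellovin’s availability point can be made concrete with a minimal sketch (the filenames and byte contents here are hypothetical stand-ins, not from any real banking appliance): encryption keeps the host from reading the appliance’s state, but any process that can write to the file — i.e., malware on the base operating system — needs no key to destroy it.

```python
import os
import tempfile

# A scratch directory standing in for the appliance's storage area.
workdir = tempfile.mkdtemp()
vault = os.path.join(workdir, "banking-appliance.dat")

# The "appliance" writes its state as ciphertext the host cannot read.
with open(vault, "wb") as f:
    f.write(b"\x8f\x1c\xa2\x7e opaque ciphertext \x04\x9b")

# "Malware" on the base OS, with ordinary file access and no key,
# can overwrite the ciphertext with garbage...
with open(vault, "wb") as f:
    f.write(os.urandom(32))

# ...or simply delete the file outright. Confidentiality held;
# availability did not.
os.remove(vault)
assert not os.path.exists(vault)
```

The point of the sketch is that availability is a property of the whole stack: no amount of cryptography inside the overlay protects a file whose containing filesystem is controlled by a compromised host.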
posted by Orin Kerr
I’m not sure if it’s good blog symposium etiquette to make such a suggestion, but I think the seven numbered points Adam Thierer makes in his post might be a helpful starting point for additional debate about The Future of the Internet. Maybe I’m just behind as the reader who doesn’t know what he thinks, but I’d be interested in responses to Adam’s seven objections.
posted by James Grimmelmann
When The Future of the Internet was published, I knew immediately it was a big deal. Paul Ohm had very much the same thought. And so we got together, called ourselves an institute, and jointly wrote a book review, which we titled “Dr. Generative Or: How I Learned to Stop Worrying and Love the iPhone.” I wish I could link to it, but it’s not quite out yet–it went to the Maryland Law Review’s publishers about a month ago, and isn’t back yet. In its place, though, I thought I’d run down the main points Paul and I make in our review.
The book’s great contribution, the reason it will stay on shelves as long as we Internet academics still believe in printed books, can be boiled down to one word: “generativity.” In the Lessig/Reidenberg/Kapor tradition of thinking about computer code as a kind of regulation, one of the central questions has always been which features of the Internet’s architecture make it THE INTERNET, and thus worth caring about. People have proposed a lot of different virtues. “Openness,” as Adam discusses below, is a disconcertingly capacious and imprecise term. But most of the more concrete alternatives–”end-to-end”-ianness, “neutrality,” “layering,” “standardization,” “decentralization,” “tinkerability,” “free-as-in-freedom” software, and the “commons”–turn out to be near misses. They focus too narrowly on one part of a much bigger puzzle. For example, as Laura’s work demonstrates, even though standardization makes the Internet possible, it can also be a tool of political control and repression.
In contrast, Paul and I call generativity “the right theory.” The Internet’s capacity to support large and unanticipated creativity and innovation on a wide variety of levels is remarkable. Focusing on generativity allows us to sum up, in one simple concept, what makes the Internet distinctive, and distinctively valuable. That alone is a serious achievement. One can dispute–as this symposium is already showing–perhaps everything else in the book. But there really is no arguing with the theory of generativity itself.
That said, however, Paul and I express somewhat more skepticism about some of Zittrain’s applications of generativity. Our problem with the book–or, really, our reason to look forward to the sequel–is that only in a few places does the carefully worked out theory really make contact with his practical recommendations. The final third of the book consists of some very clever case studies and proposals, but there’s something of a missing link: the proposals don’t always clearly follow from the theory of generativity.
Our central example, and the backbone of our review, is Zittrain’s discussion of the iPhone. It and other “tethered appliances” feature what he calls “contingent generativity”: they can be programmed and extended for now, but Apple can always pull the plug on anything it doesn’t like. He’s afraid of that future–but the reasons he gives to worry about it aren’t really concerns about generativity as such. They implicate other values, like free speech and individual autonomy, and one must do more work than Zittrain has to link these values up with generativity. Indeed, it’s easy to make arguments that the iPhone and iPad have been massive improvements for generativity; recall Apple’s ad campaign that other phones have “the kinda sorta looks like the Internet” but the iPhone has “the Internet” itself.
Whether this and similar compromises–such as Google’s ability to turn off its cloud, or Wikipedia’s ability to revert your edits and ban your IP block–are worthwhile restrictions or not has to come from a richer, multivalued theory. That is, we think Zittrain has really and truly pinned down the fundamental architectural virtue of the Internet, but only just started on the long road of harnessing that theory to give advice for practical policy problems. In The Fourth Quadrant, Zittrain has started in on that important work–and we hope it’s a down payment on that sequel.
posted by Orin Kerr
The Future of the Internet rests on the combination of an empirical claim and a predictive claim. The empirical claim is about the characteristics of “open” versus “closed” systems: “Open” is better for us than “closed.” The predictive claim is that “the pieces are in place for a wholesale shift” from open to closed. The argument of the book is therefore about a coming future we want to avoid: Instead of allowing the shift from open to closed, we should work to ensure that we maintain an open system.
Is Zittrain right? My answer is to say something you’re not supposed to say on the Internet: I don’t know. Whether open or closed is better strikes me as a complicated empirical question. It may depend on the circumstances, and it certainly depends on your values. Either way, I’m not in a position to know in which circumstances each approach is likely to be helpful.
Similarly, I don’t know if we’re likely to see a wholesale shift in the balance between open and closed systems. I’m generally skeptical about Zittrain’s claim in Chapter 3 about the direction of cybersecurity — a claim that underlies Zittrain’s predictions about where we’re headed. My vague sense is that cybersecurity is generally improving over time, not getting worse. Of course, my impression is only anecdotal. But here are two anecdotes to give you an idea of why I’m skeptical that the problem is getting worse.
Back around 1995 to 2000, the Internet was regularly hit by global viruses like Melissa and I Love You that threatened to shut it down. We don’t generally see those same global threats today. By and large, virus threats have become an inconvenience dealt with by updating anti-virus software rather than the huge threat to the Net they were a decade ago. That’s a big shift — and it’s a shift toward better security rather than worse.
Another sign is the annual CSI/FBI Computer Crime and Security Survey, which publishes information about how much of a security threat major companies are seeing online. From around 2000 to 2005, the numbers went up every year — to a great deal of media attention and consternation. But since 2005, the numbers generally have dropped. Companies seem to be seeing less of a threat, and they’re reporting fewer losses than before. That doesn’t mean the problem is “solved” — no security problem is ever “solved” — but it does suggest that the cybersecurity picture is improving on the whole compared to where it was a few years ago.
Of course, none of this means Zittrain’s hypothesis is wrong. (And it’s a marvelously engaging and fun book either way.) Rather, I’m just unsure that the empirical and predictive claims are right. I don’t have enough of a sense of the empirical benefits of open and closed, or enough certainty about what the future holds, to really know.
posted by Salil Mehra
First off, thanks to Concurring Opinions and Danielle Citron for hosting this online symposium on Jonathan Zittrain’s The Future of the Internet – and How to Stop It. Before I launch into my own thoughts, I want to add my own version of the praise that the book has already won. It is an immensely readable work that succeeds in showing us where we’ve been, how we got to where we are, and the steps to take to avoid going where we’d rather not be.
I have three brief points, involving a comparison with Japan, some thoughts about competition, consumer protection and innovation, and finally, a somewhat different take on the lessons of Wikipedia.
This symposium is incredibly timely, particularly given the concern in recent weeks about the Google/Verizon agreement. In TFOTI, Zittrain highlights the risks that threaten the Internet’s future, and explains how the net neutrality debate is in some ways a mismatch for those risks. For example, he points out that the migration from the Internet to, in his words, tethered appliances like the iPhone and TiVo, ultimately provide an end-run around net neutrality on the Internet (pp. 177-185). Accordingly, he argues that preserving generativity is a better-tailored principle.
The lead article in The Economist this week also takes on the Google/Verizon agreement, and critiques net neutrality from a different angle, calling America’s “vitriolic net-neutrality debate” “a reflection of the lack of competition in broadband access.” If you’re reading this symposium, you probably already know, possibly because you read this, that in many other industrialized countries incumbent telcos were forced years ago – and not just in a superficial way – to open up wholesale broadband to competitors.
I’m in Tokyo this academic year thanks to Temple’s long reach across the globe and to my gracious hosts at Keio University Law School. I’ve been travelling to Japan repeatedly since the late 1980s, and one of the changes I’ve been struck by is how a country that in the 1990s was generally held to be well behind the U.S. in telecommunications now seems ahead in broadband and mobile Internet. Read the rest of this post »
posted by Frank Pasquale
There have been a great series of posts on the book here today, and all are well worth pondering on their own terms. I just wanted to throw in a controversial perspective that might lead to more dialogue.
Thierer’s post makes the case for optimism about the future of the internet. To bolster his point of view, he might also have drawn on a growing literature skeptical of the “cyberwar” threat. There’s a transcript of a cyberwar debate involving Zittrain here, where Marc Rotenberg and Bruce Schneier were pretty skeptical of the threat. On p. 41 or so Schneier makes a political economy case that the threat may be exaggerated by those with commercial interests in selling security-related products and services. Glenn Greenwald’s take on cyberwar is more caustic:
In every way that matters, the separation between government and corporations is nonexistent, especially (though not only) when it comes to the National Security and Surveillance State. Indeed, so extreme is this overlap that even [Bush's Director of National Intelligence] McConnell . . . told The New York Times that his ten years of working “outside the government,” for Booz Allen, would not impede his ability to run the nation’s intelligence functions.
That’s because his Booz Allen work was indistinguishable from working for the Government, and therefore — as he put it — being at Booz Allen “has allowed me to stay focused on national security and intelligence communities as a strategist and as a consultant. Therefore, in many respects, I never left.” As the NSA scandal revealed, private telecom giants and other corporations now occupy the central role in carrying out the government’s domestic surveillance and intelligence activities — almost always in the dark, beyond the reach of oversight or the law. . . .At this point, it’s more accurate to view the U.S. Government and these huge industry interests as one gigantic, amalgamated, inseparable entity — with a public division and a private one.
If we take Greenwaldian concerns seriously (as apparently the Cato Institute has), it’s vital that we get objective analysis of the cyberwar threat from those without a commercial interest in its being either exaggerated or downplayed. But it would also be good for Thierer to acknowledge that the type of strict divide between public and private that is the premise of his final paragraph really doesn’t exist. Privacy laws are so easily circumvented that government will almost always have some access to data collection about individuals by corporations.
posted by Adam Thierer
In his opening essay in this symposium, Jonathan Zittrain assures us that he is “not exactly a pessimist.” “I recognize, and celebrate,” he says, “the fact that the digital environment of 2010 is the coolest, most interesting, most option-filled it’s ever been.” Terrific! I am glad to hear that because the crux of my repeated critiques of his book, The Future of the Internet, over the past two years has been focused on its unrelenting – and largely unwarranted – pessimism about our possible cyber-futures. Alas, his essay on these pages still displays much of that underlying techno-pessimism and compels me to ask: Will the real Jonathan Zittrain please stand up?
Regardless of whether Zittrain is more optimistic now than when he penned his book two years ago, others are seemingly taking its pessimist message to heart. Indeed, “the Death of the Internet” is a hot meme in the Internet policy world these days. Much as a famous 1966 cover of Time magazine asked “Is God Dead?” Wired magazine, the magazine for the modern digerati, proclaimed in a recent cover story that “The Web is Dead.” And just this past week, The Economist magazine ran a cover story fretting about “The Web’s New Walls,” wondering “how the threats to the Internet’s openness can be averted.” As in Zittrain’s book, the primary fear expressed in both essays was that the wide-open Internet experience of the past decade is giving way to a new regime of corporate control and walled gardens.
Before addressing this concern in more detail, let’s consider the origins of Zittrain’s pessimism. Zittrain’s Future of the Internet, as well as Tim Wu’s soon-to-be-released The Master Switch: The Rise and Fall of Information Empires, might best be understood as the second and third installments in a trilogy that began with the publication of Lawrence Lessig’s seminal 1999 book, Code and Other Laws of Cyberspace. Read the rest of this post »
posted by Jonathan Zittrain
Google CEO Eric Schmidt created buzz (and some shock and criticism) when he suggested in a recent Wall Street Journal interview that, in the not too distant future, “every young person…will be entitled automatically to change his or her name on reaching adulthood in order to disown youthful hijinks stored on their friends’ social media sites.”
I’ve been intrigued by these concepts, too, and while I don’t think people should have to change their names to escape their pasts — whether earned or unearned — I like the idea of reputation bankruptcy. It’s taken up as a partial solution to peer-to-peer privacy problems in the Future of the Internet:
Search is central to a functioning Web, and reputation has become central to search. If people already know exactly what they are looking for, a network needs only a way of registering and indexing specific sites. Thus, IP addresses are attached to computers, and domain names to IP addresses, so that we can ask for www.drudgereport.com and go straight to Matt Drudge’s site. But much of the time we want help in finding something without knowing the exact online destination. Search engines help us navigate the petabytes of publicly posted information online, and for them to work well they must do more than simply identify all pages containing the search terms that we specify. They must rank them in relevance. There are many ways to identify what sites are most relevant. A handful of search engines auction off the top-ranked slots in search results on given terms and determine relevance on the basis of how much the site operators would pay to put their sites in front of searchers. These search engines are not widely used. Most have instead turned to some proxy for reputation. As mentioned earlier, a site popular with others—with lots of inbound links—is considered worthier of a high rank than an unpopular one, and thus search engines can draw upon the behavior of millions of other Web sites as they sort their search results. Sites like Amazon deploy a different form of ranking, using the “mouse droppings” of customer purchasing and browsing behavior to make recommendations—so they can tell customers that “people who like the Beatles also like the Rolling Stones.” Search engines can also more explicitly invite the public to express its views on the items it ranks, so that users can decide what to view or buy on the basis of others’ opinions. Amazon users can rate and review the items for sale, and subsequent users then rate the first users’ reviews. 
Sites like Digg and Reddit invite users to vote for stories and articles they like, and tech news site Slashdot employs a rating system so complex that it attracts much academic attention.
eBay uses reputation to help shoppers find trustworthy sellers. eBay users rate each others’ transactions, and this trail of ratings then informs future buyers how much to trust repeat sellers. These rating systems are crude but powerful. Malicious sellers can abandon poorly rated eBay accounts and sign up for new ones, but fresh accounts with little track record are often viewed skeptically by buyers, especially for proposed transactions involving expensive items. One study confirmed that established identities fare better than new ones, with buyers willing to pay, on average, over 8 percent more for items sold by highly regarded, established sellers. Reputation systems have many pitfalls and can be gamed, but the scholarship seems to indicate that they work reasonably well. There are many ways reputation systems might be improved, but at their core they rely on the number of people rating each other in good faith well exceeding the number of people seeking to game the system—and a way to exclude robots working for the latter. For example, eBay’s rating system has been threatened by the rise of “1-cent eBooks” with no shipping charges; sellers can create alter egos to bid on these nonitems and then have the phantom users highly rate the transaction. One such “feedback farm” earned a seller a thousand positive reviews over four days. eBay intervenes to some extent to eliminate such gaming, just as Google reserves the right to exact the “Google death penalty” by de-listing any Web site that it believes is unduly gaming its chances of a high search engine rating.
These reputation systems now stand to expand beyond evaluating people’s behavior in discrete transactions or making recommendations on products or content, into rating people more generally. This could happen as an extension of current services—as one’s eBay rating is used to determine trustworthiness on, say, another peer-to-peer service. Or, it could come directly from social networking: Cyworld is a social networking site that has twenty million subscribers; it is one of the most popular Internet services in the world, largely thanks to interest in South Korea. The site has its own economy, with $100 million worth of “acorns,” the world’s currency, sold in 2006.
Not only does Cyworld have a financial market, but it also has a market for reputation. Cyworld includes behavior monitoring and rating systems that make it so that users can see a constantly updated score for “sexiness,” “fame,” “friendliness,” “karma,” and “kindness.” As people interact with each other, they try to maximize the kinds of behaviors that augment their ratings in the same way that many Web sites try to figure out how best to optimize their presentation for a high Google ranking. People’s worth is defined and measured precisely, if not accurately, by the reactions of others. That trend is increasing as social networking takes off, partly due to the extension of online social networks beyond the people users already know personally as they “befriend” their friends’ friends’ friends.
The whole-person ratings of social networks like Cyworld will eventually be available in the real world. Similar real-world reputation systems already exist in embryonic form. Law professor Lior Strahilevitz has written a fascinating monograph on the effectiveness of “How’s My Driving” programs, where commercial vehicles are emblazoned with bumper stickers encouraging other drivers to report poor driving. He notes that such programs have resulted in significant accident reductions, and analyzes what might happen if the program were extended to all drivers. A technologically sophisticated version of the scheme dispenses with the need to note a phone number and file a report; one could instead install transponders in every vehicle and distribute TiVo-like remote controls to drivers, cyclists, and pedestrians. If someone acts politely, say by allowing you to switch lanes, you can acknowledge it with a digital thumbs-up that is recorded on that driver’s record. Cutting someone off in traffic earns a thumbs-down from the victim and other witnesses. Strahilevitz is supportive of such a scheme, and he surmises it could be even more effective than eBay’s ratings for online transactions since vehicles are registered by the government, making it far more difficult to escape poor ratings tied to one’s vehicle. He acknowledges some worries: people could give thumbs-down to each other for reasons unrelated to their driving—racism, for example. Perhaps a bumper sticker expressing support for Republicans would earn a thumbs-down in a blue state. Strahilevitz counters that the reputation system could be made to eliminate “outliers”—so presumably only well-ensconced racism across many drivers would end up affecting one’s ratings. According to Strahilevitz, this system of peer judgment would pass constitutional muster if challenged, even if the program is run by the state, because driving does not implicate one’s core rights.
“How’s My Driving?” systems are too minor to warrant extensive judicial review. But driving is only the tip of the iceberg.
Imagine entering a café in Paris with one’s personal digital assistant or mobile phone, and being able to query: “Is there anyone on my buddy list within 100 yards? Are any of the ten closest friends of my ten closest friends within 100 yards?” Although this may sound fanciful, it could quickly become mainstream. With reputation systems already advising us on what to buy, why not have them also help us make the first cut on whom to meet, to date, to befriend? These are not difficult services to offer, and there are precursors today. These systems can indicate who has not offered evidence that he or she is safe to meet—as is currently solicited by some online dating sites—or it may use Amazon-style matching to tell us which of the strangers who have just entered the café is a good match for people who have the kinds of friends we do. People can rate their interactions with each other (and change their votes later, so they can show their companion a thumbs-up at the time of the meeting and tell the truth later on), and those ratings will inform future suggested acquaintances. With enough people adopting the system, the act of entering a café can be different from one person to the next: for some, the patrons may shrink away, burying their heads deeper in their books and newspapers. For others, the entire café may perk up upon entrance, not knowing who it is but having a lead that this is someone worth knowing. Those who do not participate in the scheme at all will be as suspect as brand new buyers or sellers on eBay.
Increasingly, difficult-to-shed indicators of our identity will be recorded and captured as we go about our daily lives and enter into routine transactions— our fingerprints may be used to log in to our computers or verify our bank accounts, our photo may be snapped and tagged many times a day, or our license plate may be tracked as people judge our driving habits. The more our identity is associated with our daily actions, the greater opportunities others will have to offer judgments about those actions. A government-run system like the one Strahilevitz recommends for assessing driving is the easy case. If the state is the record keeper, it is possible to structure the system so that citizens can know the basis of their ratings—where (if not by whom) various thumbs-down clicks came from—and the state can give a chance for drivers to offer an explanation or excuse, or to follow up. The state’s formula for meting out fines or other penalties to poor drivers would be known (“three strikes and you’re out,” for whatever other problems it has, is an eminently transparent scheme), and it could be adjusted through accountable processes, just as legislatures already determine what constitutes an illegal act, and what range of punishment it should earn.
Generatively grown but comprehensively popular unregulated systems are a much trickier case. The more that we rely upon the judgments offered by these private systems, the more harmful that mistakes can be. Correcting or identifying mistakes can be difficult if the systems are operated entirely by private parties and their ratings formulas are closely held trade secrets. Search engines are notoriously resistant to discussing how their rankings work, in part to avoid gaming—a form of security through obscurity. The most popular engines reserve the right to intervene in their automatic rankings processes—to administer the Google death penalty, for example—but otherwise suggest that they do not centrally adjust results. Hence a search in Google for “Jew” returns an anti-Semitic Web site as one of its top hits, as well as a separate sponsored advertisement from Google itself explaining that its rankings are automatic. But while the observance of such policies could limit worries of bias to search algorithm design rather than to the case-by-case prejudices of search engine operators, it does not address user-specific bias that may emerge from personalized judgments.
Amazon’s automatic recommendations also make mistakes; for a period of time the Official Lego Creator Activity Book was paired with a “perfect partner” suggestion: American Jihad: The Terrorists Living Among Us Today. If such mismatched pairings happen when discussing people rather than products, rare mismatches could have worse effects while being less noticeable since they are not universal. The kinds of search systems that say which people are worth getting to know and which should be avoided, tailored to the users querying the system, present a set of due process problems far more complicated than a state-operated system or, for that matter, any system operated by a single party. The generative capacity to share data and to create mash-ups means that ratings and rankings can be far more emergent—and far more inscrutable.
As biometric readers become more commonplace in our endpoint machines, it will be possible for online destinations routinely to demand unsheddable identity tokens rather than disposable pseudonyms from Internet users. Many sites could benefit from asking people to participate with real identities known at least to the site, if not to the public at large. eBay, for one, would certainly profit by making it harder for people to shift among various ghost accounts. One could even imagine Wikipedia establishing a “fast track” for contributions if they were done with biometric assurance, just as South Korean citizen journalist newspaper OhmyNews keeps citizen identity numbers on file for the articles it publishes. These architectures protect one’s identity from the world at large while still making it much more difficult to produce multiple false “sock puppet” identities. When we participate in other walks of life—school, work, PTA meetings, and so on—we do so as ourselves, not wearing Groucho mustaches, and even if people do not know exactly who we are, they can recognize us from one meeting to the next. The same should be possible for our online selves. 
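One way a site could hold an identity "known at least to the site, if not to the public at large" is a site-scoped pseudonym derived from a keyed hash of an identity token. The sketch below is purely illustrative and is not a description of how eBay, Wikipedia, or OhmyNews actually operate; the function names and inputs are assumptions:

```python
import hashlib
import hmac

def site_pseudonym(identity_token: str, site: str, site_secret: bytes) -> str:
    """Derive a stable, site-scoped pseudonym from an identity token.

    A site can recognize a returning user (defeating "sock puppet" and
    ghost accounts) without publishing a name, and because each site keys
    the hash with its own secret, pseudonyms are not linkable across
    sites. Hypothetical sketch, not any deployed protocol.
    """
    message = f"{site}:{identity_token}".encode()
    return hmac.new(site_secret, message, hashlib.sha256).hexdigest()[:16]

secret = b"example-site-key"  # each site would hold its own secret
alice_today = site_pseudonym("token-for-alice", "auction-site", secret)
alice_later = site_pseudonym("token-for-alice", "auction-site", secret)
print(alice_today == alice_later)  # the same person is recognizable over time
```

This captures the excerpt's middle ground: like showing the same face at successive PTA meetings, one is recognizable from one visit to the next without being publicly named.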
As real identity grows in importance on the Net, the intermediaries demanding it ought to consider making available a form of reputation bankruptcy. Like personal financial bankruptcy, or the way in which a state often seals a juvenile criminal record and gives a child a “fresh start” as an adult, we ought to consider how to build the idea of a second or third chance into our digital spaces. People ought to be able to express a choice to de-emphasize, if not entirely delete, older information that has been generated about them by and through various systems: political preferences, activities, youthful likes and dislikes. If every action ends up on one’s “permanent record,” the press conference effect can set in. Reputation bankruptcy has the potential to facilitate desirably experimental social behavior and break up the monotony of static communities online and offline. As a safety valve against excess experimentation, perhaps the information in one’s record could not be deleted selectively; if someone wants to declare reputation bankruptcy, we might want it to mean throwing out the good along with the bad. The blank spot in one’s history would indicate that a bankruptcy has been declared—this is the price one pays for eliminating unwanted details.
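The all-or-nothing wipe with a visible blank spot can be made concrete in a few lines. This is a hypothetical data-structure sketch of my own, not a design proposed in the excerpt itself; the class and field names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class ReputationRecord:
    """Hypothetical reputation ledger supporting only wholesale bankruptcy.

    There is deliberately no method for deleting a single entry: declaring
    bankruptcy throws out the good along with the bad, and the bankruptcy
    counter remains as the visible blank spot in one's history.
    """
    entries: list = field(default_factory=list)  # (rating, note) pairs
    bankruptcies: int = 0                        # visible gaps in the record

    def add(self, rating: int, note: str) -> None:
        self.entries.append((rating, note))

    def declare_bankruptcy(self) -> None:
        # Wipe everything, favorable and unfavorable alike, and leave a
        # permanent marker that a wipe occurred.
        self.entries.clear()
        self.bankruptcies += 1

record = ReputationRecord()
record.add(+1, "prompt payment")
record.add(-1, "youthful indiscretion")
record.declare_bankruptcy()
print(record.entries, record.bankruptcies)  # empty history, one visible gap
```

The design choice mirrors the safety valve described above: because erasure cannot be selective, the cost of a fresh start is losing the favorable entries too.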
The key is to realize that we can make design choices now that work to capture the nuances of human relations far better than our current systems, and that online intermediaries might well embrace such new designs even in the absence of a legal mandate to do so.
(And, as long as we’re talking about reputation — you can check out Dan Solove’s excellent book on the future of reputation here.)