Category: Architecture

The Aftermath of Wikileaks

The U.K.’s freedom of information commissioner, Christopher Graham, recently told The Guardian that the WikiLeaks disclosures irreversibly altered the relationship between the state and the public. As Graham sees it, the WikiLeaks incident makes clear that governments need to be more open and proactive, “publishing more stuff, because quite a lot of this is only exciting because we didn’t know it. . . WikiLeaks is part of the phenomenon of the online, empowered citizen . . . these are facts that aren’t going away. Government and authorities need to wise up to that.” If U.K. officials take Graham seriously (and I have no idea if they will), the public may see more of government. Whether that greater openness in fact provides insights that empower citizens or simply gives the appearance of transparency is up for grabs.

In the U.S., few officials have called for more transparency after the release of the embassy cables. Instead, government officials have successfully pressured Internet intermediaries to drop their support of WikiLeaks. According to Wired, Senator Joe Lieberman, for instance, was instrumental in persuading Amazon.com to kick WikiLeaks off its web hosting service. Senator Lieberman has suggested that Amazon, as well as Visa and PayPal, came to their own decisions about WikiLeaks. Lieberman noted:

“While corporate entities make decisions based on their obligations to their shareholders, sometimes full consideration of those obligations requires them to act as responsible citizens.  We offer our admiration and support to those companies exhibiting courage and patriotism as they face down intimidation from hackers sympathetic to WikiLeaks’ philosophy of irresponsible information dumps for the sake of damaging global relationships.”

Unlike the purely voluntary decisions that Internet intermediaries make with regard to cyber hate, see here, Amazon’s response raises serious concerns about what Seth Kreimer has called “censorship by proxy.” Kreimer’s work (as well as Derek Bambauer‘s terrific Cybersieves) explores the American government’s pressure on intermediaries to “monitor or interdict otherwise unreachable Internet communications” to aid the “War on Terror.”

Legislators have also sought to ensure the opacity of certain governmental information with new regulations. Proposed legislation (spearheaded by Senator Lieberman) would make it a federal crime for anyone to publish the name of a U.S. intelligence source. The Securing Human Intelligence and Enforcing Lawful Dissemination (SHIELD) Act would amend a section of the Espionage Act that forbids the publication of classified information on U.S. cryptographic secrets or overseas communications intelligence. The SHIELD Act would extend that prohibition to information on human intelligence, criminalizing the publication of information “concerning the identity of a classified source or information of an element of the intelligence community of the United States” or “concerning the human intelligence activities of the United States or any foreign government” if such publication is prejudicial to U.S. interests.

Another issue on the horizon may be the immunity afforded providers or users of interactive computer services who publish content created by others under section 230 of the Communications Decency Act. An aside: section 230 is not inconsistent with the proposed SHIELD Act, as it excludes federal criminal claims from its protections. (This would not mean that website operators like Julian Assange would be strictly liable for others’ criminal acts on their services; the question would be whether a website operator’s actions violated the SHIELD Act.) Now for my main point: Senator Lieberman has expressed an interest in broadening the exemptions to section 230’s immunity to require the removal of certain content, such as videos featuring Islamic extremists. Given his interest and the current concerns about security risks related to online disclosures, Senator Lieberman may find this an auspicious time to revisit section 230’s broad immunity.

The Offensive Internet

Harvard University Press recently published The Offensive Internet: Speech, Privacy, and Reputation, a collection of essays edited by Saul Levmore and Martha Nussbaum. Frank Pasquale, Dan Solove, and I have chapters in the book, as do Saul Levmore, Martha Nussbaum, Cass Sunstein, Anupam Chander, Karen Bradshaw and Souvik Saha, Brian Leiter, Geoffrey Stone, John Deigh, Lior Strahilevitz, and Ruben Rodrigues. Stanley Fish just reviewed the book at NYTimes.com.

The Business Section of “The Last Newspaper”

The New Museum of Contemporary Art has hosted an exhibit called “The Last Newspaper” for the past few months. Part of the exhibit centers on newspaper-based art. Another focus has been a “hybrid of journalism and performance art,” as groups of editors and writers developed “last newspaper sections” in areas ranging from real estate to sports to leisure. I co-edited the business section, which is available here in a low-res copy. I’m posting our editorial statement below.

I like how the various articles (contributed by entrepreneurs, theorists, designers, and others) hang together. The terrific design work is a refreshing change from the barren pages of business blogs, law reviews, and academic books (though it looks like some legal scholars are renewing interest in visual aspects of justice).

Digital Lives of 2.0 People, Not Locked In But Extended Out

Reviewing the movie The Social Network and Jaron Lanier’s book You Are Not a Gadget: A Manifesto in this month’s New York Review of Books, Zadie Smith warns readers of the perils of social network sites like Facebook where “life is turned into a database.”  According to Smith, Facebook “locks us” into a system designed by a college nerd to resemble “a Noosphere, an Internet with one mind, a uniform environment in which it genuinely doesn’t matter who you are, as long as you make ‘choices’ (which means, finally, purchases).”  Smith writes:

“When a human being becomes a set of data on a website like Facebook, he or she is reduced.  Everything shrinks.  Individual character.  Friendships.  Language. Sensibility.  In a way, it’s a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears.  It reminds me that those of us who turn in disgust from what we consider an overinflated liberal-bourgeois sense of self should be careful what we wish for: our denuded networked selves don’t look more free, they just look more owned.”

Smith worries about her students and other “2.0 kids.”  She contrasts “1.0 people” who use social media tools to connect with others in an outward-facing way with “2.0 kids” who employ them to turn inward and towards the trivial.  2.0 people, Smith fears, are embedded in the software, avatars who don’t realize that “what makes something fully real is that it is impossible to represent it to completion.”  She wonders: “what if 2.0 people feel their socially networked selves genuinely represent them to completion?”  In Smith’s view, Mark Zuckerberg tamed “the wild west of the Internet” to “fit the suburban fantasies of a suburban soul,” risking the extinction of the “private person who is a mystery to the world and–which is more important — to herself.”

Smith’s review recalls Neil Postman’s critique of television culture and Benjamin Barber’s warnings about contemporary consumerism. While television helped us amuse ourselves to death and pervasive pop culture produces shoppers, not thinkers, social network sites turn youth culture into over-sharing, unthinking, eager-to-please avatars who “watch the reality-TV show Bride Wars because their friends are.” Yet this can’t be the whole story. Whether 41 or 21, social network participants live in the real world, integrating their online activities seamlessly into their daily lives. Far more goes on in social network sites like Facebook than sharing information to “make others like you,” as Smith suggests. On Facebook and other popular social media sites, people join groups of every stripe. They work, as Miriam Cherry’s terrific new article Virtual Work addresses. They build reputations in ways that can enhance offline careers. They join study groups. In many respects, social media sites provide platforms for genuine participation that go well beyond Government 2.0 engagement. Far from deadening the everyday citizen, social media platforms can resemble Alexis de Tocqueville’s town meeting, John Dewey’s schools, and Cynthia Estlund’s workplace. Of course, civic participation online is different: it is not the face-to-face interaction envisioned by Tocqueville, Dewey, and Estlund. But even with the challenges brought by internet-mediated interactions, 2.0 kids are more than denuded avatars.

Future of the Internet Symposium: Do we need a new generativity principle?

[This is the second of two posts on Jonathan Zittrain’s book The Future of the Internet and How to Stop It. The first post (on the relative importance of generative end hosts and generative network infrastructure for the Internet’s overall ability to foster innovation) is here.]

In the book’s section on “The Generativity Principle and the Limits of End-to-End Neutrality,” Zittrain calls for a new “generativity principle” to address the Internet’s security problem and prevent the widespread lockdown of PCs in the aftermath of a catastrophic security attack: “Strict loyalty to end-to-end neutrality should give way to a new generativity principle, a rule that asks that any modifications to the Internet’s design or to the behavior of ISPs be made where they will do the least harm to generative possibilities.” (p. 165)

Zittrain argues that by assigning responsibility for security to the end hosts, “end-to-end theory” creates challenges for users who have little knowledge of how to best secure their computers. The existence of a large number of unsecured end hosts, in turn, may facilitate a catastrophic security attack that will have widespread and severe consequences for affected individual end users and businesses. In the aftermath of such an attack, Zittrain predicts, users may be willing to completely lock down their computers so that they can run only applications approved by a trusted third party.[1]

Given that general-purpose end hosts controlled by users rather than by third-party gatekeepers are an important component of the mechanism that fosters application innovation in the Internet, Zittrain argues, a strict application of “end-to-end theory” may threaten the Internet’s ability to support new applications more than implementing some security functions in the network – hence the new principle.

This argument relies heavily on the assumption that “end-to-end theory” categorically prohibits the implementation of security-related functions in the core of the network. It is not entirely clear to me what Zittrain means by “end-to-end theory.” As I explain in chapter 9 of my book, Internet Architecture and Innovation (pp. 366-368), the broad version of the end-to-end arguments [2] (i.e., the design principle that was used to create the Internet’s original architecture) does not establish such a rule. The broad version of the end-to-end arguments provides guidelines for the allocation of individual functions between the lower layers (the core of the network) and the higher layers at the end hosts, not for security-related functions as a group.

Future of the Internet Symposium: Will Robotics Be Generative?

I don’t know that generativity is a theory, strictly speaking. It’s more of a quality. (Specifically, five qualities.) The attendant theory, as I read it, is that technology exhibits these particular, highly desirable qualities as a function of specific incentives. These incentives are themselves susceptible to various forces—including, it turns out, consumer demand and citizen fear.

The law is in a position to influence this dynamic. Thus, for instance, Comcast might have a business incentive to slow down peer-to-peer traffic and refrain only because of FCC policy. Or, as Barbara van Schewick demonstrates inter alia in Internet Architecture and Innovation, a potential investor may lack the incentive to fund a start-up if there is a risk that the product will be blocked.

Similarly, online platforms like Facebook or Yahoo! might not facilitate communication to the same degree in the absence of Section 230 immunity for fear that they will be held responsible for the thousand flowers they let bloom. I agree with Eric Goldman’s recent essay in this regard: it is no coincidence that the big Internet players generally hail from these United States.

As van Schewick notes in her post, Zittrain is concerned primarily with yet another incentive, one perhaps less amenable to legal intervention. After all, the incentive to tether and lock down is shaped by a set of activities that are already illegal.

One issue that does not come up in The Future of the Internet (correct me if I’m wrong, Professor Zittrain) or in Internet Architecture and Innovation (correct me if I’m wrong, Professor van Schewick) is that of legal liability for that volatile thing you actually run on these generative platforms: software. That’s likely because this problem looks like it’s “solved.” A number of legal trends—aggressive interpretation of warranties, steady invocation of the economic loss doctrine, treatment of data loss as “intangible”—mean you cannot recover from Microsoft (or Dell or Intel) because Word ate your term paper. Talk about a blow to generativity if you could.

Future of the Internet Symposium: Generative End Hosts vs. Generative Networks?

Which factors have allowed the Internet to foster application innovation in the past, and how can we maintain the Internet’s ability to serve as an engine of innovation in the future? These questions are central to current engineering and policy debates over the future of the Internet. They are the subject of Jonathan Zittrain’s The Future of the Internet and How to Stop It and of my book Internet Architecture and Innovation, which was published by MIT Press last month.

As I show in Internet Architecture and Innovation, the Internet’s original architecture had two components that jointly created an economic environment that fostered application innovation:

1. A network that was able to support a wide variety of current and future applications (in particular, a network that did not need to be changed to allow a new application to run) and that did not allow network providers to discriminate among applications or classes of applications. As I show in the book, using the broad version of the end-to-end arguments (i.e., the design principle that was used to create the Internet’s original architecture) [1] to design the architecture of a network creates a network with these characteristics.

2. A sufficient number of general-purpose end hosts [2] that allowed their users to install and run any application they liked.

Both are essential components of the architecture that has allowed the Internet to be what Zittrain calls “generative” – “to produce unanticipated change through unfiltered contributions from broad and varied audiences.”

In The Future of the Internet and How to Stop It, Zittrain puts the spotlight on the second component: general-purpose end hosts that allow users to install and run any application they like, and their importance for the generativity of the overall system.

Future of the Internet Symposium: The Difficulty in Identifying Open v. Closed Systems

In his post, Adam Thierer presses on the question of whether we can distinguish open and closed systems.  He suggests that Zittrain overstates the problem, noting that many networks and appliances combine features of generativity and tetheredness and that consumers can always choose products and networks with characteristics that they like.

To be sure, it can be difficult to identify the degree of openness/generativity of systems, but not just because appliances and networks combine them seamlessly. Confusion may arise because providers fail to articulate their positions clearly and transparently regarding certain third-party activities. This surely explains some of the examples of contingent generativity that Zittrain highlights: one minute the app you wrote is there, the next it is not, or postings at the content layer appear and then are gone. In the face of vague policies, consumers may have difficulty making informed choices, especially when providers embed decisions into architecture.

Part of Zittrain’s plan to preserve innovation online is to enlist netizens to combat harmful activities that prompt providers to lock down their devices. A commitment to transparency about unacceptable third-party activities can advance that important agenda. For instance, social media providers often prohibit “hateful” speech in their Terms of Service or Community Guidelines without defining it with specificity. Without explaining the terms of, and harms to be prevented by, hate speech policies, as well as the consequences of policy violations, users may lack the tools necessary to engage as responsible netizens. Some social media providers inform users when content violating their Terms of Service has been taken down, a valuable step in educating communities about the limits to openness. Users of Facebook can see, for instance, that the “Kill a Jew Day” group once appeared and has now been removed. This sort of transparency is a first step in the important journey of allowing consumers to make educated choices about the services, appliances, and networks they use and to press for change through soft forms of regulation.

Future of the Internet Symposium: (Im)Perfect Enforcement

Prohibition wasn’t working. President Hoover assembled the Wickersham Commission to investigate why. The Commission concluded that despite an historic enforcement effort—including the police abuses that made the Wickersham Commission famous—the government could not stop everyone from drinking. Many people, especially in certain city neighborhoods, simply would not comply. The Commission did not recommend repeal at the time, but by 1931 it was just around the corner.

Five years later, an American doctor working in a chemical plant made a startling discovery. Several workers began complaining that alcohol was making them sick, causing most to stop drinking it entirely—“involuntary abstainers,” as the doctor, E.E. Williams, later put it. It turned out they had come into contact with a chemical called disulfiram, used in the production of rubber. Disulfiram is well-tolerated and water-soluble. Today, it is marketed as the popular anti-alcoholism drug Antabuse.

Had disulfiram been discovered just a few years earlier, would federal law enforcement have dumped it into key parts of the Chicago or Los Angeles water supply to stamp out drinking for good? Probably not. It simply would not have occurred to them. No one was regulating by architecture then. To dramatize this point: when New York City decided twenty years later to end a string of garbage can thefts by bolting the cans to the sidewalk, the decision made the front page of the New York Times. The headline read: “City Bolts Trash Baskets To Walks To End Long Wave Of Thefts.”

In an important but less discussed chapter in The Future of the Internet, Jonathan Zittrain explores our growing taste and capacity for “perfect enforcement.”

Future of the Internet Symposium: The Role of Infrastructure Management in Determining Internet Freedom

Last week, Facebook reportedly blocked users of Apple’s new Ping social networking service from reaching Facebook friends because the company was concerned about the prospect of massive amounts of traffic inundating its servers. This is precisely the type of architectural lockdown Jonathan Zittrain brilliantly anticipates in The Future of the Internet and How to Stop It. Contemplating this service blockage and re-reading Jonathan’s book this weekend have me thinking about the role of private industry infrastructure management in shaping Internet freedom.

The Privatization of Internet Governance

I’m heading to the United Nations Internet Governance Forum in Vilnius, Lithuania, where I will be speaking on a panel with Vinton Cerf and members of the Youth Coalition on Internet Governance about “Core Internet Values and the Principles of Internet Governance Across Generations.” What role will “infrastructure management” values increasingly play in the private-industry ordering of the flow of information on the Internet? The privatization of Internet governance is an area that has not received enough attention. Internet scholars are often focused on content. Internet governance debates often reduce into an exaggerated dichotomy, as Milton Mueller describes it, between the extremes of cyberlibertarianism and cyberconservatism. The former can resemble utopian technological determinism, and the latter is basically a state sovereignty model that wants to extend traditional forms of state control to the Internet.

The cyberlibertarian and cyberconservative perspectives are indistinguishable in that they both tend to disregard the infrastructure governance sinews already permeating the Internet’s technical architecture.  There is also too much attention to institutional governance battles and to the Internet Governance Forum itself, which is, in my opinion, a red herring because it has no policy-making authority and fails to address important controversies.

Where there is attention to the role of private sector network management and traffic shaping, much analysis has focused on “last mile” issues of interconnection rather than the Internet’s backbone architecture.  Network neutrality debates are a prime example of this.  Another genre of policy attention addresses corporate social responsibility at the content level, such as the Facebook Beacon controversy and the criticism Google initially took for complying with government requests to delete politically sensitive YouTube videos and filter content. These are critical issues, but equally important and less visible decisions occur at the architectural level of infrastructure management.  I’d like to briefly mention two examples of private sector infrastructure management functions that also have implications for Internet freedom and innovation: private sector Internet backbone peering agreements and the use of deep packet inspection for network management.

Private Sector Internet Backbone Peering Agreements

For the Internet to successfully operate, Internet backbones obviously must connect with one another. These backbone networks are owned and operated primarily by private telecommunications companies such as British Telecom, Korea Telecom, Verizon, AT&T, Internet Initiative Japan, and Comcast. Independent commercial networks interconnect either at private connection points between two companies or at multi-party Internet exchange points (IXPs).

IXPs are the physical junctures where different companies’ backbone trunks interconnect, exchanging Internet packets and routing them toward their appropriate destinations. One of the largest IXPs (based on throughput of peak traffic) is the Deutscher Commercial Internet Exchange (DE-CIX) in Frankfurt, Germany. This IXP connects hundreds of Internet providers, including content delivery networks and web hosting services as well as Internet service providers. Google, Sprint, Level3, and Yahoo all connect through DE-CIX, as well as through many other IXPs.

Other interconnection points involve private contractual arrangements between two telecommunications companies to connect for the purpose of exchanging Internet traffic. Making this connection at private interconnection points requires physical interconnectivity and equipment, but it also involves agreements about cost, responsibilities, and performance. There are generally two types of agreements – peering agreements and transit agreements. Peering agreements refer to mutually beneficial arrangements whereby no money is exchanged among companies agreeing to exchange traffic at interconnection points. In a transit agreement, one telecommunications company agrees to pay a backbone provider for interconnection. There is no standard approach to the actual agreement to peer or transit, with some interconnections involving formal contracts and others based upon verbal agreements between companies’ technical personnel.
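To make the economic distinction concrete, here is a minimal Python sketch of how the two settlement models differ. The partner names, traffic volumes, and per-Mbps transit price are invented for illustration and are not drawn from any real agreement.

```python
# Hypothetical sketch of the economic difference between peering and transit.
# Names, traffic figures, and prices are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Interconnection:
    partner: str
    kind: str                    # "peering" (settlement-free) or "transit" (paid)
    traffic_mbps: float          # traffic exchanged over the link
    price_per_mbps: float = 0.0  # relevant only for transit

    def monthly_cost(self) -> float:
        """Peering is settlement-free; transit is billed per Mbps."""
        return 0.0 if self.kind == "peering" else self.traffic_mbps * self.price_per_mbps

links = [
    Interconnection("BackboneA", "peering", traffic_mbps=8000),
    Interconnection("BackboneB", "transit", traffic_mbps=2000, price_per_mbps=1.50),
]

for link in links:
    print(f"{link.partner}: {link.kind}, ${link.monthly_cost():,.2f}/month")
```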

Interconnection agreements are an unseen regime. They are governed by few directly relevant statutes, receive almost no regulatory oversight, and offer little transparency into the underlying private contracts and arrangements. Yet these interconnection points have important economic implications for the future of the Internet. They certainly have critical infrastructure implications, depending on whether they provide sufficient redundancy, capacity, and security. Disputes over peering and transit agreements, not just problems with physical architecture, have created network outages in the past. The effect on free market competition is another concern, related to a possible lack of competition in Internet backbones, dominance by a small number of companies, and peering agreements among large providers that could be detrimental to potential competitors. Global interconnection disputes have been numerous, and developing countries have complained about transit costs to connect to dominant backbone providers. The area of interconnection patents is another emerging concern with implications for innovation. Interconnection points are also obvious potential points of government filtering and censorship. Because of the possible implications for innovation and freedom, greater transparency and insight into the arrangements and configurations at these sites would be very helpful.

Network Management via Deep Packet Inspection

Another infrastructure management technique with implications for the future of the Internet is the use of deep packet inspection (DPI) for network management and traffic shaping. DPI is a capability manufactured into network devices (e.g., firewalls) that scrutinizes the entire contents of a packet, including the payload as well as the packet header. The payload is the actual information content of the packet. The bulk of Internet traffic is information payload, versus the small amount of administrative and routing information contained within packet headers. ISPs and other information intermediaries have traditionally used packet headers to route packets, perform statistical analysis, and carry out routine network management and traffic optimization. Until recent years, it was not technically viable to inspect the actual content of packets because of the enormous processing speeds and computing resources necessary to perform this function.
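As a rough illustration of the difference between routing on headers and inspecting payloads, the following Python sketch parses a fabricated IPv4/TCP packet using only the standard library. The packet bytes are invented, and real DPI equipment performs this kind of work in specialized hardware at line rate.

```python
import struct

def split_headers_and_payload(packet: bytes):
    """Split a raw IPv4/TCP packet into (IP addresses, TCP ports, payload)."""
    ihl = (packet[0] & 0x0F) * 4                              # IPv4 header length in bytes
    src, dst = struct.unpack("!4s4s", packet[12:20])          # source / destination addresses
    sport, dport = struct.unpack("!HH", packet[ihl:ihl + 4])  # TCP ports
    data_offset = (packet[ihl + 12] >> 4) * 4                 # TCP header length in bytes
    payload = packet[ihl + data_offset:]                      # everything after the headers
    return (src, dst), (sport, dport), payload

# Fabricated packet: 20-byte IPv4 header + 20-byte TCP header + a toy payload.
# Length and checksum fields are left as zero because this parser ignores them.
ip_hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 0, 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
tcp_hdr = struct.pack("!HHIIBBHHH", 54321, 80, 0, 0, 5 << 4, 0x18, 8192, 0, 0)
packet = ip_hdr + tcp_hdr + b"GET /index.html"

(src, dst), (sport, dport), payload = split_headers_and_payload(packet)
print(sport, dport)   # 54321 80 -- roughly all that header-based handling needs
print(payload)        # b'GET /index.html' -- the content that DPI additionally examines
```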

The most publicized instances of DPI have involved the ad-serving practices of service providers wishing to provide highly targeted marketing based on what a customer views or does on the Internet. Other attention to DPI focuses on concerns about state use of deep packet inspection for Internet censorship. One of the originally intended uses of DPI, and still an important use, is network security. DPI can help identify viruses, worms, and other unwanted programs embedded within legitimate information and help prevent denial of service attacks. What will be the implications of increasingly using DPI for network management functions that are legitimately concerned with network performance, latency, and other important technical criteria?
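A toy sketch of that security use case might look like the following. The byte “signatures” below are invented placeholders, not real malware indicators, and production DPI engines rely on far more sophisticated streaming, multi-pattern matching across reassembled flows.

```python
# Toy illustration of signature-based payload inspection.
# The signature patterns are made-up placeholders for illustration only.
SIGNATURES = {
    "example-worm-a": b"\xde\xad\xbe\xef",
    "plaintext-credential-probe": b"password=",
}

def inspect_payload(payload: bytes) -> list[str]:
    """Return the names of any signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(inspect_payload(b"GET /login?password=hunter2"))
# ['plaintext-credential-probe']
```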

Zittrain discusses how the value of trust was designed into the Internet’s original architecture. The new reality is that the end-to-end architectural principle historically imbued in Internet design has waned considerably over the years with the introduction of network address translation (NAT), firewalls, and other network intermediaries. Deep packet inspection capability, engineered into routers, will further erode the end-to-end principle, an architectural development with implications for the future of the Internet’s architecture as well as for individual privacy and network neutrality.
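To see why NAT alone already strains end-to-end addressing, here is a simplified, hypothetical sketch of what a NAT does to outbound connections; all addresses and ports are invented for illustration.

```python
from itertools import count

# Hypothetical NAT translation table: (private_ip, private_port) -> public_port.
nat_table = {}
public_ports = count(40000)
PUBLIC_IP = "203.0.113.7"   # invented example address

def translate_outbound(src_ip: str, src_port: int):
    """Rewrite a private source endpoint to the NAT's public address and port."""
    key = (src_ip, src_port)
    if key not in nat_table:
        nat_table[key] = next(public_ports)
    # The remote server sees only PUBLIC_IP: the host behind the NAT is not
    # directly addressable from outside, which is the break with end-to-end addressing.
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.1.20", 51515))   # ('203.0.113.7', 40000)
print(translate_outbound("192.168.1.21", 51515))   # ('203.0.113.7', 40001)
```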

As I head to the Internet Governance Forum in Vilnius, Lithuania, Zittrain’s book is a reminder of what is at stake at the intersection of technical expediency and Internet freedom, and of how private ordering, rather than governments or new Internet governance institutions, will continue to shape the future of the Internet.