Author: Barbara van Schewick

Network Non-Discrimination and Quality of Service

Over the past ten years, “network neutrality” has remained one of the central debates in Internet policy. Governments all over the world have been investigating whether legislative or regulatory action is needed to limit the ability of providers of Internet access services to interfere with the applications, content and services on their networks.

In addition to rules that forbid network providers from blocking applications, content and services, rules that forbid discrimination are a key component of any network neutrality regime. Non-discrimination rules apply to any form of differential treatment that falls short of blocking. Policy makers who consider adopting network neutrality rules need to decide which, if any, forms of differential treatment should be banned. These decisions determine, for example, whether a network provider is allowed to provide low-delay service only to its own streaming video application, but not to competing video applications; whether network providers can count only traffic from unaffiliated video applications, but not their own Internet video applications, towards users’ monthly bandwidth cap; or whether network providers can charge different prices for Internet access depending on the application used, independent of the amount of traffic the application creates.

Future of the Internet Symposium: Do we need a new generativity principle?

[This is the second of two posts on Jonathan Zittrain’s book The Future of the Internet and How to Stop It. The first post (on the relative importance of generative end hosts and generative network infrastructure for the Internet’s overall ability to foster innovation) is here.]

In the book’s section on “The Generativity Principle and the Limits of End-to-End Neutrality,” Zittrain calls for a new “generativity principle” to address the Internet’s security problem and prevent the widespread lockdown of PCs in the aftermath of a catastrophic security attack: “Strict loyalty to end-to-end neutrality should give way to a new generativity principle, a rule that asks that any modifications to the Internet’s design or to the behavior of ISPs be made where they will do the least harm to generative possibilities.” (p. 165)

Zittrain argues that by assigning responsibility for security to the end hosts, “end-to-end theory” creates challenges for users who have little knowledge of how to best secure their computers. The existence of a large number of unsecured end hosts, in turn, may facilitate a catastrophic security attack that will have widespread and severe consequences for affected individual end users and businesses. In the aftermath of such an attack, Zittrain predicts, users may be willing to completely lock down their computers so that they can run only applications approved by a trusted third party.[1]

Given that general-purpose end hosts controlled by users rather than by third-party gatekeepers are an important component of the mechanism that fosters application innovation in the Internet, Zittrain argues, a strict application of “end-to-end theory” may threaten the Internet’s ability to support new applications more than implementing some security functions in the network – hence the new principle.

This argument relies heavily on the assumption that “end-to-end theory” categorically prohibits the implementation of security-related functions in the core of the network. It is not entirely clear to me what Zittrain means by “end-to-end theory.” As I explain in chapter 9 of my book, Internet Architecture and Innovation (pp. 366-368), the broad version of the end-to-end arguments [2] (i.e., the design principle that was used to create the Internet’s original architecture) does not establish such a rule. The broad version of the end-to-end arguments provides guidelines for allocating individual functions between the lower layers (the core of the network) and the higher layers at the end hosts; it does not rule on security-related functions as a group.

Future of the Internet Symposium: Generative End Hosts vs. Generative Networks?

Which factors have allowed the Internet to foster application innovation in the past, and how can we maintain the Internet’s ability to serve as an engine of innovation in the future? These questions are central to current engineering and policy debates over the future of the Internet. They are the subject of Jonathan Zittrain’s The Future of the Internet and How to Stop It and of my book Internet Architecture and Innovation, which was published by MIT Press last month.

As I show in Internet Architecture and Innovation, the Internet’s original architecture had two components that jointly created an economic environment that fostered application innovation:

1. A network that was able to support a wide variety of current and future applications (in particular, a network that did not need to be changed to allow a new application to run) and that did not allow network providers to discriminate among applications or classes of applications. As I show in the book, using the broad version of the end-to-end arguments (i.e., the design principle that was used to create the Internet’s original architecture) [1] to design the architecture of a network creates a network with these characteristics.

2. A sufficient number of general-purpose end hosts [2] that allowed their users to install and run any application they liked.

Both are essential components of the architecture that has allowed the Internet to be what Zittrain calls “generative” – “to produce unanticipated change through unfiltered contributions from broad and varied audiences.”

In The Future of the Internet and How to Stop It, Zittrain puts the spotlight on the second component, general-purpose end hosts that allow users to install and run any application they like, and on their importance for the generativity of the overall system.
