Category: Symposium (Future of Internet)


On Defining Generativity, Openness, and Code Failure

I’ve really enjoyed the back-and-forth in this symposium about the many issues raised in Jonathan Zittrain’s Future of the Net, and I appreciate that several of the contributors have been willing to address some of my concerns and criticisms in a serious way. I recognize I’m a bit of a skunk at the garden party here, so I really do appreciate being invited by the folks at Concurring Opinions to play a part in this. I don’t have much more to add beyond my previous essay, but I wanted to stress a few points and offer a challenge to those scholars and students who are currently researching these interesting issues.

As I noted in my earlier contribution, I’m very much hung up on this whole “open vs. closed” and “generative vs. sterile/tethered” definitional question. Much of the discussion about these concepts takes place at such a high level of abstraction that I get frustrated and want instead to shift the discussion to real-world applications of these concepts, because when we do, I believe we find that things are not so clear-cut. “Open” devices and platforms are rarely perfectly open, and “closed” systems are rarely completely clamped down. The same goes for the “generative vs. sterile/tethered” dichotomy.

That’s one reason I’ve given Jonathan such grief for making Steve Jobs and his iPhone the villain of his book; the device is held up in the very first and last lines of Future of the Net as the model of what we should hope to avoid. But is it really? Set aside the fact that there are plenty of more “open” or “generative” phones and operating systems on the market. The more interesting question is how “closed” the iPhone really is. How does it stack up next to, say, Android, Windows Mobile, BlackBerry, or Palm? More importantly, how and when do we take the snapshot and measure such things?

I’ve argued that Zittrain’s major failing in FoTN (and Lessig’s in Code) comes down to a lack of appreciation for just how rapid and unpredictable the pace of change in this arena has been and will continue to be. The relentlessness and intensity of technological disruption in the digital economy are truly unprecedented. We’ve had multiple mini-industrial revolutions within the digital ecosystem over the past 15 years. I’ve referred to this optimistic counter-perspective as “evolutionary dynamism,” but it’s really more like revolutionary dynamism. Nothing, absolutely nothing, that was sitting on our desks in 1995 is still there today (in terms of digital hardware and software, I mean). Heck, I doubt that much of what was on our desks in 2005 is still there either, with the possible exception of some crusty desktop computers running Windows XP.


Future of the Internet Symposium: Casting a Wider Net

The project behind writing a book like The Future of the Internet is not only admirable; it should and does inspire people to think about major philosophical and social questions about the politics inherent in technological infrastructure. The project is also hard, and likely to draw criticism, both valid and not. When you set out to talk about the future of something as broad and culturally revolutionary as the internet, you can’t possibly hope to succeed; you can only hope to fail better over time.

To continue that evolving failure, I’d like to take an ecological approach to The Future of the Internet and ask: in what context do Zittrain’s points exist? We are told that more developers are writing for the iPhone and Facebook than for Linux, that iPhones dominate the landscape, that iPads might determine something of our political future. But this is only true inside what is already a walled garden: the American socio-economic middle and upper class. Beyond this barrier of perspective the landscape is very different. Is it true that people develop for the iPhone in preference to other platforms in Kenya? What about China or South Korea? Probably not, but the transnational nature of the net means we have to care about those places as well if we want a true picture of what’s going on, or going to happen.

Skype is one of the most often cited examples of an application people want to protect in the net neutrality debate. It was developed by Estonian hackers previously famous for the illegal file-sharing app Kazaa. When Kazaa came out, no analysts or tech pundits were saying, “Look to Estonia to revolutionize the telecommunications debate.” But it’s obvious that Skype was informed by the peer-to-peer nature of Kazaa, and by the legal and technical troubles the Kazaa builders wrestled with. Now the walled gardens of the net have to quickly take and maintain a stance on Skype, at both the technical and political levels. What these kinds of applications ultimately demonstrate is that the next killer app has no pre-definable vector: if you lock down one part of the net, chain up one cohort, then some other will be the source of disruption. To imagine that the governments of the world will somehow line up and cooperate on a net policy that universally kills this creative impulse is like waiting for a one-world government to solve the problems of climate change. Sure, it seems possible on paper, but don’t hold your breath.

Even if we could reliably regulate the internet, what is the internet? It’s a specific implementation of telecommunication infrastructure, but not a terribly specific one. It’s easy to say what is definitely the internet, harder to say what isn’t. Is text messaging part of the internet? My first instinct is to say no, but it’s an interface to and a control channel for many internet applications. It has been a key part of monitoring, and tightly integrated at the administrative levels of the net, for as long as it’s been around. So perhaps we have to allow it in the pool. What about phone calls themselves? Again, problematic, as telecom companies will sometimes use the same protocols and wires to carry calls as net traffic. African, Afghan, and Filipino programs that move banking onto cell phones show that generativity moves to the edges of the net/telecom division when you can’t access the net itself for some reason.

What is generative? This is also hard. The telecom infrastructure was built to be non-generative, non-open, and not user friendly. It was built top-down and tightly regulated. But the net was built on top of it, so that infrastructure ultimately proved generative despite the intentions of its builders. The net nested a bottom-up social structure in that top-down architecture. The total generativity of a system can only be determined in retrospect, from how it was used, not from how it was architected. To focus only on the protocols as written in order to understand whether a technology will be generative is like trying to determine whether an artist has a good eye by looking at his DNA.

Generative and non-generative systems have always emerged from strange parents, and given birth to strange children. I’ve seen nothing to make me fear for the future of the net in general, though I think Facebook, Apple, and Zittrain’s points make me fear that the respectable net will be an increasingly boring place. Nevertheless, they will fall in time. To keep its captive audience happy, Apple has to be right all the time; the general-purpose environment only has to be right once. People are not sticky, and they are getting less sticky by the day, and a change that captures their imagination will drag them away from a platform or a business model or a political system with scary haste. We can’t see these changes coming by looking at how things are structured to work. We have to look at the limits of how they might be messed with.

If you want to understand the future of the internet, or the future in general, you have to look past how technology is used and see how it’s misused. Can the net go horribly wrong? Oh yes, but not only in the ways we can predict here, now. Radio was key to Allied victory in WWII, and to instigating the Rwandan genocide 50 years later. Undoubtedly the net and cell phones will grow closer together, and have their moments of glory and horror in human history.


Future of the Internet Symposium: Identity

Zittrain’s book mentions en passant that, unlike closed, proprietary services, the Internet has no authentication; he also suggests that this is tied to an alleged lack of consideration for security by the Internet’s designers. I won’t go into the latter, save to note that I regard it as a calumny; within the limits of how security was understood 30 years ago, the designers did a pretty good job, because they felt that what was really at risk (the computers attached to the net) needed to protect itself, and that there was nothing the network could or should do to help. This is in fact deeply related to Zittrain’s thesis about the open nature of the Internet, but I doubt I’ll have time to write that up before this symposium ends.

The question of identity, though, is more interesting; it illustrates how subtle technical design decisions can force certain policy decisions, much along the lines that Lessig set forth in Code.  We must start, though, by defining “identity”.  What is it, and in particular what is it in an Internet context?  Let me rephrase the question: who are you?  A name?  A reputation?  A fingerprint?  Some DNA?  A “soul”?

Tolkien probably expressed the dilemma best in a conversation between Frodo and Tom Bombadil in The Lord of the Rings:

‘Who are you, Master?’ he asked.

‘Eh, what?’ said Tom sitting up, and his eyes glinting in the gloom. ‘Don’t you know my name yet? That’s the only answer. Tell me, who are you, alone, yourself and nameless?’

We are, in some sense, our names, with all the baggage appertaining thereto.  For some web sites, you can pick an arbitrary name and no one will know or care if it’s your legal name.  For other purposes, though, you’re asked to prove your identity, perhaps via the oft-requested “government-issued photo ID”.  In other words, we have a second player: an authority who vouches for someone’s name.  This authority has to be mutually trusted — I’m not going to prove my identity to Mafia, Inc., by giving them my social security number, birthdate, etc., and you’re not likely to believe what they say.  Who is trusted will vary, depending on the circumstances; a passport issued by the government of Elbonia might be sufficient to enter the US, but MI-6 would not accept such a document even if it were in the name of James Bond.  This brings up the third player: the acceptor or verifier.

When dealing with closed, proprietary networks, the vouching authority and the acceptor are one and the same.  More to the point, the resources you are accessing all belong to the verifier.  The Internet, though, is inherently decentralized.  It is literally a “network of networks”; no one party controls them all.  Furthermore, the resource of most interest — end-systems — may belong to people who don’t own any networks; they just buy connectivity from someone else.  Who are the verifiers?

A biometric — fingerprints, DNA, retina prints, even “soul prints” — doesn’t help over the net.  The verifier simply sees a string of bits; it has no knowledge of where they’re from.  You may authenticate yourself to a local device via a biometric, but it in turn will just send bits upstream.
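A minimal sketch may make this concrete. The code below (Python, with a made-up placeholder standing in for a fingerprint template; nothing here is a real biometric API) shows that a remote verifier receives nothing but an opaque byte string, and so cannot distinguish a live scan from a replayed copy of the same bytes.

    import hashlib

    # Hypothetical fingerprint template produced by a local sensor.
    # The value is a placeholder; real templates are opaque binary blobs.
    live_scan = b"placeholder-fingerprint-template"
    replayed_scan = bytes(live_scan)  # the same bytes, captured and replayed later

    def remote_verifier(received: bytes, enrolled_digest: str) -> bool:
        # All the verifier ever sees is a string of bits.  It has no way to
        # tell whether they came from a finger, a file, or an attacker.
        return hashlib.sha256(received).hexdigest() == enrolled_digest

    enrolled = hashlib.sha256(live_scan).hexdigest()
    print(remote_verifier(live_scan, enrolled))      # True
    print(remote_verifier(replayed_scan, enrolled))  # also True: indistinguishable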

Because of this decentralized nature, there is no one verifying party. I somehow have to authenticate to my ISP. In dial-up days, this was done when I connected to the network; today, it’s done by physical connection (e.g., the DSL wire to my house) or at network log-in time in Wi-Fi hotspots. My packets, though, will traverse very many networks on the way to their destination. Must each of them be a verifier? I can’t even tell a priori what networks my packets will use (see the previous discussion on interconnection agreements); I certainly don’t have business relationships with them, nor do I know whom they will consider acceptable identity vouchers.

This isn’t just a performance issue, though I should note that verifying every packet in the core of the Internet was well beyond the state of the art 30 years ago, and may still be impossible. It is an architectural limitation, stemming from the decision in the late 1970s to avoid a centrally controlled core.

The design of the Internet dictates that you are only strongly authenticated to your local host or site. Anything beyond that is either taken on faith or done by end-to-end authentication. That, though, is exactly how the Internet was designed to operate, and it doesn’t assume that any two parties even have the same notion of identity. My identity on my phone is a phone number; my login on my computers is “smb”; my university thinks I’m smb2132; Concurring Opinions knows me by my full name. Which is correct? Any and all: the Internet is too decentralized for any one notion of identity. Had the designers created a more centralized network, you might indeed be able to authenticate to the core. But there is no core, with all of the consequences, good and bad, that that implies.
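For readers who want to see what end-to-end authentication looks like in practice, here is a minimal sketch in Python. It assumes the two end hosts have already agreed on a shared secret out of band (the genuinely hard part in real systems); the networks in the middle simply forward opaque bytes and play no part in verification.

    import hashlib
    import hmac

    # Assumption for this sketch: the endpoints share this secret already.
    # No intermediate network knows it or needs to.
    SHARED_SECRET = b"demo-secret-known-only-to-the-endpoints"

    def send(message: bytes) -> tuple[bytes, bytes]:
        # The sending end host attaches an authenticator to the message.
        tag = hmac.new(SHARED_SECRET, message, hashlib.sha256).digest()
        return message, tag

    def forward(message: bytes, tag: bytes) -> tuple[bytes, bytes]:
        # An intermediate network just moves bytes along; it cannot verify
        # the tag (it has no secret) and does not need to.
        return message, tag

    def receive(message: bytes, tag: bytes) -> bool:
        # The receiving end host verifies the authenticator itself.
        expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    msg, tag = send(b"who are you, alone, yourself and nameless?")
    msg, tag = forward(msg, tag)       # any number of networks in the path
    print(receive(msg, tag))           # True: verified end to end
    print(receive(b"tampered", tag))   # False: modification is detected

Nothing in the middle needs to know who either party is; identity, in this sense, remains a matter for the endpoints.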

(This is my last post of the symposium.  I’ll be offline for a few days; when I come back online, I may add a few comments.  I’ve very much enjoyed participating.)


Future of the Internet Symposium: File under…

James Grimmelmann’s discussion of the essential theory of generativity and its value as the ‘right theory’ (as opposed to its application, which he suggests needs more discussion for FOI 2.0) is a nice link to something I’m still quite curious about. Since The Future Of The Internet came out, a diverse bunch of readers have been responding to it, and I think those responses are worth considering in this symposium, as a way of adding some further spice to our analysis of a fine book and particularly of its role in debates about theory and ideology.

This can start at quite a simple level. I smiled when, in the wonderful bookshop in the Tate Modern gallery in London, I spotted a single paperback copy of The Future Of The Internet in the ‘Critical Theory’ section, completely surrounded by the many works of Slavoj Žižek. Of course, methods of classification in libraries and bookstores can be revealing (even when everything is miscellaneous), and that’s certainly the case here. What sort of impact is Zittrain’s work having outside of cyberlaw, and what does that say about the development of cyberlaw itself? Many will know of the preface to Paul Berman’s reader on Law & Society Approaches To Cyberspace (via SSRN), where he takes a three-generations approach, suggesting that Zittrain (through the 2006 Harvard Law Review generativity article), along with Benkler and others, represents a third generation combining aspects of the first (mid-90s debates about exceptionalism and cyberlibertarianism) and the second (sceptical, sober, Lessig, Reidenberg). I wonder, though, whether we can now articulate a better version of the third generation in its own right, and whether Zittrain himself sees it that way.


Future of the Internet Symposium: Do we need a new generativity principle?

[This is the second of two posts on Jonathan Zittrain’s book The Future of the Internet and how to stop it. The first post (on the relative importance of generative end hosts and generative network infrastructure for the Internet’s overall ability to foster innovation) is here.]

In the book’s section on “The Generativity Principle and the Limits of End-to-End Neutrality,” Zittrain calls for a new “generativity principle” to address the Internet’s security problem and prevent the widespread lockdown of PCs in the aftermath of a catastrophic security attack: “Strict loyalty to end-to-end neutrality should give way to a new generativity principle, a rule that asks that any modifications to the Internet’s design or to the behavior of ISPs be made where they will do the least harm to generative possibilities.” (p. 165)

Zittrain argues that by assigning responsibility for security to the end hosts, “end-to-end theory” creates challenges for users who have little knowledge of how to best secure their computers. The existence of a large number of unsecured end hosts, in turn, may facilitate a catastrophic security attack that will have widespread and severe consequences for affected individual end users and businesses. In the aftermath of such an attack, Zittrain predicts, users may be willing to completely lock down their computers so that they can run only applications approved by a trusted third party.[1]

Given that general-purpose end hosts controlled by users rather than by third-party gatekeepers are an important component of the mechanism that fosters application innovation in the Internet, Zittrain argues, a strict application of “end-to-end theory” may threaten the Internet’s ability to support new applications more than implementing some security functions in the network – hence the new principle.

This argument relies heavily on the assumption that “end-to-end theory” categorically prohibits the implementation of security-related functions in the core of the network. It is not entirely clear to me what Zittrain means by “end-to-end theory.” As I explain in chapter 9 of my book, Internet Architecture and Innovation (pp. 366-368), the broad version of the end-to-end arguments [2] (i.e., the design principle that was used to create the Internet’s original architecture) does not establish such a rule. The broad version of the end-to-end arguments provides guidelines for the allocation of individual functions between the lower layers (the core of the network) and the higher layers at the end hosts, not for security-related functions as a group.


Future of the Internet Symposium: An Iron Cage for the iPhone Age

William Gibson’s essay on “Google’s Earth” deserves to be read by anyone interested in the “future of the internet.” Gibson states that “cyberspace has everted. . . . and [c]olonized the physical”, “[m]aking Google a central and evolving structural unit not only of the architecture of cyberspace, but of the world.” He’s reminded me of James Boyle’s observation that:

Sadly for academics, the best social theorists of the information age are still science fiction writers and, in particular cyberpunks—the originators of the phrase ‘cyberspace’ and the premier fantasists of the Net. If one wants to understand the information age, this is a good place to start.

Some legal academics have taken this idea to heart; for example, Richard Posner apparently began writing Catastrophe in response to Margaret Atwood’s Oryx and Crake. With that in mind, I wanted to point to some speculative fiction that I think ought to inform our sense of “the future of the internet.”

Future of the Internet Symposium: Will Robotics Be Generative?

I don’t know that generativity is a theory, strictly speaking. It’s more of a quality. (Specifically, five qualities.) The attendant theory, as I read it, is that technology exhibits these particular, highly desirable qualities as a function of specific incentives. These incentives are themselves susceptible to various forces—including, it turns out, consumer demand and citizen fear.

The law is in a position to influence this dynamic. Thus, for instance, Comcast might have a business incentive to slow down peer-to-peer traffic and refrain only because of FCC policy. Or, as Barbara van Schewick demonstrates inter alia in Internet Architecture and Innovation, a potential investor may lack the incentive to fund a start-up if there is a risk that the product will be blocked.

Similarly, online platforms like Facebook or Yahoo! might not facilitate communication to the same degree in the absence of Section 230 immunity for fear that they will be held responsible for the thousand flowers they let bloom. I agree with Eric Goldman’s recent essay in this regard: it is no coincidence that the big Internet players generally hail from these United States.

As van Schewick notes in her post, Zittrain is concerned primarily with yet another incentive, one perhaps less amenable to legal intervention. After all, the incentive to tether and lock down is shaped by a set of activities that are already illegal.

One issue that does not come up in The Future of the Internet (correct me if I’m wrong, Professor Zittrain) or in Internet Architecture and Innovation (correct me if I’m wrong, Professor van Schewick) is that of legal liability for that volatile thing you actually run on these generative platforms: software. That’s likely because this problem looks like it’s “solved.” A number of legal trends—aggressive interpretation of warranties, steady invocation of the economic loss doctrine, treatment of data loss as “intangible”—mean you cannot recover from Microsoft (or Dell or Intel) because Word ate your term paper. Talk about a blow to generativity if you could.


Future of the Internet Symposium: Generative End Hosts vs. Generative Networks?

Which factors have allowed the Internet to foster application innovation in the past, and how can we maintain the Internet’s ability to serve as an engine of innovation in the future? These questions are central to current engineering and policy debates over the future of the Internet. They are the subject of Jonathan Zittrain’s The Future of the Internet and how to stop it and of my book Internet Architecture and Innovation, which was published by MIT Press last month.

As I show in Internet Architecture and Innovation, the Internet’s original architecture had two components that jointly created an economic environment that fostered application innovation:

1. A network that was able to support a wide variety of current and future applications (in particular, a network that did not need to be changed to allow a new application to run) and that did not allow network providers to discriminate among applications or classes of applications. As I show in the book, using the broad version of the end-to-end arguments (i.e., the design principle that was used to create the Internet’s original architecture) [1] to design the architecture of a network creates a network with these characteristics.

2. A sufficient number of general-purpose end hosts [2] that allowed their users to install and run any application they like.

Both are essential components of the architecture that has allowed the Internet to be what Zittrain calls “generative” – “to produce unanticipated change through unfiltered contributions from broad and varied audiences.”

In The Future of the Internet and how to stop it, Zittrain puts the spotlight on the second component: general-purpose end hosts that allow users to install and run any application they like and their importance for the generativity of the overall system.


Future of the Internet Symposium: How can we create even better incentives?

Disclaimer: The views expressed here are mine alone and do not in any way represent those of my employer.

I appreciated Orin Kerr’s suggestion to take Adam Thierer’s seven objections to the Zittrain thesis as a starting point for further discussion. I’m particularly interested in exploring objection #2, that incentives already exist to check closed systems that negatively impact consumer welfare. In general, I agree with Adam’s assertion that these incentives exist, particularly in market economies. But I think the core value of Jonathan’s thesis is not so much an assertion that these incentives do not exist today as a question whether we could create even more powerful ones through generative design.

The “perfect enforcement” consequences of tethered design that Jonathan explores seem to be very real, if you believe recent news about efforts in some countries to shut down services entirely unless surveillance and censorship mechanisms are put in place. As an American who has lived in the US most of her life, I can’t comment extensively on the extent to which incentives exist globally the way they do in the US. Here, I have faith that our right to free speech, enshrined in the Constitution, would enable a whistleblower to identify behavior of that nature. I also have faith that our competitive marketplace would lead to alternative services springing up quickly. It’s not clear to me that these and other incentives exist globally, so I’d like to broaden Adam’s point and ask how we could think about designing generative systems that would create the types of social and economic incentives required to check bad behavior on the part of powerful actors.

In 2009 Google launched a little project called Measurement Lab, an open platform of servers on which developers and researchers can deploy Internet measurement tools. It’s one example of an attempt to decentralize the power that comes with measurement and to open up access to data that has otherwise been available only to a handful of backbone and last-mile providers. M-Lab, as it’s called, couldn’t have been launched by Google alone; it required collaboration among a diverse group of academic researchers, non-profit organizations, and companies, few of whom (if any!) had any direct financial interest in the project, not entirely unlike the Internet itself, if at a much smaller scale. The outcome of this project is that policymakers can have access to independent, objective data and research about Internet speed, latency, and accessibility. It is, I think, fair to say that this is a generative approach to solving Internet accessibility problems.

When I think about the generativity thesis, these are the types of solutions to hard problems that come to mind. As Adam observes, Jonathan doesn’t lay out a concrete proposed solution (or set of solutions) for tackling the vast array of policy problems he observes, but in my view the primary contribution of the work is not a proposed solution. It is a framework for thinking about a possible solution space.


Future of the Internet Symposium: The Difficulty in Identifying Open v. Closed Systems

In his post, Adam Thierer presses on the question of whether we can distinguish open and closed systems.  He suggests that Zittrain overstates the problem, noting that many networks and appliances combine features of generativity and tetheredness and that consumers can always choose products and networks with characteristics that they like.

To be sure, it can be difficult to identify the degree of openness/generativity of systems, but not just because appliances and networks combine them seamlessly. Confusion may also arise because providers fail to articulate their positions clearly and transparently regarding certain third-party activities. This surely explains some of the examples of contingent generativity that Zittrain highlights: one minute the app you wrote is there, the next it is not, or postings at the content layer appear and then are gone. In the face of vague policies, consumers may have difficulty making informed choices, especially when providers embed decisions into architecture.

Part of Zittrain’s plan to preserve innovation online is to enlist netizens to combat harmful activities that prompt providers to lock down their devices.  A commitment to transparency about unacceptable third-party activities can advance that important agenda.  For instance, social media providers often prohibit “hateful” speech in their Terms of Service or Community Guidelines without defining it with specificity.  Without explaining the terms of, and harms to be prevented by, hate speech policies as well as the consequences of policy violations, users may lack the tools necessary to engage as responsible netizens.  Some social media providers inform users when content violating their Terms of Service has been taken down, a valuable step in educating communities about the limits to openness.  Users of Facebook can see, for instance, that the Kill a Jew Day group once appeared and has now been removed.  This sort of transparency is a first step in an important journey of allowing consumers to make educated choices about the services/appliances/networks they use and to garner change through soft forms of regulation.