Category: Architecture


Online Symposium: Zittrain’s The Future of the Internet–And How To Stop It

It’s an honor to introduce Jonathan Zittrain and the participants in our online symposium on The Future of the Internet–And How to Stop It. From tomorrow through Wednesday, we will be discussing Zittrain’s important book, which warns of a shift in the Internet’s trajectory from a wide-open Web of creative anarchy to a series of closed platforms that will curtail innovation.  As  Zittrain predicted, “tethered appliances” dominate our information ecosystem today.  We increasingly trade generative technologies like PCs that permit experimentation for sterile, reliable appliances like mobile phones, video game consoles, and book readers that limit or forbid tinkering.  Zittrain attributes this phenomenon to the unfortunate, yet now predictable, pathologies that generativity enables.  Although generative technologies facilitate innovation, they permit the spread of spam, viruses, malware, and the like.

According to Zittrain, the Internet is at a crucial inflection point.  Rather than sustaining the wide-open Web of creativity and disruption, the Internet may in time become a series of controlled networks that limit innovation and enable inappropriate governmental and corporate surveillance.  Zittrain offers various strategies to forestall such scenarios, including tools that empower users to solve the problems driving them to sterile appliances and networks.  Zittrain argues that our information ecology functions best with generative technology at its core.

The Future of the Internet raises a host of fascinating and timely questions. Is the future of the Internet indeed bleak?  As this month’s cover story for Wired asks: is Zittrain’s dark future only likely in the “commercial content side” of the digital economy?  Might a healthy balance of generative technologies and tethered appliances emerge, or is the move to appliancized networks a grab for control that will be difficult to shake?  Will non-generative technologies impact our democratic commitments and cultural values?  Should we remain committed to protecting generativity?  Are there alternative strategies for preserving innovation besides the ones that Zittrain offers?

To consider these and other issues, we have invited an all-star cast of thinkers:

Steven Bellovin

M. Ryan Calo

Laura DeNardis

James Grimmelmann

Orin Kerr

Lawrence Lessig

Harry Lewis

Daithí Mac Síthigh

Betsy Masiello

Salil Mehra

Quinn Norton

Alejandro Pisanty

Joel Reidenberg

Barbara van Schewick

Adam Thierer

My co-bloggers will join this conversation as well.  In a post in April 2009, co-blogger Deven Desai started our conversation about The Future of the Internet–And How to Stop It.  Since that time, the wildfire adoption of tethered appliances, iPod applications, iTunes, and the like has shown just how prophetic and important Zittrain’s book is.  We are excited for the discussion to begin.


Using Transparency As A Mask

As mankind deploys increasing numbers of sensors and makes more sense of the resulting data, more of our secrets are revealed.  In a world of greater transparency, will you be able to be you?  Or will you feel obligated to mask who you are, drawn to the safety of the center of the bell curve?

Will a more transparent society make you average?

Imagine for a moment that video feeds from street surveillance cameras are the blue puzzle pieces, your path through life lit up by your cell phone location the green pieces, and your Facebook social network the yellow pieces.  Flickr supplies the brown puzzle pieces and Twitter the orange ones.  And one day the energy-consuming devices in your home may spew out the magenta pieces.  As an increasing volume and range of data converges, a colorful, highly revealing picture of our lives will unfold, with or without our knowledge or permission.  Traditional physical sensors like credit card and license plate readers are one thing.  The human as sensor, thanks to Web 2.0, is altogether a different thing.

Unlike two decades ago, humans are now creating huge volumes of extraordinarily useful data as they self-annotate their relationships and yours, their photographs and yours, their thoughts and their thoughts about you … and more.

With more data comes better understanding and prediction.  The convergence of data might reveal your “discreet” rendezvous or the fact that you are no longer on speaking terms with your best friend.  No longer secret is your visit to the porn store and the subsequent change in your home’s late-night energy profile; another telling story about who you are is out of the bag, and there is little you can do about it.  Pity … you thought that all of this information was secret.
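Underneath the puzzle-piece metaphor is a simple technical operation: record linkage.  Once separate data streams share even an implicit key like time and place, they can be joined.  A minimal sketch, using invented sample data, of how two individually innocuous logs combine into an inference that neither supports alone:

```python
from datetime import datetime, timedelta

# Hypothetical sample data: a phone-location log and a smart-meter log.
location_log = [
    ("2010-03-05 23:10", "adult video store, Elm St"),
    ("2010-03-06 00:40", "home"),
]
energy_log = [  # (timestamp, kWh drawn in the following hour)
    ("2010-03-06 00:45", 2.1),  # unusually high for this hour of night
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

# Link any location fix to meter readings within an hour of it.
for loc_ts, place in location_log:
    for meter_ts, kwh in energy_log:
        if abs(parse(meter_ts) - parse(loc_ts)) <= timedelta(hours=1):
            print(f"{loc_ts}: at '{place}'; home drew {kwh} kWh shortly after")
```

The join itself is trivial; the point is that each added stream shrinks the set of plausible explanations for the others.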

How will mankind respond? Will people feel forced to modify their behavior toward the average only because they fear others may discover their intimate personal affairs?  This is what Julie Cohen and Neil Richards have worried about – the “chilling effect.”


Broadband Providers, Big Money, and the Price of an Open Broadband Internet

In an editorial entitled The Price of Broadband Politics, the New York Times aptly captured the big-money politics facing the F.C.C. in its push to ensure open, nondiscriminatory, and competitive access to the broadband Internet.  The New York Times writes:

One good measure of the intensity with which phone and cable companies dislike the Federal Communications Commission’s plan to extend its regulatory oversight over access to broadband Internet is the amount of money they are spending on political contributions.

Last month, 74 House Democrats sent a letter to the F.C.C.’s chairman, Julius Genachowski, warning him “not to move forward with a proposal that undermines critically important investment in broadband and the jobs that come with it.” Rather than extend its authority over telecommunications networks to broadband under the 1996 Telecommunications Act, they demanded that the F.C.C. wait for Congress to pass specific legislation.

The message parroted views held by AT&T, Comcast and Verizon — the biggest broadband service providers in the country. (Comcast warned that the F.C.C.’s efforts could “chill investment and innovation.”) Their executives and political action committees have been among the top 20 campaign contributors to 58 of the 74 lawmakers in the past two election cycles.

As the F.C.C. proceeds with its plan to regulate broadband access, we can expect more of this resistance from members of Congress.


Baby Steps for Transparency in Voting Systems

This country’s electronic voting systems remain black boxes with no means for the public to check their accuracy and security.  To make matters worse, reports persist about election officials’ failure to protect those black boxes from mischief.  Until recently, a warehouse storing thousands of Pennsylvania’s electronic voting machines kept its door propped open, leaving the machines vulnerable to manipulation.

The news is not all doom and gloom.  To combat public concern about the reliability of its software, e-voting provider Sequoia has begun posting its code online, allowing the public to assess code that the company will put through the federal voting system certification process.  The move is one that I called for in Open Code Governance: disclosing the code of proprietary election systems to permit public inspection and feedback while keeping the vendor in charge of changes to the software.  Sequoia, unfortunately, controls only a small portion of the e-voting market.  About 80 percent of voters cast their votes on black box machines manufactured by ES&S (which recently acquired Premier, formerly known as Diebold).

To be sure, open code wouldn’t solve all of our voting problems.  As the recent conviction of Kentucky voting officials attests, voting manipulation can be low tech: there, voting officials trained workers to mislead voters into believing that they had finished voting after punching an initial review screen, when in fact voters needed to verify the vote on a subsequent screen.  This allowed workers to change individuals’ votes.  Nonetheless, an open code approach would facilitate greater transparency while likely enhancing the accuracy and security of systems, given the feedback of interested experts.
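The mechanics of the Kentucky scheme are easier to see in code.  Below is a hypothetical sketch (the class and method names are invented) of a two-screen flow like the one described above: because the review screen’s “Vote” button is not final, anyone with access to the machine before the real confirmation step can still alter the ballot.

```python
class BallotSession:
    """Hypothetical sketch of a two-screen e-voting flow."""

    def __init__(self):
        self.selections = {}
        self.cast = False

    def select(self, race, candidate):
        if self.cast:
            raise RuntimeError("ballot already cast")
        self.selections[race] = candidate  # mutable until final confirmation

    def review(self):
        # The review screen's "Vote" button shows choices but does NOT cast.
        return dict(self.selections)

    def confirm(self):
        self.cast = True  # only this step makes the ballot final

session = BallotSession()
session.select("Governor", "Candidate A")
session.review()                            # voter believes voting is done...
session.select("Governor", "Candidate B")   # ...but the ballot is still editable
session.confirm()
print(session.selections)                   # {'Governor': 'Candidate B'}
```

Open code would not stop a poll worker from exploiting such a flow, but it would let outside experts spot a design in which the misleading screen order is possible at all.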

Greg Miller of the Open Source Digital Voting Foundation recently shared his concerns about the potential for glass box voting technologies.  He remarked: “When you’re a company who has a shareholder interest to maintain, and your competitive advantage is predicated on trade secrets and other intellectual property mechanisms, then you’re going to resort to black box technology to protect your competitive advantage . . . Well, black box technology doesn’t work in a world that demands ‘glass box’ technology, and when shareholder interests collide with public interest, that’s a train wreck.”  If new legislation or administrative policy required open code, might vendors then compete on other important grounds, such as the reliability of their hardware or the machines’ enhanced value to the visually impaired?  Something worth discussing.


The Gospel of Generativity

In today’s New York Times, Steven Johnson shared his recent crisis of faith.  Johnson has long believed in open platforms to promote innovation and diversity online.  In his view, the “gospel” of openness goes like this: “In the words of one of the Web’s brightest theorists, Jonathan Zittrain of Harvard, the Web displays the ‘generative’ power of a platform where you don’t have to ask permission to create and share new ideas.  If you want democratic media, where small, innovative start-ups can compete with giant multinationals, open platforms are the way to go.”  Johnson has apparently devoted a hundred pages of book chapters, essays, and blog posts to spreading the “gospel” of openness.

Now, Johnson is rethinking his belief in openness.  Why?  As Johnson explains, Apple’s iPhone software has been the “most innovative in the history of computing.”  More than 150,000 applications have been created for the iPhone in less than two years, many of them by small developers.  As Johnson notes, “it’s conceivable that, had Apple loosened the restrictions surrounding the App Store, the iPhone ecosystem would have been even more innovative, even more democratic.  But I suspect that this view is too simplistic.  The more complicated reality is that the closed architecture of the iPhone platform has contributed to its generativity in important ways.”  Consumers have been willing to experiment with apps because they come from a trusted source.  At the same time, the single payment mechanism helped nurture the ecosystem by making it easier to buy apps “impulsively” with one-click ordering.  In Johnson’s view, while the iPhone/iPad ecosystem likely could benefit from a little more openness, it has made clear that “sometimes, if you get the conditions right, a walled garden can turn into a rain forest.”

This summer, Concurring Opinions will take up this question in earnest.  Jonathan Zittrain will join us for an online symposium on his book The Future of the Internet — And How to Stop It.  The Future of the Internet has already generated (forgive the pun) exciting reviews.  Ann Bartow’s review recently appeared in the Michigan Law Review, and James Grimmelmann and Paul Ohm have a forthcoming one in the Maryland Law Review.  We are going to build upon this literature, joining legal academics and computer scientists to discuss the net’s future.  Perhaps even Steven Johnson will join in on the fun.


The Right to the Internet

According to a poll sponsored by the BBC World Service, four in five adults across 26 countries believe that Internet access is a fundamental right.  The poll asked more than 27,000 adults about their attitudes towards the Internet and found that 87 percent of regular Internet users agree that access should be a “fundamental right of all people.”  More than 71 percent of non-Internet users felt that they should have the right to access the global network.

Crucial to our access to the Internet is our continued adherence to the end-to-end principle.  As legal scholar and computer scientist Barbara van Schewick explained in her Opening Statement at the FCC’s Workshop on Innovation, Investment, and the Open Internet, the “network was designed to be as general as possible in order to support a wide variety of applications with different needs.  So when a new application comes along, the network doesn’t have to be changed to allow the application to run.  All the innovator has to do is write the program that runs on a computer attached to the Internet.”  As van Schewick notes, the low cost of developing new applications has enabled the creation of eBay and Skype, even though many questioned those applications’ ability to succeed in the marketplace (who would buy goods through online auctions?) or their plausibility (network engineers didn’t initially think internet telephony was possible).
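van Schewick’s point about the network’s generality can be made concrete.  Here is a minimal sketch, using Python’s standard library, of a brand-new networked application, a trivial echo service: nothing in the network beneath it has to change for it to run, because all of the application logic lives at the endpoints.

```python
import socket
import threading
import time

def echo_server(host="127.0.0.1", port=9999):
    """A brand-new 'application': the network needs no changes to carry it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # all logic lives at the endpoint

# Start the new application; the routers in between neither know nor care.
threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 9999))
    cli.sendall(b"hello, end-to-end world")
    print(cli.recv(1024))  # b'hello, end-to-end world'
```

That indifference of the network’s middle to what runs at its edges is what keeps the innovator’s cost as low as van Schewick describes.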

Now, however, sophisticated technology is available that “enables network providers to identify the applications and content on their network and control their execution.”    According to van Schewick, the “original Internet was application-blind,” but “today’s Internet is not.”  This matters to access and innovation.  Although a programmer may have a great idea for a video platform that will revolutionize the way people watch television, cable providers could squash it.  They could block the inventor’s application or slow it down.  Why would they do that?  As van Schewick explains, maybe the application competes with theirs, maybe they want a share of the inventor’s profits, maybe they don’t like the content, or maybe the application is slowed down to manage bandwidth.  Whatever the reason, the network provider can ensure the failure of the inventor’s project, chasing away potential investors and other inventors.  In the end, this risks the diversity of innovation and its concomitant societal benefits.  If network providers “pick winners and losers on the Internet, if they decide how users can use the network, users may end up with applications that they would not have chosen, and may be forced to use the Internet in a way that does not create the value it could.”
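Real traffic identification relies on deep packet inspection far more sophisticated than anything shown here, but a toy sketch conveys the architectural difference van Schewick describes between an application-blind and an application-aware network (the classification table and policy below are invented):

```python
# Crude port-based classification; real systems inspect packet contents too.
PORT_APPS = {80: "web", 5060: "voip", 6881: "bittorrent"}

def blind_forward(packet):
    """The original, application-blind Internet: every packet treated alike."""
    return "forward"

def aware_forward(packet, policy):
    """An application-aware network: identify the app, then apply policy."""
    app = PORT_APPS.get(packet["dst_port"], "unknown")
    return policy.get(app, "forward")  # block, throttle, or forward

packet = {"dst": "93.184.216.34", "dst_port": 6881}
policy = {"bittorrent": "throttle", "voip": "block"}  # say, a rival application

print(blind_forward(packet))          # forward
print(aware_forward(packet, policy))  # throttle
```

An innovator whose packets draw the “block” or “throttle” verdict never gets the chance to find out whether users wanted the application.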

In short, our failure to commit to network neutrality, and thus our willingness to permit discrimination among applications, has a deep impact on what people now believe is their fundamental right.  van Schewick closed her Opening Statement with a telling story.  She asked if the audience had tried to explain to their partners’ grandparents why they should get the Internet.  She explained that she had and noted that she didn’t say: “Grandma, you have to get the Internet!  It’s cool!  It lets you send data packets back and forth.”  “No, I said: ‘If you get the Internet, you can call us and see your grandchildren on the screen.  And if we have new pictures, you’ll be able to see them immediately after we send them.  And you can read about everything you can possibly imagine’ . . . ”  Thus, by “protecting the factors that have fostered application innovation in the past, we can make sure that the Internet will be even more useful and valuable in the future.”


Innovative Architectures of Privacy

As Daniel J. Weitzner recently noted to the New York Times, our current notice-and-choice model of privacy may soon be dead, and good riddance.  Since the 1990s, we have relied upon websites’ privacy policies to inform individuals about whether their information would be collected, used, and shared.  Consumers usually don’t read these policies and, if they did, they likely would not understand them.  This leaves us with much room to do better.

In “Redrawing the Route to Online Privacy,” the New York Times discusses how law and technology might help us out of this mess.  The article highlighted several intriguing technical innovations.  A group at Carnegie Mellon University has designed software that will nudge consumers about the privacy implications of sharing certain information.  As CMU’s Lorrie Faith Cranor explains, social network site users often share their birth dates, hoping to receive online greetings from friends, yet doing so runs the risk of marketing profiling, identification, and identity theft.  Software could inform consumers of these risks before they share their birth dates.  M. Ryan Calo, a fellow at Stanford Law School’s Center for Internet and Society who has done exciting work on the privacy implications of robots, is exploring voice and animation technology emulating humans that would provide “visceral notice.”  Before someone puts information in a personal health record like GoogleHealth, a virtual nurse could explain the privacy implications of sharing the information.  Calo explains that people naturally react more strongly, in a more visceral way, to anthropomorphic cues.  The think tank Future of Privacy Forum, led by Jules Polonetsky and Christopher Wolf, is testing the effectiveness of using new icons and key phrases to provide web surfers with more transparency and choice about behavioral advertising practices.  Princeton’s Ed Felten (whose important computer science research has rightly preoccupied government and industry) is working on re-engineering the Web browser for greater privacy.  Felten would alter the software’s design so that information about on-screen viewing sessions is kept separate and not routinely passed along in ways that allow a person’s browsing behavior to be tracked.
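The article doesn’t spell out Felten’s design, but the underlying idea, partitioning browsing state so that identifiers set in one session never surface in another, can be sketched.  The class below is invented for illustration, a minimal sketch rather than anyone’s actual implementation:

```python
class IsolatedSession:
    """Hypothetical per-session state: nothing is shared across sessions."""

    def __init__(self, name):
        self.name = name
        self.cookies = {}  # cookie jar scoped to this session alone

    def set_cookie(self, site, value):
        self.cookies[site] = value

    def get_cookie(self, site):
        # A tracker sees only what was set within this same session.
        return self.cookies.get(site)

work = IsolatedSession("work")
shopping = IsolatedSession("shopping")
work.set_cookie("tracker.example", "user-1234")

print(work.get_cookie("tracker.example"))      # user-1234
print(shopping.get_cookie("tracker.example"))  # None: no cross-session trail
```

Mainstream browsers later shipped variants of this idea as private-browsing windows and partitioned storage.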

As these efforts make clear, code is crucial to the protection of consumer privacy.  To what extent, if at all, should we invoke law to regulate websites’ information practices?  Congress and the Federal Trade Commission are mulling rules that would limit a site’s use of information collected online.  As the New York Times notes, government might ban the use of recorded trails of a person’s web browsing in employment or health insurance decisions.  It would be worth considering limits on data collection and retention practices too.  Law could require the deletion of certain information after a set period, in the manner suggested by Viktor Mayer-Schönberger’s work.  All worth pondering.


Boyden on Google Buzz and COPPA

Guest blogger Professor Bruce Boyden has terrific insights on all things technology and law, and so I invited him to comment on the Children’s Online Privacy Protection Act and its impact on the Google Buzz phenomenon.  So here is Professor Boyden:

Thanks, Danielle, for inviting me to expand on my comment yesterday on your post on the Google Buzz story. Google Buzz has obviously been all over the news lately, in part for various complaints about Google’s privacy practices. Those complaints have focused on the way in which Buzz, enrollment in which was automatic for Gmail users, initially defaulted to effectively sharing users’ email contacts with the public. EPIC has filed a complaint with the FTC arguing that this combination of automatic enrollment and “opt-out” of information-sharing was an unfair or deceptive trade practice in violation of Section 5 of the FTC Act.

But that’s not what caught my attention in Danielle’s post. What really set off alarm bells in my head was Danielle’s recounting how her children and their friends, all under the age of 13, suddenly had their Gmail accounts turned into Google Buzz accounts,  and then proceeded to upload all sorts of information about themselves using the service. That raises the prospect that Google Buzz, by collecting such information without getting the appropriate parental consent, violated the Children’s Online Privacy Protection Act, or COPPA. I haven’t seen any discussion of this issue anywhere else.

COPPA is one of the few privacy statutes with real bite: it has strict rules that require substantial effort to follow, and the FTC has shown itself to be a vigorous enforcer. Indeed, the FTC has gone after two social networking sites for COPPA violations recently, and in one case imposed a fine of $1 million. So is Google violating COPPA? The answer is unclear but there’s definitely risk for Google here.

COPPA regulates the online collection of information from children under the age of 13. It applies to two classes of websites: those that have “actual knowledge” that they are collecting information from children, and those that are “directed to children.” If a website in either category is going to collect personally identifiable information (PII) from children, it first has to get “verifiable consent” from a parent. The FTC uses a “sliding scale” to determine what sort of verifiable parental consent is required; for information that is going to be publicly disclosed, as here, the FTC’s COPPA regulations require something like a mail-in form or a credit card.
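For a service deciding what it may collect, the sliding scale reduces to a gate: if the user is under 13 and parental consent at the required level hasn’t been obtained, the PII can’t be collected.  A schematic sketch follows; the consent labels and function names are invented, and the statute and FTC rules, not this toy, control the real analysis:

```python
# Hypothetical sketch of COPPA's "sliding scale" as a collection gate.
CONSENT_RANK = {"none": 0, "email_plus": 1, "verified": 2}  # invented labels

def required_consent(publicly_disclosed):
    # Public disclosure sits at the top of the scale: something like
    # a signed mail-in form or a credit card check ("verified" here).
    return "verified" if publicly_disclosed else "email_plus"

def may_collect_pii(age, parental_consent, publicly_disclosed):
    if age >= 13:
        return True  # COPPA's collection rules target children under 13
    needed = required_consent(publicly_disclosed)
    return CONSENT_RANK[parental_consent] >= CONSENT_RANK[needed]

# A Buzz-like scenario: a 10-year-old, no parental consent, public sharing.
print(may_collect_pii(10, "none", True))      # False: collection barred
print(may_collect_pii(10, "verified", True))  # True: consent matches the risk
```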


BRIGHT IDEAS: Helen Nissenbaum’s Privacy in Context: Technology, Policy, and the Integrity of Social Life

I’d like to second Dan’s enthusiasm for Helen Nissenbaum‘s newest book, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford University Press 2009).  Privacy in Context is engrossing and important, and, lucky for us, I had a chance to interview Professor Nissenbaum about the book, her scholarship, and her thoughts on the future of privacy.  First, let me tell you a bit about Professor Nissenbaum.  Then, I will reproduce our interview below.

Helen Nissenbaum is Professor of Media, Culture and Communication, and Computer Science, at New York University, where she is also Senior Faculty Fellow of the Information Law Institute.  Her areas of expertise span social, ethical, and political implications of information technology and digital media. Nissenbaum has written extensively in journals of philosophy, politics, law, media studies, information studies, and computer science and has written and edited four books (including the book we highlight today).  She has also authored several important studies of values embodied in computer system design, including search engines, digital games, and facial recognition technology.

DC:  Why did you write this book?

HN:  I had published a series of articles on how privacy, conceptually and in practice, had been challenged by IT and digital media. Although, initially, these had been mainly critical in tone, for example, demonstrating how “privacy in public” exposed glaring weaknesses not only in predominant understandings of privacy but in approaches to law and regulation as well, they ultimately yielded the substantive idea of privacy as a claim to appropriate flows of personal information within distinctive social contexts, modeling this idea in terms of contextual integrity and — what I call in the book — “context-relative informational norms.” IT systems and digital media are often felt as privacy threats because they disrupt entrenched flows; they violate norms.

With these articles in far-flung journals, I realized it would be hard, if not impossible, for anyone to pull the whole argument together, to recognize the problems in certain other approaches and how contextual integrity addressed some of them. A book would consolidate these works into a coherent whole, in what I imagined would be the work of a mere few months — an extravagant miscalculation, of course.

While collaborating with colleagues from the PORTIA project (Adam Barth, Anupam Datta, and John Mitchell) to develop a formal expression of contextual integrity (in linear temporal logic), I came to realize that it needed significant sharpening. Further, it became increasingly clear that the theory needed a far more robust and fleshed out prescriptive (or normative) dimension, which I had only briefly sketched in the Washington Law Review article. This component would be absolutely essential to the success of contextual integrity as a whole, if the theory was to have moral “teeth.” And, of course, the longer I worked the larger the field became, more cases with which to reckon, more outstanding work to take into consideration. Mere months became a couple years.
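The PORTIA formalization Nissenbaum mentions expressed context-relative informational norms in linear temporal logic.  A much looser sketch of the same idea treats a norm as a tuple of context, roles, attribute, and transmission principle, and asks whether a given flow matches any entrenched norm; the norms and names below are invented for illustration:

```python
from typing import NamedTuple

class Norm(NamedTuple):
    """A context-relative informational norm, loosely after Nissenbaum."""
    context: str
    sender_role: str
    recipient_role: str
    attribute: str
    principle: str  # transmission principle, e.g. "in confidence"

NORMS = [
    Norm("healthcare", "patient", "physician", "medical_history", "in confidence"),
]

def flow_fits_a_norm(context, sender_role, recipient_role, attribute):
    return any(n.context == context and n.sender_role == sender_role and
               n.recipient_role == recipient_role and n.attribute == attribute
               for n in NORMS)

# A patient confiding in a doctor matches an entrenched norm.
print(flow_fits_a_norm("healthcare", "patient", "physician", "medical_history"))   # True
# The doctor passing the record to a marketer matches none: contextual
# integrity is violated, whether or not the data was ever "public."
print(flow_fits_a_norm("healthcare", "physician", "marketer", "medical_history"))  # False
```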

DC:  What for you are the most pressing concerns that the book addresses?

HN:  Among the most pressing for me were:

First, to demonstrate that the private-public distinction, as useful as it may be in other areas of political and legal philosophy, is a terrible dead-end for conceptualizing a right to privacy and for formulating policy. In my view, far too much time has been wasted deciding whether this or that piece of information is private or public, whether this or that place is private or public, when, in fact, what ultimately we care about is what constraints ought to be imposed on the flows of this or that information in this or that place. We could make much more rapid progress addressing urgent privacy questions if we addressed the latter questions head-on instead of tying ourselves in knots over the former.

Second, to challenge the definition of privacy as control over information about oneself, which dominates policy realms, even if not to that extent in academia. The trouble with this definition is that it immediately places privacy at odds with other values conceived as more pro-social. If the right to privacy is the right to control, then of course it must be moderated, traded off, compromised for the general good!  Moreover, it is not even clear that control offers the best protection to the subject. Imagine, for example, if all that stood between individuals and access to their complete health records was subject consent, and place these individuals in a situation where a job, a mortgage, or the chance to win the lottery hung in the balance. Fortunately, U.S. law recognizes that we need substantive constraints on information flow in certain areas – contexts – of life, and though critics have pointed out many weaknesses in the letter of these laws, I believe the approach is dead right.


Mikey Doesn’t Like It: Watchlists Are Not For Kids

Thankfully, our blog has Jeff Kahn, an expert on national security, guest blogging with us this month to teach us about the history and development of airline screening.  Picking up on Jeff’s insights, I’d like to follow up on stories about eight-year-old Michael Hicks, whose travels have been disrupted with frequent pat downs and questioning.  Why?  Michael’s name matches that of a person on the TSA selectee list.  As my previous posts (see here and here) and my Technological Due Process article explored, the TSA uses crude matching algorithms by design.  The gamble of higher false positives is worth the payoff of nabbing a person bent on destruction.  This means that kids like Mikey and many others, even the late Senator Ted Kennedy for a time, face delays, intrusive questioning, and other inconveniences when they travel.
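The TSA’s actual algorithm isn’t public, but a “crude” match of the kind described above is easy to illustrate.  Soundex, a classic phonetic encoding, maps similar-sounding surnames to the same four-character code; a matcher built on it deliberately buys recall at the price of false positives, which is exactly how an eight-year-old collides with a watchlisted adult who shares his name.  A minimal sketch:

```python
def soundex(name):
    """Classic Soundex: a letter followed by three digits."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    encoded, last = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            encoded += code
        if ch not in "HW":  # H and W do not separate repeated codes
            last = code
    return (encoded + "000")[:4]

# A crude matcher flags any traveler whose surname sounds like a listed name.
watchlist_codes = {soundex("Hicks")}
for surname in ["Hicks", "Hix", "Hecks", "Hughes"]:
    verdict = "FLAGGED" if soundex(surname) in watchlist_codes else "clear"
    print(f"{surname}: {verdict}")  # Hicks, Hix, Hecks flag; Hughes is clear
```

The wider the phonetic net, the fewer terrorists slip through on a spelling variant, and the more Mikeys get patted down.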

So how has the TSA responded to this recent flap about Mikey?  Blogger Bob on The TSA Blog explains:  “It’s inevitable that every several months or so, some cute kid gets their mug posted on a major news publication with a headline reading something like: ‘Does this look like a terrorist to you?’ Anything involving kids or cats gets tons of mileage and everybody starts tweeting and retweeting that there’s an 8 year old on the no fly list.  There are no children on the No Fly or Selectee lists.  What happens is the child’s name is a match or similar match to an actual individual on the No Fly or Selectee Watch List.”  Now, Blogger Bob’s explanation is indeed spot on, but it seems callous and perhaps counter-productive if the TSA wants to tackle its PR problem with the public.  It seems dismissive to say that we only get up in arms when someone’s child gets ensnared in a screening mess.  Talking about Mikey may be a useful way for newspapers to pique the public’s interest, but so were the stories about the late Senator Ted Kennedy and the many others, including airline pilots, who have difficulty traveling due to the TSA’s currently inefficient redress process.

Mikey’s mom, Najlah Hicks, commented on Blogger Bob’s post with this missive: “Instead of reaching out to our family, you chose to belittle the process by stating that ‘Anything involving kids or cats gets tons of mileage and everybody starts tweeting and retweeting that there’s an 8 year old on the no fly list.’  . . . It would have been far more helpful had he reached out to our family and help us formulate a solution than belittle the effort.  I am insulted and appalled that a representative from the TSA would chose to make such a juvenile and insulting statement.  You could have easily left the above quote off and just shared the Redress process with everyone.  It has been made quite clear to our family from both Continental and US Airlines that our son is clearly on a TSA list and they have absolutely no power in which to remove him.  If you think it’s far more helpful to belittle the process rather than just giving people the information they need, then I think the TSA has far more serious issues than any of us imagine.  I look forward to getting our son off a list he’s supposedly not on.”  Now, Blogger Bob assures the public that problems like Mikey’s will disappear once the Secure Flight program becomes operational.  We shall see.  Until then, while tricks may be for kids, watch lists are not.