
Author: Jonathan Zittrain


A few Qs on the Master Switch

The Master Switch is a great read — thanks, Tim, for unearthing and synthesizing such an astounding amount of history.

I thought I’d share some of the questions that came up as I read it –

  • Patents: At different places Tim underscores how crucial a role patents play in the development of an industry.  At times the story is one where patents helped competition, as when they allowed the little guy to avoid being crushed by the prevailing behemoth: Bell could hold off Western Union thanks to its patents.  But at other times they’re clearly subject to abuse, as with the movie trust.  What, in short, would you recommend changing about current patent policy, if anything?  Is misuse doctrine enough to carve away the bad uses while leaving the salutary ones?  Why not be supportive of at least some business method patents, in circumstances where they could help the little guy too?
  • Peering: Given the worries expressed in the book about consolidation, I’m so curious to know what Tim and others think about the current state of play in Internet peering.  To what extent should peering agreements be, as they largely are today, confidential?  How much should gov’t intervene in peering disputes, such as the famed recent blowout between Level 3 and Comcast over Netflix traffic — which apparently accounts for a stunning 40% — 40%! — of U.S. bandwidth usage?
  • Do we have a neutral net today?: The dark matter of the Internet makes no appearance in the book: Akamai.  At one point Tim brings up the spectre of a broadband provider offering a fast lane to faraway content providers for an extra fee — and suggests how awful that would be.  But then what of Akamai, which offers exactly that kind of fast lane?  Is it saving the Net by offering a quality-of-service set of efficiencies to speed video on its way through clever co-location — indeed, that’s how Netflix got to people before the Level 3 imbroglio — or is it exactly the model that makes it harder for new entrants, at least new bandwidth-intensive entrants, to compete?
  • The net neutrality tripwire: The book points out the power of corporate norms — if today’s barons are looking to build empires, they’re certainly not advertising it the way their predecessors did.  The few examples of actual net neutrality violations are pretty thin, and as the book points out, quickly disavowed as the actions of rogue employees, or accidents, when uncovered.  Should systematic net neutrality violations take place, wouldn’t that result in a rather quick sea change in favor of net neutrality?
  • Search neutrality: Frank Pasquale hasn’t put it to Tim in this symposium, so I will!  At the end of the day Google comes out pretty well in your estimation, though it’s on probation as any private behemoth would be in your eyes.  But you do accord it the privilege of, among everyone, currently holding the closest thing to the Master Switch.  So: should there be some kind of mandated search neutrality?  Or do you think, despite what you describe as Google’s monopoly position, that market forces will discipline any movement to unduly shade organic search results?  Would you apply separation principles to the various Google ventures?  (You seem to dismiss everything but search as a sideshow, but I’m not so sure — I take Google seriously that it aims to organize all the world’s information, and personal data, which can be gathered through many of those ancillary projects from Orkut to Reader — is information.)
  • One wire or many: In the book’s history Tim seems to rue consolidation of the phone network to one wire.  More generally, he looks with awe but ultimately disfavor on the anti-Adam Smith segment of corporatist philosophy that says that competition is messy and wasteful.  But what, then, would be the best regime for broadband?  What’s the ideal?  Is it gov’t-provided fiber, full stop, like interstate highways?  Private fiber, even a single wire, but with open access requirements?  No open access but net neutrality?  Or lots of wires?
  • Advertising: The book devotes some interesting space to the role of advertising in radio, and how advertising pushed American radio one way while the BBC went another.  Should the gov’t have any dog in the fight as browser makers come, in the name of privacy, to develop what could be powerful ad blocking software, leading some to say that the foundations of the current free Web are threatened?  Or are do-not-track systems a form of consumer empowerment to be cheered?
  • Apple + AT&T = ?: Since the book went to press we’ve seen Apple announce Verizon as an alternative iPhone carrier.  How much does this impact the sense of Apple and AT&T as, in essence, an attempted merger?
  • Other consolidation: Another kind of consolidation, taking place quite naturally, is the sheltering of formerly self-hosted Web sites under bunkerized umbrellas.  If I’m going to run a Web server today as a small- or medium-sized venture I’m less likely than ever to try to host it in my basement, and instead will look to Amazon hosting or some such — in part because of the prevalence of denial-of-service attacks and other unpredictabilities.  It’s hard to blame Amazon for taking the business that comes its way — or for exercising its choices about how to implement its terms of service.  Or does the logic of the book say that there should be hosting neutrality, too, a form of common carriage?

So, questions rather than claims from me.  Thanks again for a pathbreaking book — and a thought-provoking symposium.  …JZ


Cybersecurity: separating genuine worries from fearmongering

The Future of the Internet has a lot of worries in it about the state of cybersecurity.  I’ve argued, against some extremely knowledgeable people, that the cyberwarfare threat has not been greatly exaggerated.  But there are some security fears that just don’t bother me so much.

In 1996, a physicist named Alan Sokal published an article in Social Text, a cultural studies journal.  It was called “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” and as the name suggests, it’s pretty impenetrable.  You can check it out here.  Soon after it came out, he published an article in the now-defunct Lingua Franca revealing that the first article had been a hoax.  He said he did it to see if the journal would “publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors’ ideological preconceptions.”

I remember feeling pretty sympathetic to the Social Text editors at the time — which was before I was immersed in legal academia, where most of the law reviews are run by students and don’t perform what other fields would recognize as formal peer review.  Publishing an article doesn’t mean that the journal editors agree with everything it says, and no doubt the Social Text editors had little experience dealing with physics.  Sure, they could have sent it to other physicists, but in the meantime they probably welcomed what looked like a rare attempt by someone from the hard sciences to communicate with an otherwise-alien audience, even if the person was deemed an apostate by his colleagues.  Moreover, being of the postmodern deconstructionist bent, they gleaned a lot from the text — no doubt more than what its insincere author had put in.  (As Wikipedia records them putting it: “its status as parody does not alter, substantially, our interest in the piece, itself, as a symptomatic document.”)

I was reminded of the Sokal Affair when I read Thomas Ryan’s presentation to the 2010 Black Hat conference about one Robin Sage.  This isn’t the U.S. special ops training exercise conducted each year, but rather a fake identity the author created on LinkedIn and elsewhere.

The author says he intentionally chose the photo of a young, attractive woman in order to better do what he did next: friend a bunch of security professionals on LinkedIn.  He says that Robin’s success in social networking said something about the security chops of those who friended her.

I’m not so sure.  He convincingly writes that her profile’s credibility could be debunked with a little Internet sleuthing, but it’s not surprising that many social network users don’t go to such lengths.  Some people are picky about from whom they allow connections; others are content to accept anything that looks like it’s not a spammer — and Robin didn’t look like one.

Ryan includes some snippets of messages that Robin received from her new connections.  One asked her to review a paper he was writing; another complimented her on her looks; another pointed out a job opportunity.  I’m not sure any of these is troublesome.  Ryan figures that if the paper were shared and was pre-publication, a malevolent person behind the Robin persona could have passed it off as his or her own.  That’s a bit of a reach.  Yes, anything can happen, but there are risks in any communication or interaction with a stranger or mere acquaintance.  Ryan says in his paper’s summary that Robin was offered “gifts, government and corporate jobs, and options to speak at a variety of security conferences.”  But when that’s unpacked in the main text, it’s all very tentative — pointing out a job opportunity is not the same as offering a job, and suggesting interest in a conference is not the same as vetting the presentation should the interest be reciprocated.  There’s an intriguing section of the paper about the gender dynamic — Ryan intentionally chose a young, attractive woman as Robin’s avatar — and he suggests that “Whether these same reactions would have been elicited towards another male is questionable. It can be put forth that Robins appearance and gender played a key role in many people’s comfort level.”

There’s some interesting research on this sort of thing, such as a study by researchers at the University of Wisconsin in which identical resumes were sent out for academic jobs with only the names switched from one gender to the other.  They found that men were given more opportunities than their identical female counterparts.  At the very least, gender comfort level can cut both ways, and Ryan’s experiment was, I think even by his own account, as casual as Alan Sokal’s with Social Text.  Each was more provocation than genuine investigation of, respectively, gender bias and sloppy intellectual work.

The Robin Sage experiment — and the lessons we’re supposed to draw from it — interests me because kindness among strangers can be crucial to the world being a good place to live — and to the Internet functioning at all.  It’s not surprising that a security professional would conduct an experiment in which people were duped into friending someone who wasn’t real and then conclude that those people’s security practices were too lax.  But the more you think about it, the more you can think of all sorts of similar experiments: offer to help someone with his or her shopping bags, and then drop them.  See someone taking a picture of his friends in a park, offer to take it so he can join the picture, and then run away with the camera.  Hold a door for someone, and then hit them from behind.  Should an experimenter do any of these, would the lesson be about the gullibility of the target or the cruelty of the experimenter?

To be sure, Ryan’s experiment was conducted among fellow security professionals.  He notes that Robin’s fake job description suggested she held a U.S. federal government security clearance — so other people with clearances might be misled into sharing classified information with her.  But there’s no reason to think that people would spill secrets under those circumstances any more than you’d write a check for $5,000 or give your home address to a brand-new “friend” on Facebook.

The beauty of social networks like LinkedIn or Facebook is that they allow a level of connection with someone that has no easy real-world analogue.  LinkedIn can be for colleagues and friends, but it also can include faraway students who want to connect with a professor they’ve never met — and maybe never will — or any number of other configurations.  Just because Wikipedia allows anyone to edit most of its pages doesn’t mean that it innately and permanently trusts every edit.  The system is set up to be able to revert the work of vandals, and any example of how “easy” it is to vandalize a Wikipedia page is beside the point.  The idea there is that there are more people quickly responding to vandals than there are vandals — so an open system functions.  Similarly, so long as we don’t share more than we mean to, the presence of strangers among our LinkedIn colleagues or even Facebook friends shouldn’t be a red flag.  More might be gained from “friends we haven’t met” than lost to the occasional bad actor.

So: pleased to meet you, Thomas Ryan — if that’s who you really are.  And even if it’s not.  …JZ


Reputation bankruptcy

Google CEO Eric Schmidt created buzz (and some shock and criticism) when he suggested in a recent Wall Street Journal interview that, in the not too distant future, “every young person…will be entitled automatically to change his or her name on reaching adulthood in order to disown youthful hijinks stored on their friends’ social media sites.”

I’ve been intrigued by these concepts, too, and while I don’t think people should have to change their names to escape their pasts — whether earned or unearned — I like the idea of reputation bankruptcy.  It’s taken up as a partial solution to peer-to-peer privacy problems in the Future of the Internet:

Search is central to a functioning Web, and reputation has become central to search. If people already know exactly what they are looking for, a network needs only a way of registering and indexing specific sites. Thus, IP addresses are attached to computers, and domain names to IP addresses, so that we can ask for and go straight to Matt Drudge’s site. But much of the time we want help in finding something without knowing the exact online destination. Search engines help us navigate the petabytes of publicly posted information online, and for them to work well they must do more than simply identify all pages containing the search terms that we specify. They must rank them in relevance. There are many ways to identify what sites are most relevant. A handful of search engines auction off the top-ranked slots in search results on given terms and determine relevance on the basis of how much the site operators would pay to put their sites in front of searchers. These search engines are not widely used. Most have instead turned to some proxy for reputation. As mentioned earlier, a site popular with others—with lots of inbound links—is considered worthier of a high rank than an unpopular one, and thus search engines can draw upon the behavior of millions of other Web sites as they sort their search results. Sites like Amazon deploy a different form of ranking, using the “mouse droppings” of customer purchasing and browsing behavior to make recommendations—so they can tell customers that “people who like the Beatles also like the Rolling Stones.” Search engines can also more explicitly invite the public to express its views on the items it ranks, so that users can decide what to view or buy on the basis of others’ opinions. Amazon users can rate and review the items for sale, and subsequent users then rate the first users’ reviews. 
Sites like Digg and Reddit invite users to vote for stories and articles they like, and tech news site Slashdot employs a rating system so complex that it attracts much academic attention.
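The inbound-link proxy for reputation described above is, at its heart, the idea behind PageRank-style ranking: a page's score is fed by the scores of the pages that link to it. A minimal sketch in Python, with a hypothetical toy link graph (the graph, page names, and damping value are illustrative, not any search engine's actual formula):

```python
# Minimal PageRank-style ranking: popularity with other sites
# (inbound links, weighted by the linkers' own scores) serves
# as a proxy for reputation.

def pagerank(links, iterations=50, damping=0.85):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                # each page passes its damped score evenly to its targets
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
        # pages with no outbound links spread their score evenly
        dangling = damping * sum(rank[p] for p in pages if not links.get(p))
        for p in pages:
            new_rank[p] += dangling / len(pages)
        rank = new_rank
    return rank

# Hypothetical toy graph: every site links to "hub", so it ranks highest.
graph = {"a": ["hub"], "b": ["hub"], "c": ["hub", "a"], "hub": []}
scores = pagerank(graph)
assert max(scores, key=scores.get) == "hub"
```

The point of the sketch is only the shape of the idea: a rank that is recursively defined by the behavior of millions of other sites, rather than by any editor's judgment.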

eBay uses reputation to help shoppers find trustworthy sellers. eBay users rate each others’ transactions, and this trail of ratings then informs future buyers how much to trust repeat sellers. These rating systems are crude but powerful. Malicious sellers can abandon poorly rated eBay accounts and sign up for new ones, but fresh accounts with little track record are often viewed skeptically by buyers, especially for proposed transactions involving expensive items. One study confirmed that established identities fare better than new ones, with buyers willing to pay, on average, over 8 percent more for items sold by highly regarded, established sellers. Reputation systems have many pitfalls and can be gamed, but the scholarship seems to indicate that they work reasonably well. There are many ways reputation systems might be improved, but at their core they rely on the number of people rating each other in good faith well exceeding the number of people seeking to game the system—and a way to exclude robots working for the latter. For example, eBay’s rating system has been threatened by the rise of “1-cent eBooks” with no shipping charges; sellers can create alter egos to bid on these nonitems and then have the phantom users highly rate the transaction. One such “feedback farm” earned a seller a thousand positive reviews over four days. eBay intervenes to some extent to eliminate such gaming, just as Google reserves the right to exact the “Google death penalty” by de-listing any Web site that it believes is unduly gaming its chances of a high search engine rating.
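The skepticism buyers show toward fresh accounts can be captured in a scoring rule that discounts short track records, so an abandoned account's reputation cannot simply be recreated by a burst of perfect early ratings. This is an illustrative formula of my own, not eBay's actual one:

```python
import math

def trust_score(positive, total):
    """Combine a seller's rating average with the length of the track
    record: a perfect score over 3 sales counts for less than a
    near-perfect score over 300.  (Illustrative formula, not eBay's.)"""
    if total == 0:
        return 0.0
    average = positive / total
    confidence = 1 - 1 / math.sqrt(total + 1)  # grows slowly with history
    return average * confidence

established = trust_score(positive=290, total=300)  # long, strong record
fresh_perfect = trust_score(positive=3, total=3)    # new account, all praise
assert established > fresh_perfect
```

A more principled version of the same idea is a statistical lower bound (such as the Wilson score interval) on the true rating, which likewise penalizes small samples; the design goal in either case is that good-faith history accumulates value that a feedback farm cannot cheaply counterfeit.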

These reputation systems now stand to expand beyond evaluating people’s behavior in discrete transactions or making recommendations on products or content, into rating people more generally. This could happen as an extension of current services—as one’s eBay rating is used to determine trustworthiness on, say, another peer-to-peer service. Or, it could come directly from social networking: Cyworld is a social networking site that has twenty million subscribers; it is one of the most popular Internet services in the world, largely thanks to interest in South Korea. The site has its own economy, with $100 million worth of “acorns,” the world’s currency, sold in 2006.

Not only does Cyworld have a financial market, but it also has a market for reputation. Cyworld includes behavior monitoring and rating systems that make it so that users can see a constantly updated score for “sexiness,” “fame,” “friendliness,” “karma,” and “kindness.” As people interact with each other, they try to maximize the kinds of behaviors that augment their ratings in the same way that many Web sites try to figure out how best to optimize their presentation for a high Google ranking. People’s worth is defined and measured precisely, if not accurately, by the reactions of others. That trend is increasing as social networking takes off, partly due to the extension of online social networks beyond the people users already know personally as they “befriend” their friends’ friends’ friends.

The whole-person ratings of social networks like Cyworld will eventually be available in the real world. Similar real-world reputation systems already exist in embryonic form. Law professor Lior Strahilevitz has written a fascinating monograph on the effectiveness of “How’s My Driving” programs, where commercial vehicles are emblazoned with bumper stickers encouraging other drivers to report poor driving. He notes that such programs have resulted in significant accident reductions, and analyzes what might happen if the program were extended to all drivers. A technologically sophisticated version of the scheme dispenses with the need to note a phone number and file a report; one could instead install transponders in every vehicle and distribute TiVo-like remote controls to drivers, cyclists, and pedestrians. If someone acts politely, say by allowing you to switch lanes, you can acknowledge it with a digital thumbs-up that is recorded on that driver’s record. Cutting someone off in traffic earns a thumbs-down from the victim and other witnesses. Strahilevitz is supportive of such a scheme, and he surmises it could be even more effective than eBay’s ratings for online transactions since vehicles are registered by the government, making it far more difficult to escape poor ratings tied to one’s vehicle. He acknowledges some worries: people could give thumbs-down to each other for reasons unrelated to their driving—racism, for example. Perhaps a bumper sticker expressing support for Republicans would earn a thumbs-down in a blue state. Strahilevitz counters that the reputation system could be made to eliminate “outliers”—so presumably only well-ensconced racism across many drivers would end up affecting one’s ratings. According to Strahilevitz, this system of peer judgment would pass constitutional muster if challenged, even if the program is run by the state, because driving does not implicate one’s core rights.
“How’s My Driving?” systems are too minor to warrant extensive judicial review. But driving is only the tip of the iceberg.
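Strahilevitz's suggestion that the system "eliminate outliers" can be made concrete with a trimmed mean: discard the most extreme votes from each end before averaging, so that scattered bad-faith thumbs-downs cannot move a driver's score. A sketch, assuming votes are recorded as +1 and -1 (the encoding and trim fraction are my assumptions):

```python
def trimmed_rating(votes, trim_fraction=0.1):
    """Average the votes after discarding the most extreme
    trim_fraction from each end.  Votes are +1 (thumbs-up)
    or -1 (thumbs-down)."""
    ordered = sorted(votes)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# A polite driver with 90 thumbs-up and 10 bad-faith thumbs-down:
# the raw average is 0.8, but trimming erases the malicious votes.
votes = [1] * 90 + [-1] * 10
assert trimmed_rating(votes, trim_fraction=0.1) == 1.0
```

Note the tradeoff the excerpt hints at: only bias "well-ensconced" across more raters than the trim fraction survives the cut, and so, unavoidably, does any legitimate minority complaint.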

Imagine entering a café in Paris with one’s personal digital assistant or mobile phone, and being able to query: “Is there anyone on my buddy list within 100 yards? Are any of the ten closest friends of my ten closest friends within 100 yards?” Although this may sound fanciful, it could quickly become mainstream. With reputation systems already advising us on what to buy, why not have them also help us make the first cut on whom to meet, to date, to befriend? These are not difficult services to offer, and there are precursors today. These systems can indicate who has not offered evidence that he or she is safe to meet—as is currently solicited by some online dating sites—or they may use Amazon-style matching to tell us which of the strangers who have just entered the café is a good match for people who have the kinds of friends we do. People can rate their interactions with each other (and change their votes later, so they can show their companion a thumbs-up at the time of the meeting and tell the truth later on), and those ratings will inform future suggested acquaintances. With enough people adopting the system, the act of entering a café can be different from one person to the next: for some, the patrons may shrink away, burying their heads deeper in their books and newspapers. For others, the entire café may perk up upon entrance, not knowing who it is but having a lead that this is someone worth knowing. Those who do not participate in the scheme at all will be as suspect as brand new buyers or sellers on eBay.

Increasingly, difficult-to-shed indicators of our identity will be recorded and captured as we go about our daily lives and enter into routine transactions— our fingerprints may be used to log in to our computers or verify our bank accounts, our photo may be snapped and tagged many times a day, or our license plate may be tracked as people judge our driving habits. The more our identity is associated with our daily actions, the greater opportunities others will have to offer judgments about those actions. A government-run system like the one Strahilevitz recommends for assessing driving is the easy case. If the state is the record keeper, it is possible to structure the system so that citizens can know the basis of their ratings—where (if not by whom) various thumbs-down clicks came from—and the state can give a chance for drivers to offer an explanation or excuse, or to follow up. The state’s formula for meting out fines or other penalties to poor drivers would be known (“three strikes and you’re out,” for whatever other problems it has, is an eminently transparent scheme), and it could be adjusted through accountable processes, just as legislatures already determine what constitutes an illegal act, and what range of punishment it should earn.

Generatively grown but comprehensively popular unregulated systems are a much trickier case. The more that we rely upon the judgments offered by these private systems, the more harmful that mistakes can be. Correcting or identifying mistakes can be difficult if the systems are operated entirely by private parties and their ratings formulas are closely held trade secrets. Search engines are notoriously resistant to discussing how their rankings work, in part to avoid gaming—a form of security through obscurity. The most popular engines reserve the right to intervene in their automatic rankings processes—to administer the Google death penalty, for example—but otherwise suggest that they do not centrally adjust results. Hence a search in Google for “Jew” returns an anti-Semitic Web site as one of its top hits, as well as a separate sponsored advertisement from Google itself explaining that its rankings are automatic. But while the observance of such policies could limit worries of bias to search algorithm design rather than to the case-by-case prejudices of search engine operators, it does not address user-specific bias that may emerge from personalized judgments.

Amazon’s automatic recommendations also make mistakes; for a period of time the Official Lego Creator Activity Book was paired with a “perfect partner” suggestion: American Jihad: The Terrorists Living Among Us Today. If such mismatched pairings happen when discussing people rather than products, rare mismatches could have worse effects while being less noticeable since they are not universal. The kinds of search systems that say which people are worth getting to know and which should be avoided, tailored to the users querying the system, present a set of due process problems far more complicated than a state-operated system or, for that matter, any system operated by a single party. The generative capacity to share data and to create mash-ups means that ratings and rankings can be far more emergent—and far more inscrutable.

As biometric readers become more commonplace in our endpoint machines, it will be possible for online destinations routinely to demand unsheddable identity tokens rather than disposable pseudonyms from Internet users. Many sites could benefit from asking people to participate with real identities known at least to the site, if not to the public at large. eBay, for one, would certainly profit by making it harder for people to shift among various ghost accounts. One could even imagine Wikipedia establishing a “fast track” for contributions if they were done with biometric assurance, just as South Korean citizen journalist newspaper OhmyNews keeps citizen identity numbers on file for the articles it publishes. These architectures protect one’s identity from the world at large while still making it much more difficult to produce multiple false “sock puppet” identities. When we participate in other walks of life—school, work, PTA meetings, and so on—we do so as ourselves, not wearing Groucho mustaches, and even if people do not know exactly who we are, they can recognize us from one meeting to the next. The same should be possible for our online selves.

As real identity grows in importance on the Net, the intermediaries demanding it ought to consider making available a form of reputation bankruptcy. Like personal financial bankruptcy, or the way in which a state often seals a juvenile criminal record and gives a child a “fresh start” as an adult, we ought to consider how to implement the idea of a second or third chance into our digital spaces. People ought to be able to express a choice to de-emphasize if not entirely delete older information that has been generated about them by and through various systems: political preferences, activities, youthful likes and dislikes. If every action ends up on one’s “permanent record,” the press conference effect can set in. Reputation bankruptcy has the potential to facilitate desirably experimental social behavior and break up the monotony of static communities online and offline. As a safety valve against excess experimentation, perhaps the information in one’s record could not be deleted selectively; if someone wants to declare reputation bankruptcy, we might want it to mean throwing out the good along with the bad. The blank spot in one’s history indicates a bankruptcy has been declared—this would be the price one pays for eliminating unwanted details.
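The all-or-nothing deletion rule proposed above can be expressed as a design constraint: the record store's interface simply offers no selective delete, and the wipe itself stays visible. A hypothetical sketch (the class and its structure are mine, not any real system's):

```python
from datetime import date

class ReputationRecord:
    """Hypothetical store in which entries can be added but never
    removed one at a time; the only way to shed history is to declare
    bankruptcy, which wipes everything and leaves a visible marker."""

    def __init__(self):
        self.entries = []
        self.bankruptcies = []  # dates of declared wipes, always visible

    def add(self, entry):
        self.entries.append(entry)

    def declare_bankruptcy(self, on=None):
        # The good is thrown out along with the bad, and the blank
        # spot itself is recorded: the price of eliminating details.
        self.entries = []
        self.bankruptcies.append(on or date.today())

record = ReputationRecord()
record.add("thumbs-up: returned a lost wallet")
record.add("thumbs-down: cut someone off in traffic")
record.declare_bankruptcy()
assert record.entries == []           # no selective deletion exists
assert len(record.bankruptcies) == 1  # the wipe stays on the record
```

The safety valve lives in the interface: because there is no method for deleting a single unwanted entry, experimentation with one's record carries the cost the text describes.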

The key is to realize that we can make design choices now that work to capture the nuances of human relations far better than our current systems, and that online intermediaries might well embrace such new designs even in the absence of a legal mandate to do so.

(And, as long as we’re talking about reputation — you can check out Dan Solove’s excellent book on the future of reputation here.)


Net neutrality: the FCC takes back the ball

There’s some movement in the U.S. network neutrality debates under a rather dry heading: “Further Inquiry Into Two Under-Developed Issues in the Open Internet Proceeding.”

So far: a couple weeks ago Google and Verizon announced a “legislative framework proposal” to “preserve the open Internet and the vibrant and innovative markets it supports, to protect consumers, and to promote continued investment in broadband access,” blogged here.  The proposal emerged in the vacuum created by a federal court ruling overturning the FCC’s regulation of Comcast’s throttling of peer-to-peer traffic, and it was criticized harshly by a number of open Internet advocates as an undue boon to the network providers’ interests.

Now the FCC has re-entered the picture with its September “further inquiry,” and done so with a deft touch.  First, by seeking additional comments, the document makes it clear that its “NPRM” — a proceeding to craft rules to promote an open Internet that many thought the Comcast decision had derailed — is still alive.  Exactly how any rules will be made is not discussed; instead, the FCC notes the areas where consensus has been reached: that some conception of net neutrality is a good idea, at least on non-wireless platforms; that network practices should be disclosed; that net neutrality shouldn’t preclude reasonable network management practices by ISPs; and that case-by-case, flexible adjudication beats lengthy and complex rules.

That’s an astute move: to the extent that the Google/Verizon document represented horse trading — “I’ll agree that net neutrality should apply to wired networks if you agree that it’s too soon to talk about rules for wireless” — the FCC has moved rhetorically to lock in the parts of the deal that most embrace an open Internet by pointing out that there’s now consensus on those points.

That leaves the most controversial parts of the agreement as objects for further inquiry, and that’s where the FCC is looking for more public comments.  These “under-developed issues” are the confusing “specialized services” and the less confusing (but no less contested) proposal to exempt wireless (or at least grant it temporary relief) from net neutrality rules.

There, the FCC offers a lucid and measured summary of the state of play on each issue, along with some initial thoughts on ways to resolve each, drawing from among the many comments already received from industry and public interest participants.  For specialized services, there’s the question of what happens when a network provider wants to use the pipe it has into someone’s house or business for something independent of vanilla Internet broadband.  There are legacy examples of this: the same wires that carry a phone company’s Internet DSL service carry regular old telephone service, too; and the same cable company coax that carries broadband also carries cable TV.  Indeed, those “specialized” services used to be the main ones, with the Internet as the afterthought.

It would be strange to say that the same net neutrality principles that mean Comcast can’t favor access to one Web site over another also ought to mean that Comcast can’t favor MTV over Animal Planet in basic cable.  Basic cable is Comcast’s to fill as it pleases, conducting all sorts of deals to figure out whether a new channel should be cute cats or pay-per-view boxing.  (To be sure, this is with the exception of the byzantine and ill-considered “must carry” rules that give legacy TV broadcasters a chance to demand a corresponding cable channel without having to negotiate a deal for it — while also allowing those broadcasters to refuse to allow the cable company to carry their channels unless they cut a deal.  That’s Congress’s mess, though, not the FCC’s.)

So the strongest view against specialized services might be: OK, network providers, maybe you keep your legacy specialized services, but other than that, we want you to use your bandwidth for the open Internet.  But then one could see new specialized services shoehorned in via one’s telephone (“Look, a new handset with a screen to plug into the regular phone line!”) or cable (“A new channel called the Best of YouTube, with fast forward, rewind, and favorite buttons on my cable remote!”).  The puzzle is: if we want to give those legacy modalities a chance to freshen up, or even contemplate new kinds of specialized services not anchored in the old ones, can we do it without the prospect of diminishing the open Internet that’s currently so popular over those very wires?  The Internet tail stands to wag the telco/cable/TV dog to which it was first attached; how to mediate between them now, if at all, should the dog (and its more proprietary frame) stage a comeback?

Check out pp. 2-4 of the FCC’s document for its own view of the issue, along with some approaches that could help situate specialized services without simply banning them.  I’m intrigued with the idea of guaranteed capacity for regular Internet service — in other words, new specialized services should not be used to shrink the pie for regular Internet offerings.  Experimentation could continue apace on the open Internet, with some of its best results then bottled up and offered sleekly through a more appliancized offering.  So long as there’s still general public access to and broad usage of the regular Internet, a hybrid ecosystem could offer the best of both worlds.  In a way, it’s preferable to have generative and “sterile” environments side-by-side than to have generative environments compete with “contingently generative” ones.  The latter is like the case of the iPhone — to a developer, it acts just like the open PC environment, where anyone can code for it and reach consumers, until it doesn’t — Apple bans a particular app or changes its rules after achieving huge market share.

And speaking of mobile smartphones, there’s then the question of wireless.  Some net neutrality advocates might ask: what question?  Wireless should be treated the same as everything else — as the Internet’s protocols intended.  Others, most directly the wireless carriers themselves, say that nondiscrimination rules will constrain their investment in building out the more nascent wireless infrastructure.  Again the FCC lays out some options, and for the first time that I’ve seen, asks the question not only of net neutrality for use of wireless bandwidth, but app neutrality for developers’ access to a smartphone platform’s app store.  The Future of the Internet has my own views on that question, and the FCC neatly asks if perhaps rules on one could help justify an absence of rules on the other: maybe app neutrality would make us worry less about network discrimination, or net neutrality could still permit app discrimination.

Despite a nondescript title that suggests it’s just another abstruse government document, the FCC’s further inquiry is worth a read.  And its contents signal that regulators can be reassuringly versed in the topics they’ve taken up, even as their power to regulate remains in question.  There are still some moves the FCC could make to create net neutrality rules in the absence of a new statute, and without mentioning (much less taking) them, the invitation to comment is one the major parties to the debate won’t ignore.


Has the Future of the Internet happened?

I wrote the Future of the Internet — And How to Stop It, and its precursor law review article the Generative Internet, between 2004 and 2007. I wanted to capture a sense of just how bizarre the Internet — and the PC environment — were.  How much the values and assumptions of, metaphorically, dot-org and dot-edu, rather than just dot-com, were built into the protocols of the Internet and the architecture of the PC.  The amateur, hobbyist, backwater origins of the Internet and the PC were crucial to their success against more traditional counterparts, but also set the stage for a host of new problems as they became more popular.

The designers and makers of the Internet and PC platforms did not expect to come up with the applications for each — they figured unknown others would do that.  So, unlike CompuServe, AOL, or Prodigy, the Internet didn’t have a main menu.  And once for-profit ISPs started rolling the Internet out to anyone willing to subscribe, there came to be a critical mass of eyeballs ready to experience varieties of content and services — the providers of which didn’t have to negotiate a business deal with some Internet Overseer the way they did for CompuServe et al.  Some content and services could be paid for, at least as soon as credit cards could function cheaply online, and others could be free — either because of a separate business model like advertising, or because the provider didn’t feel inclined to monetize visiting eyeballs.  Tim Berners-Lee could invent the World Wide Web and have it run as just another application, seeking neither a patent on its workings nor an architecture for it that placed him in a position of control.  Today, of course, the Web is so ubiquitous that people often confuse it with the Internet itself.

When bad apples emerge on an unmediated platform — and they do as soon as there are enough people using it to make it worth subverting — it can be difficult to deal with them.  If someone spams you on Facebook, the first step is to make it a customer service issue — complain to Facebook, and they can discipline the account.  If someone spams you on email, it’s much trickier, because there’s no Email Manager — just lots of email servers, some big, some little, and many of them with accounts hacked by others.  That’s one reason why a newer generation of Internet users prefers Facebook or Twitter messaging to old-fashioned email.  Same for the PC itself: with no PC Manager, there’s no easy way to get help or exact justice when exposed to malware.  I worried that malware in particular, and cybersecurity in general, would be a fulcrum point in pushing “regular” people away from the happenstance of generative platforms designed by nerds who figured they could worry about security later.  Hence a migration to less generative platforms managed like services rather than products.

I understand and sympathize with that migration.  But it’s important to recognize its downsides — particularly if one is among the libertarian set, which has included some of the most vocal critics of the Future of the Internet.  Whether software developer or user, volunteering control over one’s digital environment to a Manager means that the manager can change one’s experience at any time — or worse, be compelled to by outside pressures.  I write about this prospect at length here.  The famously ungovernable Internet suddenly becomes much more governable, an outcome most libertarian types would be concerned about.  Many Internet freedom proponents aren’t willing to argue for or trust those freedoms to a “mere” political process; they prefer to see them de facto guaranteed by a computing environment largely immune to regulation.


“Keep the core neutral”

Internet founding parent David Clark was a guest in my cyberlaw class in the fall of 1997. We talked about Internet governance, although I don’t think anyone (including us) called it that yet. ICANN wasn’t yet a gleam in the U.S. Department of Commerce’s eye, but even then the amazing state of the domain name system — how it came into being, how it was managed — made for an extraordinary story.

Now lawyers and diplomats are all over the subject, and ICANN has ballooned into a multi-million dollar organization. I’ve argued elsewhere that arguments about ICANN and domain names don’t much matter except to those who want a piece of the financial pie, and I think predictions of domain names’ unimportance have largely proven true. Sure, IBM would not be happy if it lost its domain name, but it’s at no risk of having that happen, and the fact is that most people find things by Googling them rather than by entering a domain name. So long as search engines can crawl to various destinations, a world in which we couldn’t use mnemonic domain names wouldn’t be much different than the one we have now.

With that background, I’ve been thinking about the global petition to “keep the core neutral” signed by fellow travelers like Wendy Seltzer, Larry Lessig, and David Post. Is it something worth signing?



The End of Email

Like others, in the past week I’ve noticed a major uptick in the spam I receive on longstanding email addresses. It’s gotten to the point where I’ve configured Gmail to scoop up the mail from those boxes so it can do its own junk-mail sorting, and then I POP the mail into my Eudora client from Gmail. That’s taken me from downloading mail that was more than 9 parts in 10 spam to less than 1 in 10 — with the spam sitting harmlessly on Gmail.
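For the curious, the plumbing here is nothing exotic: a desktop client like Eudora just speaks POP3 over SSL to Gmail, which has already done its spam sorting by the time the client connects. A minimal Python sketch of that last hop (the username and password are placeholders, not real credentials) might look like:

```python
import poplib

# Gmail's POP3-over-SSL endpoint -- what a desktop client is configured with.
GMAIL_POP_HOST = "pop.gmail.com"
GMAIL_POP_PORT = 995  # standard port for POP3 over SSL

def waiting_message_count(user, password,
                          host=GMAIL_POP_HOST, port=GMAIL_POP_PORT):
    """Log in over SSL and return how many messages remain after
    Gmail's own junk-mail filtering has run server-side."""
    conn = poplib.POP3_SSL(host, port)
    try:
        conn.user(user)
        conn.pass_(password)
        count, _octets = conn.stat()  # STAT gives (message count, mailbox bytes)
        return count
    finally:
        conn.quit()
```

A real client would go on to `RETR` each message and either delete it or leave it on the server; the point is simply that the spam never reaches the local inbox at all.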

But this is a good time to point out something beyond the cat-and-mouse of spam-and-filter: email is dying.
