Archive for the ‘Architecture’ Category
posted by Danielle Citron
As my co-blogger Gerard notes, today is SOPA protest day. Sites like Google and WordPress have censored their logos or offered a way to contact your congressperson, though they remain live. Other sites like Wikipedia, Reddit, and Craigslist have shut down, and more are set to shut down at some point today. There’s lots of terrific commentary on SOPA, which is designed to tackle the problem of foreign-based websites that sell pirated movies, music, and other products–but with a heavy hand that threatens free expression and due process. The Wall Street Journal’s Amy Schatz has this story and Politico has another helpful piece; The Hill’s Brendan Sasso’s Twitter feed has lots of terrific updates. Mark Lemley, David Levine, and David Post carefully explain why we ought to reject SOPA and the PROTECT IP Act in “Don’t Break the Internet,” published by Stanford Law Review Online. In the face of the protest, House Judiciary Committee Chairman Lamar Smith (R-TX) vowed to bring SOPA to a vote in his committee next month. “I am committed to continuing to work with my colleagues in the House and Senate to send a bipartisan bill to the White House that saves American jobs and protects intellectual property,” he said. So, too, Senator Patrick Leahy (D-VT) pushed back against websites planning to shut down today in protest of his bill. “Much of what has been claimed about the Senate’s PROTECT IP Act is flatly wrong and seems intended more to stoke fear and concern than to shed light or foster workable solutions. The PROTECT IP Act will not affect Wikipedia, will not affect reddit, and will not affect any website that has any legitimate use,” Chairman Leahy said. Everyone’s abuzz on the issue, and rightly so. I spoke on a panel on intermediary liability at the Congressional Internet Caucus’ State of the Net conference, and everyone wanted to talk about SOPA.
I’m hoping that the black out and other shows of disapproval will convince our representatives in the House and Senate to back off the most troubling parts of the bill. As fabulous guest blogger Derek Bambauer argues, we need to bring greater care and thought to the issue of Internet censorship. Cybersecurity is at issue too, and we need to pay attention. Derek may be right that both bills may go nowhere, especially given Silicon Valley’s concerted lobbying efforts against the bills. But we will have to watch to see if Representative Smith lives up to his promise to bring SOPA back to committee and if Senator Leahy remains as committed to PROTECT IP Act in a few weeks as he is today.
January 18, 2012 at 10:11 am Posted in: Architecture, Civil Rights, Current Events, Cyber Civil Rights, Cyberlaw, First Amendment, Law Talk, Media Law, Social Network Websites, Technology, Web 2.0
posted by Derek Bambauer
Thanks to Danielle and the CoOp crew for having me! I’m excited.
Speaking of exciting developments, it appears that the Stop Online Piracy Act (SOPA) is dead, at least for now. House Majority Leader Eric Cantor has said that the bill will not move forward until there is a consensus position on it, which is to say, never. Media sources credit the Obama administration’s opposition to some of the more noxious parts of SOPA, such as its DNSSEC-killing filtering provisions, and also the tech community’s efforts to raise awareness. (Techdirt’s Mike Masnick has been working overtime in reporting on SOPA; Wikipedia and Reddit are adopting a blackout to draw attention; even the New York City techies are holding a demonstration in front of the offices of Senators Kirsten Gillibrand and Charles Schumer. Schumer has been bailing water on the SOPA front after one of his staffers told a local entrepreneur that the senator supports Internet censorship. Props for candor.) I think the Obama administration’s lack of enthusiasm for the bill is important, but I suspect that a crowded legislative calendar is also playing a significant role.
Of course, the PROTECT IP Act is still floating around the Senate. It’s less worse than SOPA, in the same way that Transformers 2 is less worse than Transformers 3. (You still might want to see what else Netflix has available.) And sponsor Senator Patrick Leahy has suggested that the DNS filtering provisions of the bill be studied – after the legislation is passed. It’s much more efficient, legislatively, to regulate first and then see if it will be effective. A more cynical view is that Senator Leahy’s move is a public relations tactic designed to undercut the opposition, but no one wants to say so to his face.
I am not opposed to Internet censorship in all situations, which means I am often lonely at tech-related events. But these bills have significant flaws. They threaten to badly weaken cybersecurity, an area that is purportedly a national priority (and has been for 15 years). They claim to address a major threat to IP rightsholders despite the complete lack of data showing that the threat is anything other than chimerical. They provide scant procedural protections for accused infringers, and confer extraordinary power on private rightsholders – power that will, inevitably, be abused. And they reflect a significant public choice imbalance in how IP and Internet policy is made in the United States.
Surprisingly, the Obama administration has it about right: we shouldn’t reject Internet censorship as a regulatory mechanism out of hand, but we should be wary of it. This isn’t the last stage of this debate – like Westley in The Princess Bride, SOPA-like legislation is only mostly dead. (And, if you don’t like the Obama administration’s position today, just wait a day or two.)
Cross-posted at Info/Law.
January 16, 2012 at 7:28 pm Posted in: Architecture, Civil Procedure, Constitutional Law, Culture, Cyber Civil Rights, Cyberlaw, First Amendment, Google & Search Engines, Intellectual Property, Media Law, Movies & Television, Politics, Technology, Web 2.0
posted by Danielle Citron
Bloomberg Businessweek reports on retailers’ use of camera surveillance to glean intelligence from shoppers’ behavior. A company called RetailNext, for instance, runs its software through a store’s security camera video feed to analyze customer behavior. It describes itself as the “leader in real-time in-store monitoring, enabling retailers and manufacturers to collect, analyze and visualize in-store data.” According to the company, it “uses best-in-class video analytics, on-shelf sensors, along with data from point-of-sale and other business systems, to automatically inform retailers about how people engage in their stores.” RetailNext’s software can integrate data from hardware such as RFID chips and motion sensors to track customers’ movements. The company explains that it “tracks more than 20 million shoppers per month by collecting data from more than 15,000 sensors in retail stores.” Its service apparently helps stores figure out where to place certain merchandise to boost sales. T-Mobile uses similar technology from another firm, 3VR, whose software tracks how people move around its stores, how long they stand in front of displays, and which phones they pick up and for how long. 3VR is testing facial-recognition software that can identify shoppers’ gender and approximate age. Businessweek explains that the “software would give retailers a better handle on customer demographics and help them tailor promotions.” What we are seeing is, according to 3VR’s CEO, just “scratching the surface,” as someday “you’ll have the ability to measure every metric imaginable.”
Indeed. Little imagination is needed to predict the future in light of our present. As Joseph Turow‘s important new book The Daily You: How the New Advertising Industry Is Defining Your Identity and Worth (Yale University Press) explores, the data collection and analysis of individuals is breathtaking in scope. In the name of better, more relevant advertising and marketing efforts, companies like Acxiom have databases teeming with our demographic data (age, gender, race, ethnicity, address, income, marital status), interests, online and offline spending habits, and health status based on our purchases and online comments (diabetic, allergy sufferer, and the like). Consumers are sorted into categories such as “Corporate Clout,” “Soccer and SUV,” “Mortgage Woes,” and “On the Edge.” eXelate gathers online data on over 200 million unique individuals per month through deals with hundreds of sites: their demographics, social activities, and social networks. Advertisers can add even more data to eXelate’s cookies – data from Nielsen, which includes Census Bureau data, as well as data brokers’ digital dossiers. Data firms like Lotame track the comments that people leave on sites and categorize them. Now, let’s consider weaving in the facial recognition software and retailer cameras of companies like 3VR and RetailNext. And to really top things off, let’s think about linking all of this data to cellphone location information. The surveillance of networked spaces would be totalizing.
Turow’s book exposes important costs of these developments. This post will discuss a few; hopefully, I can have Professor Turow on for a Bright Ideas feature. This sort of targeting and hyper-surveillance leaves many with far narrower options and exposes them to social discrimination. Marketers use these databases to determine whether Americans are worthy “targets” or not-worth-bothering-with “waste.” For the “Soccer and SUV” moms between 35 and 45 who live on the West Coast and want to buy a small car, car companies may offer serious discounts via online advertisements and e-mail. But their “On the Edge” counterparts get left in the cold with higher prices – why bother trying to attract people who don’t pay their debts? All of this sorting encourages media to offer soft stories designed to meet people’s interests, as secretly determined by those gathering and analyzing our networked lives. This discussion brings to mind another important read: Julie Cohen‘s Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press). As Professor Cohen thoughtfully explores, this sort of surveillance has a profound impact on the creative play of our everyday lives. It creates hierarchies among those watched and systematizes difference. I’ll have lots more to say about Cohen’s take on our networked society more generally, soon. In March, we will be hosting an online symposium on her book – much to look forward to in the new year.
posted by Danielle Citron
In what can only be described as the worst side of humanity, the bulletin board Dreamboard hosted a members-only sharing of child pornography, particularly of children under 12. New members could join the board only if they posted child pornography. Members had to continue to post images of child porn every 50 days or face removal. The rules of the board, printed in English, Russian, Japanese, and Spanish, included: (1) “Keep the girls under 13, in fact, I really need to see 12 or younger to know your[sic] a brother,” (2) “don’t avoid nudity in previews. I will NOT accept you if there’s no nudity. And my definition of nudity is pussy or anal in the shot. You just waste your own time if you don’t do this. Because you will not get in, if you don’t follow the rules.” One section of Dreamboard was titled “Super Hardcore,” and the rules required images and videos of “very young kids, getting fucked, and preteens in distress, and or crying. . . . If a girl looks totally comfortable, she’s not in distress, and it does NOT belong in this section.” This part of the site featured images of adults having violent sexual intercourse with very young children, including infants. One file was entitled “2yo assfuck she cries for mommy nasty pthc pedo 1 yo 3 yo 4 yo.” The board amassed over 120 terabytes of violent sexual rape and abuse of children.
According to the rules of the site, members were to use encryption technologies to prevent detection. The rules specified precisely which encryption technologies and proxy servers should be used and which should be avoided. Members did not use their real names, but instead screen names to conceal their identities. All of this suggests that board members went to great lengths to secure their anonymity.
Early this month, Attorney General Eric Holder, Jr. announced that federal investigators had charged 72 people with violating child pornography laws and that more than 50 people had been arrested in the United States. The defendants included doctors, lawyers, police officers, and a Navy commander, according to the Ellis County Observer. Thirteen of those charged have pled guilty, and four members have been sentenced to between 20 and 30 years in prison. Around 600 people from around the world were members of the bulletin board, which has been shut down. The bulletin board used a server in Atlanta. As Assistant Attorney General Lanny Breuer explained, the site “was a living horror.” John Morton, director of Immigration and Customs Enforcement, declined to say how investigators overcame the technological precautions used by some of the members. He did tell the New York Times: “To those inclined to abuse small children, know this: this isn’t a place on the Internet or the planet in which you are truly safe. It may take us some time, it may take us some effort, but we will find you regardless of a screen name, a proxy server or an encryption effort, period.”
August 18, 2011 at 11:48 am Posted in: Anonymity, Architecture, Criminal Law, Criminal Procedure, Cyber Civil Rights, Privacy, Privacy (Law Enforcement), Social Network Websites
posted by Danielle Citron
In The New York Times, Stephanie Rosenbloom asks readers to “imagine a world in which we are assigned a number that indicates how influential we are.” That number would help determine our success at getting a job, a hotel-room upgrade, a break on a service, or free samples at the store. As Rosenbloom tells us, imagine no more: companies such as Klout, PeerIndex, and Twitter Grader are mining our social media activities and assigning us influence scores. Social scoring is based on our online social network activity, including the number of followers and friends we have and the extent to which our online activity gets people moving. If you recommend a salon to your social network friends and they follow suit, your good word does double duty. You’re doing a good thing for your friends and the salon (let’s hope), and now you’re doing good for yourself. Because you have inspired people to take action, your influence score may rise. Already, people with high scores get preferential treatment from retailers. More than 2,500 marketers now use Klout’s data. Audi will begin offering Facebook users promotions based on their Klout scores. The Las Vegas Palms Hotel and Casino is using Klout data to give highly rated guests an upgrade or tickets to a show. In the future, those scores could be used by prospective employers, friends, and dates.
On the one hand, this market trend has something important to commend it: its visibility. Consumers can find out their influence scores and work to raise them. By contrast, the impact of behavioral advertising is often hidden. We are tracked and scored in databases and have no idea how it shakes out. Joe Turow’s excellent book Niche Envy explains that consumers know very little about how their data personalizes market transactions. Some individuals may end up as haves and others as have-nots, but neither group knows the extent of it. As Turow explains, “our simple corner store is turning into a Marrakech bazaar–except that the merchant has been analyzing our diaries while we negotiate blindfolded, behind a curtain, through a translator.” On the other hand, the information isn’t perfect and the algorithms are secret, so people may waste time doing things that they believe will raise their scores but don’t. But that isn’t deeply troubling – it’s not as if every job application or blog post has the effect we hope it might. What’s troubling is the trend’s implications for society and culture. It seems old school to say that people blog, make friends, and engage in online chats to play, experiment, and create culture. Now, they may feel pressured to do all of these things as a matter of economic necessity. We may forgo experimentation for product endorsements, and idle chatter for better job prospects. This makes our children’s choice to engage with social media seem like less of a choice than a carefully cultivated necessity. It also spells far more trouble for people who are already victimized, those whom cyber mobs target with lies, threats, technical attacks, and privacy invasions. They go offline or write under pseudonyms to protect themselves. We now know that those choices (if we can call them that) carry economic costs on top of the many other costs that my work discusses. I imagine there’s more to this influence score story, but I thought I’d share my initial take.
posted by Danielle Citron
The question that I had been dreading came at last: “Mom, can I have a Facebook page?” My daughter provided a strong defense: she’s 13, so she meets Facebook’s Terms of Service age requirement; she’s nearly an adult in her religion’s eyes (her bat mitzvah is in a week); past practice proves she’s responsible; and, well, she feels ready. (And, I just discovered, she’s done her homework: see this Yahoo! Answers “My mom won’t let me get a Facebook page, how do I convince her?” thread that I found on my computer.)
Next came the conversation. We talked about how social media activity is increasingly part of one’s biography. Anything said and done in social network spaces becomes part of who you are in our Information Age. Colleges may ask for your Facebook password. Over 70% of employers look at social media data when interviewing and hiring (and, sad to say, the outcomes are grim: over 60% of the time, applicants don’t get the interview or job because of their social network profiles). It’s not just what you post that speaks volumes – your social network (friends and their friends) tells some of your story for you. There goes any control that you thought you had. FB users often wrestle with whether they should de-friend those whose online personas don’t match their sensibilities (or the way in which they want others to perceive them). This means that users need to keep a careful eye on their friends’ profiles (as well as ever-changing privacy settings).
That’s a lot of responsibility. Or, as Bill Keller of the New York Times put it when he allowed his 13-year-old daughter to join Facebook, he felt “a little as if I had passed my child a pipe of crystal meth.” Beyond the potential privacy and reputational concerns that accompany social media use, an online life has other potential perils, like overuse (and thus inattention to studies, face-to-face family time, etc.) that cyber-pessimists underscore (see Nicholas Carr’s The Shallows). And bullying, serious harassment, and bigotry increasingly appear in mainstream social media in ways that kids can’t necessarily avoid (my work explores those problems, see here, here, and here, as well as terrific work by guest bloggers Ari Waldman and Mary Anne Franks). Of course, there’s also lots of positive stuff emerging from these networked spaces. Social media outlets like Facebook allow us to enact our personalities. They let us express ourselves in ever-changing and expanding ways. FB and other outlets host civic engagement, as Helen Norton and I have emphasized.
I wonder, too, if my kid has a meaningful choice. Can digital natives really stay away from social media if all of their friends socialize there? And will employers and colleges expect that applicants partake in these activities because everyone else does? Someday, will resisting having a Facebook profile express something negative about you? Will it signal that you’re not socially adjusted or successful? As Scott Peppet underscores in his work, we may be forced to give up our privacy to show that we are indeed healthy, social, smart, and the like. That’s a lot to process, right? I’m going to chew on this a while. Your thoughts are most welcome!
posted by Danielle Citron
In Technological Due Process, 85 Wash. U. L. Rev. 1249 (2008), I explored the promise and perils of the increasing automation of administrative decision-making. The automated administrative state took root after the convergence of a number of trends — the budget shortfalls of the 1990s, the falling costs and increased performance of information systems, and the emergence of the Internet. Government officials saw computerized automation as an efficient way to reduce operating costs: Automated systems meant less paperwork and fewer staff. Today, all states automate a significant portion of the administration of their public benefit programs. More than fifty federal agencies execute policy with data-matching and data-mining programs. As a result, agencies increasingly use information systems to make decisions about important individual rights.
Technological Due Process identified three central problems with automated administrative systems. First, when programmers translate policy into code, they inevitably distort it, thus embedding incorrect policy into systems. Second, data-matching programs misidentify individuals because they use crude algorithms that cannot distinguish between similar names. Last, automated systems often fail to provide adequate notice to individuals because they lack audit trails that capture why government agencies take particular actions.
Colorado’s automated public benefits system, known as CBMS, served as an important case study for my work. Responses to open-records requests revealed that from September 2004 to April 2007, programmers embedded over 900 incorrect rules regarding Medicaid, food stamps, and other public benefits into CBMS. As a result, CBMS terminated the Medicaid benefits of patients with breast cancer based on income and asset limits unauthorized by federal or state law. It denied food stamps to individuals with prior drug convictions in violation of Colorado law. And it demanded that eligibility workers ask applicants if they were “beggars,” even though neither federal nor state law required an answer to that question for the provision of public benefits. Moreover, because CBMS lacked audit trails, individuals often received wholly deficient notice when the system cut or terminated their benefits. At times, individuals received no notice.
The past four years have seen little progress. Although state officials in 2009 thought that entering into a $48.6 million, four-year contract with Deloitte Consulting would help fix these problems, matters have arguably gotten worse. CBMS, for instance, has delayed processing applications for benefits in 70% of cases (in violation of federal law). It continues to terminate individuals’ public benefits without notice. (One case led to the death of a nine-year-old boy after a pharmacy would not fill his asthma prescription despite proof that his family qualified for Medicaid help.) Business school professor Don McCubbrey, whom I interviewed for Technological Due Process, recently explained to the Denver Post that the recent failures cannot be blamed on the thousands of new Medicaid and other benefit applications brought on by the recession. In his view, a “system that large should be able to scale.” According to Ed Kahn of the Colorado Center on Law and Policy, the system hasn’t just failed to fulfill its federal and state requirements but has “regressed.”
posted by Danielle Citron
Time magazine recently did a true-to-form story on Wikipedia, in which guest editors (and our very own featured author) Jonathan Zittrain (see here too), Robert McHenry, Benjamin Mako Hill, and Mike Schroepfer assisted in writing/editing/re-writing a feature entitled Wikipedia’s “Ten Years of Inaccuracy and Remarkable Detail.” As the piece explained, Wikipedia just celebrated its 10th birthday. The site has 17 million entries in more than 250 languages, quite a feat given that Encyclopaedia Britannica has only 120,000 entries, and only in English. The Time wiki-like piece notes that Wikipedia has a “diverse, international body of contributors.”
According to The New York Times, most contributors are male. More specifically, “less than 15 percent of its hundreds of thousands of contributors are female.” This, in turn, has skewed the site’s topics and emphasis along gender lines. Wikimedia’s executive director Sue Gardner explains that topics favored by girls, such as friendship bracelets, can seem short when compared with lengthy articles on things boys typically like, such as toy soldiers or baseball cards. The New York Times notes that a category with five Mexican feminist writers might not seem so impressive when compared with 45 articles on characters in “The Simpsons.”
Why is this so? Joseph Reagle, a fellow at the Berkman Center for Internet and Society at Harvard and author of “Good Faith Collaboration: The Culture of Wikipedia,” explains that Wikipedia’s early contributors shared “many characteristics with the hard-driving hacker crowd,” including an ideology that “resists any efforts to impose rules or even goals like diversity, as well as a culture that may discourage women.” He notes that adopting an ideology of openness means being “open to very difficult, high-conflict people, even misogynists.” The demographics of Wikipedia’s editors may also stem, in part, from the tendency of women to be “less willing to assert their opinions in public.”
How Wikipedia is now, and has been, responding is worth noting. Sue Gardner told the Times that she hopes to raise the share of women contributors through subtle persuasion and outreach to welcome newcomers to Wikipedia. Dave Hoffman and Salil Mehra’s terrific piece Wikitruth Through Wikiorder demonstrates that the site has already fostered efforts to create a more inclusive environment. As Hoffman and Mehra explain, Wikipedia has an Arbitration Committee whose volunteer members rule on disputes and set forth concrete rules on how users should behave. The Arbitration Committee has sanctioned users who make homophobic, ethnic, racial or gendered attacks or who stalk and harass others. According to Hoffman and Mehra’s empirical study, in cases involving either impersonation or anti-social conduct like hateful attacks, the Arbitration Committee will ban the user 21% of the time. Wikipedia’s more than 1,500 administrators, in turn, enforce those rules. Wikipedia also permits users to report impolite, uncivil, or other difficult communications with editors on its Wikiquette alerts noticeboard.
posted by Danielle Citron
The U.K.’s freedom of information commissioner, Christopher Graham, recently told The Guardian that the WikiLeaks disclosures irreversibly altered the relationship between the state and the public. As Graham sees it, the WikiLeaks incident makes clear that governments need to be more open and proactive, “publishing more stuff, because quite a lot of this is only exciting because we didn’t know it. . . WikiLeaks is part of the phenomenon of the online, empowered citizen . . . these are facts that aren’t going away. Government and authorities need to wise up to that.” If U.K. officials take Graham seriously (and I have no idea if they will), the public may see more of government. Whether that more in fact provides insights to empower citizens or simply gives the appearance of transparency is up for grabs.
In the U.S., few officials have called for more transparency after the release of the embassy cables. Instead, government officials have successfully pressured internet intermediaries to drop their support of WikiLeaks. According to Wired, Senator Joe Lieberman, for instance, was instrumental in persuading Amazon.com to kick WikiLeaks off its web hosting service. Senator Lieberman has suggested that Amazon, as well as Visa and PayPal, came to their own decisions about WikiLeaks. Lieberman noted:
“While corporate entities make decisions based on their obligations to their shareholders, sometimes full consideration of those obligations requires them to act as responsible citizens. We offer our admiration and support to those companies exhibiting courage and patriotism as they face down intimidation from hackers sympathetic to WikiLeaks’ philosophy of irresponsible information dumps for the sake of damaging global relationships.”
Unlike the purely voluntary decisions that Internet intermediaries make with regard to cyber hate, see here, Amazon’s response raises serious concerns about what Seth Kreimer has called “censorship by proxy.” Kreimer’s work (as well as Derek Bambauer‘s terrific Cybersieves) explores American government’s pressure on intermediaries to “monitor or interdict otherwise unreachable Internet communications” to aid the “War on Terror.”
Legislators have also sought to ensure opacity of certain governmental information with new regulations. Proposed legislation (spearheaded by Senator Lieberman) would make it a federal crime for anyone to publish the name of a U.S. intelligence source. The Securing Human Intelligence and Enforcing Lawful Dissemination (SHIELD) Act would amend a section of the Espionage Act that forbids the publication of classified information on U.S. cryptographic secrets or overseas communications intelligence. The SHIELD Act would extend that prohibition to information on human intelligence, criminalizing the publication of information “concerning the identity of a classified source or information of an element of the intelligence community of the United States” or “concerning the human intelligence activities of the United States or any foreign government” if such publication is prejudicial to U.S. interests.
Another issue on the horizon may be the immunity afforded providers or users of interactive computer services who publish content created by others under section 230 of the Communications Decency Act. An aside: section 230 is not inconsistent with the proposed SHIELD Act, as it excludes federal criminal claims from its protections. (This would not mean that website operators like Julian Assange would be strictly liable for others’ criminal acts on their services; the question would be whether a website operator’s actions violated the SHIELD Act.) Now for my main point: Senator Lieberman has expressed an interest in broadening the exemptions to section 230’s immunity to require the removal of certain content, such as videos featuring Islamic extremists. Given his interest and the current concerns about security risks related to online disclosures, Senator Lieberman may find this an auspicious time to revisit section 230’s broad immunity.
January 7, 2011 at 1:25 pm Posted in: Anonymity, Architecture, Current Events, Cyberlaw, First Amendment, Google & Search Engines, Government Secrecy, Privacy (Electronic Surveillance), Privacy (National Security), Technology
posted by Danielle Citron
Harvard University Press recently published The Offensive Internet: Speech, Privacy, and Reputation, a collection of essays edited by Saul Levmore and Martha Nussbaum. Frank Pasquale, Dan Solove, and I have chapters in the book, as do Saul Levmore, Martha Nussbaum, Cass Sunstein, Anupam Chander, Karen Bradshaw and Souvik Saha, Brian Leiter, Geoffrey Stone, John Deigh, Lior Strahilevitz, and Ruben Rodrigues. Stanley Fish just reviewed the book at NYTimes.com.
posted by Frank Pasquale
The New Museum of Contemporary Art has hosted an exhibit called “The Last Newspaper” over the past few months. Part of the exhibit centers on newspaper-based art. Another focus has been a “hybrid of journalism and performance art,” as groups of editors and writers developed “last newspaper sections” in areas ranging from real estate to sports to leisure. I co-edited the business section, which is available here in a low-res copy. I’m posting our editorial statement below.
I like how the various articles (contributed by entrepreneurs, theorists, designers, and others) hang together. The terrific design work is a refreshing change from the barren pages of business blogs, law reviews, and academic books (though it looks like some legal scholars are renewing interest in visual aspects of justice).
December 27, 2010 at 10:16 pm Posted in: Architecture, Cyberlaw, Economic Analysis of Law, Just for Fun, Law and Inequality, Philosophy of Social Science, Politics, Technology
posted by Danielle Citron
Reviewing the movie The Social Network and Jaron Lanier’s book You Are Not a Gadget: A Manifesto in this month’s New York Review of Books, Zadie Smith warns readers of the perils of social network sites like Facebook where “life is turned into a database.” According to Smith, Facebook “locks us” into a system designed by a college nerd to resemble “a Noosphere, an Internet with one mind, a uniform environment in which it genuinely doesn’t matter who you are, as long as you make ‘choices’ (which means, finally, purchases).” Smith writes:
“When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendships. Language. Sensibility. In a way, it’s a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears. It reminds me that those of us who turn in disgust from what we consider an overinflated liberal-bourgeois sense of self should be careful what we wish for: our denuded networked selves don’t look more free, they just look more owned.”
Smith worries about her students and other “2.0 kids.” She contrasts “1.0 people” who use social media tools to connect with others in an outward-facing way with “2.0 kids” who employ them to turn inward and towards the trivial. 2.0 people, Smith fears, are embedded in the software, avatars who don’t realize that “what makes something fully real is that it is impossible to represent it to completion.” She wonders: “what if 2.0 people feel their socially networked selves genuinely represent them to completion?” In Smith’s view, Mark Zuckerberg tamed “the wild west of the Internet” to “fit the suburban fantasies of a suburban soul,” risking the extinction of the “private person who is a mystery to the world and–which is more important — to herself.”
Smith’s review recalls Neil Postman’s critique of television culture and Benjamin Barber’s warnings about contemporary consumerism. While television helped us amuse ourselves to death and pervasive pop culture produces shoppers, not thinkers, social network sites turn youth culture into over-sharing, unthinking, eager-to-please avatars who “watch the reality-TV show Bride Wars because their friends are.” Yet this can’t be the whole story. Whether 41 or 21, social network participants live in the real world, integrating their online activities seamlessly into their daily lives. Far more goes on in social network sites like Facebook than sharing information to “make others like you,” as Smith suggests. On Facebook and other popular social media sites, people join groups of every stripe. They work, as Miriam Cherry’s terrific new article Virtual Work addresses. They build reputations in ways that can enhance offline careers. They join study groups. In many respects, social media sites provide platforms for genuine participation, far more than just Government 2.0 engagement. Far from deadening the everyday citizen, social media platforms can resemble Alexis de Tocqueville’s town meeting, John Dewey’s schools, and Cynthia Estlund’s workplace. Of course, citizen participation online is different–it is not the face-to-face interaction envisioned by Tocqueville, Dewey, and Estlund. But even with the challenges brought by internet-mediated interactions, 2.0 kids are more than denuded avatars.
posted by Barbara van Schewick
[This is the second of two posts on Jonathan Zittrain’s book The Future of the Internet and How to Stop It. The first post (on the relative importance of generative end hosts and generative network infrastructure for the Internet's overall ability to foster innovation) is here.]
In the book’s section on “The Generativity Principle and the Limits of End-to-End Neutrality,” Zittrain calls for a new “generativity principle” to address the Internet’s security problem and prevent the widespread lockdown of PCs in the aftermath of a catastrophic security attack: “Strict loyalty to end-to-end neutrality should give way to a new generativity principle, a rule that asks that any modifications to the Internet’s design or to the behavior of ISPs be made where they will do the least harm to generative possibilities.” (p. 165)
Zittrain argues that by assigning responsibility for security to the end hosts, “end-to-end theory” creates challenges for users who have little knowledge of how to best secure their computers. The existence of a large number of unsecured end hosts, in turn, may facilitate a catastrophic security attack that will have widespread and severe consequences for affected individual end users and businesses. In the aftermath of such an attack, Zittrain predicts, users may be willing to completely lock down their computers so that they can run only applications approved by a trusted third party.
Given that general-purpose end hosts controlled by users rather than by third-party gatekeepers are an important component of the mechanism that fosters application innovation in the Internet, Zittrain argues, a strict application of “end-to-end theory” may threaten the Internet’s ability to support new applications more than implementing some security functions in the network – hence the new principle.
This argument relies heavily on the assumption that “end-to-end theory” categorically prohibits the implementation of security-related functions in the core of the network. It is not entirely clear to me what Zittrain means by “end-to-end theory.” As I explain in chapter 9 of my book, Internet Architecture and Innovation (pp. 366-368), the broad version of the end-to-end arguments  (i.e., the design principle that was used to create the Internet’s original architecture) does not establish such a rule. The broad version of the end-to-end arguments provides guidelines for the allocation of individual functions between the lower layers (the core of the network) and the higher layers at the end hosts, not for security-related functions as a group.
posted by Ryan Calo
I don’t know that generativity is a theory, strictly speaking. It’s more of a quality. (Specifically, five qualities.) The attendant theory, as I read it, is that technology exhibits these particular, highly desirable qualities as a function of specific incentives. These incentives are themselves susceptible to various forces—including, it turns out, consumer demand and citizen fear.
The law is in a position to influence this dynamic. Thus, for instance, Comcast might have a business incentive to slow down peer-to-peer traffic and only refrain due to FCC policy. Or, as Barbara van Schewick demonstrates inter alia in Internet Architecture and Innovation, a potential investor may lack the incentive to fund a start-up if there is a risk that the product will be blocked.
Similarly, online platforms like Facebook or Yahoo! might not facilitate communication to the same degree in the absence of Section 230 immunity for fear that they will be held responsible for the thousand flowers they let bloom. I agree with Eric Goldman’s recent essay in this regard: it is no coincidence that the big Internet players generally hail from these United States.
As van Schewick notes in her post, Zittrain is concerned primarily with yet another incentive, one perhaps less amenable to legal intervention. After all, the incentive to tether and lock down is shaped by a set of activities that are already illegal.
One issue that does not come up in The Future of the Internet (correct me if I’m wrong, Professor Zittrain) or in Internet Architecture and Innovation (correct me if I’m wrong, Professor van Schewick) is that of legal liability for that volatile thing you actually run on these generative platforms: software. That’s likely because this problem looks like it’s “solved.” A number of legal trends—aggressive interpretation of warranties, steady invocation of the economic loss doctrine, treatment of data loss as “intangible”—mean you cannot recover from Microsoft (or Dell or Intel) because Word ate your term paper. Talk about a blow to generativity if you could.
posted by Barbara van Schewick
Which factors have allowed the Internet to foster application innovation in the past, and how can we maintain the Internet’s ability to serve as an engine of innovation in the future? These questions are central to current engineering and policy debates over the future of the Internet. They are the subject of Jonathan Zittrain’s The Future of the Internet and How to Stop It and of my book Internet Architecture and Innovation, which was published by MIT Press last month.
As I show in Internet Architecture and Innovation, the Internet’s original architecture had two components that jointly created an economic environment that fostered application innovation:
1. A network that was able to support a wide variety of current and future applications (in particular, a network that did not need to be changed to allow a new application to run) and that did not allow network providers to discriminate among applications or classes of applications. As I show in the book, using the broad version of the end-to-end arguments (i.e., the design principle that was used to create the Internet’s original architecture)  to design the architecture of a network creates a network with these characteristics.
2. A sufficient number of general-purpose end hosts  that allowed their users to install and run any application they like.
Both are essential components of the architecture that has allowed the Internet to be what Zittrain calls “generative” – “to produce unanticipated change through unfiltered contributions from broad and varied audiences.”
In The Future of the Internet and How to Stop It, Zittrain puts the spotlight on the second component: general-purpose end hosts that allow users to install and run any application they like, and their importance for the generativity of the overall system.
posted by Danielle Citron
In his post, Adam Thierer presses on the question of whether we can distinguish open and closed systems. He suggests that Zittrain overstates the problem, noting that many networks and appliances combine features of generativity and tetheredness and that consumers can always choose products and networks with characteristics that they like.
To be sure, it can be difficult to identify the degree of openness/generativity of systems, but not just because appliances and networks combine them seamlessly. Confusion may arise because providers fail to articulate their positions clearly and transparently regarding certain third party activities. This surely explains some of the examples of contingent generativity that Zittrain highlights: one minute the app you wrote is there, the next it is not, or postings at the content layer appear and then are gone. In the face of vague policies, consumers may have difficulty making informed choices, especially when providers embed decisions into architecture.
Part of Zittrain’s plan to preserve innovation online is to enlist netizens to combat harmful activities that prompt providers to lock down their devices. A commitment to transparency about unacceptable third-party activities can advance that important agenda. For instance, social media providers often prohibit “hateful” speech in their Terms of Service or Community Guidelines without defining it with specificity. Without explaining the terms of, and harms to be prevented by, hate speech policies as well as the consequences of policy violations, users may lack the tools necessary to engage as responsible netizens. Some social media providers inform users when content violating their Terms of Service has been taken down, a valuable step in educating communities about the limits to openness. Users of Facebook can see, for instance, that the Kill a Jew Day group once appeared and has now been removed. This sort of transparency is a first step in an important journey of allowing consumers to make educated choices about the services/appliances/networks they use and to garner change through soft forms of regulation.
posted by Ryan Calo
Prohibition wasn’t working. President Hoover assembled the Wickersham Commission to investigate why. The Commission concluded that despite an historic enforcement effort—including the police abuses that made the Wickersham Commission famous—the government could not stop everyone from drinking. Many people, especially in certain city neighborhoods, simply would not comply. The Commission did not recommend repeal at the time, but by 1931 repeal was just around the corner.
Five years later an American doctor working in a chemical plant made a startling discovery. Several workers began complaining that alcohol was making them sick, causing most to stop drinking it entirely—“involuntary abstainers,” as the doctor, E.E. Williams, later put it. It turns out they were in contact with a chemical called disulfiram used in the production of rubber. Disulfiram is well-tolerated and water-soluble. Today, it is marketed as the popular anti-alcoholism drug Antabuse.
Were disulfiram discovered just a few years earlier, would federal law enforcement have dumped it into key parts of the Chicago or Los Angeles water supply to stamp out drinking for good? Probably not. It simply would not have occurred to them. No one was regulating by architecture then. To dramatize this point: when New York City decided twenty years later to end a string of garbage can thefts by bolting the cans to the sidewalk, the decision made the front page of the New York Times. The headline read: “City Bolts Trash Baskets To Walks To End Long Wave Of Thefts.”
In an important but less discussed chapter in The Future of the Internet, Jonathan Zittrain explores our growing taste and capacity for “perfect enforcement.”
September 7, 2010 at 2:58 pm Posted in: Architecture, Articles and Books, Book Reviews, Cyber Civil Rights, Cyberlaw, DRM, Jurisprudence, Legal Theory, Symposium (Future of Internet), Technology
Future of the Internet Symposium: The Role of Infrastructure Management in Determining Internet Freedom
posted by admin
Last week, Facebook reportedly blocked users of Apple’s new Ping social networking service from reaching Facebook friends because the company was concerned about the prospect of massive amounts of traffic inundating its servers. This is precisely the type of architectural lockdown Jonathan Zittrain brilliantly portends in The Future of the Internet and How to Stop It. Contemplating this service blockage and re-reading Jonathan’s book this weekend have me thinking about the role of private industry infrastructure management in shaping Internet freedom.
The Privatization of Internet Governance
I’m heading to the United Nations Internet Governance Forum in Vilnius, Lithuania, where I will be speaking on a panel with Vinton Cerf and members of the Youth Coalition on Internet Governance about “Core Internet Values and the Principles of Internet Governance Across Generations.” What role will “infrastructure management” values increasingly play in private industry’s ordering of the flow of information on the Internet? The privatization of Internet governance is an area that has not received enough attention. Internet scholars are often focused on content. Internet governance debates often reduce to an exaggerated dichotomy, as Milton Mueller describes it, between the extremes of cyberlibertarianism and cyberconservatism. The former can resemble utopian technological determinism and the latter is basically a state sovereignty model that wants to extend traditional forms of state control to the Internet.
The cyberlibertarian and cyberconservative perspectives are indistinguishable in that they both tend to disregard the infrastructure governance sinews already permeating the Internet’s technical architecture. There is also too much attention to institutional governance battles and to the Internet Governance Forum itself, which is, in my opinion, a red herring because it has no policy-making authority and fails to address important controversies.
Where there is attention to the role of private sector network management and traffic shaping, much analysis has focused on “last mile” issues of interconnection rather than the Internet’s backbone architecture. Network neutrality debates are a prime example of this. Another genre of policy attention addresses corporate social responsibility at the content level, such as the Facebook Beacon controversy and the criticism Google initially took for complying with government requests to delete politically sensitive YouTube videos and filter content. These are critical issues, but equally important and less visible decisions occur at the architectural level of infrastructure management. I’d like to briefly mention two examples of private sector infrastructure management functions that also have implications for Internet freedom and innovation: private sector Internet backbone peering agreements and the use of deep packet inspection for network management.
Private Sector Internet Backbone Peering Agreements
For the Internet to successfully operate, Internet backbones obviously must connect with one another. These backbone networks are owned and operated primarily by private telecommunications companies such as British Telecom, Korea Telecom, Verizon, AT&T, Internet Initiative Japan and Comcast. Independent commercial networks conjoin either at private Internet connection points between two companies or at multi-party Internet exchange points (IXPs).
IXPs are the physical junctures where different companies’ backbone trunks interconnect and exchange Internet packets and route them toward their appropriate destinations. One of the largest IXPs (based on throughput of peak traffic) is the Deutscher Commercial Internet Exchange (DE-CIX) in Frankfurt, Germany. This IXP connects hundreds of Internet providers, including content delivery networks and web hosting services as well as Internet service providers. Google, Sprint, Level3, and Yahoo all connect through DE-CIX, as well as to many other IXPs.
Other interconnection points involve private contractual arrangements between two telecommunications companies to connect for the purpose of exchanging Internet traffic. Making this connection at private interconnection points requires physical interconnectivity and equipment but it also involves agreements about cost, responsibilities, and performance. There are generally two types of agreements – peering agreements and transit agreements. Peering agreements refer to mutually beneficial arrangements whereby no money is exchanged among companies agreeing to exchange traffic at interconnection points. In a transit agreement, one telecommunications company agrees to pay a backbone provider for interconnection. There is no standard approach for the actual agreement to peer or transit, with some interconnections involving formal contracts and others based upon verbal agreements between companies’ technical personnel.
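As noted, there is no standard form for these deals, but the oft-cited economic logic behind the peer-or-pay decision can be sketched. The rule below is a hypothetical illustration only (the function, the 2:1 ratio threshold, and the traffic figures are assumptions for this post, not any carrier's actual policy): networks exchanging roughly balanced traffic are commonly said to peer settlement-free, while a lopsided exchange pushes the smaller network toward a paid transit arrangement.

```python
def interconnection_type(a_to_b_gbps: float, b_to_a_gbps: float,
                         max_ratio: float = 2.0) -> str:
    """Hypothetical rule of thumb: settlement-free peering when traffic
    flows in each direction are roughly balanced (here, within a 2:1
    ratio), paid transit otherwise. Real agreements vary widely and are
    often confidential."""
    hi = max(a_to_b_gbps, b_to_a_gbps)
    lo = min(a_to_b_gbps, b_to_a_gbps)
    if lo == 0:
        return "transit"  # wholly one-sided traffic: the smaller network pays
    return "peering" if hi / lo <= max_ratio else "transit"

# Two backbones exchanging comparable volumes might peer for free...
print(interconnection_type(40.0, 35.0))   # peering
# ...while a lopsided relationship typically means a paid transit deal.
print(interconnection_type(40.0, 5.0))    # transit
```

Even this toy version shows why disputes arise: the threshold is a private negotiating position, not a published rule, which is part of what makes the regime so opaque.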
Interconnection agreements are an unseen regime. Few statutes directly address them, there is almost no regulatory oversight, and the private contracts and agreements involve little transparency. Yet these interconnection points have important economic and policy implications for the future of the Internet. They certainly have critical infrastructure implications depending on whether they provide sufficient redundancy, capacity, and security. Disputes over peering and transit agreements, not just problems with physical architecture, have created network outages in the past. The effect on free market competition is another concern, related to a possible lack of competition in Internet backbones, dominance by a small number of companies, and peering agreements among large providers that could be detrimental to potential competitors. Global interconnection disputes have been numerous, and developing countries have complained about transit costs to connect to dominant backbone providers. The area of interconnection patents is another emerging concern with implications for innovation. Interconnection points are also obvious potential points of government filtering and censorship. Because of the possible implications for innovation and freedom, greater transparency and insight into the arrangements and configurations at these sites would be very helpful.
Network Management via Deep Packet Inspection
Another infrastructure management technique with implications for the future of the Internet is the use of deep packet inspection (DPI) for network management and traffic shaping. DPI is a capability manufactured into network devices (e.g. firewalls) that scrutinizes the entire contents of a packet, including the payload as well as the packet header. This payload is the actual information content of the packet. The bulk of Internet traffic is information payload, versus the small amount of administrative and routing information contained within packet headers. ISPs and other information intermediaries have traditionally used packet headers to route packets, perform statistical analysis, and perform routine network management and traffic optimization. Until recent years, it has not been technically viable to inspect the actual content of packets because of the enormous processing speeds and computing resources necessary to perform this function.
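The header/payload distinction can be made concrete with a short sketch. The Python below is purely illustrative (the hand-built packet, the b"WORM" signature, and the function names are fabricated for this example, not drawn from any DPI product): a router doing ordinary forwarding reads only the fixed IPv4 header fields, while DPI scans the payload bytes beyond them.

```python
import struct

def parse_ipv4_packet(raw: bytes):
    """Split a raw IPv4 packet into header fields and payload.
    Routine routing and statistics need only the header; deep packet
    inspection reads past it into the payload."""
    ihl = (raw[0] & 0x0F) * 4                      # header length in bytes
    total_len = struct.unpack("!H", raw[2:4])[0]   # total packet length
    meta = {
        "protocol": raw[9],                        # 6 = TCP, 17 = UDP, ...
        "src": ".".join(str(b) for b in raw[12:16]),
        "dst": ".".join(str(b) for b in raw[16:20]),
    }
    return meta, raw[ihl:total_len]                # payload = bytes past header

def dpi_match(payload: bytes, signatures: list[bytes]) -> bool:
    """Toy DPI rule: flag the packet if any byte signature appears in the
    payload. The b"WORM" marker used below stands in for a real malware
    signature."""
    return any(sig in payload for sig in signatures)

# Hand-build a minimal 20-byte IPv4 header plus an 11-byte payload.
payload_bytes = b"HELLO WORM!"
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,              # version 4, IHL = 5 words (20 bytes)
    0,                         # DSCP/ECN
    20 + len(payload_bytes),   # total length
    0, 0,                      # identification, flags/fragment offset
    64, 6,                     # TTL, protocol = TCP
    0,                         # checksum (left zero in this sketch)
    bytes([10, 0, 0, 1]),      # source 10.0.0.1
    bytes([10, 0, 0, 2]),      # destination 10.0.0.2
)
packet = header + payload_bytes

meta, payload = parse_ipv4_packet(packet)
print(meta["src"], "->", meta["dst"])     # shallow: header fields only
print(dpi_match(payload, [b"WORM"]))      # deep: payload actually inspected
```

Matching every payload byte against many signatures, at line rate, is exactly why DPI historically demanded far more processing power than header-based routing.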
The most publicized instances of DPI have involved the ad-serving practices of service providers wishing to provide highly targeted marketing based on what a customer views or does on the Internet. Other attention to DPI focuses on concerns about state use of deep packet inspection for Internet censorship. One of the originally intended uses of DPI, and still an important use, is for network security. DPI can help identify viruses, worms, and other unwanted programs embedded within legitimate information and help prevent denial of service attacks. What will be the implications of increasingly using DPI for network management functions, legitimately concerned with network performance, latency, and other important technical criteria?
Zittrain discusses how the value of trust was designed into the Internet’s original architecture. The new reality is that the end-to-end architectural principle historically imbued in Internet design has waned considerably over the years with the introduction of Network Address Translation (NAT) devices, firewalls, and other network intermediaries. Deep packet inspection capability, engineered into routers, will further erode the end-to-end principle, an architectural development that will have implications for the future of the Internet’s architecture as well as for individual privacy and network neutrality.
As I head to the Internet Governance Forum in Vilnius, Lithuania, Zittrain’s book is a reminder of what is at stake at the intersection of technical expediency and Internet freedom and how private ordering, rather than governments or new Internet governance institutions, will continue to shape the future of the Internet.
September 7, 2010 at 11:11 am Posted in: Architecture, Cyber Civil Rights, Cyberlaw, First Amendment, Politics, Privacy, Social Network Websites, Symposium (Future of Internet), Technology, Web 2.0
posted by Danielle Citron
It’s an honor to introduce Jonathan Zittrain and the participants in our online symposium on The Future of the Internet–And How to Stop It. From tomorrow through Wednesday, we will be discussing Zittrain’s important book, which warns of a shift in the Internet’s trajectory from a wide-open Web of creative anarchy to a series of closed platforms that will curtail innovation. As Zittrain predicted, “tethered appliances” dominate our information ecosystem today. We increasingly trade generative technologies like PCs that permit experimentation for sterile, reliable appliances like mobile phones, video game consoles, and book readers that limit or forbid tinkering. Zittrain attributes this phenomenon to the unfortunate, yet now predictable, pathologies that generativity enables. Although generative technologies facilitate innovation, they permit the spread of spam, viruses, malware, and the like.
According to Zittrain, the Internet is at a crucial inflection point. Rather than sustaining the wide-open Web of creativity and disruption, the Internet may in time become a series of controlled networks that limit innovation and enable inappropriate governmental and corporate surveillance. Zittrain offers various strategies to forestall such scenarios, including tools to empower users to solve problems that drive users to sterile appliances and networks. Zittrain argues that our information ecology functions best with generative technology at its core.
The Future of the Internet raises a host of fascinating and timely questions. Is the future of the Internet indeed bleak? As this month’s cover story for Wired asks: is Zittrain’s dark future only likely in the “commercial content side” of the digital economy? Might a healthy balance of generative technologies and tethered appliances emerge, or is the move to appliancized networks a grab for control that will be difficult to shake? Will non-generative technologies impact our democratic commitments and cultural values? Should we remain committed to protecting generativity? Are there alternative strategies for preserving innovation besides the ones that Zittrain offers?
To consider these and other issues, we have invited an all-star cast of thinkers:
My co-bloggers will join this conversation as well. In a post in April 2009, co-blogger Deven Desai started our conversation about The Future of the Internet–And How to Stop It. Since that time, the wildfire adoption of tethered appliances, iPod applications, iTunes, and the like has shown just how prophetic and important Zittrain’s book is. We are excited for the discussion to begin.
September 6, 2010 at 2:58 pm Posted in: Administrative Announcements, Anonymity, Architecture, Cyberlaw, Google & Search Engines, Privacy, Symposium (Future of Internet), Technology, Web 2.0, Wiki
posted by Jeff Jonas
As mankind deploys increasing numbers of sensors, and makes more sense of this data, more of our secrets are revealed. In a world of greater transparency, will you be able to be you? Or will you feel obligated to mask who you are, drawn to the safety of the center of the bell curve?
Will a more transparent society make you average?
Imagine for a moment that video feeds from street surveillance cameras are the blue puzzle pieces, your path through life lit up by your cell phone location the green puzzle pieces, and your Facebook social network the yellow puzzle pieces. Flickr supplies the brown puzzle pieces and Twitter the orange. And maybe one day the energy-consuming devices in your home will be spewing out the magenta puzzle pieces. As an increasing volume and range of data converges, a colorful, highly revealing picture of our lives will unfold, with or without our knowledge or permission. Traditional physical sensors like credit card and license plate readers are one thing. The human as sensor, thanks to Web 2.0, is altogether a different thing.
Unlike two decades ago, humans are now creating huge volumes of extraordinarily useful data as they self-annotate their relationships and yours, their photographs and yours, their thoughts and their thoughts about you … and more.
With more data comes better understanding and prediction. The convergence of data might reveal your “discreet” rendezvous or the fact that you are no longer on speaking terms with your best friend. No longer secret is your visit to the porn store and the subsequent change in your home’s late-night energy profile, another telling story about who you are … again out of the bag, and little you can do about it. Pity … you thought that all of this information was secret.
How will mankind respond? Will people feel forced to modify their behavior towards normal only because they fear others may discover their intimate personal affairs? This is what Julie Cohen and Neil Richards have worried about – the “chilling effect.”
August 2, 2010 at 12:34 pm Posted in: Architecture, Cyberlaw, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Privacy (Law Enforcement), Privacy (National Security), Technology, Web 2.0