
Category: Architecture


Exciting news for the Center for Democracy & Technology: Nuala O’Connor Appointed President and CEO

Brilliant news: CDT’s Board of Directors just announced that Nuala O’Connor has been named President & CEO, effective January 21, 2014. O’Connor will succeed Leslie Harris, who is stepping down after leading CDT for nearly nine years. As the privacy community knows well, Harris provided extraordinary leadership: vision, enthusiasm, and commitment. O’Connor will build on that tradition in spades. She is the perfect leader for CDT.

From CDT’s announcement:

“Nuala drove an ambitious civil liberties agenda as the first Chief Privacy Officer at the Department of Homeland Security in a post 9-11 world. She fought for and implemented policies to protect the human rights of U.S. and global citizens in a climate of overreaching surveillance efforts. The Board is thrilled to have Nuala at the helm as CDT expands on 20 years of Internet policy work advancing civil liberties and human rights across the globe,” said Deirdre Mulligan, CDT Board Chair.

O’Connor is an internationally recognized expert in technology policy, particularly in the areas of privacy and information governance. O’Connor comes to CDT from Amazon.com, where she served both as Vice President of Compliance & Customer Trust and as Associate General Counsel for Privacy & Data Protection. Previously she served as the first Chief Privacy Officer at the U.S. Department of Homeland Security (DHS). At DHS, O’Connor was responsible for groundbreaking policy creation and implementation on the use of personal information in national security and law enforcement.

“I am honored to join the superb team at the Center for Democracy & Technology. CDT is at the forefront of advocating for civil liberties in the digital world,” said O’Connor. “There has never been a more important time in the fight to keep the Internet open, innovative and free. From government surveillance to data-driven algorithms to the Internet of things, challenges abound. I am committed to continuing to grow CDT’s global influence and impact as a voice for the open Internet and for the rights of its users.”

“Nuala is a brilliant choice to lead CDT. She is a passionate advocate for civil liberties, highly expert about the emerging global challenges and fully committed to CDT’s mission. She is a bold leader who will guide CDT into its next chapter. I have had the honor of working with CDT’s talented and thoughtful team for almost nine years. I am confident that they will thrive with Nuala at the helm,” said Leslie Harris.

Beyond her experience at Amazon and DHS, O’Connor has also worked in consumer privacy at General Electric, and as Chief Counsel for Technology at the U.S. Department of Commerce. She also created the privacy compliance department at DoubleClick and practiced law at Sidley Austin, Venable, and Hudson Cook.

O’Connor, who is originally from Belfast, Northern Ireland, holds an A.B. from Princeton University, an M.Ed. from Harvard University, and a J.D. from Georgetown University Law Center. She currently serves on numerous nonprofit boards and has received a number of national awards, including the IAPP Vanguard Award, the Executive Women’s Forum’s Woman of Influence award, and a place on the Federal 100 list, but she is most proud of having been named “Geek of the Week” by the Minority Media & Telecom Council in May 2013. She lives in the Washington, D.C. area with her three school-aged children.


Pressing a point


Prentice Women’s Hospital is a landmark for me.  Owned by Northwestern University, it stands directly across from the Northwestern Law complex, meaning that I passed it virtually every day as a law student and more recently as a VAP here at the school.  So I’m keenly interested in the University’s plan to tear down the concrete, cloverleaf-shaped structure and replace it with a state-of-the-art research facility.  The debate over its fate also illustrated a trend towards advocacy in the mainstream media that raises some interesting legal questions.


The building is one of the foremost examples of late-Modernist architecture in the city, and activists pressed the Commission on Chicago Landmarks to give the building landmark status, thus preserving it from demolition. When, in the midst of the preservation effort last year, local alderman Brendan Reilly said he was “open to suggestions” to save the building, New York Times architecture critic Michael Kimmelman stepped in.  Kimmelman did not merely detail the architectural relevance of the building or express his support for preservation.  Instead, he asked Chicago architecture’s It Girl, Jeanne Gang, whether it would be possible to build a research tower on top of the existing structure.  She responded with drawings of a 31-story skyscraper perched on top of the cloverleaf.  Kimmelman wrote about Gang’s idea, running pictures of her concept in the paper.  Again, though, he didn’t stop there.  He contacted a field officer for the Chicago office of the National Trust for Historic Preservation, and asked whether her organization would support the idea.  He contacted Northwestern to ask whether the university might sign on.  And he called the president of an international structural engineering firm to get feedback on the structural and financial feasibility of the plan.  Somewhere along the way, Kimmelman stopped looking like a reporter, or even a critic, and started looking more like one of the activists trying to save the building.

Putting aside the admirable intentions that obviously drove Kimmelman, his efforts illustrate the increasingly porous boundary between reporting and advocacy, even in the mainstream media.  Of course, partisanship and muckraking in journalism are not new.  But as journalism migrates onto our phones and screens alongside Instagram and Facebook, and as “dying” newspapers and network news broadcasts venture beyond traditional reporting techniques to chase eyeballs and engagement, it grows increasingly difficult to categorize what exactly we are consuming when we consume the news.  Why do these questions, obvious fodder for media ethicists, matter to lawyers?  For two reasons, one specific and one general.



Is Forensics Law?

I’ve blogged on these pages before about the claim, popularized by Larry Lessig, that “code is law.”  During the Concurring Opinions symposium on Jonathan Zittrain’s 2008 book The Future of the Internet (And How to Stop It), I cataloged the senses in which architecture or “code” is said to constitute a form of regulation.  “Primary” architecture refers to altering a physical or digital environment to stop conduct before it happens.  Speed bumps are a classic example.  “Secondary” architecture instead alters an environment in order to make conduct harder to get away with—for instance, by installing a traffic light camera or forcing a communications network to build an entry point for law enforcement.


BRIGHT IDEAS: Welcoming Barbara van Schewick to Discuss Network Non-Discrimination in Practice

On Friday, I learned that Professor Barbara van Schewick would be releasing a ground-breaking white paper entitled Network Neutrality and Quality of Service: What a Non-Discrimination Rule Should Look Like.  Lucky for us, Professor van Schewick agreed to come aboard to talk to us about her white paper, which she released on Monday; see her post here.  Her paper provides the first detailed analysis of the Federal Communications Commission’s non-discrimination rule and of its implications for network providers’ ability to manage their networks and offer Quality of Service.  Crucially, it proposes a non-discrimination rule that policy makers can, and should, adopt around the world – a rule that the FCC adopted at least in part.

Professor van Schewick is an Associate Professor of Law and Helen L. Crocker Faculty Scholar at Stanford Law School, an Associate Professor (by courtesy) in Stanford University’s Department of Electrical Engineering, and Director of Stanford Law School’s Center for Internet and Society.

This post is a terrific prelude to our online symposium on van Schewick’s book Internet Architecture and Innovation (MIT Press 2010), which is considered the seminal work on the science, economics and policy of network neutrality.  We will be holding our symposium in honor of the book’s paperback release in the early fall.

Thanks so much for coming aboard, and I hope this post gets you excited for our discussion in the fall.

H/T: Marvin Ammori and Elaine Adolfo


The Turn to Infrastructure for Internet Governance

Drawing from economic theory, Brett Frischmann, in his excellent new book Infrastructure: The Social Value of Shared Resources (Oxford University Press 2012), has crafted an elaborate theory of infrastructure that creates an intellectual foundation for addressing some of the most critical policy issues of our time: transportation, communication, environmental protection and beyond. I wish to take the discussion about Frischmann’s book in a slightly different direction, moving away from the question of how infrastructure shapes our social and economic lives and toward the question of how infrastructure is increasingly co-opted as a form of governance itself.

Arrangements of technical architecture have always inherently been arrangements of power. This is certainly the case for the technologies of Internet governance designed to keep the Internet operational. This governance is not necessarily about governments but about technical design decisions, the policies of private industry and the decisions of new global institutions. By “infrastructures of Internet governance,” I mean the technologies and processes beneath the content layer that are designed to keep the Internet operational. Some of these architectures include Internet technical protocols; critical Internet resources like Internet addresses, domain names, and autonomous system numbers; the Internet’s domain name system; and network-layer systems related to access, Internet exchange points (IXPs) and Internet security intermediaries. I have published several books about the inherent politics embedded in the design of this governance infrastructure.  But here I wish to address something different. These same Internet governance infrastructures are increasingly being co-opted for political purposes entirely unrelated to their primary Internet governance function.

The most pressing policy debates in Internet governance increasingly do not involve governance of the Internet’s infrastructure but governance using the Internet’s infrastructure.  Governments and large media companies have lost the ability to control content through laws and policies alone and are recognizing infrastructure as a mechanism for regaining this control.  This is certainly the case for intellectual property rights enforcement. Copyright enforcement has moved well beyond addressing specific infringing content or individuals into Internet governance-based infrastructural enforcement. The most obvious examples include the graduated response methods that terminate the Internet access of individuals who repeatedly violate copyright laws and the domain name seizures that use the Internet’s domain name system (DNS) to redirect queries away from an entire web site rather than just the infringing content. These techniques are ultimately carried out by Internet registries, Internet registrars, or even by non-authoritative DNS operators such as Internet service providers. Domain name seizures in the United States often originate with the Immigration and Customs Enforcement agency. DNS-based enforcement was also at the heart of controversies and Internet boycotts over the legislative efforts to pass the Protect IP Act (PIPA) and the Stop Online Piracy Act (SOPA).
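Mechanically, a DNS seizure is blunt: the answer for a domain is swapped wholesale, so every query for the site, infringing and lawful pages alike, resolves to a seizure banner.  A minimal sketch of the resolver-side logic (the domain names and addresses below are invented placeholders drawn from the reserved documentation ranges, not real records):

```python
# Hypothetical sketch of DNS-based seizure; all names and addresses are
# placeholders (".example" domains and RFC 5737 documentation IPs).
SEIZED_DOMAINS = {"infringing-site.example"}  # domains covered by a seizure order
SINKHOLE_IP = "192.0.2.1"                     # server displaying the seizure banner

REAL_RECORDS = {
    "infringing-site.example": "203.0.113.7",
    "ordinary-site.example": "203.0.113.8",
}

def resolve(domain):
    """Answer a DNS query, redirecting seized domains to the sinkhole.

    Note the granularity: the entire domain is redirected, not just
    the infringing content hosted somewhere on it.
    """
    if domain in SEIZED_DOMAINS:
        return SINKHOLE_IP
    return REAL_RECORDS.get(domain)

print(resolve("infringing-site.example"))  # 192.0.2.1 (whole site redirected)
print(resolve("ordinary-site.example"))    # 203.0.113.8 (unaffected)
```

The sketch makes the policy problem concrete: the lookup keys on the domain alone, so there is simply no place in this layer to express "block only the infringing pages."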

An even more pronounced connection between infrastructure and governance occurs in so-called “kill-switch” interventions in which governments, via private industry, enact outages of basic telecommunications and Internet infrastructures, whether via protocols, application blocking, or terminating entire cell phone or Internet access services. From Egypt to the Bay Area Rapid Transit service blockages, the collateral damage of these outages to freedom of expression and public safety is of great concern. The role of private industry in enacting governance via infrastructure was also obviously visible during the WikiLeaks CableGate saga during which financial services firms like PayPal, Visa and MasterCard opted to block the financial flow of money to WikiLeaks and Amazon and EveryDNS blocked web hosting and domain name resolution services, respectively.

This turn to governance via infrastructures of Internet governance raises several themes for this online symposium. The first theme relates to the privatization of governance whereby industry is voluntarily or obligatorily playing a heightened role in regulating content and governing expression as well as responding to restrictions on expression. Concerns here involve not only the issue of legitimacy and public accountability but also the possibly undue economic burden placed on private information intermediaries to carry out this governance. The question about private ordering is not just a question of Internet freedom but of economic freedom for the companies providing basic Internet infrastructures. The second theme relates to the future of free expression. Legal lenses into freedom of expression often miss the infrastructure-based governance sinews that already permeate the Internet’s underlying technical architecture. The third important theme involves the question of what this technique of governance via infrastructure will mean for the technical infrastructure itself.  As an engineer as well as a social scientist, my concern is for the effects of these practices on Internet stability and security, particularly the co-opting of the Internet’s domain name system for content mediation functions for which the DNS was never intended. The stability of the Internet’s infrastructure is not a given but something that must be protected from the unintended consequences of these new governance approaches.

I wish to congratulate Brett Frischmann on his new book and thank him for bringing the connection between society and infrastructure to such a broad and interdisciplinary audience.

Dr. Laura DeNardis, American University, Washington, DC.


Actualizing Digital Citizenship With Transparent TOS Policies: Facebook Style

In “Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age,” 91 B.U. L. Rev. 1435 (2011), Helen Norton and I offered moral and policy justifications in support of intermediaries who choose to engage in voluntary efforts to combat hate speech.  As we noted, many intermediaries like Facebook already choose to address online hatred in some way.  We urged intermediaries to think and speak more carefully about the harms they hope to forestall when developing hate speech policies and offered an array of definitions of hate speech to help them do so.  We argued for the adoption of a “transparency principle,” by which we meant that intermediaries can, and should, valuably advance the fight against digital hate by clearly and specifically explaining to users the harms that their hate speech policies address as well as the consequences of policy violations.  With more transparency regarding the specific reasons for choosing to address digital hate, intermediaries can make behavioral expectations more understandable.  Without it, intermediaries will be less effective in expressing what it means to be responsible users of their services.

Our call for transparency has moved an important step forward, and last night I learned how while discussing anonymity, privacy, and hate speech with CDT’s brilliant Kevin Bankston and Hogan’s privacy luminary Chris Wolf at an event sponsored by the Anti-Defamation League.  Kevin shared with us Facebook’s “Abuse Standards 6.2,” first leaked and then explicitly revised and released to the public, which makes clear what the company counts as abuse standard violations.  Let me back up for a minute: Facebook’s Terms of Service (TOS) prohibit “hate speech,” an ambiguous term with broad and narrow meanings, as Helen and I explored in our article.  But Facebook, like so many intermediaries, didn’t explain to users what it meant when it said it prohibited hate speech–did it cover just explicit demeaning threats to traditionally subordinated groups or demeaning speech that approximates intentional infliction of emotional distress, or, instead, did it more broadly cover slurs and epithets and/or group defamation?  Facebook’s leaked “Operation Manual For Live Content Moderators” helpfully explains what it means by “hate content”:

slurs or racial comments of any kind, attacking based on protected category, hate symbols, either out of context or in the context of hate phrases or support of hate groups, showing support for organizations and people primarily known for violence, depicting symbols primarily known for hate and violence, unless comments are clearly against them, photos comparing two people (or an animal and a person that resembles that animal) side by side in a “versus photo,” photo-shopped images showing the subject in a negative light, images of drunk and unconscious people, or sleeping people with things drawn on their faces, and videos of street/bar/ school yard fights even if no valid match is found (School fight videos are only confirmed if the video has been posted to continue tormenting the person targeted in the video).

The manual goes on to note that “Hate symbols are confirmed if there’s no context OR if hate phrases are used” and “Humor overrules hate speech UNLESS slur words are present or the humor is not evident.”  That seems a helpful guide for safety operators on how to navigate what seems more like humor than hate, recognizing some of the challenges that operators surely face in assessing content.  And note too Facebook’s consistency on Holocaust denial: it’s not prohibited in the U.S., only IP-blocked in countries that ban such speech.  And Facebook employees have been transparent about why.  As a wise Facebook employee explained (and I’m paraphrasing here): if people want to show their ignorance about the Holocaust, let them do so in front of their friends and colleagues (hence the significance of FB’s real name policy).  He said, let their friends counter that speech and embarrass them for being so asinine.  The policy goes on to talk specifically about bullying and harassment, including barring attacks on anyone based on their status as a sexual assault or rape victim and barring persistently contacting users without prior solicitation or continuing to do so when the other party has said they want no further contact (which sounds much like many criminal harassment laws, including Maryland’s).  It also bars “credible threats,” defined as including “credible threats or incitement of physical harm against anyone, credible indications of organizing acts of present or future violence,” which seems to cover groups like “Kill a Jew Day” (removed promptly by FB).  The policy also gives examples–another important step, and something we talked about last May at Stanford during a roundtable on our article with safety officers from major intermediaries (I think I can’t say who came, given the Chatham House-style rules of conversation).
See the examples on sexually explicit language and sexual solicitation; they are incredibly helpful and, I think, incredibly important for tackling cyber gender harassment.
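It is striking how literally the two quoted rules read as boolean logic, which may be why a manual written for human moderators sounds so much like pseudocode.  A hypothetical rendering (the function and parameter names here are my own invention, not Facebook’s):

```python
# Hypothetical encoding of two rules quoted from the leaked manual.
# Function and parameter names are mine, not Facebook's.

def hate_symbol_confirmed(has_context, hate_phrases_present):
    """'Hate symbols are confirmed if there's no context
    OR if hate phrases are used.'"""
    return (not has_context) or hate_phrases_present

def humor_exempts(slur_present, humor_evident):
    """'Humor overrules hate speech UNLESS slur words are present
    or the humor is not evident.'"""
    return humor_evident and not slur_present

# A symbol posted with context and no hate phrases is not confirmed:
print(hate_symbol_confirmed(has_context=True, hate_phrases_present=False))  # False
# A joke containing a slur loses its humor exemption:
print(humor_exempts(slur_present=True, humor_evident=True))                 # False
```

Whatever one thinks of the substance, rules this mechanical are exactly the kind that can be stated to users in advance, which is the transparency point.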

As Kevin said, and Chris and I enthusiastically agreed, this memo is significant.  Companies should follow FB’s lead.  Whether you agree or disagree with these definitions, users now know what FB means by hate speech, at least far more than they did before.  And users can debate it and tell FB that they think the policy is wanting and why.  FB can take those conversations into consideration–they certainly have in other instances when users expressed their displeasure about moves FB was making. Now, let me be a demanding user: I want to know what this all means.  Does prohibited content get removed or passed along for further discussion?  Do users get the chance to take down violating content first?  Do they get notice?  Users need to know what happens when they violate the TOS.  That too helps users understand their rights and responsibilities as digital citizens.  In any event, I’m hoping that this encourages FB to release future iterations of its policy to users voluntarily and that it encourages its fellow intermediaries to do the same.  Bravo to Facebook.


Some thoughts on Cohen’s Configuring the Networked Self: Law, Code, and the Play of Everyday Practice

Julie Cohen’s book is fantastic.  Unfortunately, I am late to join the symposium, but it has been a pleasure playing catch-up with the previous posts.  Reading over the exchanges thus far has been a treat and a learning experience.  Like Ian Kerr, I found myself reflecting on my own commitments and scholarship.  This is really one of the great virtues of the book.  To prepare to write something for the blog symposium, I reread portions of the book a second time; maybe a third time, since I have read many of the law review articles upon which the book is based.  And frankly, each time I read Julie’s scholarship I am forced to think deeply about my own methodology, commitments, theoretical orientation, and myopias. Julie’s critical analysis of legal and policy scholarship, debate, and rhetoric is unyielding as it cuts to the core commitments and often unstated assumptions that I (we) take for granted.

I share many of the same concerns as Julie about information law and policy (and I reach similar prescriptions too), and yet I approach them from a very different perspective, one that is heavily influenced by economics.  Reading her book challenged me to confront my own perspective critically.  Do I share the commitments and methodological infirmities of the neoliberal economists she lambasts?  Upon reflection, I don’t think so.  The reason is that not all of economics boils down to reductionist models that aim to tally up quantifiable costs and benefits. I agree wholeheartedly with Julie that economic models of copyright (or creativity, innovation, or privacy) that purport to accurately sum up relevant benefits and costs and fully capture the complexity of cultural practices are inevitably, fundamentally flawed, and that uncritical reliance on such models to formulate policy is distorting and biased toward seamless micromanagement and control. As she argues in her book, reliance on such models “focuses on what is known (or assumed) about benefits and costs, … [and] tends to crowd out the unknown and unpredictable, with the result that play remains a peripheral consideration, when it should be central.”  Interestingly, I make nearly the same argument in my book, although my argument is grounded in economic theory and my focus is on user activities that generate public and social goods.  I need to think more about the connections between her concept of play and the user activities I examine.  But a key shared concept is that indeterminacy in the environment and in the structure of rights and affordances sustains user capabilities, and this is (might be) normatively attractive whether or not users choose to exercise those capabilities.  That is, there is social (option) value in sustaining flexibility and uncertainty.

Like Julie, I have been drawn to the Capabilities Approach (CA). It provides a normatively appealing framework for thinking about what matters in information policy—that is, for articulating ends.  But it seems to pay insufficient attention to the means.  I have done some limited work on the CA and information policy and hope to do more in the future.  Julie has provided an incredible roadmap.  In chapter 9, The Structural Conditions of Human Flourishing, she goes beyond identifying which capabilities to prioritize and examines the means for enabling them.  In my view, this is a major contribution.  Specifically, she discusses three structural conditions for human flourishing: (1) access to knowledge, (2) operational transparency, and (3) semantic discontinuity.  I don’t have much to say about the access to knowledge and operational transparency discussions, other than “yep.”  The semantic discontinuity discussion left me wanting more: more explanation of the concept and more explanation of how to operationalize it.  I wanted more because I think it is spot on.  Paul and others have already discussed this, so I will not repeat what they’ve said.  But, riffing off of Paul’s post, I wonder whether it is a mistake to conceptualize semantic discontinuity as “gaps” and ask privacy, copyright, and other laws to widen the gaps.  I wonder whether the “space” of semantic discontinuities is better conceptualized as the default or background environment rather than the exceptional “gap.”  Maybe this depends on the context or legal structure, but I think the relevant semantic discontinuities where play flourishes, our everyday social and cultural experiences, are and should be the norm.  (Is the public domain merely a gap in copyright law?  Or is copyright law a gap in the public domain?)  Baselines matter.  If the gap metaphor is still appealing, perhaps it would be better to describe them as gulfs.


Pakistan Scrubs the Net

Pakistan, which has long censored the Internet, has decided to upgrade its cybersieves. And, like all good bureaucracies, the government has put the initiative out for bid. According to the New York Times, Pakistan wants to spend $10 million on a system that can block up to 50 million URLs concurrently, with minimal effect on network speed. (That’s a lot of Web pages.) Internet censorship is on the march worldwide (and the U.S. is no exception). There are at least three interesting things about Pakistan’s move:

First, the country’s openness about its censorial goals is admirable. Pakistan is informing its citizens, along with the rest of us, that it wants to bowdlerize the Net. And, it is attempting to do so in a way that is more uniform than under its current system, where filtering varies by ISP. I don’t necessarily agree with Pakistan’s choice, but I do like that the country is straightforward with its citizens, who have begun to respond.

Second, the California-based filtering company Websense announced that it will not bid on the contract. That’s fascinating – a tech firm has decided that the public relations damage from helping Pakistan censor the Net is greater than the $10M in revenue it could gain. (Websense argues, of course, that its decision is a principled one. If you believe that, you are probably a member of the Ryan Braun Clean Competition fan club.)

Finally, the state is somewhat vague about what it will censor: it points to pornography, blasphemy, and material that affects national security. The last part is particularly worrisome: the national security trump card is a potent force after 9/11 and its concomitant fallout in Pakistan’s neighborhood, and censorship based on it tends to be secret. There is also real risk that national security interests = interests of the current government. America has an unpleasant history of censoring political dissent based on security worries, and Pakistan is no different.
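On the engineering claim in the tender, blocking 50 million URLs “with minimal effect on network speed” is plausible: large-scale filters typically test membership against a compact probabilistic structure rather than scanning the list itself.  A Bloom filter is the textbook approach; the sketch below is generic and assumes nothing about Pakistan’s actual design.

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership: no false negatives, and a small,
    tunable false-positive rate -- good enough for a blocklist, since a
    rare over-block is (from the censor's view) an acceptable error."""

    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, url):
        # Derive num_hashes independent bit positions from SHA-256.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{url}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, url):
        for pos in self._positions(url):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, url):
        # A handful of bit tests per request, regardless of list size.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(url))

blocklist = BloomFilter()
blocklist.add("http://example.com/banned")
print(blocklist.might_contain("http://example.com/banned"))  # True
print(blocklist.might_contain("http://example.com/fine"))
```

The point of the sketch is the cost profile: each request costs a fixed handful of hash-and-bit-test operations, and the memory for tens of millions of entries is modest, which is why the “minimal effect on network speed” requirement is not fanciful.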

I’ll be fascinated to see which companies take up Pakistan’s offer to propose…

Cross-posted at Info/Law.


Santorum: Please Don’t Google

If you Google “Santorum,” you’ll find that two of the top three search results take an unusual angle on the Republican candidate, thanks to sex columnist Dan Savage. (I very nearly used “Santorum” as a Google example in class last semester, and only just thought better of it.) Santorum’s supporters want Google to push the, er, less conventional site further down the rankings, and allege that Google’s failure to do so is politically biased. That claim is obviously a load of Santorum, but the situation has drawn more thoughtful responses. Danny Sullivan argues that Google should implement a disclaimer, because kids may search on “Santorum” and be disturbed by what they find, or because they may think Google has a political agenda. (The site has one for “jew,” for example. For a long time, the first result for that search term was a link to the odious and anti-Semitic JewWatch site.)

This suggestion is well-intentioned but flatly wrong. I’m not an absolutist: I like how Google handled the problem of having a bunch of skinheads show up as a top result for “jew.” But I don’t want Google as the Web police, though many disagree. Should the site implement a disclaimer if you search for “Tommy Lee Pamela Anderson”? (Warning: sex tape.) If you search for “flat earth theory,” should Google tell you that you are potentially a moron? I don’t think so. Disclaimers should be the nuclear option for Google – partly so they continue to attract attention, and partly because they move Google from a primarily passive role as filter to a more active one as commentator. I generally like my Web results without knowing what Google thinks about them.

Evgeny Morozov has made a similar suggestion, though along different lines: he wants Google to put up a banner or signal when someone searches for links between vaccines and autism, or proof that the Pentagon / Israelis / Santa Claus was behind the 9/11 attacks. I’m more sympathetic to Evgeny’s idea, but I would limit banners or disclaimers to situations that meet two criteria. First, the facts of the issue must be clear-cut: pi is not equal to three (and no one really thinks so), and the planet is indisputably getting warmer. And second, the issue must be one that is both currently relevant and has significant consequences. The flat earthers don’t count; the anti-vaccine nuts do. (People who fail to immunize their children not only put them at risk; they put their classmates and friends at risk, too.) Lastly, I think it’s important to have both a sense of humor and a respect for discordant, even false, speech. The Santorum thing is darn funny. And, in the political realm, we have a laudable history of tolerating false or inflammatory speech, because we know the perils of censorship. So, keep spreading Santorum!

Danielle, Frank, and the other CoOp folks have kindly let me hang around their blog like a slovenly houseguest, and I’d like to thank them for it. See you soon!

Cross-posted at Info/Law.


Ubiquitous Infringement

Lifehacker‘s Adam Dachis has a great article on how users can deal with a world in which they infringe copyright constantly, both deliberately and inadvertently. (Disclaimer alert: I talked with Adam about the piece.) It’s a practical guide to a strict liability regime – no intent / knowledge requirement for direct infringement – that operates not as a coherent body of law, but as a series of reified bargains among stakeholders. And props to Adam for the Downfall reference! I couldn’t get by without the mockery of the iPhone or SOPA that it makes possible…

Cross-posted to Info/Law.