Archive for the ‘Cyber Civil Rights’ Category
posted by Laura DeNardis
Madhavi Sunder’s thought-provoking new book, From Goods to a Good Life, creates an opportunity to rethink many areas of global knowledge policy, including how the Internet’s technical architecture is governed. Global Internet governance is often viewed through the lens of technical expediency and innovation policy, especially concentrating attention on the international institutions that coordinate critical Internet resources and infrastructure. Sunder’s book provides a refreshing theoretical basis for shifting this frame to place culture and human rights at the center of Internet governance debates. Technologies of Internet governance, although concealed in technical complexity and generally outside of public view, are the new spaces determining some of the most important cultural freedom issues of our time.
Sunder’s book suggests the technological features necessary for participatory culture to thrive. Some of these include many-to-many interactivity, amenability to manipulation and revision, and an architecture that shifts cultural production from the top-down hierarchical control of popular media to a distributed system in which cultural creation can reside at endpoints. As Sunder explains, “This open architecture facilitates democratic resistance to dominant cultural discourses.”
Some trends in Internet governance are discordant with these crucial features. Internet governance control points are not legal control points, nor are they confined within nation-state boundaries. They are often manifested in the design of technical architecture, in the decisions of global institutions of Internet governance, and in private business models.
I’ll offer a few Internet governance questions with implications for the future of participatory culture. The first is the evolving, behind-the-scenes architecture of online advertising practices. Relinquishing information about ourselves, consciously or not, is the quid pro quo bargain for free culture. The companies that operate platforms supporting distributed cultural production obviously require massive annual operating budgets. They provide free distributed products (e.g. YouTube, social media, blogging platforms) but are supported by online advertising models predicated upon the centralized collection and retention of data (contextual, locational, behavioral) about individuals that use these products. The removal of material barriers to cultural production is predicated upon these information goods, which are in turn predicated upon the hidden and mechanized monetization networks that support them. Information collected about individuals routinely includes unique hardware identifiers, mobile phone numbers, IP addresses, and location as well as content and site-specific information. In what ways will these evolving practices eventually constrain participatory culture and human freedom? There is a cultural disconnect between the perception of online anonymity and the actuality of a multi-layered identity infrastructure beneath the layer of content.
A second Internet governance trend potentially antagonistic to the future of participatory culture is the turn to the Domain Name System (DNS) for intellectual property rights enforcement. The DNS has always served a clear technical function: translating between the alphanumeric names that humans use and the binary Internet addresses that routers use. Right now, the authoritative Internet registries that resolve these names into binary numbers are already being asked to enforce trademark and copyright laws, essentially blocking queries for websites associated with piracy. If this practice expands to ISPs and other DNS operators (as SOPA/PIPA seemed to propose), what will be the collateral damage to free expression and participatory culture?
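The name-to-address translation the DNS performs can be observed from any networked machine. Here is a minimal sketch in Python using the standard library's resolver interface; the hostnames are purely illustrative:

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Translate a human-readable name into the numeric IP addresses
    that routers use, via the operating system's resolver."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry's last element is the socket address tuple; its first
    # field is the IP address string.
    return sorted({info[4][0] for info in infos})

# A lookup such as resolve("example.com") would send a real DNS query;
# "localhost" resolves locally, so this line runs even offline.
print(resolve("localhost"))
```

DNS-based enforcement intervenes at exactly this step: the resolver answers the query with a blocked or redirected address rather than the site's real one.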
Finally, an emerging Internet governance challenge to participatory culture is the trend away from interoperability. The ability to exchange information regardless of location or device is a necessary ingredient for participatory culture. Some social media approaches actually erode interoperability in several ways: lack of inherent compatibility among platforms; lack of Uniform Resource Locator (URL) universality; lack of data portability; and lack of universal searchability. In all of these cases, standard approaches are available but companies have explicitly designed interoperability out of their systems. Cloud computing approaches seem to be lurching away from interoperability in a similar manner. These trends concentrate control and intelligence in the middle of the network rather than at its endpoints. These centralized and proprietary approaches mediated by gatekeepers are what the market has selected, but this selection has consequences for cultural as well as technical interoperability.
Madhavi Sunder’s book is a reminder to think about these architectural and economic shifts with attention to their effects on participatory culture and to engage public input into these debates.
It might not be immediately obvious how issues as varied as essential medicines, viral Internet videos, and technical architecture are connected to each other and to human liberty. Drawing from theorists as diverse as Durkheim, Foucault, and Habermas, From Goods to a Good Life convincingly makes this connection. Congratulations to Professor Sunder for so insightfully helping us to connect issues of intellectual property and human freedom across diverse areas of global knowledge policy.
Dr. Laura DeNardis, Associate Professor, American University in Washington, D.C.
posted by Deven Desai
As the political season is in full swing and folks claim to understand SOPA, PIPA, etc., I thought I should point people to Adam Thierer’s post Mueller’s Networks and States = Classical Liberalism for the Information Age. I knew Adam a little before my stint at Google. I came to know him more while there. I do not agree with everything Adam says. Rather, he reminds me of folks I knew in law school. I disagreed with many people there, but respected the way they argued. Their points made me rethink mine and perhaps improve them. The distinction between cyber-libertarianism and Internet exceptionalism that Berin Szoka and Adam try to make is important. I am not sure it succeeds, but as Adam says:
They are not identical. Rather, as Berin and I argued, they are close cousins. Properly defined, cyber-libertarianism is essentially the application of traditional libertarian thinking — which is more properly defined as classically “liberal” — to Internet policy issues. Berin and I define “cyber-libertarianism” as “the belief that individuals — acting in whatever capacity they choose (as citizens, consumers, companies, or collectives) — should be at liberty to pursue their own tastes and interests online.” Internet exceptionalism, by contrast, is the belief that the Internet has changed culture and history profoundly and is deserving of special care before governments intervene. But that does not necessarily tell us what sort of philosophy or core tenets ultimately animate exceptionalism going forward. (emphasis added by me)
This last point is the reason I call out the piece. So far I have not seen anything that addresses the point in a satisfactory way. Adam and Berin face this gap and try to fill it. Agree. Disagree. That is your choice. But read the whole thing and see where you end up. One final note: I think classical liberalism as Adam defines it may be emptier than it seems. For now I cannot explain why; for that I apologize to those of that camp, but I am working on it. Oh, which reminds me: Julie Cohen’s book, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice, takes on this issue.
posted by Omer Tene
Photo: Like its namesake, the European Data Protection Directive (“DPD”), this Mercedes is old, German-designed, clunky and noisy – yet effective. [Photo: Omer Tene]
Old habits die hard. Policymakers on both sides of the Atlantic are engaged in a Herculean effort to reform their respective privacy frameworks. While progress has been and will continue to be made for the next year or so, there is cause for concern that at the end of the day, in the words of the prophet, “there is no new thing under the sun” (Ecclesiastes 1:9).
The United States: Self Regulation
The United States legal framework has traditionally been a quiltwork of legislative patches covering specific sectors, such as health, financial, and children’s data. Significantly, information about individuals’ shopping habits and, more importantly, online and mobile browsing, location and social activities, has remained largely unregulated (see overview in my article with Jules Polonetsky, To Track or “Do Not Track”: Advancing Transparency and Individual Control in Online Behavioral Advertising). While increasingly crafty and proactive in its role as a privacy enforcer, the FTC has had to rely on the slimmest of legislative mandates, Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices.”
To be sure, the FTC has had impressive achievements: reaching consent decrees with Google and Facebook, both of which include 20-year privacy audits; launching a serious discussion of a “do-not-track” mechanism; establishing a global network of enforcement agencies; and more. However, there is a limit to the mileage that the FTC can squeeze out of its opaque legislative mandate. Protecting consumers against “deceptive acts or practices” does not amount to protecting privacy: companies remain at liberty to explicitly state they will do anything and everything with individuals’ data (and thus do not “deceive” anyone when they act on their promise). And prohibiting “unfair acts or practices” is as vague a legal standard as can be; in fact, in some legal systems it might be considered anathema to fundamental principles of jurisprudence (nullum crimen sine lege). While some have heralded an emerging “common law of FTC consent decrees,” such “common law” leaves much to be desired, as it is based on non-transparent negotiations behind closed doors, resulting in short, terse orders.
This is why legislating the fundamental privacy principles, better known as the FIPPs (fair information practice principles), remains crucial. Without them, the FTC cannot do much more than enforce promises made in corporate privacy policies, which are largely acknowledged to be vacuous. Indeed, in its March 2012 “blueprint” for privacy protection, the White House called for legislation codifying the FIPPs (referred to by the White House as a “consumer privacy bill of rights”). Yet Washington insiders warn that the prospects of the FIPPs becoming law are slim, not only in an election year, but also after the elections, without major personnel changes in Congress.
July 30, 2012 at 7:47 pm Tags: co-regulation, data protection, multistakeholder, Privacy, right to be forgotten, self regulation, w3c Posted in: Cyber Civil Rights, Cyberlaw, International & Comparative Law, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Uncategorized
posted by Danielle Citron
The judge handed down the sentence in the Dharun Ravi case today. For his conviction on witness- and evidence-tampering and lying to the police, Ravi will serve 30 days in jail. For the hate crimes charge and sentence enhancement, Ravi was sentenced to three years’ probation, 300 hours of community service, counseling on cyber bullying and alternative lifestyles, and payment of $11,000 to a group that helps victims of bias crimes. The judge included a recommendation to immigration authorities that the defendant, an Indian citizen who came to the United States as a child, not be deported. The judge made his thinking fairly clear. Before announcing the sentence, the judge said that he did not believe that the defendant hated Tyler Clementi but rather that he “acted out of colossal insensitivity.” To the defendant, the judge said: “You lied to your roommate who placed his trust in you without any conditions, and you violated it. I haven’t heard you apologize once.” He emphasized the defendant’s attempt to “corrupt the justice system” by tampering with evidence and witnesses. The judge explained that he took factors including Ravi’s youth and his lack of a criminal record into consideration.
Before the sentencing, many (including me) worried about a sentence at either extreme. An unduly harsh sentence might produce a backlash against using hate crime laws in instances of bigoted online harassment (including threats, privacy invasions, etc.), while an unduly light sentence would trivialize what happened to the victim: the public shaming of his sexuality and bias intimidation. We have fallen into the latter zone. The defendant received a sentence of probation and counseling on the hate crime, a sentence he thrice rejected in plea offers from the prosecutor. To make matters worse, the judge repudiated the jury’s conviction on the hate crime count when he characterized the defendant as insensitive, not bigoted. Even so, all is not lost. The sentence and conviction do say something important. They make clear that engaging in online harassment and shaming of individuals from traditionally subordinated groups has a cost. The sentence is not something to shrug at: the defendant has a criminal record for a hate crime with three years’ probation (even though he might have been sentenced to far more than that, up to ten years). To young people interested in bright futures, this is worth avoiding. Viewed at a distance, the case teaches us that juries will take similar cases seriously. It does not and should not say that such cases are easy and uncomplicated. They are hard and deservedly belong in the public eye. That this case made it into court with a conviction makes a difference.
posted by Laura DeNardis
Drawing from economic theory, Brett Frischmann’s excellent new book Infrastructure: The Social Value of Shared Resources (Oxford University Press 2012) has crafted an elaborate theory of infrastructure that creates an intellectual foundation for addressing some of the most critical policy issues of our time: transportation, communication, environmental protection and beyond. I wish to take the discussion about Frischmann’s book into a slightly different direction, moving away from the question of how infrastructure shapes our social and economic lives into the question of how infrastructure is increasingly co-opted as a form of governance itself.
Arrangements of technical architecture have always inherently been arrangements of power. This is certainly the case for the technologies of Internet governance designed to keep the Internet operational. This governance is not necessarily about governments but about technical design decisions, the policies of private industry and the decisions of new global institutions. By “infrastructures of Internet governance,” I mean the technologies and processes beneath the layer of content that are inherently designed to keep the Internet operational. Some of these architectures include Internet technical protocols; critical Internet resources like Internet addresses, domain names, and autonomous system numbers; the Internet’s domain name system; and network-layer systems related to access, Internet exchange points (IXPs) and Internet security intermediaries. I have published several books about the inherent politics embedded in the design of this governance infrastructure. But here I wish to address something different. These same Internet governance infrastructures are increasingly being co-opted for political purposes entirely unrelated to their primary Internet governance function.
The most pressing policy debates in Internet governance increasingly do not involve governance of the Internet’s infrastructure but governance using the Internet’s infrastructure. Governments and large media companies have lost control over content through laws and policies and are recognizing infrastructure as a mechanism for regaining this control. This is certainly the case for intellectual property rights enforcement. Copyright enforcement has moved well beyond addressing specific infringing content or individuals into Internet governance-based infrastructural enforcement. The most obvious examples include the graduated response methods that terminate the Internet access of individuals who repeatedly violate copyright laws and the domain name seizures that use the Internet’s domain name system (DNS) to redirect queries away from an entire web site rather than just the infringing content. These techniques are ultimately carried out by Internet registries, Internet registrars, or even by non-authoritative DNS operators such as Internet service providers. Domain name seizures in the United States often originate with the Immigration and Customs Enforcement agency. DNS-based enforcement was also at the heart of controversies and Internet boycotts over the legislative efforts to pass the Protect IP Act (PIPA) and the Stop Online Piracy Act (SOPA).
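The redirection mechanism described above can be sketched as a resolver that consults a seizure list before answering. This is a hypothetical illustration, not any registry's actual implementation; all domain names and addresses below are invented (drawn from reserved documentation ranges):

```python
# Hypothetical sketch of DNS-based seizure. A non-neutral resolver
# redirects queries for seized domains to a notice server instead of
# answering with the site's real address. All names and addresses are
# invented, reserved documentation values.

ZONE = {
    "infringing-site.example": "198.51.100.7",
    "ordinary-site.example": "198.51.100.8",
}

SEIZED = {
    # Maps a seized domain to the address of a seizure-notice page.
    "infringing-site.example": "192.0.2.1",
}

def answer(query: str) -> str:
    """Return the address this resolver hands back for a name."""
    if query in SEIZED:
        # The entire domain is redirected, not just infringing pages.
        return SEIZED[query]
    return ZONE[query]

print(answer("ordinary-site.example"))    # 198.51.100.8
print(answer("infringing-site.example"))  # 192.0.2.1 (redirected)
```

Note that the lookup is keyed on the whole domain name, which is why seizure takes down every page and service under that name rather than just the infringing content.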
An even more pronounced connection between infrastructure and governance occurs in so-called “kill-switch” interventions in which governments, via private industry, enact outages of basic telecommunications and Internet infrastructures, whether via protocols, application blocking, or terminating entire cell phone or Internet access services. From Egypt to the Bay Area Rapid Transit (BART) cell service shutdown, the collateral damage of these outages to freedom of expression and public safety is of great concern. The role of private industry in enacting governance via infrastructure was also plainly visible during the WikiLeaks CableGate saga, during which financial services firms like PayPal, Visa and MasterCard opted to block the flow of money to WikiLeaks, and Amazon and EveryDNS blocked web hosting and domain name resolution services, respectively.
This turn to governance via infrastructures of Internet governance raises several themes for this online symposium. The first theme relates to the privatization of governance whereby industry is voluntarily or obligatorily playing a heightened role in regulating content and governing expression as well as responding to restrictions on expression. Concerns here involve not only the issue of legitimacy and public accountability but also the possibly undue economic burden placed on private information intermediaries to carry out this governance. The question about private ordering is not just a question of Internet freedom but of economic freedom for the companies providing basic Internet infrastructures. The second theme relates to the future of free expression. Legal lenses into freedom of expression often miss the infrastructure-based governance sinews that already permeate the Internet’s underlying technical architecture. The third important theme involves the question of what this technique of governance via infrastructure will mean for the technical infrastructure itself. As an engineer as well as a social scientist, my concern is for the effects of these practices on Internet stability and security, particularly the co-opting of the Internet’s domain name system for content mediation functions for which the DNS was never intended. The stability of the Internet’s infrastructure is not a given but something that must be protected from the unintended consequences of these new governance approaches.
I wish to congratulate Brett Frischmann on his new book and thank him for bringing the connection between society and infrastructure to such a broad and interdisciplinary audience.
Dr. Laura DeNardis, American University, Washington, DC.
posted by Peter Swire
Along with many other privacy folks, I have a lot of concerns about the cybersecurity legislation moving through Congress. I had an op-ed in The Hill yesterday going through some of the concerns, notably the problems with the overbroad “information sharing” provisions.
Writing the op-ed, though, prompted me to highlight one positive step that should happen in the course of the cybersecurity debate. The Privacy and Civil Liberties Oversight Board (PCLOB) was designed in large part to address information sharing. This past Wednesday, the Senate Judiciary Committee held a hearing to consider the bipartisan slate of five nominees.
Here’s the point. The debate on CISPA and other cybersecurity legislation has highlighted all the information sharing that is going on already and that may be going on in the near future. The PCLOB is the institution designed to oversee problems with information sharing. So let’s confirm the nominees and get the PCLOB up and running as soon as possible.
The quality of the nominees is very high. David Medine, nominated to be Chair, helped develop the FTC’s privacy approach in the 1990s and has worked on privacy compliance since, so he knows what should be done and what is doable. Jim Dempsey has been at the Center for Democracy and Technology for over 15 years, and is a world-class expert on government, privacy, and civil liberties. Pat Wald is the former Chief Judge of the DC Circuit. Her remarkably distinguished career includes major experience on international human rights issues. I don’t have experience with the other two nominees, but the hearing exposed no red flags for any of them.
The debates about cybersecurity legislation show the centrality of information sharing to how government will respond to cyber-threats. So we should have the institution in place to make sure that the information sharing is done in a lawful and sensible way, to be effective and also to protect privacy and civil liberties.
April 21, 2012 at 5:02 pm Tags: CISPA, civil liberties, cybersecurity Posted in: Administrative Law, Cyber Civil Rights, Cyberlaw, Privacy, Privacy (Electronic Surveillance), Privacy (Law Enforcement), Privacy (National Security)
posted by Peter Swire
The Maryland General Assembly has just become the first state legislature to vote to ban employers from requiring employees to reveal their Facebook or other social network passwords. Other states are considering similar bills, and Senators Schumer and Blumenthal are pushing the idea in Congress.
As often happens in privacy debates, there are concerns from industry that well-intentioned laws will have dire consequences — Really Dangerous People might get into positions of trust, so we need to permit employers to force their employees to open up their Facebook accounts to their bosses.
Also, as often happens in privacy debates, people breathlessly debate the issue as though it is completely new and unprecedented.
We do have a precedent, however. In 1988, Congress enacted the Employee Polygraph Protection Act (EPPA). The EPPA says that employers don’t get to know everything an employee is thinking. Polygraphs are flat-out banned in almost all employment settings. The law was signed by President Reagan, after Secretary of State George Shultz threatened to resign rather than take one.
The idea behind the EPPA and the new Maryland bill is the same: employees have a private realm where they can think and be a person, outside of the surveillance of the employer. Imagine a polygraph if your boss asked what you really thought about him or her. Imagine your social networking activities if your boss got to read your private messages and impromptu thoughts.
For private sector employers, the EPPA has quite narrow exceptions, such as for counter-intelligence, armored car personnel, and employees who are suspected of causing economic loss. That list of exceptions can be a useful baseline to consider for social network passwords.
In summary: there is longstanding and bipartisan support for blocking this sort of intrusion into employees’ private lives. The social networks themselves support this ban on employers requiring the passwords. I think we should, too.
April 11, 2012 at 1:14 pm Tags: Facebook, Maryland, passwords, polygraph Posted in: Administrative Law, Cyber Civil Rights, Cyberlaw, Privacy, Privacy (Consumer Privacy), Social Network Websites
posted by Peter Swire
I strongly agree with the bipartisan consensus in the U.S. that the International Telecommunication Union (ITU) should not gain new governance powers over the Internet. This coming December, there will be a major ITU conference in Dubai, where there have been concerns about significant changes to the underlying ITU treaty.
From talking with people involved in the issue, my sense is that the risk of bad changes has subsided considerably. An administration memorandum from January discusses the progress made in the past year in fending off damaging proposals. Republican FCC Commissioner Robert McDowell recently published an excellent discussion of why those proposals would be bad. (McDowell erred, however, when he gratuitously and incorrectly criticized the administration for not addressing the issue). Civil society writers including Emma Llansó of CDT and Sophia Bekele concur.
A recent conversation with one U.S. government official, however, surfaced an issue concerning the ITU and a possible UN role that has not been well addressed. Many developing countries look to the UN for technical assistance and best practices. These countries are facing a range of legal and policy issues on topics that have been the subject of legislation in the U.S. and elsewhere: anti-spam, cybersecurity, phishing, domain name trademark disputes, data privacy, etc. If you are working on these issues for Ghana or Sri Lanka, where do you get that technical assistance about the Internet?
That seems like a good-faith question. Anybody have a good answer?
posted by Danielle Citron
On Friday, a New Jersey jury convicted Dharun Ravi of bias intimidation in connection with the charge of invasion of privacy. Here is the New Jersey bias intimidation provision:
Bias Intimidation. A person is guilty of the crime of bias intimidation if he commits, attempts to commit, conspires with another to commit, or threatens the immediate commission of an offense specified in chapters 11 through 18 of Title 2C of the New Jersey Statutes; N.J.S.2C:33-4; N.J.S.2C:39-3; N.J.S.2C:39-4 or N.J.S.2C:39-5,
(1) with a purpose to intimidate an individual or group of individuals because of race, color, religion, gender, handicap, sexual orientation, or ethnicity; or
(2) knowing that the conduct constituting the offense would cause an individual or group of individuals to be intimidated because of race, color, religion, gender, handicap, sexual orientation, or ethnicity; or
(3) under circumstances that caused any victim of the underlying offense to be intimidated and the victim, considering the manner in which the offense was committed, reasonably believed either that (a) the offense was committed with a purpose to intimidate the victim or any person or entity in whose welfare the victim is interested because of race, color, religion, gender, handicap, sexual orientation, or ethnicity, or (b) the victim or the victim’s property was selected to be the target of the offense because of the victim’s race, color, religion, gender, handicap, sexual orientation, or ethnicity.
Let me first make sense of the verdict and the important message it sends to the public. Then I am going to talk about my concerns in the event that the sentence approaches ten years.
The New Jersey bias law punishes the targeting of someone for intimidation — through the commission of a specified crime (here, invasion of privacy) — because of their protected status, and the special harm to the targeted individual and society that results. What is that harm? Hate conveys and does something uniquely damaging. It demeans groups, treating them as lesser beings or inhuman “others” who do not possess equal worth.[i] It marks groups as inferior and “not worthy of equal citizenship.”[ii] It conveys the message that group members are objects whose autonomy can be freely snatched away because they have no shared humanity to consider.[iv] Hate diminishes group members’ standing in society. So, too, it instills feelings of inferiority, shame, and humiliation.
The jury heard evidence to support the finding that bigotry drove Ravi’s decision to invade his roommate’s privacy and that his roommate, Tyler Clementi, was intimidated and reasonably believed Ravi invaded his privacy because he was gay. The testimony, tweets, and texts showed that Ravi set up his webcam to capture Clementi’s sexual encounter with a man and that he briefly watched the encounter with six friends. They revealed that two days later, Ravi dared his Twitter followers to watch a live streaming of his roommate’s sexual encounter with the same man because “Yes, it’s happening again.” With the help of two friends, Ravi ensured his webcam was working and trained on Clementi’s bed. In discussing his camera setup in a text to a high school friend, he wrote “Keep the gays away.” Before taking his own life, Clementi read Ravi’s tweets — over and over again, 38 times — and requested a room change from a resident assistant, describing his roommate’s behavior as “wildly inappropriate.”
There’s certainly evidence of bigotry. Ravi demeaned Clementi by exposing his sexuality to others. The webcam viewings, one accomplished and one foiled, amounted to a public shaming of Clementi for being gay. Clementi’s persistent checking of the tweets and his immediate action to change his room spoke to his feelings of humiliation.
The jury’s conviction for invasion of privacy and bias intimidation has a powerful and important expressive role to play. It says that society does not tolerate exposing someone’s sexuality to humiliate them. It conveys the message that we cannot treat LGBT individuals as “others” who, in Ravi’s words, should go away. It tells LGBT individuals that they do not have to tolerate such treatment, that they have every right to complain to law enforcement when something like this happens. And it says to law enforcement that they ought to pursue bias intimidation claims in cases such as these.
So what’s the problem? It’s important to recognize that the evidence wasn’t clear cut on the question of bias motive and intimidation. Clementi told a friend he did not care about what Ravi had done. Some evidence suggested that Ravi was not acting out of bigotry but instead was performing, showing off for friends. Reading this New Yorker piece demonstrates the complexity involved in their interactions. That has led many in the public to suggest that Ravi is guilty of being a jerk and of invading Clementi’s privacy, but not of being a bigot. I’m worried that if the judge sentences Ravi to something close to ten years, a backlash will follow. If people sense the verdict and sentence are unfair, we may hear calls to revise hate crime laws and sentences to apply only to physical violence. And we may see prosecutors refuse to pursue cases of bigoted online harassment and/or privacy invasions even where the evidence isn’t mixed, where the bigotry is both clear and deeply damaging. As it is, law enforcement routinely refuses to pursue bigoted online harassment on the grounds that victims can turn off their computers or that “boys will be boys.” And those cases are not filled with lots of grays: the bigotry is clear and the damage to victims overwhelming. In short, I’m concerned that this is the wrong test case, one that may erect even higher barriers (and they are too high already) to punishing and deterring bigoted online harassment. My interview with Guy Raz of NPR’s All Things Considered spoke to these concerns, but I wanted to flesh them out further here.
posted by Danielle Citron
I’m in the midst of writing a book on cyber harassment and cyber stalking called Hate 3.0 (forthcoming Harvard University Press). Cyber harassment refers to online behavior that causes a reasonable person to suffer severe emotional distress. Cyber stalking has a narrower meaning: it covers online behavior that causes a reasonable person to fear for her safety. Cyber stalking and cyber harassment often involve explicit or implicit threats of violence, calls for others to hurt victims, privacy invasions, defamation, impersonation, and/or technological attacks. The abuse tends to appear in e-mails, instant messages, blog entries, message boards, and/or sites devoted to tormenting individuals. The online abuse may be accompanied by offline harassment, including abusive phone calls, vandalism, threatening mail, and/or physical assault.
Stalking and harassment via networked technologies are not a one-off problem. Thousands upon thousands of cyber harassment and cyber stalking incidents occur annually. According to the Bureau of Justice Statistics, an estimated 850,000 people in 2006 experienced stalking with a significant online component, such as threats over e-mail and text, attack sites devoted to victims, and/or harassment in chat rooms and blogs.[i] A special 2009 report by the Department of Justice revealed that approximately 26,000 persons are victims of GPS stalking annually, including by cellphone. There’s evidence that harassment via networked technologies is increasing. College students encounter more sexually harassing speech in online interactions than in face-to-face ones.[ii] Researchers predict that thirty percent of Internet users will face some form of cyber harassment in their lives.[iii]
Yet there are serious reporting gaps, some of which have to do with the information that’s collected. The Location Privacy Protection Act of 2011 (S. 1223), sponsored by Senator Al Franken (D-MN) and co-sponsored by Senator Richard Blumenthal (D-CT), aims to tackle a small part of this problem. The bill would require the National Institute of Justice to issue a study on the use of location technology in dating violence, stalking, and domestic violence; would require these crimes to be reported to the FBI’s Internet Crime Complaint Center; and would require the Attorney General to develop a training curriculum so that law enforcement, courts, and victims’ advocates can better investigate and prosecute crimes involving the misuse of geo-location data. An excellent proposal, one I support wholeheartedly. So, too, victims’ groups are working hard to document what’s going on and to educate victims and law enforcement on tackling it. Working to Halt Online Abuse (WHOA) — with Jayne Hitchcock at the helm — has long been on the case. Without My Consent, a group spearheaded by tireless advocates Colette Vogele and Erica Johnstone, has joined these efforts (I’m an adviser along with my co-blogger Dan Solove, Ryan Calo, Chris Hoofnagle, Jason Schultz, and others). It is a non-profit organization seeking to combat online invasions of privacy. Its resources are intended to empower individuals to stand up for their privacy rights and inspire meaningful debate about the internet, accountability, free speech, and the serious problem of online invasions of privacy. The group is supported by the Samuelson Law, Technology & Public Policy Clinic at UC Berkeley School of Law, the first legal clinic in the nation founded to provide students with the opportunity to represent the public interest in sound technology policy. It’s also affiliated with the non-resident fellows program at Stanford’s Center for Internet and Society.
[i] Katrina Baum et al., Bureau of Justice Statistics, Special Report No. NCJ 224527, Stalking Victimization in the United States (January 2009), 5.
[iii] Bradford W. Reyns, “Being Pursued Online: Extent and Nature of Cyberstalking Victimization from a Lifestyle/Routine Activities Perspective,” (PhD diss., University of Cincinnati, May 7, 2010), 29–33, 98.
posted by Danielle Citron
In “Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age,” 91 B.U. L. Rev. 1435 (2011), Helen Norton and I offered moral and policy justifications in support of intermediaries who choose to engage in voluntary efforts to combat hate speech. As we noted, many intermediaries like Facebook already choose to address online hatred in some way. We urged intermediaries to think and speak more carefully about the harms they hope to forestall when developing hate speech policies and offered an array of definitions of hate speech to help them do so. We argued for the adoption of a “transparency principle,” by which we meant that intermediaries can, and should, valuably advance the fight against digital hate by clearly and specifically explaining to users the harms that their hate speech policies address as well as the consequences of policy violations. With more transparency regarding the specific reasons for choosing to address digital hate, intermediaries can make behavioral expectations more understandable. Without it, intermediaries will be less effective in expressing what it means to be responsible users of their services.
Our call for transparency has moved an important step forward, as I learned last night while discussing anonymity, privacy, and hate speech with CDT’s brilliant Kevin Bankston and Hogan’s privacy luminary Chris Wolf at an event sponsored by the Anti-Defamation League. Kevin shared with us Facebook’s “Abuse Standards 6.2,” first leaked and then explicitly revised and released to the public, which makes clear what counts as a violation of the company’s abuse standards. Let me back up for a minute: Facebook’s Terms of Service (TOS) prohibit “hate speech,” an ambiguous term with broad and narrow meanings, as Helen and I explored in our article. But Facebook, like so many intermediaries, didn’t explain to users what it meant when it said it prohibited hate speech — did the ban cover just explicit demeaning threats to traditionally subordinated groups or demeaning speech that approximates intentional infliction of emotional distress, or, instead, did it more broadly cover slurs and epithets and/or group defamation? Facebook’s leaked “Operation Manual For Live Content Moderators” helpfully explains what it means by “hate content”:
slurs or racial comments of any kind, attacking based on protected category, hate symbols, either out of context or in the context of hate phrases or support of hate groups, showing support for organizations and people primarily known for violence, depicting symbols primarily known for hate and violence, unless comments are clearly against them, photos comparing two people (or an animal and a person that resembles that animal) side by side in a “versus photo,” photo-shopped images showing the subject in a negative light, images of drunk and unconscious people, or sleeping people with things drawn on their faces, and videos of street/bar/ school yard fights even if no valid match is found (School fight videos are only confirmed if the video has been posted to continue tormenting the person targeted in the video).
The manual goes on to note that “Hate symbols are confirmed if there’s no context OR if hate phrases are used” and “Humor overrules hate speech UNLESS slur words are present or the humor is not evident.” That seems a helpful guide for safety operators navigating what seems more like humor than hate, recognizing some of the challenges that operators surely face in assessing content. And note too Facebook’s consistency on Holocaust denial: it is not prohibited in the U.S., only IP-blocked for countries that ban such speech. Facebook employees have been transparent about why. As a wise Facebook employee explained (and I’m paraphrasing here): if people want to show their ignorance about the Holocaust, let them do so in front of their friends and colleagues (hence the significance of FB’s real-name policy). He said, let their friends counter that speech and embarrass them for being so asinine. The policy goes on to talk specifically about bullying and harassment, including barring attacks on anyone based on their status as a sexual assault or rape victim, and barring contacting users persistently without prior solicitation or continuing to do so when the other party has said that they want no further contact (which sounds much like many criminal harassment laws, including Maryland’s). It also bars “credible threats,” defined as including “credible threats or incitement of physical harm against anyone, credible indications of organizing acts of present or future violence,” which seems to cover groups like “Kill a Jew Day” (removed promptly by FB). The policy also gives examples — another important step, and something we talked about last May at Stanford during a roundtable on our article with safety officers from major intermediaries (I think I can’t say who came, given the Chatham House-type rules of the conversation).
See the examples on sexually explicit language and sexual solicitation; they are incredibly helpful and, I think, important for tackling cyber gender harassment.
As Kevin said, and Chris and I enthusiastically agreed, this memo is significant. Companies should follow FB’s lead. Whether you agree or disagree with these definitions, users now know what FB means by hate speech, at least far more than they did before. And users can debate it and tell FB that they think the policy is wanting and why. FB can take those conversations into consideration — they certainly have in other instances when users expressed their displeasure about moves FB was making. Now, let me be a demanding user: I want to know what this all means. Does prohibited content get removed, or flagged for further review? Do users get the chance to take down violating content first? Do they get notice? Users need to know what happens when they violate the TOS. That too helps users understand their rights and responsibilities as digital citizens. In any event, I’m hoping that this encourages FB to release future iterations of its policy to users voluntarily and that it encourages its fellow intermediaries to do the same. Bravo to Facebook.
posted by Derek Bambauer
Pakistan, which has long censored the Internet, has decided to upgrade its cybersieves. And, like all good bureaucracies, the government has put the initiative out for bid. According to the New York Times, Pakistan wants to spend $10 million on a system that can block up to 50 million URLs concurrently, with minimal effect on network speed. (That’s a lot of Web pages.) Internet censorship is on the march worldwide (and the U.S. is no exception). There are at least three interesting things about Pakistan’s move:
First, the country’s openness about its censorial goals is admirable. Pakistan is informing its citizens, along with the rest of us, that it wants to bowdlerize the Net. And, it is attempting to do so in a way that is more uniform than under its current system, where filtering varies by ISP. I don’t necessarily agree with Pakistan’s choice, but I do like that the country is straightforward with its citizens, who have begun to respond.
Second, the California-based filtering company Websense announced that it will not bid on the contract. That’s fascinating – a tech firm has decided that the public relations damage from helping Pakistan censor the Net is greater than the $10M in revenue it could gain. (Websense argues, of course, that its decision is a principled one. If you believe that, you are probably a member of the Ryan Braun Clean Competition fan club.)
Finally, the state is somewhat vague about what it will censor: it points to pornography, blasphemy, and material that affects national security. The last part is particularly worrisome: the national security trump card is a potent force after 9/11 and its concomitant fallout in Pakistan’s neighborhood, and censorship based on it tends to be secret. There is also real risk that national security interests = interests of the current government. America has an unpleasant history of censoring political dissent based on security worries, and Pakistan is no different.
I’ll be fascinated to see which companies take up Pakistan’s offer to propose…
Cross-posted at Info/Law.
March 8, 2012 at 3:03 pm Posted in: Architecture, Current Events, Cyber Civil Rights, Cyberlaw, Google and Search Engines, Intellectual Property, Politics, Privacy (National Security), Social Network Websites, Technology, Web 2.0
posted by Derek Bambauer
Ever-brilliant Web comic The Oatmeal has a great piece about piracy and its alternatives. (The language at the end is a bit much, but it is the character’s evil Jiminy Cricket talking.) It mirrors my opinion about Major League Baseball’s unwillingness to offer any Internet access to the postseason, which is hard on those of us who don’t own TVs (or subscribe to cable). Even if you don’t agree with my moral claims, it’s obvious that as the price of lawful access diverges from the price of unlawful access (which is either zero, or the expected present value of a copyright suit, which is darn near zero), infringement goes up.
So, if you want to see Game of Thrones (and I do), your options are: subscribe to cable plus HBO, or pirate. I think the series rocks, but I’m not paying $100 a month for it. If HBO expects me to do so, it weakens its moral claim against piracy.
Unconvinced? Imagine instead that HBO offers to let you watch Game of Thrones for free – but the only place on Earth you can view the series is in the Kodak Theater in Hollywood. You’re located in rural Iowa? Well, you’ve no cause for complaint! Fly to LA! I suspect that translating costs into physical costs makes the argument clearer: HBO charges not only for the content, but bundles it with one particular delivery medium. If that medium is unavailable to you, or unaffordable, you’re out of luck.
Unless, of course, you have broadband, and can BitTorrent.
At a minimum, I plan not to support any SOPA-like legislation until the content industries begin to offer viable Internet-based delivery mechanisms that at least begin to compete with piracy…
Cross-posted at Info/Law.
February 22, 2012 at 12:21 pm Posted in: Architecture, Culture, Current Events, Cyber Civil Rights, Cyberlaw, DRM, Innovation, Intellectual Property, Legal Ethics, Media Law, Movies & Television, Politics, Technology, Web 2.0
posted by Derek Bambauer
(This post is based on a talk I gave at the Seton Hall Legislative Journal’s symposium on Bullying and the Social Media Generation. Many thanks to Frank Pasquale, Marisa Hourdajian, and Michelle Newton for the invitation, and to Jane Yakowitz and Will Creeley for a great discussion!)
New Jersey enacted the Anti-Bullying Bill of Rights (ABBR) in 2011, in part as a response to the tragic suicide of Tyler Clementi at Rutgers University. It is routinely lauded as the country’s broadest, most inclusive, and strongest anti-bullying law. That is not entirely a compliment. In this post, I make two core claims. First, the Anti-Bullying Bill of Rights has several aspects that are problematic from a First Amendment perspective – in particular, the overbreadth of its definition of prohibited conduct, the enforcement discretion afforded school personnel, and the risk of impingement upon religious and political freedoms. I argue that the legislation departs from established precedent on disruptions of the educational environment by regulating horizontal relations between students rather than vertical relations between students and the school as an institution / environment. Second, I believe we should be cautious about statutory regimes that enable government actors to sanction speech based on content. I suggest that it is difficult to distinguish, on a principled basis, between bullying (which is bad) and social sanctions that enforce norms (which are good). Moreover, anti-bullying laws risk displacing effective informal measures that emerge from peer production. Read the rest of this post »
February 21, 2012 at 10:20 pm Posted in: Anonymity, Blogging, Bright Ideas, Civil Rights, Conferences, Constitutional Law, Culture, Current Events, Cyber Civil Rights, Cyberlaw, Education, First Amendment, Media Law, Politics, Privacy (Gossip & Shaming), Psychology and Behavior, Race, Religion, Social Network Websites, Technology, Web 2.0
posted by Derek Bambauer
On RocketLawyer’s Legally Easy podcast, I talk with Charley Moore and Eva Arevuo about the EU’s proposed “right to be forgotten” and privacy as censorship. I was inspired by Jeff Rosen and Jane Yakowitz‘s critiques of the approach, which actually appears to be a “right to lie effectively.” If you can disappear unflattering – and truthful – information, it lets you deceive others – in other words, you benefit and they are harmed. The EU’s approach is a blunderbuss where a scalpel is needed.
Cross-posted at Info/Law.
February 17, 2012 at 12:01 pm Posted in: Anonymity, Architecture, Civil Rights, Consumer Protection Law, Culture, Current Events, Cyber Civil Rights, Cyberlaw, First Amendment, Google and Search Engines, Innovation, Media Law, Political Economy, Politics, Privacy, Technology, Web 2.0
posted by Derek Bambauer
The RIAA’s Cary Sherman had a screed about the Stop Online Piracy and PROTECT IP Acts in the New York Times recently. Techdirt’s Mike Masnick brilliantly gutted it, and I’m not going to pile on – a tour de force requires no augmentation. What I want to suggest is that the recording industry – or, at least, its trade group – is dangerously out of touch.
Contrast this with at least part of the movie industry, as represented by Paramount Pictures. I received a letter from Al Perry, Paramount’s Vice President Worldwide Content Protection & Outreach. He proposed coming here to Brooklyn Law School to
exchange ideas about content theft, its challenges and possible ways to address it. We think about these issues on a daily basis. But, as these last few weeks [the SOPA and PROTECT IP debates] made painfully clear, we still have much to learn. We would love to come to campus and do exactly that.
Jason Mazzone, Jonathan Askin, and I are eagerly working to have Perry come to campus, both to present Paramount’s perspective and to discuss it with him. We’ll have input from students, faculty, and staff, and I expect there to be some pointed debate. We’re not naive – the goal here is to try to win support for Paramount’s position on dealing with IP infringement – but I’m impressed that Perry is willing to listen, and to enter the lion’s den (of a sort).
And that’s the key difference: Perry, and Paramount, recognize that Hollywood has lost a generation. For the last decade or so, students have grown up in a world where content is readily available via the Internet, through both licit and illicit means; where the content industries are the people who sue your friends and force you to watch anti-piracy warnings at the start of the movies you paid for; and where one aspires to be Larry Lessig, not Harvey Weinstein. Those of us who teach IP or Internet law have seen it up close. In another ten years, these young lawyers are going to be key Congressional staffers, think tank analysts, entrepreneurs, and law firm partners. And they think Hollywood is the enemy. I don’t share that view – I think the content industries are amoral profit maximizers, just like any other corporation – but I understand it.
And that’s where Sherman is wrong and Perry is right. The old moves no longer work. Buying Congresspeople to pass legislation drafted behind closed doors doesn’t really work (although maybe we’ll find out when we debate the Copyright Term Extension Act of 2018). Calling it “theft” when someone downloads a song they’d never otherwise pay for doesn’t work (even Perry is still on about this one).
One more thing about Sherman: his op-ed reminded me of Detective John Munch in Homicide, who breaks down and shouts at a suspect, “Don’t you ever lie to me like I’m Montel Williams. I am not Montel Williams.” Sherman lies to our faces and expects us not to notice. He writes, “the Protect Intellectual Property Act (or PIPA) was carefully devised, with nearly unanimous bipartisan support in the Senate, and its House counterpart, the Stop Online Piracy Act (or SOPA), was based on existing statutes and Supreme Court precedents.” Yes, it was carefully devised – by content industries. SOPA was introduced at the end of October, and the single hearing held on it was stacked with proponents of the bill. “Carefully devised?” Key proponents didn’t even know how its DNS filtering provisions worked. He argues, “Since when is it censorship to shut down an operation that an American court, upon a thorough review of evidence, has determined to be illegal?” Because censorship is when the government blocks you from accessing speech before a trial. “A thorough review of evidence” is a flat lie: SOPA enabled an injunction filtering a site based on an ex parte application by the government, in contravention of a hundred years of First Amendment precedent. And finally, he notes the massive opposition to SOPA and PROTECT IP, but then asks how “many of those e-mails were from the same people who attacked the Web sites of the Department of Justice, the Motion Picture Association of America, my organization and others as retribution for the seizure of Megaupload, an international digital piracy operation.” This is a McCarthyite tactic: associating the remarkable democratic opposition to the bills – in stark contrast to the smoke-filled rooms in which Sherman worked to push this legislation – with Anonymous and other miscreants.
But the risk for Sherman – and Paramount, and Sony, and other content industries – is not that we’ll be angry, or they’ll be opposed. It’s that they’ll be irrelevant. And if Hollywood takes the Sherman approach, rather than the Perry one, deservedly so.
Cross-posted at Info/Law.
February 14, 2012 at 7:40 pm Posted in: Architecture, Culture, Cyber Civil Rights, Cyberlaw, DRM, First Amendment, Google and Search Engines, Innovation, Intellectual Property, Media Law, Political Economy, Politics, Technology, Web 2.0
posted by Derek Bambauer
In the spirit of the excellent colloquy here about Marvin’s thinking on First Amendment architectures, I bring up this news item: Arizona State University blocked both Web access to, and e-mail from, the change.org Web site. ASU students had begun a petition demanding that the university reduce tuition. The university essentially made three claims as to why it did so (below, in order of increasing stupidity):
- It was a technical mistake;
- Change.org was spamming ASU; and
- ASU needs to “protect the use of our limited and valuable network resources for legitimate academic, research and administrative uses.”
#1 and #2 run together. If spam is the problem, you don’t need to block access to the Web site. However, if you are concerned that students are going to read the petition, and sign it, you do need to block access to the Web site.
For #2, sorry, ASU, this isn’t spam. Spam is unsolicited bulk commercial e-mail. Change.org is, allegedly, sending unsolicited political e-mail. And that’s protected by the First Amendment – see, for example, the Virginia Supreme Court’s analysis of that state’s anti-spam law that covered political messages. Potential political spammers have a sharp disincentive to fill recipients’ inboxes – it’s a sure-fire way to annoy them into opposing your position.
For #3, ASU doesn’t get to determine what academic and research uses are “legitimate.” If they throttle P2P apps, that’s fine. If they limit file sizes for attachments, no problem. But deciding that the message from Change.org is not “legitimate” is classic, and unconstitutional, viewpoint discrimination.
This looks like censorship. I think it’s more likely to be stupidity: someone in ASU’s IT department decided to block these messages as spam, and to filter outbound Web requests to the site contained within those messages. But: with great power over the network comes great responsibility. Well-intentioned constitutional violations are still unlawful. It would also help if ASU’s spokesperson simply admitted the mistake rather than engaging in idiotic justification.
As I mention in Orwell’s Armchair, public actors are increasingly important sources of Internet access. But when ASU and other public universities take on the role of ISP, they need to remember that they are not AOL: their technical decisions are constrained not merely by tech resources, but by our commitment to free speech. Let’s hope the Sun Devils cool off on the filtering…
Cross-posted at Info/Law.
February 10, 2012 at 5:10 pm Posted in: Architecture, Civil Rights, Constitutional Law, Current Events, Cyber Civil Rights, Cyberlaw, First Amendment, Politics, Social Network Websites, Technology, Web 2.0
posted by Derek Bambauer
The European Commission released a draft of its revised Data Protection Directive this morning, and Jane Yakowitz has a trenchant critique up at Forbes.com. In addition to the sharp legal analysis, her article has both a Star Wars and Robot Chicken reference, which makes it basically the perfect information law piece…
January 25, 2012 at 4:32 pm Posted in: Advertising, Architecture, Civil Rights, Consumer Protection Law, Current Events, Cyber Civil Rights, Cyberlaw, Google and Search Engines, Innovation, Politics, Privacy, Privacy (Consumer Privacy), Social Network Websites, Technology, Web 2.0
posted by Danielle Citron
As my co-blogger Gerard notes, today is SOPA protest day. Sites like Google and WordPress have censored their logos or offered up a way to contact your congressperson, though they remain live. Other sites like Wikipedia, Reddit, and Craigslist have shut down, and more are set to shut down at some point today. There’s lots of terrific commentary on SOPA, which is designed to tackle the problem of foreign-based websites that sell pirated movies, music, and other products–but with a heavy hand that threatens free expression and due process. The Wall Street Journal’s Amy Schatz has this story and Politico has another helpful piece; The Hill’s Brendan Sasso’s Twitter feed has lots of terrific updates. Mark Lemley, David Levine, and David Post carefully explain why we ought to reject SOPA and the PROTECT IP Act in “Don’t Break the Internet,” published by Stanford Law Review Online. In the face of the protest, House Judiciary Committee Chairman Lamar Smith (R-TX) vowed to bring SOPA to a vote in his committee next month. “I am committed to continuing to work with my colleagues in the House and Senate to send a bipartisan bill to the White House that saves American jobs and protects intellectual property,” he said. So, too, Senator Patrick Leahy (D-VT) pushed back against websites planning to shut down today in protest of his bill. “Much of what has been claimed about the Senate’s PROTECT IP Act is flatly wrong and seems intended more to stoke fear and concern than to shed light or foster workable solutions. The PROTECT IP Act will not affect Wikipedia, will not affect reddit, and will not affect any website that has any legitimate use,” Chairman Leahy said. Everyone’s abuzz on the issue, and rightly so. I spoke on a panel on intermediary liability at the Congressional Internet Caucus’ State of the Net conference, and everyone wanted to talk about SOPA.
I’m hoping that the black out and other shows of disapproval will convince our representatives in the House and Senate to back off the most troubling parts of the bill. As fabulous guest blogger Derek Bambauer argues, we need to bring greater care and thought to the issue of Internet censorship. Cybersecurity is at issue too, and we need to pay attention. Derek may be right that both bills may go nowhere, especially given Silicon Valley’s concerted lobbying efforts against the bills. But we will have to watch to see if Representative Smith lives up to his promise to bring SOPA back to committee and if Senator Leahy remains as committed to PROTECT IP Act in a few weeks as he is today.
January 18, 2012 at 10:11 am Posted in: Architecture, Civil Rights, Current Events, Cyber Civil Rights, Cyberlaw, First Amendment, Law Talk, Media Law, Social Network Websites, Technology, Web 2.0
posted by Derek Bambauer
Thanks to Danielle and the CoOp crew for having me! I’m excited.
Speaking of exciting developments, it appears that the Stop Online Piracy Act (SOPA) is dead, at least for now. House Majority Leader Eric Cantor has said that the bill will not move forward until there is a consensus position on it, which is to say, never. Media sources credit the Obama administration’s opposition to some of the more noxious parts of SOPA, such as its DNSSEC-killing filtering provisions, and also the tech community’s efforts to raise awareness. (Techdirt’s Mike Masnick has been working overtime in reporting on SOPA; Wikipedia and Reddit are adopting a blackout to draw attention; even the New York City techies are holding a demonstration in front of the offices of Senators Kirsten Gillibrand and Charles Schumer. Schumer has been bailing water on the SOPA front after one of his staffers told a local entrepreneur that the senator supports Internet censorship. Props for candor.) I think the Obama administration’s lack of enthusiasm for the bill is important, but I suspect that a crowded legislative calendar is also playing a significant role.
Of course, the PROTECT IP Act is still floating around the Senate. It’s less worse than SOPA, in the same way that Transformers 2 is less worse than Transformers 3. (You still might want to see what else Netflix has available.) And sponsor Senator Patrick Leahy has suggested that the DNS filtering provisions of the bill be studied – after the legislation is passed. It’s much more efficient, legislatively, to regulate first and then see if it will be effective. A more cynical view is that Senator Leahy’s move is a public relations tactic designed to undercut the opposition, but no one wants to say so to his face.
I am not opposed to Internet censorship in all situations, which means I am often lonely at tech-related events. But these bills have significant flaws. They threaten to badly weaken cybersecurity, an area that is purportedly a national priority (and has been for 15 years). They claim to address a major threat to IP rightsholders despite the complete lack of data that the threat is anything other than chimerical. They provide scant procedural protections for accused infringers, and confer extraordinary power on private rightsholders – power that will, inevitably, be abused. And they reflect a significant public choice imbalance in how IP and Internet policy is made in the United States.
Surprisingly, the Obama administration has it about right: we shouldn’t reject Internet censorship as a regulatory mechanism out of hand, but we should be wary of it. This isn’t the last stage of this debate – like Westley in The Princess Bride, SOPA-like legislation is only mostly dead. (And, if you don’t like the Obama administration’s position today, just wait a day or two.)
Cross-posted at Info/Law.
January 16, 2012 at 7:28 pm Posted in: Architecture, Civil Procedure, Constitutional Law, Culture, Cyber Civil Rights, Cyberlaw, First Amendment, Google and Search Engines, Intellectual Property, Media Law, Movies & Television, Politics, Technology, Web 2.0