Category: Cyber Civil Rights


One Month in Jail: The Sentence in the Ravi Case

The judge handed down the sentence in the Dharun Ravi case today.  For his conviction on witness- and evidence-tampering and lying to the police, Ravi will serve 30 days in jail.  For the hate crimes charge and sentence enhancement, Ravi was sentenced to three years’ probation, 300 hours of community service, counseling on cyber bullying and alternative lifestyles, and payment of $11,000 to a group that helps victims of bias crimes.  The judge included a recommendation to immigration authorities that the defendant, an Indian citizen who came to the United States as a child, not be deported.  The judge made his thinking fairly clear.  Before announcing the sentence, he said that he did not believe that the defendant hated Tyler Clementi but rather that he “acted out of colossal insensitivity.”  To the defendant, the judge said: “You lied to your roommate who placed his trust in you without any conditions, and you violated it.  I haven’t heard you apologize once.”  He emphasized the defendant’s attempt to “corrupt the justice system” by tampering with evidence and witnesses.  The judge explained that he took factors including Ravi’s youth and his lack of a criminal record into consideration.

Before the sentencing, many (including me) worried about a sentence that straddled the extremes.  An unduly harsh sentence might produce a backlash against using hate crime laws in instances of bigoted online harassment (including threats, privacy invasions, etc.), while an unduly light sentence would trivialize what happened to the victim: the public shaming of his sexuality and the bias intimidation.  We have fallen into the latter zone.  The defendant received, for the hate crime, the very sentence of probation and counseling that he thrice rejected in plea offers from the prosecutor.  To make matters worse, the judge repudiated the jury’s conviction on the hate crime count when he characterized the defendant as insensitive, not bigoted.  Even so, all is not lost.  The sentence and conviction do say something important.  They make clear that engaging in online harassment and shaming of individuals from traditionally subordinated groups has a cost. The sentence is not something to shrug at: the defendant has a criminal record for a hate crime with three years’ probation (even though he might have been sentenced to far more, up to ten years).  To young people interested in bright futures, this is worth avoiding.  Viewed at a distance, the case teaches us that juries will take similar cases seriously.  It does not and should not say that such cases are easy and uncomplicated.  They are hard and deservedly belong in the public eye.  That this case made it into court with a conviction makes a difference.



The Turn to Infrastructure for Internet Governance

Drawing from economic theory, Brett Frischmann, in his excellent new book Infrastructure: The Social Value of Shared Resources (Oxford University Press 2012), has crafted an elaborate theory of infrastructure that creates an intellectual foundation for addressing some of the most critical policy issues of our time: transportation, communication, environmental protection and beyond. I wish to take the discussion about Frischmann’s book in a slightly different direction, moving away from the question of how infrastructure shapes our social and economic lives to the question of how infrastructure is increasingly co-opted as a form of governance itself.

Arrangements of technical architecture have always inherently been arrangements of power. This is certainly the case for the technologies of Internet governance designed to keep the Internet operational. This governance is not necessarily about governments but about technical design decisions, the policies of private industry, and the decisions of new global institutions. By “infrastructures of Internet governance,” I mean the technologies and processes beneath the layer of content that are inherently designed to keep the Internet operational. Some of these architectures include Internet technical protocols; critical Internet resources like Internet addresses, domain names, and autonomous system numbers; the Internet’s domain name system; and network-layer systems related to access, Internet exchange points (IXPs), and Internet security intermediaries. I have published several books about the inherent politics embedded in the design of this governance infrastructure.  But here I wish to address something different. These same Internet governance infrastructures are increasingly being co-opted for political purposes entirely unrelated to their primary Internet governance function.

The most pressing policy debates in Internet governance increasingly do not involve governance of the Internet’s infrastructure but governance using the Internet’s infrastructure.  Governments and large media companies have lost control over content through laws and policies and are recognizing infrastructure as a mechanism for regaining this control.  This is certainly the case for intellectual property rights enforcement. Copyright enforcement has moved well beyond addressing specific infringing content or individuals into Internet governance-based infrastructural enforcement. The most obvious examples include the graduated response methods that terminate the Internet access of individuals who repeatedly violate copyright laws and the domain name seizures that use the Internet’s domain name system (DNS) to redirect queries away from an entire web site rather than just the infringing content. These techniques are ultimately carried out by Internet registries, Internet registrars, or even by non-authoritative DNS operators such as Internet service providers. Domain name seizures in the United States often originate with the Immigration and Customs Enforcement agency. DNS-based enforcement was also at the heart of controversies and Internet boycotts over the legislative efforts to pass the Protect IP Act (PIPA) and the Stop Online Piracy Act (SOPA).
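To make the mechanics concrete, here is a minimal sketch in Python, using invented domain names and addresses, of the client-visible effect of a DNS-level seizure: the answer for the entire domain is rewritten, so every page under it disappears along with the infringing one. The real rewriting happens inside registry, registrar, or ISP resolver infrastructure, not in client code; this only models the observable result.

```python
import socket

# Hypothetical seizure table: domain -> IP of a government banner server.
# (Both the name and the address are invented for illustration.)
SEIZED_ZONES = {"infringing-site.example": "203.0.113.7"}

def resolve(hostname: str) -> str:
    """Resolve a hostname as a resolver subject to a seizure order might."""
    for zone, banner_ip in SEIZED_ZONES.items():
        # The seizure matches the domain and everything beneath it...
        if hostname == zone or hostname.endswith("." + zone):
            return banner_ip  # every URL under the domain now lands here
    # ...while unrelated names resolve normally over the network.
    return socket.gethostbyname(hostname)

print(resolve("infringing-site.example"))       # 203.0.113.7 (seizure banner)
print(resolve("blog.infringing-site.example"))  # 203.0.113.7 (collateral damage)
```

The second lookup is the point: a lawful page hosted under a seized domain vanishes just as completely as the infringing content that prompted the order.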

An even more pronounced connection between infrastructure and governance occurs in so-called “kill-switch” interventions in which governments, via private industry, enact outages of basic telecommunications and Internet infrastructures, whether via protocols, application blocking, or terminating entire cell phone or Internet access services. From Egypt to the Bay Area Rapid Transit service blockages, the collateral damage of these outages to freedom of expression and public safety is of great concern. The role of private industry in enacting governance via infrastructure was also plainly visible during the WikiLeaks CableGate saga, during which financial services firms like PayPal, Visa and MasterCard opted to block the flow of money to WikiLeaks, and Amazon and EveryDNS blocked web hosting and domain name resolution services, respectively.

This turn to governance via infrastructures of Internet governance raises several themes for this online symposium. The first theme relates to the privatization of governance, whereby industry voluntarily or obligatorily plays a heightened role in regulating content and governing expression as well as responding to restrictions on expression. Concerns here involve not only the issue of legitimacy and public accountability but also the possibly undue economic burden placed on private information intermediaries to carry out this governance. The question about private ordering is not just a question of Internet freedom but of economic freedom for the companies providing basic Internet infrastructures. The second theme relates to the future of free expression. Legal lenses into freedom of expression often miss the infrastructure-based governance sinews that already permeate the Internet’s underlying technical architecture. The third important theme involves the question of what this technique of governance via infrastructure will mean for the technical infrastructure itself.  As an engineer as well as a social scientist, I am concerned about the effects of these practices on Internet stability and security, particularly the co-opting of the Internet’s domain name system for content mediation functions for which the DNS was never intended. The stability of the Internet’s infrastructure is not a given but something that must be protected from the unintended consequences of these new governance approaches.

I wish to congratulate Brett Frischmann on his new book and thank him for bringing the connection between society and infrastructure to such a broad and interdisciplinary audience.

Dr. Laura DeNardis, American University, Washington, DC.


Cybersecurity Legislation and the Privacy and Civil Liberties Oversight Board

Along with a lot of other privacy folks, I have serious concerns about the cybersecurity legislation moving through Congress.  I had an op-ed in The Hill yesterday going through some of the concerns, notably the problems with the overbroad “information sharing” provisions.

Writing the op-ed, though, prompted me to highlight one positive step that should happen in the course of the cybersecurity debate.  The Privacy and Civil Liberties Oversight Board was designed in large part to address information sharing.  This past Wednesday, the Senate Judiciary Committee held a hearing to consider the bipartisan slate of five nominees.

Here’s the point.  The debate on CISPA and other cybersecurity legislation has highlighted all the information sharing that is going on already and that may be going on in the near future.  The PCLOB is the institution designed to oversee problems with information sharing.  So let’s confirm the nominees and get the PCLOB up and running as soon as possible.

The quality of the nominees is very high.  David Medine, nominated to be Chair, helped develop the FTC’s privacy approach in the 1990s and has worked on privacy compliance since, so he knows what should be done and what is doable.  Jim Dempsey has been at the Center for Democracy and Technology for over 15 years, and is a world-class expert on government, privacy, and civil liberties.  Pat Wald is the former Chief Judge of the DC Circuit.  Her remarkably distinguished career includes major experience on international human rights issues.  I don’t have experience with the other two nominees, but the hearing exposed no red flags for any of them.

The debates about cybersecurity legislation show the centrality of information sharing to how government will respond to cyber-threats.  So we should have the institution in place to make sure that the information sharing is done in a lawful and sensible way, to be effective and also to protect privacy and civil liberties.


Banning Forced Disclosure of Social Network Passwords and the Polygraph Precedent

The Maryland General Assembly has just become the first state legislature to vote to ban employers from requiring employees to reveal their Facebook or other social network passwords.  Other states are considering similar bills, and Senators Schumer and Blumenthal are pushing the idea in Congress.

As often happens in privacy debates, there are concerns from industry that well-intentioned laws will have dire consequences — Really Dangerous People might get into positions of trust, so we need to permit employers to force their employees to open up their Facebook accounts to their bosses.

Also, as often happens in privacy debates, people breathlessly debate the issue as though it is completely new and unprecedented.

We do have a precedent, however.  In 1988, Congress enacted the Employee Polygraph Protection Act  (EPPA).  The EPPA says that employers don’t get to know everything an employee is thinking.  Polygraphs are flat-out banned in almost all employment settings.  The law was signed by President Reagan, after Secretary of State George Shultz threatened to resign rather than take one.

The ideas behind the EPPA and the new Maryland bill are similar — employees have a private realm where they can think and be a person, outside of the surveillance of the employer.  Imagine taking a polygraph while your boss asks what you really think about him or her.  Imagine your social networking activities if your boss got to read your private messages and impromptu thoughts.

For private sector employers, the EPPA has quite narrow exceptions, such as for counter-intelligence, armored car personnel, and employees who are suspected of causing economic loss.  That list of exceptions can be a useful baseline to consider for social network passwords.

In summary — there is longstanding and bipartisan support for blocking this sort of intrusion into employees’ private lives.  The social networks themselves support this ban on employers requiring the passwords.  I think we should, too.


An Unanswered Question in the Generally Correct Opposition to a Big ITU Role in the Internet

I strongly agree with the bipartisan consensus in the U.S. that the International Telecommunication Union should not gain new governance powers over the Internet. This coming December, the ITU will hold a major conference in Dubai, and there have been concerns about significant changes to the underlying ITU treaty.

From talking with people involved in the issue, my sense is that the risk of bad changes has subsided considerably. An administration memorandum from January discusses the progress made in the past year in fending off damaging proposals.  Republican FCC Commissioner Robert McDowell recently published an excellent discussion of why those proposals would be bad.  (McDowell erred, however, when he gratuitously and incorrectly criticized the administration for not addressing the issue).  Civil society writers including Emma Llansó of CDT and Sophia Bekele concur.

In talking recently with one U.S. government official, however, I heard one issue concerning the ITU and a possible UN role that has not been well addressed.  Many developing countries look to the UN for technical assistance and best practices.  These countries are facing a range of legal and policy issues on topics that have been the subject of legislation in the U.S. and elsewhere: anti-spam, cybersecurity, phishing, domain name trademark disputes, data privacy, etc.  If you are working on these issues for Ghana or Sri Lanka or another developing country, where do you get that technical assistance about the Internet?

That seems like a good-faith question.  Anybody have a good answer?


Bias Intimidation Verdict in the Ravi Trial

On Friday, the New Jersey jury convicted Dharun Ravi of bias intimidation in connection with the charge of invasion of privacy.  Here is the New Jersey bias intimidation provision:

Bias Intimidation.  A person is guilty of the crime of bias intimidation if he commits, attempts to commit, conspires with another to commit, or threatens the immediate commission of an offense specified in chapters 11 through 18 of Title 2C of the New Jersey Statutes; N.J.S.2C:33-4; N.J.S.2C:39-3; N.J.S.2C:39-4 or N.J.S.2C:39-5,

(1) with a purpose to intimidate an individual or group of individuals because of race, color, religion, gender, handicap, sexual orientation, or ethnicity; or

(2) knowing that the conduct constituting the offense would cause an individual or group of individuals to be intimidated because of race, color, religion, gender, handicap, sexual orientation, or ethnicity; or

(3) under circumstances that caused any victim of the underlying offense to be intimidated and the victim, considering the manner in which the offense was committed, reasonably believed either that (a) the offense was committed with a purpose to intimidate the victim or any person or entity in whose welfare the victim is interested because of race, color, religion, gender, handicap, sexual orientation, or ethnicity, or (b) the victim or the victim’s property was selected to be the target of the offense because of the victim’s race, color, religion, gender, handicap, sexual orientation, or ethnicity.

Let me first make sense of the verdict and the important message it sends to the public.  Then I am going to talk about my concerns in the event that the sentence approaches ten years.

The New Jersey bias law punishes the targeting of someone for intimidation, through the commission of a specified crime (here, invasion of privacy), because of their protected status, recognizing the special harm to the targeted individual and to society that results.  What is that harm?  Hate conveys and does something uniquely damaging.  It demeans groups, treating them as lesser beings or inhuman “others” who do not possess equal worth.[i]  It marks groups as inferior and “not worthy of equal citizenship.”[ii]  It conveys the message that group members are objects whose autonomy can be freely snatched away because they have no shared humanity to consider.[iv]  Hate diminishes group members’ standing in society.  So, too, it instills feelings of inferiority, shame, and humiliation.

The jury heard evidence to support the finding that bigotry drove Ravi’s decision to invade his roommate’s privacy and that his roommate, Tyler Clementi, was intimidated and reasonably believed Ravi invaded his privacy because he was gay. The testimony, tweets, and texts showed that Ravi set up his webcam to capture Clementi’s sexual encounter with a man and that he briefly watched the encounter with six friends.  It revealed that two days later, Ravi dared his Twitter followers to watch a live streaming of his roommate’s sexual encounter with the same man because “Yes, it’s happening again.”  With the help of two friends, Ravi ensured his webcam was working and trained on Clementi’s bed.  In discussing his camera setup in a text to a high school friend, he wrote “Keep the gays away.”  Before taking his own life, Clementi read Ravi’s tweets – over and over again, 38 times – and requested a room change from a resident assistant, describing his roommate’s behavior as “wildly inappropriate.”

There’s certainly evidence of bigotry.  Ravi demeaned Clementi by exposing his sexuality to others.  The live streaming attempts, one accomplished and one foiled, amounted to a public shaming of Clementi for being gay.  Clementi’s persistent checking of the tweets and his immediate action to change his room spoke to his feelings of humiliation.

The jury’s conviction for invasion of privacy and bias intimidation has a powerful and important expressive role to play.  It says that society does not tolerate exposing someone’s sexuality to humiliate them.  It conveys the message that we cannot treat LGBT individuals as “others” who, in Ravi’s words, should go away.  It tells LGBT individuals that they do not have to tolerate such treatment, that they have every right to complain to law enforcement when something like this happens.  And it says to law enforcement that they ought to pursue bias intimidation claims in cases such as these.

So what’s the problem?  It’s important to recognize that the evidence wasn’t clear cut on the question of bias motive and intimidation.  Clementi told a friend he did not care about what Ravi had done.  Some evidence suggested that Ravi was not acting out of bigotry but instead performing, showing off for friends.  Reading this New Yorker piece demonstrates the complexity of their interactions.  That has led many in the public to suggest that Ravi is guilty of being a jerk and of invading Clementi’s privacy, but not of being a bigot.  I’m worried that if the judge sentences Ravi to something close to ten years, a backlash will follow.  If people sense the verdict and sentence are unfair, we may hear calls to revise hate crime laws and sentences to apply only to physical violence.  And we may see prosecutors refuse to pursue cases of bigoted online harassment and/or privacy invasions even where the evidence isn’t mixed, where the bigotry is both clear and deeply damaging.  As it is, law enforcement routinely refuses to pursue bigoted online harassment on the grounds that victims can turn off their computers or that “boys will be boys.”  And those cases are not filled with lots of grays.  The bigotry is clear and the damage overwhelming to victims.  In short, I’m concerned that this is the wrong test case, one that may erect even higher barriers (and they are too high already) to punishing and deterring bigoted online harassment.  My interview with Guy Raz of NPR’s All Things Considered spoke to these concerns, but I wanted to flesh them out further here.



[i] Deborah Hellman, When Is Discrimination Wrong? (Cambridge: Harvard University Press, 2008), 29.

[ii] Jeremy Waldron, “Dignity and Defamation: The Visibility of Hate,” 123 Harv. L. Rev. 1596, 1601 (2010).

[iii] Erving Goffman, Stigma: Notes on the Management of Spoiled Identity (New York: Simon & Schuster, 1963).

[iv] Martha Nussbaum, “Objectification and Internet Misogyny,” in The Offensive Internet (Cambridge: Harvard University Press, 2010), 70.



Cyber Stalking and Cyber Harassment: A Devastating and Endemic Problem

I’m in the midst of writing a book on cyber harassment and cyber stalking called Hate 3.0 (forthcoming, Harvard University Press).  Cyber harassment refers to online behavior that causes a reasonable person to suffer severe emotional distress.  Cyber stalking has a narrower meaning: it covers online behavior that causes a reasonable person to fear for her safety.  Cyber stalking and cyber harassment often involve explicit or implicit threats of violence, calls for others to hurt victims, privacy invasions, defamation, impersonation, and/or technological attacks.  The abuse tends to appear in e-mails, instant messages, blog entries, message boards, and/or sites devoted to tormenting individuals.  The online abuse may be accompanied by offline harassment, including abusive phone calls, vandalism, threatening mail, and/or physical assault.

Stalking and harassment via networked technologies are not a one-off problem.  Thousands upon thousands of cyber harassment and cyber stalking incidents occur annually.  According to the Bureau of Justice Statistics, an estimated 850,000 people in 2006 experienced stalking with a significant online component, such as threats over e-mail and text, attack sites devoted to victims, and/or harassment in chat rooms and blogs.[i]  A special 2009 report by the Department of Justice revealed that approximately 26,000 persons are victims of GPS stalking annually, including by cellphone.  There’s evidence that harassment via networked technologies is increasing.  College students encounter more sexually harassing speech in online interactions than in face-to-face ones.[ii]  Researchers predict that thirty percent of Internet users will face some form of cyber harassment in their lives.[iii]

Yet there are serious reporting gaps, some of which have to do with the information that’s collected.  The Location Privacy Protection Act of 2011 (S. 1223), sponsored by Senator Al Franken (D-MN) and co-sponsored by Senator Richard Blumenthal (D-CT), aims to tackle a small part of this problem.  The bill would require the National Institute of Justice to study the use of location technology in dating violence, stalking, and domestic violence; require that these crimes be reported to the FBI’s Internet Crime Complaint Center; and require the Attorney General to develop a training curriculum so that law enforcement, courts, and victims’ advocates can better investigate and prosecute crimes involving the misuse of geo-location data.  It’s an excellent proposal, one I support wholeheartedly.  So, too, victims’ groups are working hard to help document what’s going on and to educate victims and law enforcement on tackling it.  Working to Halt Online Abuse (WHOA) — with Jayne Hitchcock at the helm — has long been on the case.  Without My Consent, a group spearheaded by tireless advocates Colette Vogele and Erica Johnstone, has joined these efforts (I’m an adviser along with my co-blogger Dan Solove, Ryan Calo, Chris Hoofnagle, Jason Schultz, and others).  It is a non-profit organization seeking to combat online invasions of privacy.  Its resources are intended to empower individuals to stand up for their privacy rights and inspire meaningful debate about the internet, accountability, free speech, and the serious problem of online invasions of privacy.  The group is supported by the Samuelson Law, Technology & Public Policy Clinic at UC Berkeley School of Law, the first legal clinic in the nation founded to provide students with the opportunity to represent the public interest in sound technology policy.  It’s also affiliated with the non-resident fellows program at Stanford’s Center for Internet and Society.



[i] Katrina Baum et al., Bureau of Justice Statistics, Special Report No. NCJ 224527, Stalking Victimization in the United States (January 2009), 5.

[ii] M. Alexis Kennedy and Melanie A. Taylor, “Online Harassment and Victimization of College Students,” Justice Policy Journal 7, no. 1 (2010), http://www.cjcj.org/files/online_harassment.pdf.

[iii] Bradford W. Reyns, “Being Pursued Online: Extent and Nature of Cyberstalking Victimization from a Lifestyle/Routine Activities Perspective,” (PhD diss., University of Cincinnati, May 7, 2010), 29–33, 98.


Actualizing Digital Citizenship With Transparent TOS Policies: Facebook Style

In “Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age,” 91 B.U. L. Rev. 1435 (2011), Helen Norton and I offered moral and policy justifications in support of intermediaries who choose to engage in voluntary efforts to combat hate speech.  As we noted, many intermediaries like Facebook already choose to address online hatred in some way.  We urged intermediaries to think and speak more carefully about the harms they hope to forestall when developing hate speech policies and offered an array of definitions of hate speech to help them do so.  We argued for the adoption of a “transparency principle,” by which we meant that intermediaries can, and should, valuably advance the fight against digital hate by clearly and specifically explaining to users the harms that their hate speech policies address as well as the consequences of policy violations.  With more transparency regarding the specific reasons for choosing to address digital hate, intermediaries can make behavioral expectations more understandable.  Without it, intermediaries will be less effective in expressing what it means to be responsible users of their services.

Our call for transparency has moved an important step forward, and last night I learned how while discussing anonymity, privacy, and hate speech with CDT’s brilliant Kevin Bankston and Hogan’s privacy luminary Chris Wolf at an event sponsored by the Anti-Defamation League.  Kevin shared with us Facebook’s “Abuse Standards 6.2,” first leaked and then explicitly revised and released to the public, which makes clear what the company treats as abuse standard violations.  Let me back up for a minute: Facebook’s Terms of Service (TOS) prohibit “hate speech,” an ambiguous term with broad and narrow meanings, as Helen and I explored in our article.  But Facebook, like so many intermediaries, didn’t explain to users what it meant when it said it prohibited hate speech – did it cover just explicit demeaning threats to traditionally subordinated groups or demeaning speech that approximates intentional infliction of emotional distress, or, instead, did it more broadly cover slurs and epithets and/or group defamation?  Facebook’s leaked “Operation Manual For Live Content Moderators” helpfully explains what it means by “hate content”:

slurs or racial comments of any kind, attacking based on protected category, hate symbols, either out of context or in the context of hate phrases or support of hate groups, showing support for organizations and people primarily known for violence, depicting symbols primarily known for hate and violence, unless comments are clearly against them, photos comparing two people (or an animal and a person that resembles that animal) side by side in a “versus photo,” photo-shopped images showing the subject in a negative light, images of drunk and unconscious people, or sleeping people with things drawn on their faces, and videos of street/bar/school yard fights even if no valid match is found (School fight videos are only confirmed if the video has been posted to continue tormenting the person targeted in the video).

The manual goes on to note that “Hate symbols are confirmed if there’s no context OR if hate phrases are used” and “Humor overrules hate speech UNLESS slur words are present or the humor is not evident.”  That seems a helpful guide for safety operators on how to navigate what looks more like humor than hate, recognizing some of the challenges that operators surely face in assessing content.  And note, too, Facebook’s consistency on Holocaust denial: it’s not prohibited in the U.S., only IP-blocked for countries that ban such speech.  And Facebook employees have been transparent about why.  As a wise Facebook employee explained (and I’m paraphrasing here): if people want to show their ignorance about the Holocaust, let them do so in front of their friends and colleagues (hence the significance of FB’s real-name policy).  He said, let their friends counter that speech and embarrass them for being so asinine.  The policy goes on to talk specifically about bullying and harassment, including barring attacks on anyone based on their status as a sexual assault or rape victim, and barring persistent contact with users without prior solicitation or continued contact after the other party has said that they want no further contact (which sounds much like many criminal harassment laws, including Maryland’s).  It also bars “credible threats,” defined as including “credible threats or incitement of physical harm against anyone, credible indications of organizing acts of present or future violence,” which seems to cover groups like “Kill a Jew Day” (promptly removed by FB).  The policy also gives examples – another important step, and something we talked about last May at Stanford during a roundtable on our article with safety officers from major intermediaries (I think I can’t say who came, given the Chatham House-type rules of the conversation).  See the examples on sexually explicit language and sexual solicitation; they are incredibly helpful and, I think, incredibly important for tackling cyber gender harassment.
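Read as engineering, the quoted rules amount to a small decision procedure with explicit precedence.  Here is a toy encoding in Python (my own illustration with invented field names, not Facebook’s actual tooling) of the two quoted rules on context and humor:

```python
from dataclasses import dataclass

@dataclass
class Post:
    """Toy model of the signals a content moderator records; fields are invented."""
    has_hate_symbol: bool = False
    has_context: bool = False    # surrounding text explains or condemns the symbol
    has_hate_phrase: bool = False
    has_slur: bool = False
    is_humor: bool = False
    humor_evident: bool = False

def confirm_hate(post: Post) -> bool:
    """Encode the manual's two quoted rules.

    Rule 1: hate symbols are confirmed if there's no context OR if hate
            phrases are used.
    Rule 2: humor overrules hate speech UNLESS slur words are present or
            the humor is not evident.
    """
    confirmed = post.has_hate_symbol and (not post.has_context or post.has_hate_phrase)
    if confirmed and post.is_humor and post.humor_evident and not post.has_slur:
        return False  # humor overrules the confirmation
    return confirmed

print(confirm_hate(Post(has_hate_symbol=True)))   # True: symbol with no context
print(confirm_hate(Post(has_hate_symbol=True, is_humor=True,
                        humor_evident=True)))     # False: evident humor, no slurs
```

Even the toy version shows where the hard judgment lives: everything turns on flags like humor_evident, which no rule can decide for the operator, and that is exactly why the manual’s concrete examples matter.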

As Kevin said, and Chris and I enthusiastically agreed, this memo is significant.  Companies should follow FB’s lead.  Whether you agree or disagree with these definitions, users now know what FB means by hate speech, at least far more than they did before.  And users can debate it and tell FB that they think the policy is wanting and why.  FB can take those conversations into consideration – they certainly have in other instances when users expressed their displeasure about moves FB was making. Now, let me be a demanding user: I want to know what this all means.  Does prohibited content get removed, or sent on for further discussion?  Do users get the chance to take down violating content first?  Do they get notice?  Users need to know what happens when they violate the TOS.  That, too, helps users understand their rights and responsibilities as digital citizens.  In any event, I’m hoping that this encourages FB to release future iterations of its policy to users voluntarily and that it encourages its fellow intermediaries to do the same.  Bravo to Facebook.


Pakistan Scrubs the Net

Pakistan, which has long censored the Internet, has decided to upgrade its cybersieves. And, like all good bureaucracies, the government has put the initiative out for bid. According to the New York Times, Pakistan wants to spend $10 million on a system that can block up to 50 million URLs concurrently, with minimal effect on network speed. (That’s a lot of Web pages; a sketch after the three points below shows what lookups at that scale imply.) Internet censorship is on the march worldwide (and the U.S. is no exception). There are at least three interesting things about Pakistan’s move:

First, the country’s openness about its censorial goals is admirable. Pakistan is informing its citizens, along with the rest of us, that it wants to bowdlerize the Net. And, it is attempting to do so in a way that is more uniform than under its current system, where filtering varies by ISP. I don’t necessarily agree with Pakistan’s choice, but I do like that the country is straightforward with its citizens, who have begun to respond.

Second, the California-based filtering company Websense announced that it will not bid on the contract. That’s fascinating – a tech firm has decided that the public relations damage from helping Pakistan censor the Net is greater than the $10M in revenue it could gain. (Websense argues, of course, that its decision is a principled one. If you believe that, you are probably a member of the Ryan Braun Clean Competition fan club.)

Finally, the state is somewhat vague about what it will censor: it points to pornography, blasphemy, and material that affects national security. The last part is particularly worrisome: the national security trump card is a potent force after 9/11 and its concomitant fallout in Pakistan’s neighborhood, and censorship based on it tends to be secret. There is also a real risk that national security interests = interests of the current government. America has an unpleasant history of censoring political dissent based on security worries, and Pakistan is no different.
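As promised above, a note on the engineering. Testing every requested URL against a 50-million-entry blocklist “with minimal effect on network speed” is less fanciful than it sounds; one standard approach (my illustration, not a claim about whatever system Pakistan ends up buying) is a probabilistic set such as a Bloom filter, which answers membership queries in constant time using a few bits per entry:

```python
import hashlib
import math

class BloomFilter:
    """Minimal Bloom filter: constant-time membership tests, no false
    negatives, and a tunable false-positive rate."""

    def __init__(self, n_items: int, fp_rate: float = 0.001):
        # Standard sizing: m bits and k hash functions for n items at rate p.
        self.m = math.ceil(-n_items * math.log(fp_rate) / (math.log(2) ** 2))
        self.k = max(1, round(self.m / n_items * math.log(2)))
        self.bits = bytearray(self.m // 8 + 1)

    def _positions(self, url: str):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{url}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, url: str) -> None:
        for pos in self._positions(url):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, url: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(url))

blocklist = BloomFilter(n_items=50_000_000)  # ~90 MB of bits at a 0.1% FP rate
blocklist.add("http://blocked.example/page")
print("http://blocked.example/page" in blocklist)  # True
print("http://harmless.example/" in blocklist)     # False (with high probability)
```

At that false-positive rate the whole 50-million-entry table fits in roughly 90 MB of memory, so scale alone is no obstacle; a production filter would typically layer an exact lookup behind the probabilistic one to resolve the rare false hits.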

I’ll be fascinated to see which companies take up Pakistan’s offer to propose…

Cross-posted at Info/Law.


Stealing the Throne

Ever-brilliant Web comic The Oatmeal has a great piece about piracy and its alternatives. (The language at the end is a bit much, but it is the character’s evil Jiminy Cricket talking.) It mirrors my opinion about Major League Baseball’s unwillingness to offer any Internet access to the postseason, which is hard on those of us who don’t own TVs (or subscribe to cable). Even if you don’t agree with my moral claims, it’s obvious that as the price of lawful access diverges from the price of unlawful access (which is either zero or the expected present value of a copyright suit, which is darn near zero), infringement goes up.
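The economics in that parenthetical are worth a two-line calculation, with deliberately made-up (but order-of-magnitude plausible) numbers:

```python
# Back-of-the-envelope comparison of lawful vs. unlawful access to one season.
# All numbers are illustrative assumptions, not data.
lawful_price = 100 * 3              # three months of cable + HBO at $100/month
suit_probability = 1e-6             # assumed chance a downloader is actually sued
statutory_damages = 150_000         # U.S. statutory maximum per infringed work

expected_unlawful_price = suit_probability * statutory_damages

print(f"lawful:   ${lawful_price:,.2f}")            # $300.00
print(f"unlawful: ${expected_unlawful_price:.2f}")  # $0.15 -- darn near zero
```

Even if you think a one-in-a-million suit probability is off by a factor of a hundred, the expected cost of infringement stays two orders of magnitude below the lawful price, which is the divergence the argument turns on.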

So, if you want to see Game of Thrones (and I do), your options are: subscribe to cable plus HBO, or pirate. I think the series rocks, but I’m not paying $100 a month for it. If HBO expects me to do so, it weakens its moral claim against piracy.

Unconvinced? Imagine instead that HBO offers to let you watch Game of Thrones for free – but the only place on Earth you can view the series is in the Kodak Theater in Hollywood. You’re located in rural Iowa? Well, you’ve no cause for complaint! Fly to LA! I suspect that translating costs into physical costs makes the argument clearer: HBO charges not only for the content, but bundles it with one particular delivery medium. If that medium is unavailable to you, or unaffordable, you’re out of luck.

Unless, of course, you have broadband, and can BitTorrent.

At a minimum, I plan not to support any SOPA-like legislation until the content industries offer viable Internet-based delivery mechanisms that at least begin to compete with piracy…

Cross-posted at Info/Law.