Archive for the ‘Anonymity’ Category
Bartelt’s Dog and the Continuing Vitality of the Supreme Court’s Tacit Distinction between Sense Enhancement and Sense Creation
posted by Albert Wong
Last Term, in an amicus brief in United States v. Jones, 565 U.S. __, several colleagues and I highlighted the Supreme Court’s long, albeit not always clearly stated, history of distinguishing between sense-enhancing and sense-creating technologies for Fourth Amendment purposes. As a practical matter, the Court has consistently subjected technologies in the latter category to closer scrutiny than technologies that merely bolster natural human senses. Thus, the use of searchlights, field glasses, and (to some extent) beepers and airplane-mounted cameras was not found to implicate the Fourth Amendment. As the Court explained, “[n]othing in the Fourth Amendment prohibit[s] the police from augmenting the sensory faculties bestowed upon them at birth with such enhancement as science and technology” may afford. 460 U.S. at 282 (emphasis added). In contrast, the Court has held that technologies that create a new capacity altogether, including movie projectors, wiretaps, ultrasound devices, radar flashlights, directional microphones, thermal imagers, and (as of Jones) GPS tracking devices, do trigger the Fourth Amendment. To hold otherwise, as the Court has stated, would “shrink the realm of guaranteed privacy,” leaving citizens “at the mercy of advancing technology.” 533 U.S. at 34-36.
In fact, of the landmark cases involving technology and the Fourth Amendment during the past 85 years (from United States v. Lee, 274 U.S. 559, in 1927 to Jones in 2012), only in one instance did the Supreme Court appear to deviate from this distinction between sense enhancement and sense creation. In that case, United States v. Place, 462 U.S. 696, and its successors, City of Indianapolis v. Edmond, 531 U.S. 32, and Illinois v. Caballes, 543 U.S. 405, the Court held that the use of trained narcotics-detection dogs (more apparently similar to using a new capacity than merely enhancing a natural human sense) did not implicate the Fourth Amendment. In our amicus brief in Jones, we rationalized Place, Edmond, and Caballes by arguing that dogs were unique, being natural biological creatures that had long been used by the police, even in the time of the Framers. Further, we argued, a canine sniff, unlike the use of, say, a wiretap or a thermal imager, “discloses only the presence or absence of narcotics, a contraband item.” 462 U.S. at 707 (emphasis added). Still, the apparent ‘dog exception’ rankled.
March 31, 2013 at 11:35 am Posted in: Anonymity, Constitutional Law, Privacy, Privacy (Electronic Surveillance), Privacy (Law Enforcement), Supreme Court, Technology, Uncategorized
posted by Deven Desai
Just as Neil Richards’s The Perils of Social Reading (101 Georgetown Law Journal 689 (2013)) is out in final form, Netflix has released its new social sharing features in partnership with that privacy protector, Facebook. Not that working with Google, Apple, or Microsoft would be much better. There may be things I am missing, but I don’t see how turning on this feature is wise: it seems to require you to remember not to share, which makes sharing a bit leakier than you may want.
Apparently you have to connect your Netflix account to Facebook to get the feature to work. The way the feature works once that link is made poses problems.
According to SlashGear, two rows appear. One, called Friends’ Favorites, tells you just that. Now, consider that the recommendation algorithm works in part by your rating movies. So if you want to signal that odd documentaries, disturbing art movies, and guilty pleasures (these may range from The Hangover to Twilight) are of interest, you should rate them highly. If you turn this feature on, are all your old ratings shared? And cool! Now everyone knows that you think March of the Penguins and Die Hard are 5 stars. The other row:
is called “Watched By Your Friends,” and it consists of movies and shows that your friends have recently watched. It provides a list of all your Facebook friends who are on Netflix, and you can cycle through individual friends to see what they recently watched. This is an unfiltered list, meaning that it shows all the movies and TV shows that your friends have agreed to share.
Of course, you can control what you share and what you don’t want to share, so if there’s a movie or TV show that you watch, but you don’t want to share it with your friends, you can simply click on the “Don’t Share This” button under each item. Netflix is rolling out the feature over the next couple of days, and the company says that all US members will have access to Netflix social by the end of the week.
Right. So imagine you forget that your viewing habits are broadcast. And what about Roku and other streaming devices? How do you ensure that the “Don’t Share This” button is used before word goes out that you watched one, two, or three movies about drugs, sex, gay culture, how great guns are, etc.?
As Richards puts it, “the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy—the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition.” So too for video and really any information consumption.
posted by Danielle Citron
Plaintiffs’ lawyers have some reason to think that they can convince courts to change their broad-sweeping view of Section 230. In the rare case, courts have pierced the safe harbor, though not because the site operators failed to engage in good faith attempts to protect against offensive or indecent material. In 2011, a federal district court permitted a woman to sue the site operator of TheDirty.com for defamation on the grounds that Section 230 is forfeited if the site owner “invites the posting of illegal materials or makes actionable postings itself.” Sarah Jones v. Dirty World Entertainment Recordings LLC, 766 F. Supp. 2d 828, 836 (E.D. Ky. 2011).
That trial judge relied on a Ninth Circuit decision, Fair Housing Council v. Roommates.com, which involved a classified ad service that helps people find suitable roommates. To sign up for the site’s service, subscribers had to fill out an online questionnaire that asked questions about their gender, race, and sexual orientation. One question asked subscribers to choose a roommate preference, such as “Straight or gay males,” only “Gay” males, or “No males.” Fair housing advocates sued the site, arguing that its questionnaires violated federal and state discrimination laws. The Ninth Circuit found that Section 230 failed to immunize the defendant site from liability because it created the questions and choice of answers and thus became the “information content provider.” The court ruled that since the site required users to answer its questions from a list of possible responses of its choosing, the site was “the developer, at least in part, of that information.” Each user’s profile page was partially the defendant’s responsibility because every profile is a “collaborative effort between [the site] and the subscriber.”
As the Ninth Circuit held (and as a few courts have followed), Section 230 does not grant immunity for helping third parties develop unlawful conduct. The court differentiated the defendant’s site from search engines whose processes might be seen as contributing to the development of content, its search results. According to the court, ordinary search engines “do not use unlawful criteria to limit the scope of searches conducted on them” and thus do not play a part in the development of unlawful searches. The court endorsed the view that sites designed to facilitate illegal activity fell outside Section 230’s safe harbor provision.
Here is the rub. To reach its conclusion, the Ninth Circuit essentially had to rewrite the statute, which defines information content providers as those responsible for the “creation or development of information provided through the Internet,” not the creation or development of illegal information.
posted by Danielle Citron
Last week, a group of women filed a lawsuit against the revenge porn site Texxxan.com as well as its hosting company, GoDaddy. Defendant Texxxan.com invites users to post nude photographs of individuals who never consented to their posting. Revenge porn sites — whether Private Voyeur, Is Anyone Down?, HunterMoore.tv (and the former IsAnyoneUp?), or Texxxan.com — mostly host women’s naked pictures next to their contact information and links to their social media profiles. Much like other forms of cyber stalking, revenge porn ruins individuals’ reputations as the pictures saturate Google searches of their names, incites third parties to email and stalk individuals, causes terrible embarrassment and shame, and risks physical stalking and harm. In the recently filed suit, victims of revenge porn have brought invasion of privacy and civil conspiracy claims against the site operator and the web hosting company, not the posters themselves, who may be difficult to find. More difficult, though, will be getting the case past a Rule 12(b)(6) motion to dismiss.
In this post, I’m going to explain why this lawsuit faces an uphill battle under Section 230 of the Communications Decency Act and why extending Section 230’s safe harbor to sites designed to encourage illicit activity seems out of whack with the broader purpose of the CDA. In my next post, I will talk about cases that seemingly open the door for plaintiffs to bring their suit and why those cases provide a poor foundation for their arguments.
Does Section 230 give revenge porn operators free rein to ruin people’s lives (as revenge porn site operator Hunter Moore proudly describes what he does)? Sad to say, it does.
posted by Stanford Law Review
The Stanford Law Review Online has just published a Note by Will Havemann entitled Privilege and the Belfast Project. Havemann argues that a recent First Circuit opinion goes too far and threatens the idea of academic privilege:
In 2001, two Irish scholars living in the United States set out to compile the recollections of men and women involved in the decades-long conflict in Northern Ireland. The result was the Belfast Project, an oral history project housed at Boston College that collected interviews from many who were personally involved in the violent Northern Irish “Troubles.” To induce participants to document their memories for posterity, Belfast Project historians promised all those interviewed that the contents of their testimonials would remain confidential until they died. More than a decade later, this promise of confidentiality is at the heart of a legal dispute implicating the United States’ bilateral legal assistance treaty with the United Kingdom, the so-called academic’s privilege, and the First Amendment.
Given the confusion sown by Branzburg’s fractured opinion, the First Circuit’s hardnosed decision is unsurprising. But by disavowing the balancing approach recommended in Justice Powell’s concurring Branzburg opinion, and by overlooking the considerable interests supporting the Belfast Project’s confidentiality guarantee, the First Circuit erred both as a matter of precedent and of policy. At least one Supreme Court Justice has signaled a willingness to correct the mischief done by the First Circuit, and to clarify an area of First Amendment law where the Court’s guidance is sorely needed. The rest of the Court should take note.
December 5, 2012 at 10:45 am Tags: academic privilege, academy, Civil Rights, Constitutional Law, First Amendment, international law, privilege, treaties Posted in: Anonymity, Civil Rights, Constitutional Law, Current Events, First Amendment, International & Comparative Law, Law Rev (Stanford), Media Law
posted by Mary Anne Franks
My reaction to Robin West’s extraordinary scholarship always includes some mixture of distress and excitement: distress over the failures of law and humanity she describes with such devastating clarity, and excitement about the potential applications of her insights. In this post, I want to discuss how Robin’s critique of both liberal legalism and what she calls “neo-critical” legal theory in Normative Jurisprudence – particularly the former’s fetishization of individual rights and the latter’s decidedly uncritical celebration of consent – usefully illuminates the recent controversy over the outing of Michael Brutsch, aka “Violentacrez,” the man behind some of the most controversial forums on the popular social news website, reddit.com. One of these, the “/r/creepshot” forum (or “subreddit”), which encouraged users to submit surreptitious photographs of women and girls for sexual commentary, garnered national attention when it was discovered that a Georgia schoolteacher was posting pictures of his underage students. Brutsch’s outing (or “doxxing”) sparked outrage from many in the reddit community, and has led to an intriguing online and offline debate over Internet norms and practices. The defense of Brutsch and the forums he helped create – mostly sexual forums targeting women and girls – has been dominated by a highly selective conception of the right to privacy, the insistence on an unintelligibly broad conception of “consent,” and a frankly bewildering conception of the right to free speech. Attempts to criticize or curtail these forms of online abuse have also been primarily framed in terms of “rights,” to uncertain effect. Robin’s critiques of rights fetishism and the ideology of consent offer valuable insights into this developing debate.
I will attempt to briefly summarize (and no doubt oversimplify, though I hope not misrepresent) the points Robin makes that I think are most useful to this conversation. Liberal legalism’s focus on rights rests on a seductive fantasy of individual autonomy: it “prioritizes the liberty and autonomy of the independent individual, shrouds such a person in rights, grants him extraordinary powers within a wide ranging sphere of action, and in essence valorizes his freedom from the ties and bonds of community. It relegates, in turn, the interests, concerns, and cares of those of us who are not quite so autonomous or independent … those of us for whom our humanity is a function of our ties to others rather than our independence from them … to the realm of policy and political whim rather than the heightened airy domain of right, reason, and constitutional protection” (41). The critical legal studies movement attempted to correct some of this rights fetishism by pointing out that “rights” are not only radically indeterminate (i.e. rights can be interpreted and granted in conflicting ways), but that they are also legitimating (that is, bestowing the status of “right” on narrowly drawn freedoms can obscure the injustice and inequality that fall outside of them, thus insulating them from critique).
Robin persuasively demonstrates that neo-critical legal theorists held on to the indeterminacy thesis while jettisoning the critique of legitimation. Concerns about legitimation are concerns about suffering, and neo-crits are largely uninterested in, if not contemptuous of, suffering. Their primary concern is power and pleasure, which is accordingly supported by what Robin calls “the ideology of consent.” To the neo-crits, consent has the power to fully shield any act from either legal or moral critique. Robin addresses the way the ideology of consent plays out in the context of sex by looking to the work of Janet Halley. According to Robin, Halley espouses a view of sex that takes “[c]onsent to sex … as full justification for a collective blindness to both societal and individual pressures to engage in unwanted sex, so long as the sex is short of rape”(142). Sex is presumptively pleasurable, and as such presumptively immune from critique. As Robin describes Halley’s position, “sex is almost always innocent, and when consensual, there can be no ‘legitimate’ basis for criticism. Consensual sex is just too good to be circumscribed, or bound, by claims of its unwelcomeness or unwantedness. The claims that consensual sex is in fact unwelcome or unwanted are likely false in any event. The harms sustained, even if the claims are true, are trivial” (146). (I came to similar conclusions regarding Halley’s work in my review of her book, Split Decisions: How and Why to Take a Break from Feminism).
Now to apply these insights to the Michael Brutsch/creepshot controversy. The moderators of the creepshot subreddit provide this helpful definition of “creepshot” on the “subreddit details” page:
posted by Peter Swire
I just finished David Brin’s “Existence,” his biggest new novel in years. Brin, as some readers know, has won multiple Hugo and Nebula awards for best science fiction writing. He also wrote the 1999 non-fiction book “The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?”. More about that in a bit.
Existence is full of big ideas. A main focus is on the Fermi Paradox, which observes that we would expect to find other forms of life out there among the hundreds of billions of suns, but we haven’t seen evidence of that life yet. If you haven’t ever thought through the Fermi Paradox, I think it is a Genuine Big Question, and well worth contemplating. Fortunately for those who like their science mixed with fiction, Brin weaves fifty or so possible answers to the Fermi Paradox into his 550-page novel. Does climate change kill off other races? Nuclear annihilation? Do aliens upload themselves into computers once they get sophisticated (the “singularity”), so we never detect them across the void? And a lot, lot more.
It took me a little while to get into the book, but I read the last few hundred pages in a rush. I’ve had the pleasure of knowing Brin for a number of years, and find him personally and intellectually engaging. I was pleased by the book, because I think it will intrigue curious minds for a long time as our telescopic views of other planets deepen our puzzlement about the Fermi Paradox.
As for privacy, my own view is that the privacy academics didn’t take his 1999 book seriously enough as an intellectual event. One way to describe Brin’s insight is to say that surveillance in public becomes cheaper and more pervasive over time. For Brin, having “control” over your face, eye blinks, location, etc., etc. becomes futile and often counter-productive once cameras and other sensors are pervasive and searchable. Brin picked up on these themes in his earlier novel, “Earth,” where elderly people used video cameras to film would-be muggers, deterring the attacks. In the new novel, the pervasive use of the 2060 version of Google Glasses means that each person is empowered to see data overlays for any person they meet. (This part is similar to the novel “Rainbows End” by Brin’s friend Vernor Vinge.)
Surveillance in public is a big topic these days. I’ve worked with CDT and EFF on USvJones.com, which asked law academics to propose doctrine for surveillance in public. Facial recognition and drones are two of the hot privacy topics of the year, and each is a significant step toward the pervasive sensor world that Brin contemplated in his 1999 book.
So, if you like thinking about Big Ideas in novel form, buy Existence. And, if you would like to retain the Fair Information Principles in a near future of surveillance in public, consider Brin more carefully when you imagine how life will and should be in the coming decades.
posted by Danielle Citron
I’ve been in my book-writing foxhole, so much so that when the storm hit Maryland and D.C. and I did not lose power, I had no idea that nearly half of my state and our neighboring ones had none. But enough about hiding from the world (and the Internet): there are alarming stories about voting worth sharing now, with elections coming up, the only time the public seems to pay attention to the issue. Internet voting. One might say: in your dreams, pal, never going to happen. But in truth it is happening, with calls for more. Nineteen states offer some form of online voting, mostly for soldiers living overseas. The Military and Overseas Voter Empowerment Act requires states in most cases to get ballots to military and overseas voters well in advance of regularly scheduled federal elections, which has led states to adopt voting via e-mail and online for soldiers. (Other states, like Maryland, allow voters to download ballots online and mail them in.) Because these experiments have “worked,” more calls for voting online have been forthcoming on the grounds that people might then actually vote. It’s my understanding from voting activists that election boards are agitating for online voting, and it is a very bad idea. To state the utterly obvious, all things online are insecure — the infiltration of the Pentagon and countless companies, including financial ones, should instill fear about the sophistication of bad actors looking to steal state secrets, trade secrets, credit card numbers, SSNs, you name it. And online elections — what a target (think about all of the people who would bother: in a word, lots). Stuffing ballot boxes in a handful of precincts is quaint compared to the possibilities of malware, distributed denial of service attacks, and the like in a state or federal election. It is mind-blowing, really.
Scott Wolchok, Eric Wustrow, Dawn Isabel, and J. Alex Halderman of the University of Michigan recently released a study on the ease with which they hacked a pilot project on Internet voting run by Washington, D.C. The authors explain that within 48 hours of the system going live, they gained near-complete control of the election server, successfully changed every vote, and revealed almost every secret ballot. Election officials did not detect the intrusion until two business days later, and probably only because the authors deliberately left a prominent clue. Some respond to these sorts of concerns with “we bank online and it is safe, so we can vote online, if we just work hard enough at it.” But banking and voting involve very different activities, with very different needs for secrecy as between client/voter and bank/voting precinct. As the authors explain:
While Internet-based financial applications, such as online banking, share some of the threats faced by Internet voting, there is a fundamental difference in ability to deal with compromises after they have occurred. In the case of online banking, transaction records, statements, and multiple logs allow customers to detect specific fraudulent transactions and in many cases allow the bank to reverse them. Internet voting systems cannot keep such fine-grained transaction logs without violating ballot secrecy for voters. Even with these protections in place, banks suffer a significant amount of online fraud but write it off as part of the cost of doing business; fraudulent election results cannot be so easily excused.
The National Institute of Standards and Technology agrees. Chief among NIST’s concerns are malware and our lack of an infrastructure for secure electronic voter authentication. Amazingly, countries like Estonia and Switzerland have adopted Internet voting for national elections.
posted by Margot Kaminski
The Supreme Court had a busy day yesterday, and in the wake of healthcare, there’s a risk of overlooking an important addition to this Court’s First Amendment jurisprudence: U.S. v. Alvarez.
In short, the Court found that Congress can’t send you to jail just for lying. Alvarez confirms that this Court is extremely reluctant to create new First Amendment exceptions, and has a speech-protective understanding of the marketplace of ideas. Alvarez also leaves open some interesting questions, both doctrinal and practical.
Alvarez was prosecuted under the Stolen Valor Act (18 U.S.C. § 704) for lying about having received the Congressional Medal of Honor. What made this case particularly interesting, and probably what split the Court, is that Alvarez did not lie to gain money or to get a job. He didn’t lie for any apparent reason. He just lied.
The Court split 4-2-3, with six affirming the Ninth Circuit and finding the Act unconstitutional. Justice Kennedy wrote the plurality, Justice Breyer wrote the concurrence (joined by Justice Kagan), and Justice Alito rather unsurprisingly wrote the dissent.
The plurality forcefully reiterated what the Court articulated two years ago in U.S. v. Stevens (2010): content-based restrictions on speech are subject to strict scrutiny, with limited exceptions that have been clearly established in prior caselaw. What was (again!) at stake in this decision was whether the First Amendment protects all speech except for the familiar carveouts, or presents an “ad hoc balancing of relative social costs and benefits” with each new proposed exception (at 4, quoting U.S. v. Stevens (2010)).
The plurality went the First-Amendment-protective route. Its “historic and traditional categories” of First Amendment exceptions present a familiar roster: obscenity, fighting words, incitement, and the rest. False speech as false speech is not one of the historical exceptions, and the plurality made it perfectly clear that it does not plan to add to the list. In Stevens, then, the Court said what it meant about not intending to add to historical First Amendment exceptions. Future brief-writers would do well to keep this in mind.
Eugene Volokh, in his amicus brief, feared that if the Court went the route of protecting false speech, the First Amendment would become a patchwork of under-theorized exceptions to that rule. The plurality proved him wrong. It both articulated theoretical underpinnings for existing exceptions that do involve false speech and took the Government to task for advocating an overly restrictive understanding of the marketplace of ideas.
The plurality walked through two general categories of exceptions to First Amendment protection for false speech. These categories are effectively distinguished from most false speech as “false speech-plus.” Each is not just false speech, but has an additional element.
The first kind of false speech not subject to First Amendment protection is false speech where there is a legally cognizable harm to an individual, such as an invasion of privacy or legal costs. This category includes defamation and fraud (at 7). Robert Post might further add that these kinds of crimes and torts generally take place outside of the public sphere, and so are subject to less First Amendment protection because they involve individual relationships rather than public-facing speech.
The second kind of false speech not subject to First Amendment protection is false speech that impedes a government function (e.g., perjury or lying to a federal officer) or abuses government power without authorization (e.g., impersonating a government officer). Here, no direct injury to an individual is required. The plurality found that these two types of laws are similar because both “protect the integrity of Government processes” (at 9).
The more serious and broad-sweeping theoretical debate resolved by the Alvarez plurality concerns a fundamental understanding of the marketplace of ideas.
In the historical understanding of the marketplace of ideas, speech competes with speech towards the pursuit of “truth” (although truth is more accurately understood as political truth, not just truth in the sense of non-falsity). Thus Volokh is probably correct when he writes that historically, false speech was considered of lower value in the marketplace of ideas than true speech.
However, the present-day understanding of the marketplace of ideas is that it’s impossible to determine which speech has high value, and which speech has low value. Speech competes, and listeners choose what to believe, but there’s no competition towards an absolute truth-in-the-sense-of-non-falsity, or towards higher values that have been officially designated as such. The Court acknowledged as much in Cohen v. California, which often gets misread as being a case about political speech, where it’s in fact about protecting traditionally low-value expression.
The Alvarez plurality explicitly rejects the proposal that false speech is low value speech and thus not subject to full First Amendment protections. “The remedy for speech that is false is speech that is true. This is the ordinary course in a free society.” (at 15)
The plurality thus articulates a speech-protective and autonomy-driven understanding of the marketplace of ideas, where the marketplace is self-correcting, and Congress has no place determining what is true, or good or bad, apart from protecting individuals from legally cognizable harms and from abuse of government structures and government power.
Both doctrinal and practical questions remain after Alvarez, unsurprisingly.
Doctrinally, the question is what type of scrutiny applies to false speech. The plurality employed strict scrutiny, while the concurrence used intermediate scrutiny. It is not clear what the Court will employ in the future.
Using intermediate scrutiny to strike down the Act, it should be noted, creates a strange tension between this case and commercial speech doctrine, which allocates First Amendment protection only to commercial speech that is not misleading. Intermediate scrutiny may also raise questions about trademark dilution, where no competition, commercial harm, or likelihood of confusion need be shown. The concurrence thus struggles with trademark dilution on pp. 6-7, where the majority could probably get rid of —or at least restrict the scope of— the trademark problem by applying intermediate scrutiny.
Practically speaking, the Act might survive on rewriting. The Act might be rewritten to require that the liar lie for the purpose of receiving a benefit. Alternatively, the Act could be rewritten to penalize lying where the liar benefited from the lie (i.e., harm was accomplished as a result of the lie). If the Act were thus rewritten, it’s not clear how the plurality would treat it with respect to historic exceptions and their justifications. It also seems likely that the concurrence would switch sides.
It’s worth noting the implications of Alvarez for the ongoing discussion of anonymous speech, and the use of online personae. If Alvarez had gone the other way, the Court might have made it possible for Congress to prohibit the use of pseudonyms, or “fake names,” online. Lying about your identity is another way of describing choosing to hide your real identity, which would have brought the case into conflict with McIntyre v. Ohio and other doctrine on anonymous speech. I’m not sure that a good doctrinal distinction could be developed between positively asserting that you are another person, and choosing a pseudonym for the purpose of hiding your identity. For now, at least, thanks to Alvarez, the distinction between legal and illegal pseudonymous behavior appears to rest clearly in the additional element of harm the Court noted must be shown for fraud, or the performance of some other tort or crime.
There is another fast-developing area potentially impacted by Alvarez that the Program for the Study of Reproductive Justice at Yale has been working on all year: the regulation of Crisis Pregnancy Centers, where states require the centers to explain that they are not actually doctors and do not actually provide medical services such as abortion. On this issue, though, I’ll defer to my colleague Jennifer Keighley, who has a piece forthcoming on the matter.
But leaving all this aside, there’s a very simple reason Alvarez was correctly decided.
As Kozinski noted below, people lie an awful lot.
posted by Stanford Law Review
Volume 64 • Issue 5 • May 2012
Securities Class Actions Against Foreign Issuers
How Much Should Judges Be Paid?
June 19, 2012 at 1:37 am Posted in: Administrative Law, Anonymity, Behavioral Law and Economics, Civil Rights, Courts, Disability Law, Economic Analysis of Law, Employment Law, Financial Institutions, Law Rev (Stanford), Law Rev Contents
posted by Danielle Citron
By now, you’ve likely heard about the proposed EU regulation concerning the right to be forgotten. The drafters of the proposal expressed concern for social media users who have posted comments or photographs that they later regretted. Commissioner Reding explained: “If an individual no longer wants his personal data to be processed or stored by a data controller, and if there is no legitimate reason for keeping it, the data should be removed from their system.”
Proposed Article 17 provides:
[T]he data subject shall have the right to obtain from the controller the erasure of personal data relating to them and the abstention from further dissemination of such data, especially in relation to personal data which are made available by the data subject while he or she was a child, where one of the following grounds applies . . . .
Where the controller referred to in paragraph 1 has made the personal data public, it shall take all reasonable steps, including technical measures, in relation to data for the publication of which the controller is responsible, to inform third parties which are processing such data, that a data subject requests them to erase any links to, or copy or replication of that personal data. Where the controller has authorised a third party publication of personal data, the controller shall be considered responsible for that publication.
The controller shall carry out the erasure without delay, except to the extent that the retention of the personal data is necessary: (a) for exercising the right of freedom of expression in accordance with Article 80; (b) for reasons of public interest in the area of public health in accordance with Article 81; (c) for historical, statistical and scientific research purposes in accordance with Article 83; (d) for compliance with a legal obligation to retain the personal data by Union or Member State law to which the controller is subject . . . . Read the rest of this post »
posted by Deven Desai
Do you want everyone to know, automatically, what book you read, what film you watch, what search you perform? No? Yes? Why? Why not? It is odd to me that the ideas behind the Video Privacy Protection Act have not prompted a rather quick extension. But there is a debate about whether our intellectual consumption should have privacy protection and, if so, what that should look like. Luckily, Neil Richards has some answers. His post on Social Reading is a good read. His response to the idea that automatic sharing is wise and benefits all captures some core points:
Not so fast. The sharing of book, film, and music recommendations is important, and social networking has certainly made this easier. But a world of automatic, always-on disclosure should give us pause. What we read, watch, and listen to matter, because they are how we make up our minds about important social issues – in a very real sense, they’re how we make sense of the world.
What’s at stake is something I call “intellectual privacy” – the idea that records of our reading and movie watching deserve special protection compared to other kinds of personal information. The films we watch, the books we read, and the web sites we visit are essential to the ways we try to understand the world we live in. Intellectual privacy protects our ability to think for ourselves, without worrying that other people might judge us based on what we read. It allows us to explore ideas that other people might not approve of, and to figure out our politics, sexuality, and personal values, among other things. It lets us watch or read whatever we want without fear of embarrassment or being outed. This is the case whether we’re reading communist, gay teen, or anti-globalization books; or visiting web sites about abortion, gun control, or cancer; or watching videos of pornography, or documentaries by Michael Moore, or even “The Hangover 2.”
And before you go off and say Neil doesn’t get “it” whatever “it” may be, note that he is making a good distinction: “when we share – when we speak – we should do so consciously and deliberately, not automatically and unconsciously. Because of the constitutional magnitude of these values, our social, technological, professional, and legal norms should support rather than undermine our intellectual privacy.”
I heartily recommend reading the full post. For those interested in a little more on the topic, the full paper is forthcoming in the Georgetown Law Journal and available here. And, if you don’t know Neil Richards’ work (SSRN), you should. Even if you disagree with him, Neil’s writing is of that rare sort that leaves you better off for having read it. The clean style and sharp ideas force one to engage and think, and thus they also allow one to call out problems so that understanding moves forward. (See Orwell, Politics and the English Language.) Enjoy.
posted by Stanford Law Review
The Stanford Law Review Online has just published Chief Judge Alex Kozinski’s Keynote from our 2012 Symposium, The Dead Past. Chief Judge Kozinski discusses the privacy implications of our increasingly digitized world and our role as a society in shaping the law:
I must start out with a confession: When it comes to technology, I’m what you might call a troglodyte. I don’t own a Kindle or an iPad or an iPhone or a Blackberry. I don’t have an avatar or even voicemail. I don’t text.
I don’t reject technology altogether: I do have a typewriter—an electric one, with a ball. But I do think that technology can be a dangerous thing because it changes the way we do things and the way we think about things; and sometimes it changes our own perception of who we are and what we’re about. And by the time we realize it, we find we’re living in a different world with different assumptions about such fundamental things as property and privacy and dignity. And by then, it’s too late to turn back the clock.
Judges, legislators and law enforcement officials live in the real world. The opinions they write, the legislation they pass, the intrusions they dare engage in—all of these reflect an explicit or implicit judgment about the degree of privacy we can reasonably expect by living in our society. In a world where employers monitor the computer communications of their employees, law enforcement officers find it easy to demand that internet service providers give up information on the web-browsing habits of their subscribers. In a world where people post up-to-the-minute location information through Facebook Places or Foursquare, the police may feel justified in attaching a GPS to your car. In a world where people tweet about their sexual experiences and eager thousands read about them the morning after, it may well be reasonable for law enforcement, in pursuit of terrorists and criminals, to spy with high-powered binoculars through people’s bedroom windows or put concealed cameras in public restrooms. In a world where you can listen to people shouting lurid descriptions of their gall-bladder operations into their cell phones, it may well be reasonable to ask telephone companies or even doctors for access to their customer records. If we the people don’t consider our own privacy terribly valuable, we cannot count on government—with its many legitimate worries about law-breaking and security—to guard it for us.
Which is to say that the concerns that have been raised about the erosion of our right to privacy are, indeed, legitimate, but misdirected. The danger here is not Big Brother; the government, and especially Congress, have been commendably restrained, all things considered. The danger comes from a different source altogether. In the immortal words of Pogo: “We have met the enemy and he is us.”
April 12, 2012 at 1:32 pm Posted in: Anonymity, Blogging, Constitutional Law, Courts, Culture, Current Events, Cyberlaw, First Amendment, Google & Search Engines, Law Rev (Stanford), Politics, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Law Enforcement), Science Fiction, Supreme Court, Technology
posted by Deven Desai
The Boston Phoenix has an article about what Facebook coughs up when a subpoena is sent to the company. The paper came across the material as it worked on an article called Hunting the Craigslist Killer. The issues that come to mind for me are:
1. Privacy after death? In my article Property, Persona, and Preservation, which uses the question of who owns email after death, I argue that privacy after death isn’t tenable. The release of information after someone dies (as ZDNet put it, “[t]he man committed suicide, which meant the police didn’t care if the Facebook document was published elsewhere, after robbing two women and murdering a third”) brings up a question Dan Solove and I have debated. What about those connected to the dead person? The facts here matter.
2. What are reasons to redact or not release information? Key facts about redaction and public records complicate the question of death and privacy. I’m assuming the person has no privacy after death. But his or her papers may reveal information about those connected to the dead person. In this case the police did not redact, but the paper did. Sort of.
This document was publicly released by Boston Police as part of the case file. In other case documents, the police have clearly redacted sensitive information. And while the police were evidently comfortable releasing Markoff’s unredacted Facebook subpoena, we weren’t. Markoff may be dead, but the very-much-alive friends in his friend list were not subpoenaed, and yet their full names and Facebook ID’s were part of the document. So we took the additional step of redacting as much identifying information as we could — knowing that any redaction we performed would be imperfect, but believing that there’s a strong argument for distributing this, not only for its value in illustrating the Markoff case, but as a rare window into the shadowy process by which Facebook deals with law enforcement.
As the comments noted and the explanation admits, the IDs and other information of the living are arguably in greater need of protection. It may have been that the police needed all the information for its case, but why release it to the public?
Obvious Closing: As we put more into the world, it will come back in ways we had not imagined. I doubt that bright-line rules will ever work in this space. But it seems to me that some sort of best practices informed by research (think Lior Strahilevitz’s A Social Networks Theory of Privacy) could allow for reasonable, useful privacy practices. The hardest part for law and society in general is that this area (information-related law) is not likely to be stable for some time. That being said, I think that the insane early domain name law (yes, someone could think that megacorpsucks.com is sponsored by megacorp) corrected itself in about ten years. Perhaps privacy and information practices will reach an equilibrium that allows the law to stabilize. Until then, practices, businesses, science, and the law will twirl around each other as society sorts out what balance makes sense (until something messes with that moment).
posted by Danielle Citron
In “Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age,” 91 B.U. L. Rev. 1435 (2011), Helen Norton and I offered moral and policy justifications in support of intermediaries who choose to engage in voluntary efforts to combat hate speech. As we noted, many intermediaries like Facebook already choose to address online hatred in some way. We urged intermediaries to think and speak more carefully about the harms they hope to forestall when developing hate speech policies and offered an array of definitions of hate speech to help them do so. We argued for the adoption of a “transparency principle,” by which we meant that intermediaries can, and should, valuably advance the fight against digital hate by clearly and specifically explaining to users the harms that their hate speech policies address as well as the consequences of policy violations. With more transparency regarding the specific reasons for choosing to address digital hate, intermediaries can make behavioral expectations more understandable. Without it, intermediaries will be less effective in expressing what it means to be responsible users of their services.
Our call for transparency has moved an important step forward, and last night I learned how while discussing anonymity, privacy, and hate speech with CDT’s brilliant Kevin Bankston and Hogan’s privacy luminary Chris Wolf at an event sponsored by the Anti-Defamation League. Kevin shared with us Facebook’s “Abuse Standards 6.2,” first leaked and then explicitly revised and released to the public, which makes clear what the company counts as an abuse-standard violation. Let me back up for a minute: Facebook’s Terms of Service (TOS) prohibit “hate speech,” an ambiguous term with broad and narrow meanings, as Helen and I explored in our article. But Facebook, like so many intermediaries, didn’t explain to users what it meant when it said it prohibited hate speech. Did the prohibition cover just explicit demeaning threats to traditionally subordinated groups, or demeaning speech that approximates intentional infliction of emotional distress, or, instead, did it more broadly cover slurs and epithets and/or group defamation? Facebook’s leaked “Operation Manual For Live Content Moderators” helpfully explains what it means by “hate content”:
slurs or racial comments of any kind, attacking based on protected category, hate symbols, either out of context or in the context of hate phrases or support of hate groups, showing support for organizations and people primarily known for violence, depicting symbols primarily known for hate and violence, unless comments are clearly against them, photos comparing two people (or an animal and a person that resembles that animal) side by side in a “versus photo,” photo-shopped images showing the subject in a negative light, images of drunk and unconscious people, or sleeping people with things drawn on their faces, and videos of street/bar/ school yard fights even if no valid match is found (School fight videos are only confirmed if the video has been posted to continue tormenting the person targeted in the video).
The manual goes on to note that “Hate symbols are confirmed if there’s no context OR if hate phrases are used” and “Humor overrules hate speech UNLESS slur words are present or the humor is not evident.” That seems a helpful guide for safety operators on how to navigate what seems more like humor than hate, recognizing some of the challenges that operators surely face in assessing content. And note too Facebook’s consistency on Holocaust denial: it isn’t prohibited in the U.S., only IP-blocked for countries that ban such speech. And Facebook employees have been transparent about why. As a wise Facebook employee explained (and I’m paraphrasing here): if people want to show their ignorance about the Holocaust, let them do so in front of their friends and colleagues (hence the significance of FB’s real-name policy). He said, let their friends counter that speech and embarrass them for being so asinine. The policy goes on to talk specifically about bullying and harassment, including barring attacks on anyone based on their status as a sexual assault or rape victim, as well as barring persistent contact with users without prior solicitation or continued contact after the other party has said they want no further contact (which sounds much like many criminal harassment laws, including Maryland’s). It also bars “credible threats,” defined as including “credible threats or incitement of physical harm against anyone, credible indications of organizing acts of present or future violence,” which seems to cover groups like “Kill a Jew Day” (removed promptly by FB). The policy also gave examples, another important step, and something we talked about last May at Stanford during a roundtable on our article with safety officers from major intermediaries (I think I can’t say who came, given the Chatham House type of rules of conversation).
See the examples on sexually explicit language and sexual solicitation; they are incredibly helpful and, I think, incredibly important for tackling cyber gender harassment.
As Kevin said, and Chris and I enthusiastically agreed, this memo is significant. Companies should follow FB’s lead. Whether you agree or disagree with these definitions, users now know what FB means by hate speech, at least far more than they did before. And users can debate it and tell FB that they think the policy is wanting and why. FB can take those conversations into consideration; it certainly has in other instances when users expressed their displeasure about moves FB was making. Now, let me be a demanding user: I want to know what this all means. Does prohibited content get removed, or passed on for further discussion? Do users get the chance to take down violating content first? Do they get notice? Users need to know what happens when they violate the TOS. That too helps users understand their rights and responsibilities as digital citizens. In any event, I’m hoping that this encourages FB to release future iterations of its policy to users voluntarily and that it encourages its fellow intermediaries to do the same. Bravo to Facebook.
Some thoughts on Cohen’s Configuring the Networked Self: Law, Code, and the Play of Everyday Practice
posted by Brett Frischmann
Julie Cohen’s book is fantastic. Unfortunately, I am late to join the symposium, but it has been a pleasure playing catch up with the previous posts. Reading over the exchanges thus far has been a treat and a learning experience. Like Ian Kerr, I felt myself reflecting on my own commitments and scholarship. This is really one of the great virtues of the book. To prepare to write something for the blog symposium, I reread portions of the book a second time; maybe a third time, since I have read many of the law review articles upon which the book is based. And frankly, each time I read Julie’s scholarship I am forced to think deeply about my own methodology, commitments, theoretical orientation, and myopias. Julie’s critical analysis of legal and policy scholarship, debate, and rhetoric is unyielding as it cuts to the core commitments and often unstated assumptions that I (we) take for granted.
I share many of the same concerns as Julie about information law and policy (and I reach similar prescriptions too), and yet I approach them from a very different perspective, one that is heavily influenced by economics. Reading her book challenged me to confront my own perspective critically. Do I share the commitments and methodological infirmities of the neoliberal economists she lambasts? Upon reflection, I don’t think so. The reason is that not all of economics boils down to reductionist models that aim to tally up quantifiable costs and benefits. I agree wholeheartedly with Julie that economic models of copyright (or creativity, innovation, or privacy) that purport to accurately sum up relevant benefits and costs and fully capture the complexity of cultural practices are inevitably, fundamentally flawed, and that uncritical reliance on such models to formulate policy is distorting and biased toward seamless micromanagement and control. As she argues in her book, reliance on such models “focuses on what is known (or assumed) about benefits and costs, … [and] tends to crowd out the unknown and unpredictable, with the result that play remains a peripheral consideration, when it should be central.” Interestingly, I make nearly the same argument in my book, although my argument is grounded in economic theory and my focus is on user activities that generate public and social goods. I need to think more about the connections between her concept of play and the user activities I examine. But a key shared concept is that indeterminacy in the environment and in the structure of rights and affordances sustains user capabilities, and this is (might be) normatively attractive whether or not users choose to exercise those capabilities. That is, there is social (option) value in sustaining flexibility and uncertainty.
Like Julie, I have been drawn to the Capabilities Approach (CA). It provides a normatively appealing framework for thinking about what matters in information policy—that is, for articulating ends. But it seems to pay insufficient attention to the means. I have done some limited work on the CA and information policy and hope to do more in the future. Julie has provided an incredible roadmap. In chapter 9, The Structural Conditions of Human Flourishing, she goes beyond identifying which capabilities to prioritize and examines the means for enabling them. In my view, this is a major contribution. Specifically, she discusses three structural conditions for human flourishing: (1) access to knowledge, (2) operational transparency, and (3) semantic discontinuity. I don’t have much to say about the access to knowledge and operational transparency discussions, other than “yep.” The semantic discontinuity discussion left me wanting more: more explanation of the concept and more explanation of how to operationalize it. I wanted more because I think it is spot on. Paul and others have already discussed this, so I will not repeat what they’ve said. But, riffing off of Paul’s post, I wonder whether it is a mistake to conceptualize semantic discontinuity as “gaps” and ask privacy, copyright, and other laws to widen the gaps. I wonder whether the “space” of semantic discontinuities is better conceptualized as the default or background environment rather than the exceptional “gap.” Maybe this depends on the context or legal structure, but I think the relevant semantic discontinuities where play flourishes, our everyday social and cultural experiences, are and should be the norm. (Is the public domain merely a gap in copyright law? Or is copyright law a gap in the public domain?) Baselines matter. If the gap metaphor is still appealing, perhaps it would be better to describe them as gulfs.
posted by Derek Bambauer
Lifehacker‘s Adam Dachis has a great article on how users can deal with a world in which they infringe copyright constantly, both deliberately and inadvertently. (Disclaimer alert: I talked with Adam about the piece.) It’s a practical guide to a strict liability regime – no intent / knowledge requirement for direct infringement – that operates not as a coherent body of law, but as a series of reified bargains among stakeholders. And props to Adam for the Downfall reference! I couldn’t get by without the mockery of the iPhone or SOPA that it makes possible…
Cross-posted to Info/Law.
February 27, 2012 at 2:14 pm Posted in: Anonymity, Architecture, Culture, Current Events, Cyberlaw, DRM, Education, Google and Search Engines, Innovation, Intellectual Property, Interviews, Media Law, Movies & Television, Politics, Social Network Websites, Technology, Web 2.0
posted by Derek Bambauer
(This post is based on a talk I gave at the Seton Hall Legislative Journal’s symposium on Bullying and the Social Media Generation. Many thanks to Frank Pasquale, Marisa Hourdajian, and Michelle Newton for the invitation, and to Jane Yakowitz and Will Creeley for a great discussion!)
New Jersey enacted the Anti-Bullying Bill of Rights (ABBR) in 2011, in part as a response to the tragic suicide of Tyler Clementi at Rutgers University. It is routinely lauded as the country’s broadest, most inclusive, and strongest anti-bullying law. That is not entirely a compliment. In this post, I make two core claims. First, the Anti-Bullying Bill of Rights has several aspects that are problematic from a First Amendment perspective – in particular, the overbreadth of its definition of prohibited conduct, the enforcement discretion afforded school personnel, and the risk of impingement upon religious and political freedoms. I argue that the legislation departs from established precedent on disruptions of the educational environment by regulating horizontal relations between students rather than vertical relations between students and the school as an institution / environment. Second, I believe we should be cautious about statutory regimes that enable government actors to sanction speech based on content. I suggest that it is difficult to distinguish, on a principled basis, between bullying (which is bad) and social sanctions that enforce norms (which are good). Moreover, anti-bullying laws risk displacing effective informal measures that emerge from peer production. Read the rest of this post »
February 21, 2012 at 10:20 pm Posted in: Anonymity, Blogging, Bright Ideas, Civil Rights, Conferences, Constitutional Law, Culture, Current Events, Cyber Civil Rights, Cyberlaw, Education, First Amendment, Media Law, Politics, Privacy (Gossip & Shaming), Psychology and Behavior, Race, Religion, Social Network Websites, Technology, Web 2.0
posted by Derek Bambauer
On RocketLawyer’s Legally Easy podcast, I talk with Charley Moore and Eva Arevuo about the EU’s proposed “right to be forgotten” and privacy as censorship. I was inspired by Jeff Rosen and Jane Yakowitz‘s critiques of the approach, which actually appears to be a “right to lie effectively.” If you can disappear unflattering – and truthful – information, it lets you deceive others – in other words, you benefit and they are harmed. The EU’s approach is a blunderbuss where a scalpel is needed.
Cross-posted at Info/Law.
February 17, 2012 at 12:01 pm Posted in: Anonymity, Architecture, Civil Rights, Consumer Protection Law, Culture, Current Events, Cyber Civil Rights, Cyberlaw, First Amendment, Google and Search Engines, Innovation, Media Law, Political Economy, Politics, Privacy, Technology, Web 2.0
posted by Derek Bambauer
Cybersecurity is in the news: a network intrusion allegedly interfered with railroad signals in the Northwest in December; the Obama administration refused to support the Stop Online Piracy Act due to worries about interfering with DNSSEC; and the GAO concluded that the Department of Homeland Security is making things worse by oversharing. So, I’m fortunate that the Minnesota Law Review has just published the final version of Conundrum (available on SSRN), in which I argue that we should take an information-based approach to cybersecurity:
Cybersecurity is a conundrum. Despite a decade of sustained attention from scholars, legislators, military officials, popular media, and successive presidential administrations, little if any progress has been made in augmenting Internet security. Current scholarship on cybersecurity is bound to ill-fitting doctrinal models. It addresses cybersecurity based upon identification of actors and intent, arguing that inherent defects in the Internet’s architecture must be remedied to enable attribution. These proposals, if adopted, would badly damage the Internet’s generative capacity for innovation. Drawing upon scholarship in economics, animal behavior, and mathematics, this Article takes a radical new path, offering a theoretical model oriented around information, in distinction to the near-obsession with technical infrastructure demonstrated by other models. It posits a regulatory focus on access and alteration of data, and on guaranteeing its integrity. Counterintuitively, it suggests that creating inefficient storage and connectivity best protects user capabilities to access and alter information, but this necessitates difficult tradeoffs with preventing unauthorized interaction with data. The Article outlines how to implement inefficient information storage and connectivity through legislation. Lastly, it describes the stakes in cybersecurity debates: adopting current scholarly approaches jeopardizes not only the Internet’s generative architecture, but also key normative commitments to free expression on-line.
Conundrum, 96 Minn. L. Rev. 584 (2011).
Cross-posted at Info/Law.
January 24, 2012 at 4:13 pm Posted in: Anonymity, Architecture, Articles and Books, Current Events, Cyberlaw, Innovation, Intellectual Property, Law Rev (Minnesota), Military Law, Politics, Privacy (National Security), Technology, Web 2.0