Lie to me: the First Amendment in US v. Alvarez

The Supreme Court had a busy day yesterday, and in the wake of healthcare, there’s a risk of overlooking an important addition to this Court’s First Amendment jurisprudence: U.S. v. Alvarez.

In short, the Court found that Congress can’t send you to jail just for lying. Alvarez confirms that this Court is extremely reluctant to create new First Amendment exceptions, and has a speech-protective understanding of the marketplace of ideas. Alvarez also leaves open some interesting questions, both doctrinal and practical.

Alvarez was prosecuted under the Stolen Valor Act (18 U.S.C. § 704) for lying about having received the Congressional Medal of Honor. What made this case particularly interesting, and probably what split the Court, is that Alvarez did not lie to gain money, or to get a job. He didn’t lie for any apparent reason. He just lied.

The Court split 4-2-3, with six affirming the Ninth Circuit and finding the Act unconstitutional. Justice Kennedy wrote the plurality, Justice Breyer wrote the concurrence (joined by Justice Kagan), and Justice Alito rather unsurprisingly wrote the dissent.

The plurality forcefully reiterated what the Court articulated two years ago in U.S. v. Stevens (2010): content-based restrictions on speech are subject to strict scrutiny, with limited exceptions that have been clearly established in prior caselaw. What was (again!) at stake in this decision was whether the First Amendment protects all speech except for the familiar carveouts, or presents an “ad hoc balancing of relative social costs and benefits” with each new proposed exception (at 4, quoting U.S. v. Stevens (2010)).

The plurality went the First-Amendment-protective route. Its “historic and traditional categories” of First Amendment exceptions present a familiar roster:  obscenity, fighting words, incitement, and the rest. False speech as false speech is not one of the historical exceptions, and the plurality made it perfectly clear that it does not plan to add to the list. In Stevens, then, the Court said what it meant about not intending to add to historical First Amendment exceptions. Future brief-writers would do well to keep this in mind.

Eugene Volokh in his amicus brief feared that if the Court went the route of protecting false speech, the First Amendment would become a patchwork of under-theorized exceptions to that rule. The plurality proved him wrong. It both articulated theoretical underpinnings for existing exceptions that do involve false speech, and took the Government to task for advocating an overly restrictive understanding of the marketplace of ideas.

The plurality walked through two general categories of exceptions to First Amendment protection for false speech. These categories are effectively distinguished from most false speech as “false speech-plus.” Each is not just false speech, but has an additional element.

The first kind of false speech not subject to First Amendment protection is false speech where there is a legally cognizable harm to an individual, such as an invasion of privacy or legal costs. This category includes defamation and fraud (at 7). Robert Post might further add that these kinds of crimes and torts generally take place outside of the public sphere, and so are subject to less First Amendment protection because they involve individual relationships rather than public-facing speech.

The second kind of false speech not subject to First Amendment protection is false speech that impedes a government function (e.g., perjury or lying to a federal officer), or abuses government power without authorization (e.g., impersonating a Government officer). Here, no direct injury to an individual is required. The plurality found that these two types of laws are similar because both “protect the integrity of Government processes” (at 9).

The more serious and broad-sweeping theoretical debate resolved by the Alvarez plurality concerns a fundamental understanding of the marketplace of ideas.

In the historical understanding of the marketplace of ideas, speech competes with speech towards the pursuit of “truth” (although truth is more accurately understood as political truth, not just truth in the sense of non-falsity). Thus Volokh is probably correct when he writes that historically, false speech was considered of lower value in the marketplace of ideas than true speech.

However, the present-day understanding of the marketplace of ideas is that it’s impossible to determine which speech has high value, and which speech has low value. Speech competes, and listeners choose what to believe, but there’s no competition towards an absolute truth-in-the-sense-of-non-falsity, or towards higher values that have been officially designated as such. The Court acknowledged as much in Cohen v. California, which often gets misread as a case about political speech, when it’s in fact about protecting traditionally low-value expression.

The Alvarez plurality explicitly rejects the proposal that false speech is low value speech and thus not subject to full First Amendment protections. “The remedy for speech that is false is speech that is true. This is the ordinary course in a free society.” (at 15)

The plurality thus articulates a speech-protective and autonomy-driven understanding of the marketplace of ideas, where the marketplace is self-correcting, and Congress has no place determining what is true, or good or bad, apart from protecting individuals from legally cognizable harms and from abuse of government structures and government power.

Both doctrinal and practical questions remain after Alvarez, unsurprisingly.

Doctrinally, the question is what type of scrutiny applies to false speech. The plurality employed strict scrutiny, while the concurrence used intermediate scrutiny. It is not clear what the Court will employ in the future.

Using intermediate scrutiny to strike down the Act, it should be noted, creates a strange tension between this case and commercial speech doctrine, which allocates First Amendment protection only to commercial speech that is not misleading. Intermediate scrutiny may also raise questions about trademark dilution, where no competition, commercial harm, or likelihood of confusion need be shown. The concurrence thus struggles with trademark dilution on pp. 6-7, where the majority could probably get rid of, or at least restrict the scope of, the trademark problem by applying intermediate scrutiny.

Practically speaking, the Act might survive on rewriting. The Act might be rewritten to require that the liar lie for the purpose of receiving a benefit. Alternatively, the Act could be rewritten to penalize lying where the liar benefited from the lie (i.e., harm was accomplished as a result of the lie). If the Act were thus rewritten, it’s not clear how the plurality would treat it with respect to historic exceptions and their justifications. It also seems likely that the concurrence would switch sides.

It’s worth noting the implications of Alvarez for the ongoing discussion of anonymous speech, and the use of online personae. If Alvarez had gone the other way, the Court might have made it possible for Congress to prohibit the use of pseudonyms, or “fake names,” online. Lying about your identity is, after all, another way of describing the choice to hide your real identity, which would have brought the case into conflict with McIntyre v. Ohio and other doctrine on anonymous speech. I’m not sure that a good doctrinal distinction could be developed between positively asserting that you are another person, and choosing a pseudonym for the purpose of hiding your identity. For now, at least, thanks to Alvarez, the distinction between legal and illegal pseudonymous behavior appears to rest clearly in the additional element of harm the Court noted must be shown for fraud, or the performance of some other tort or crime.

There is another fast-developing area potentially impacted by Alvarez that the Program for the Study of Reproductive Justice at Yale has been working on all year: the regulation of Crisis Pregnancy Centers, where states require the centers to explain that they are not actually doctors and do not actually provide medical services such as abortion. On this issue, though, I’ll defer to my colleague Jennifer Keighley, who has a piece forthcoming on the matter.

But leaving all this aside, there’s a very simple reason Alvarez was correctly decided.

As Kozinski noted below, people lie an awful lot.

Stanford Law Review, 64.5 (2012)

Stanford Law Review

Volume 64 • Issue 5 • May 2012

Articles
The City and the Private Right of Action
Paul A. Diller
64 Stan. L. Rev. 1109

Securities Class Actions Against Foreign Issuers
Merritt B. Fox
64 Stan. L. Rev. 1173

How Much Should Judges Be Paid?
An Empirical Study on the Effect of Judicial Pay on the State Bench

James M. Anderson & Eric Helland
64 Stan. L. Rev. 1277

Note
How Congress Could Reduce Job Discrimination by Promoting Anonymous Hiring
David Hausman
64 Stan. L. Rev. 1343

The Right to Be Forgotten: A Criminal’s Best Friend?

By now, you’ve likely heard about the proposed EU regulation concerning the right to be forgotten. The drafters of the proposal expressed concern for social media users who have posted comments or photographs that they later regretted. Commissioner Reding explained: “If an individual no longer wants his personal data to be processed or stored by a data controller, and if there is no legitimate reason for keeping it, the data should be removed from their system.”

Proposed Article 17 provides:

[T]he data subject shall have the right to obtain from the controller the erasure of personal data relating to them and the abstention from further dissemination of such data, especially in relation to personal data which are made available by the data subject while he or she was a child, where one of the following grounds applies . . . .

Where the controller referred to in paragraph 1 has made the personal data public, it shall take all reasonable steps, including technical measures, in relation to data for the publication of which the controller is responsible, to inform third parties which are processing such data, that a data subject requests them to erase any links to, or copy or replication of that personal data. Where the controller has authorised a third party publication of personal data, the controller shall be considered responsible for that publication.

The controller shall carry out the erasure without delay, except to the extent that the retention of the personal data is necessary: (a) for exercising the right of freedom of expression in accordance with Article 80; (b) for reasons of public interest in the area of public health in accordance with Article 81; (c) for historical, statistical and scientific research purposes in accordance with Article 83; (d) for compliance with a legal obligation to retain the personal data by Union or Member State law to which the controller is subject . . . .

Hey Look at Me! I’m Reading! (Or Not) Neil Richards on Social Reading

Do you want everyone to know, automatically, what book you read, what film you watch, what search you perform? No? Yes? Why? Why not? It is odd to me that the ideas behind the Video Privacy Protection Act have not prompted a rather quick extension to these newer contexts. But there is a debate about whether our intellectual consumption should have privacy protection, and if so, what that should look like. Luckily, Neil Richards has some answers. His post on Social Reading is a good read. His response to the idea that automatic sharing is wise and benefits all captures some core points:

Not so fast. The sharing of book, film, and music recommendations is important, and social networking has certainly made this easier. But a world of automatic, always-on disclosure should give us pause. What we read, watch, and listen to matter, because they are how we make up our minds about important social issues – in a very real sense, they’re how we make sense of the world.

What’s at stake is something I call “intellectual privacy” – the idea that records of our reading and movie watching deserve special protection compared to other kinds of personal information. The films we watch, the books we read, and the web sites we visit are essential to the ways we try to understand the world we live in. Intellectual privacy protects our ability to think for ourselves, without worrying that other people might judge us based on what we read. It allows us to explore ideas that other people might not approve of, and to figure out our politics, sexuality, and personal values, among other things. It lets us watch or read whatever we want without fear of embarrassment or being outed. This is the case whether we’re reading communist, gay teen, or anti-globalization books; or visiting web sites about abortion, gun control, or cancer; or watching videos of pornography, or documentaries by Michael Moore, or even “The Hangover 2.”

And before you go off and say Neil doesn’t get “it” whatever “it” may be, note that he is making a good distinction: “when we share – when we speak – we should do so consciously and deliberately, not automatically and unconsciously. Because of the constitutional magnitude of these values, our social, technological, professional, and legal norms should support rather than undermine our intellectual privacy.”

I easily recommend reading the full post. For those interested in a little more on the topic, the full paper is forthcoming in Georgetown Law Journal and available here. And, if you don’t know Neil Richards’ work (SSRN), you should. Even if you disagree with him, Neil’s writing is of that rare sort where you are better off by reading it. The clean style and sharp ideas force one to engage and think, and thus they also allow one to call out problems so that understanding moves forward. (See Orwell, Politics and the English Language). Enjoy.

Stanford Law Review Online: The Dead Past

Stanford Law Review

The Stanford Law Review Online has just published Chief Judge Alex Kozinski’s Keynote from our 2012 Symposium, The Dead Past. Chief Judge Kozinski discusses the privacy implications of our increasingly digitized world and our role as a society in shaping the law:

I must start out with a confession: When it comes to technology, I’m what you might call a troglodyte. I don’t own a Kindle or an iPad or an iPhone or a Blackberry. I don’t have an avatar or even voicemail. I don’t text.

I don’t reject technology altogether: I do have a typewriter—an electric one, with a ball. But I do think that technology can be a dangerous thing because it changes the way we do things and the way we think about things; and sometimes it changes our own perception of who we are and what we’re about. And by the time we realize it, we find we’re living in a different world with different assumptions about such fundamental things as property and privacy and dignity. And by then, it’s too late to turn back the clock.

He concludes:

Judges, legislators and law enforcement officials live in the real world. The opinions they write, the legislation they pass, the intrusions they dare engage in—all of these reflect an explicit or implicit judgment about the degree of privacy we can reasonably expect by living in our society. In a world where employers monitor the computer communications of their employees, law enforcement officers find it easy to demand that internet service providers give up information on the web-browsing habits of their subscribers. In a world where people post up-to-the-minute location information through Facebook Places or Foursquare, the police may feel justified in attaching a GPS to your car. In a world where people tweet about their sexual experiences and eager thousands read about them the morning after, it may well be reasonable for law enforcement, in pursuit of terrorists and criminals, to spy with high-powered binoculars through people’s bedroom windows or put concealed cameras in public restrooms. In a world where you can listen to people shouting lurid descriptions of their gall-bladder operations into their cell phones, it may well be reasonable to ask telephone companies or even doctors for access to their customer records. If we the people don’t consider our own privacy terribly valuable, we cannot count on government—with its many legitimate worries about law-breaking and security—to guard it for us.

Which is to say that the concerns that have been raised about the erosion of our right to privacy are, indeed, legitimate, but misdirected. The danger here is not Big Brother; the government, and especially Congress, have been commendably restrained, all things considered. The danger comes from a different source altogether. In the immortal words of Pogo: “We have met the enemy and he is us.”

Read the full article, The Dead Past by Alex Kozinski, at the Stanford Law Review Online.

Facebook Subpoenas, Open Court Records, Here We Go Again

The Boston Phoenix has an article about what Facebook coughs up when a subpoena is sent to the company. The paper came across the material as it worked on an article called Hunting the Craigslist Killer. The issues that come to mind for me are:

1. Privacy after death? In my article Property, Persona, and Preservation, which uses the question of who owns email after death, I argue that privacy after death isn’t tenable. The release of information after someone dies (from ZDNet: “The man committed suicide, which meant the police didn’t care if the Facebook document was published elsewhere, after robbing two women and murdering a third.”) brings up a question Dan Solove and I have debated. What about those connected to the dead person? The facts here matter.

2. What are reasons to redact or not release information? Key facts about redaction and public records complicate the question of death and privacy. I’m assuming the person has no privacy after death. But his or her papers may reveal information about those connected to the dead person. In this case the police did not redact, but the paper did. Sort of.

This document was publicly released by Boston Police as part of the case file. In other case documents, the police have clearly redacted sensitive information. And while the police were evidently comfortable releasing Markoff’s unredacted Facebook subpoena, we weren’t. Markoff may be dead, but the very-much-alive friends in his friend list were not subpoenaed, and yet their full names and Facebook ID’s were part of the document. So we took the additional step of redacting as much identifying information as we could — knowing that any redaction we performed would be imperfect, but believing that there’s a strong argument for distributing this, not only for its value in illustrating the Markoff case, but as a rare window into the shadowy process by which Facebook deals with law enforcement.

As the comments noted and the explanation admits, the IDs and other information of the living are arguably in greater need of protection. It may have been that the police needed all the information for its case, but why release it to the public?

Obvious Closing: As we put more into the world, it will come back in ways we had not imagined. I doubt that bright-line rules will ever work in this space. But it seems to me that some sort of best practices informed by research (think Lior Strahilevitz’s A Social Networks Theory of Privacy) could allow for reasonable, useful privacy practices. The hardest part for law and society in general is that this area (information-related law) is not likely to be stable for some time. That being said, I think that the insane early domain name law (yes, someone could think that megacorpsucks.com is sponsored by megacorp) corrected itself in about ten years. Perhaps privacy and information practices will reach an equilibrium that allows the law to stabilize. Until then, practices, businesses, science, and the law will twirl around each other as society sorts out what balance makes sense (until something messes with that moment).

HT: CyberNetwork News

Actualizing Digital Citizenship With Transparent TOS Policies: Facebook Style

In “Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age,” 91 B.U. L. Rev. 1435 (2011), Helen Norton and I offered moral and policy justifications in support of intermediaries who choose to engage in voluntary efforts to combat hate speech.  As we noted, many intermediaries like Facebook already choose to address online hatred in some way.  We urged intermediaries to think and speak more carefully about the harms they hope to forestall when developing hate speech policies and offered an array of definitions of hate speech to help them do so.  We argued for the adoption of a “transparency principle,” by which we meant that intermediaries can, and should, valuably advance the fight against digital hate by clearly and specifically explaining to users the harms that their hate speech policies address as well as the consequences of policy violations.  With more transparency regarding the specific reasons for choosing to address digital hate, intermediaries can make behavioral expectations more understandable.  Without it, intermediaries will be less effective in expressing what it means to be responsible users of their services.

Our call for transparency has moved an important step forward, and last night I learned how while discussing anonymity, privacy, and hate speech with CDT’s brilliant Kevin Bankston and Hogan’s privacy luminary Chris Wolf at an event sponsored by the Anti-Defamation League. Kevin shared with us Facebook’s “Abuse Standards 6.2,” first leaked and then explicitly revised and released to the public, which makes clear what the company counts as abuse standard violations. Let me back up for a minute: Facebook’s Terms of Service (TOS) prohibits “hate speech,” an ambiguous term with broad and narrow meanings, as Helen and I explored in our article. But Facebook, like so many intermediaries, didn’t explain to users what it meant when it said it prohibited hate speech: did it cover just explicit demeaning threats to traditionally subordinated groups or demeaning speech that approximates intentional infliction of emotional distress, or, instead, did it more broadly cover slurs and epithets and/or group defamation? Facebook’s leaked “Operation Manual For Live Content Moderators” helpfully explains what it means by “hate content”:

slurs or racial comments of any kind, attacking based on protected category, hate symbols, either out of context or in the context of hate phrases or support of hate groups, showing support for organizations and people primarily known for violence, depicting symbols primarily known for hate and violence, unless comments are clearly against them, photos comparing two people (or an animal and a person that resembles that animal) side by side in a “versus photo,” photo-shopped images showing the subject in a negative light, images of drunk and unconscious people, or sleeping people with things drawn on their faces, and videos of street/bar/ school yard fights even if no valid match is found (School fight videos are only confirmed if the video has been posted to continue tormenting the person targeted in the video).

The manual goes on to note that “Hate symbols are confirmed if there’s no context OR if hate phrases are used” and “Humor overrules hate speech UNLESS slur words are present or the humor is not evident.” That seems a helpful guide for safety operators on how to navigate what seems more like humor than hate, recognizing some of the challenges that operators surely face in assessing content. And note too Facebook’s consistency on Holocaust denial: it is not prohibited in the U.S., only IP-blocked for countries that ban such speech. And Facebook employees have been transparent about why. As a wise Facebook employee explained (and I’m paraphrasing here): if people want to show their ignorance about the Holocaust, let them do so in front of their friends and colleagues (hence the significance of FB’s real-name policy). He said, let their friends counter that speech and embarrass them for being so asinine. The policy goes on to talk specifically about bullying and harassment, including barring attacks on anyone based on their status as a sexual assault or rape victim, and barring persistent contact with users without prior solicitation, or continued contact after the other party has said that they want no further contact (this sounds much like many criminal harassment laws, including Maryland’s). It also bars “credible threats,” defined as including “credible threats or incitement of physical harm against anyone, credible indications of organizing acts of present or future violence,” which seems to cover groups like “Kill a Jew Day” (removed promptly by FB). The policy also gave examples, another important step, and something we talked about last May at Stanford during a roundtable on our article with safety officers from major intermediaries (I think I can’t say who came, given the Chatham House-type rules of conversation). See the examples on sexually explicit language and sexual solicitation; they are incredibly helpful and, I think, incredibly important for tackling cyber gender harassment.

As Kevin said, and Chris and I enthusiastically agreed, this memo is significant.  Companies should follow FB’s lead.  Whether you agree or disagree with these definitions, users now know what FB means by hate speech, at least far more than it did before.  And users can debate it and tell FB that they think the policy is wanting and why.  FB can take those conversations into consideration–they certainly have in other instances when users expressed their displeasure about moves FB was making. Now, let me be a demanding user: I want to know what this all means.  Does the prohibited content get removed or moved on for further discussion?  Do users get the choice to take down violating content first?  Do they get notice?  Users need to know what happens when they violate TOS.  That too helps users understand their rights and responsibilities as digital citizens.  In any event, I’m hoping that this encourages FB to release future iterations of its policy to users voluntarily and that it encourages its fellow intermediaries to do the same.  Bravo to Facebook.

Some thoughts on Cohen’s Configuring the Networked Self: Law, Code, and the Play of Everyday Practice

Julie Cohen’s book is fantastic. Unfortunately, I am late to join the symposium, but it has been a pleasure playing catch up with the previous posts. Reading over the exchanges thus far has been a treat and a learning experience. Like Ian Kerr, I felt myself reflecting on my own commitments and scholarship. This is really one of the great virtues of the book. To prepare to write something for the blog symposium, I reread portions of the book a second time; maybe a third time, since I have read many of the law review articles upon which the book is based. And frankly, each time I read Julie’s scholarship I am forced to think deeply about my own methodology, commitments, theoretical orientation, and myopias. Julie’s critical analysis of legal and policy scholarship, debate, and rhetoric is unyielding as it cuts to the core commitments and often unstated assumptions that I (we) take for granted.

I share many of the same concerns as Julie about information law and policy (and I reach similar prescriptions too), and yet I approach them from a very different perspective, one that is heavily influenced by economics. Reading her book challenged me to confront my own perspective critically. Do I share the commitments and methodological infirmities of the neoliberal economists she lambasts? Upon reflection, I don’t think so. The reason is that not all of economics boils down to reductionist models that aim to tally up quantifiable costs and benefits. I agree wholeheartedly with Julie that economic models of copyright (or creativity, innovation, or privacy) that purport to accurately sum up relevant benefits and costs and fully capture the complexity of cultural practices are inevitably, fundamentally flawed, and that uncritical reliance on such models to formulate policy is distorting and biased toward seamless micromanagement and control. As she argues in her book, reliance on such models “focuses on what is known (or assumed) about benefits and costs, … [and] tends to crowd out the unknown and unpredictable, with the result that play remains a peripheral consideration, when it should be central.” Interestingly, I make nearly the same argument in my book, although my argument is grounded in economic theory and my focus is on user activities that generate public and social goods. I need to think more about the connections between her concept of play and the user activities I examine. But a key shared concept is that indeterminacy in the environment and the structure of rights and affordances sustains user capabilities, and this is (might be) normatively attractive whether or not users choose to exercise the capabilities. That is, there is social (option) value in sustaining flexibility and uncertainty.

Like Julie, I have been drawn to the Capabilities Approach (CA). It provides a normatively appealing framework for thinking about what matters in information policy—that is, for articulating ends. But it seems to pay insufficient attention to the means. I have done some limited work on the CA and information policy and hope to do more in the future. Julie has provided an incredible roadmap. In chapter 9, The Structural Conditions of Human Flourishing, she goes beyond the identification of capabilities to prioritize, and examines the means for enabling those capabilities. In my view, this is a major contribution. Specifically, she discusses three structural conditions for human flourishing: (1) access to knowledge, (2) operational transparency, and (3) semantic discontinuity. I don’t have much to say about the access to knowledge and operational transparency discussions, other than “yep.” The semantic discontinuity discussion left me wanting more: more explanation of the concept and more explanation of how to operationalize it. I wanted more because I think it is spot on. Paul and others have already discussed this, so I will not repeat what they’ve said. But, riffing off of Paul’s post, I wonder whether it is a mistake to conceptualize semantic discontinuity as “gaps” and ask privacy, copyright, and other laws to widen the gaps. I wonder whether the “space” of semantic discontinuities is better conceptualized as the default or background environment rather than the exceptional “gap.” Maybe this depends on the context or legal structure, but I think the relevant semantic discontinuities where play flourishes, our everyday social and cultural experiences, are and should be the norm. (Is the public domain merely a gap in copyright law? Or is copyright law a gap in the public domain?) Baselines matter. If the gap metaphor is still appealing, perhaps it would be better to describe them as gulfs.

Ubiquitous Infringement

Lifehacker‘s Adam Dachis has a great article on how users can deal with a world in which they infringe copyright constantly, both deliberately and inadvertently. (Disclaimer alert: I talked with Adam about the piece.) It’s a practical guide to a strict liability regime – no intent / knowledge requirement for direct infringement – that operates not as a coherent body of law, but as a series of reified bargains among stakeholders. And props to Adam for the Downfall reference! I couldn’t get by without the mockery of the iPhone or SOPA that it makes possible…

Cross-posted to Info/Law.

Cyberbullying and the Cheese-Eating Surrender Monkeys

(This post is based on a talk I gave at the Seton Hall Legislative Journal’s symposium on Bullying and the Social Media Generation. Many thanks to Frank Pasquale, Marisa Hourdajian, and Michelle Newton for the invitation, and to Jane Yakowitz and Will Creeley for a great discussion!)

Introduction

New Jersey enacted the Anti-Bullying Bill of Rights (ABBR) in 2011, in part as a response to the tragic suicide of Tyler Clementi at Rutgers University. It is routinely lauded as the country’s broadest, most inclusive, and strongest anti-bullying law. That is not entirely a compliment. In this post, I make two core claims. First, the Anti-Bullying Bill of Rights has several aspects that are problematic from a First Amendment perspective – in particular, the overbreadth of its definition of prohibited conduct, the enforcement discretion afforded school personnel, and the risk of impingement upon religious and political freedoms. I argue that the legislation departs from established precedent on disruptions of the educational environment by regulating horizontal relations between students rather than vertical relations between students and the school as an institution / environment. Second, I believe we should be cautious about statutory regimes that enable government actors to sanction speech based on content. I suggest that it is difficult to distinguish, on a principled basis, between bullying (which is bad) and social sanctions that enforce norms (which are good). Moreover, anti-bullying laws risk displacing effective informal measures that emerge from peer production.