posted by Woodrow Hartzog
The New Republic recently published a piece by Jeffrey Rosen titled “The Delete Squad: Google, Twitter, Facebook, and the New Global Battle Over the Future of Free Speech.” In it, Rosen provides an interesting account of how the content policies of many major websites were developed and how influential those policies are for online expression. The New York Times has a related article about the mounting pressures for Facebook to delete offensive material.
As with other boilerplate contracts, user restrictions on social media are legally consented to and rarely challenged, yet similar restrictions imposed through most other regulatory mechanisms would likely be legally suspect and vigorously opposed. For example, in Facebook’s “Statement of Rights and Responsibilities,” users are not allowed to:
- Use a pseudonym
- “[S]olicit login information or access an account belonging to someone else”
- “[B]ully, intimidate, or harass any user”
- “[P]ost content that: is hate speech, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence”
- “[F]acilitate or encourage any violations of this Statement or our policies”
- “[T]ag users or send email invitations to non-users without their consent”
- “[U]se Facebook to do anything unlawful, misleading, malicious, or discriminatory”
- “[P]rovide any false personal information on Facebook”
- “[P]ost content or take any action on Facebook that infringes or violates someone else’s rights or otherwise violates the law” (emphasis added)
Taken together, these terms would seem challenging for any Facebook user to follow in the normal course of social interaction online. Access your partner’s Facebook account while using their computer? Breach. Tag your frenemy in an unflattering picture that you know he hates? Breach. Pretend that you drank that margarita in your recently uploaded photo, when really it was your friend who slugged it down? Breach. The literal scope of terms such as “false,” “misleading,” “intimidate,” and “harass” is expansive. While this broad scope allows the terms to cover many odious practices, it could also sweep in commonplace social practices like joking, peer pressure, and exaggeration.
Facebook is not an outlier here. Social media companies like Google, Twitter, Pinterest, and Path all impose similar behavioral restrictions. One problem with these restrictions as contractual terms is the lack of guidance given to users. The terms usually lack accompanying definitions, leaving their meaning open to a diverse pool of assumptions. (Some companies, such as Twitter, do provide some guidance, however.) Social interaction is extremely messy, which makes it very difficult to legally pin down.
Of course, it’s seemingly common knowledge that these terms are sporadically, if not rarely, enforced. How many unconsented tags are added to photos the morning after a wild party and remain online indefinitely? How many students are intimidated by their classmates with no recourse? How many pseudonymous profiles are obvious (such as Santa Claus), yet never deleted? While discretion allows for scarce resources to be allocated effectively, an atmosphere where violations are routinely tolerated threatens to leave users largely guessing.
Given this information, are users to follow the broad and restrictive terms in their agreement, or should they take their guidance from Facebook’s more permissive and clearly refined policies for enforcement of those terms? In a relatively recent New York Times piece on the increasingly common practice of password sharing for video streaming services, Jenna Wortham indicated that “the companies with whom I spoke seemed to have little to no interest in curbing our sharing behavior — in part because they can’t.” If users are told that there will be no serious attempt to enforce terms restricting certain activities, should they be surprised when their accounts are suspended for violating those terms? In other words, should any weight be given to the “operational reality” of the contract, a dynamic similar to the one identified in Quon v. Arch Wireless?
Because discretion is important in all regulatory systems, it might be worth comparing a company’s thoroughness in enforcing social media user agreements with the rigor that law enforcement officials apply to jaywalking, speeding, or violent crime. I’m curious to hear your thoughts on the desirability and enforceability of these terms. Just like in high school, having a chaperone might be a good idea, but we don’t always have to like it.