Archive for the ‘Social Network Websites’ Category
posted by Frank Pasquale
The LSE has a consistently illuminating podcast series, but Nick Couldry’s recent lecture really raised the bar. He seamlessly integrates cutting edge media theory into a comprehensive critique of social media’s role in shaping events for us. I was also happy to hear him praise the work of two American scholars I particularly admire: former Co-Op guest blogger Joseph Turow (whose Daily You was described as one of the most influential books of the past decade in media studies), and Julie Cohen (whose Configuring the Networked Self was featured in a symposium here).
I plan on posting some excerpts if I can find a transcript, or a published version of the talk. In the meantime, some more brilliant thoughts on social media, this time from Ian Bogost:
For those of us lucky enough to be employed, we’re really hyperemployed—committed to our usual jobs and many other jobs as well. . . . Hyperemployment offers a subtly different way to characterize all the tiny effort we contribute to Facebook and Instagram and the like. It’s not just that we’ve been duped into contributing free value to technology companies (although that’s also true), but that we’ve tacitly agreed to work unpaid jobs for all these companies. . . . We do tiny bits of work for Google, for Tumblr, for Twitter, all day and every day.
Today, everyone’s a hustler. But now we’re not even just hustling for ourselves or our bosses, but for so many other, unseen bosses. For accounts payable and for marketing; for the Girl Scouts and the Youth Choir; for Facebook and for Google; for our friends via their Kickstarters and their Etsy shops; for Twitter, which just converted years of tiny, aggregated work acts into $78 of fungible value per user.
posted by Daniel Solove
In 2012, the media erupted with news about employers demanding employees provide them with their social media passwords so the employers could access their accounts. This news took many people by surprise, and it set off a firestorm of public outrage. It even sparked a significant legislative response in the states.
I thought that the practice of demanding passwords was so outrageous that it couldn’t be very common. What kind of company or organization would actually do this? I thought it was a fringe practice done by a few small companies without much awareness of privacy law.
But Bradley Shear, an attorney who has focused extensively on the issue, opened my eyes to the fact that the practice is much more prevalent than I had imagined, and it is an issue that has very important implications as we move more of our personal data to the Cloud.
The Widespread Hunger for Access
Employers are not the only ones demanding social media passwords – schools are doing so too, especially athletic departments in higher education, many of which engage in extensive monitoring of the online activities of student athletes. Some require students to turn over passwords, install special software and apps, or friend coaches on Facebook and other sites. According to an article in USA Today: “As a condition of participating in sports, the schools require athletes to agree to monitoring software being placed on their social media accounts. This software emails alerts to coaches whenever athletes use a word that could embarrass the student, the university or tarnish their images on services such as Twitter, Facebook, YouTube and MySpace.”
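The monitoring software described in the USA Today account amounts to keyword flagging over a student's posts. A minimal, entirely hypothetical sketch of that mechanism (the watch list and posts are invented for illustration; no actual vendor's product is depicted):

```python
# Toy illustration of keyword-alert monitoring: scan athletes' posts
# for terms on a watch list and collect alerts for a coach.
import re

FLAGGED_TERMS = {"party", "hungover", "suspended"}  # invented watch list

def scan_posts(posts):
    """Return (author, flagged_term, post_text) for each match found."""
    alerts = []
    for author, text in posts:
        words = set(re.findall(r"[a-z']+", text.lower()))
        for term in sorted(words & FLAGGED_TERMS):
            alerts.append((author, term, text))
    return alerts

posts = [
    ("athlete1", "Great practice today, coach!"),
    ("athlete2", "Huge party at the dorms tonight"),
]
for author, term, text in scan_posts(posts):
    print(f"ALERT: {author} used '{term}': {text}")
```

Crude as it is, the sketch shows why such systems generate the false positives critics worry about: the word "party," not its context, triggers the alert.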
Not only are colleges and universities engaging in the practice, but K-12 schools are doing so as well. An MSNBC article discusses the case of a parent's outrage over school officials demanding access to a 13-year-old girl's Facebook account. According to the mother, "The whole family is exposed in this. . . . Some families communicate through Facebook. What if her aunt was going through a divorce or had an illness? And now there's these anonymous people reading through this information."
In addition to private sector employers and schools, public sector employers such as state government agencies are demanding access to online accounts. According to another MSNBC article: “In Maryland, job seekers applying to the state’s Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through posts, friends, photos and anything else that might be found behind the privacy wall.”
June 3, 2013 at 10:51 am Posted in: Constitutional Law, Cyberlaw, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Social Network Websites
posted by Frank Pasquale
First Monday recently published an issue on social media monopolies. These lines from the introduction by Korinna Patelis and Pavlos Hatzopolous are particularly provocative:
A large part of existing critical thinking on social media has been obsessed with the concept of privacy. . . . Reading through a number of volumes and texts dedicated to the problematic of privacy in social networking one gets the feeling that if the so called “privacy issues” were resolved social media would be radically democratized. Instead of adopting a static view of the concept . . . of “privacy”, critical thinking needs to investigate how the private/public dichotomy is potentially reconfigured in social media networking, and [the] new forms of collectivity that can emerge . . . .
I can even see a way in which privacy rights do not merely displace, but actively work against, egalitarian objectives. Stipulate a population with Group A, which is relatively prosperous and has the time and money to hire agents to use notice-and-consent privacy provisions to its advantage (i.e., figuring out exactly how to disclose information to put its members in the best light possible). Meanwhile, most of Group B is too busy working several jobs to use contracts, law, or agents to its advantage in that way. We should not be surprised if Group A leverages its mastery of privacy law to enhance its position relative to Group B.
Better regulation would restrict use of data, rather than “empower” users (with vastly different levels of power) to restrict collection of data. As data scientist Cathy O’Neil observes:
Read the rest of this post »
posted by Deven Desai
Gamification? Is that a word? Why yes it is, and Kevin Werbach and Dan Hunter want to tell us what it means. Better yet, they want to tell us how it works in their new book For the Win: How Game Thinking Can Revolutionize Your Business (Wharton Press). The authors get into many issues, starting with a refreshing admission that the term is clunky but nonetheless captures a simple, powerful idea: one can use game concepts in non-game contexts and achieve results that might otherwise be missed. As they are careful to point out, this is not game theory. This is using insights from games, yes, video games and the like, to structure how we interact with a problem or goal. I have questions about how well the approach will work and about potential downsides (I am, after all, a law professor). Yet the authors explore cases where the idea has worked, and they address concerns about where the approach can fail. I must admit I have read only an excerpt so far. But it sets out the project quite well while acknowledging the possible objections that popped to mind. In short, I want to read the rest. Luckily, both the Wharton edition linked above and the Amazon Kindle version are quite reasonably priced (Amazon's is less expensive).
If you wonder about games, play games, and maybe have wondered what is with all this badging, point-accumulation, leaderboard stuff at work (which I did while I was at Google), this book looks to be a must-read. And if you have not encountered these changes, I think you will. So reading the book may put you ahead of the group in understanding what management or companies are doing to you. The book also sets out cases and how the process works, so it may give you ideas about how to use games to help your endeavor and impress your manager. For the law folks out there, I think this area raises questions about behavioral economics and organizations that lie ahead. In short, the authors have written a tight, clear book that captures the essence of a movement. That alone merits a hearty well done.
posted by Peter Swire
The Maryland General Assembly has just become the first state legislature to vote to ban employers from requiring employees to reveal their Facebook or other social network passwords. Other states are considering similar bills, and Senators Schumer and Blumenthal are pushing the idea in Congress.
As often happens in privacy debates, there are concerns from industry that well-intentioned laws will have dire consequences — Really Dangerous People might get into positions of trust, so we need to permit employers to force their employees to open up their Facebook accounts to their bosses.
Also, as often happens in privacy debates, people breathlessly debate the issue as though it is completely new and unprecedented.
We do have a precedent, however. In 1988, Congress enacted the Employee Polygraph Protection Act (EPPA). The EPPA says that employers don’t get to know everything an employee is thinking. Polygraphs are flat-out banned in almost all employment settings. The law was signed by President Reagan, after Secretary of State George Shultz threatened to resign rather than take one.
The ideas behind the EPPA and the new Maryland bill are similar — employees have a private realm where they can think and be a person, outside the surveillance of the employer. Imagine taking a polygraph while your boss asks what you really think about him or her. Imagine your social networking activities if your boss got to read your private messages and impromptu thoughts.
For private sector employers, the EPPA has quite narrow exceptions, such as for counter-intelligence, armored car personnel, and employees who are suspected of causing economic loss. That list of exceptions can be a useful baseline to consider for social network passwords.
In summary — there is longstanding and bipartisan support for blocking this sort of intrusion into employees' private lives. The social networks themselves support banning employers from requiring the passwords. I think we should, too.
April 11, 2012 at 1:14 pm Tags: Facebook, Maryland, passwords, polygraph Posted in: Administrative Law, Cyber Civil Rights, Cyberlaw, Privacy, Privacy (Consumer Privacy), Social Network Websites
posted by Derek Bambauer
Pakistan, which has long censored the Internet, has decided to upgrade its cybersieves. And, like all good bureaucracies, the government has put the initiative out for bid. According to the New York Times, Pakistan wants to spend $10 million on a system that can block up to 50 million URLs concurrently, with minimal effect on network speed. (That’s a lot of Web pages.) Internet censorship is on the march worldwide (and the U.S. is no exception). There are at least three interesting things about Pakistan’s move:
First, the country’s openness about its censorial goals is admirable. Pakistan is informing its citizens, along with the rest of us, that it wants to bowdlerize the Net. And, it is attempting to do so in a way that is more uniform than under its current system, where filtering varies by ISP. I don’t necessarily agree with Pakistan’s choice, but I do like that the country is straightforward with its citizens, who have begun to respond.
Second, the California-based filtering company Websense announced that it will not bid on the contract. That’s fascinating – a tech firm has decided that the public relations damage from helping Pakistan censor the Net is greater than the $10M in revenue it could gain. (Websense argues, of course, that its decision is a principled one. If you believe that, you are probably a member of the Ryan Braun Clean Competition fan club.)
Finally, the state is somewhat vague about what it will censor: it points to pornography, blasphemy, and material that affects national security. The last part is particularly worrisome: the national security trump card is a potent force after 9/11 and its concomitant fallout in Pakistan’s neighborhood, and censorship based on it tends to be secret. There is also real risk that national security interests = interests of the current government. America has an unpleasant history of censoring political dissent based on security worries, and Pakistan is no different.
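One aside on the tender's technical claim: blocking 50 million URLs "with minimal effect on network speed" is less far-fetched than it sounds, because membership checks against even an enormous blocklist take constant time with a hash-based structure. A toy sketch of the lookup (the URLs are invented; this says nothing about whatever system Pakistan actually procures, and real censorship appliances use more compact structures such as Bloom filters):

```python
# Membership testing against a hash set is O(1), so a blocklist's size
# barely affects per-request latency. A toy blocklist of 100,000 URLs:
blocklist = {f"http://example-{i}.test/page" for i in range(100_000)}

def is_blocked(url: str) -> bool:
    """Constant-time check, regardless of how large the blocklist grows."""
    return url in blocklist

print(is_blocked("http://example-42.test/page"))  # True  (listed)
print(is_blocked("http://unlisted.test/home"))    # False (not listed)
```

The engineering, in other words, is the easy part; the $10 million buys scale and integration, not any conceptual breakthrough.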
I’ll be fascinated to see which companies take up Pakistan’s offer to propose…
Cross-posted at Info/Law.
March 8, 2012 at 3:03 pm Posted in: Architecture, Current Events, Cyber Civil Rights, Cyberlaw, Google and Search Engines, Intellectual Property, Politics, Privacy (National Security), Social Network Websites, Technology, Web 2.0
posted by Derek Bambauer
Lifehacker‘s Adam Dachis has a great article on how users can deal with a world in which they infringe copyright constantly, both deliberately and inadvertently. (Disclaimer alert: I talked with Adam about the piece.) It’s a practical guide to a strict liability regime – no intent / knowledge requirement for direct infringement – that operates not as a coherent body of law, but as a series of reified bargains among stakeholders. And props to Adam for the Downfall reference! I couldn’t get by without the mockery of the iPhone or SOPA that it makes possible…
Cross-posted to Info/Law.
February 27, 2012 at 2:14 pm Posted in: Anonymity, Architecture, Culture, Current Events, Cyberlaw, DRM, Education, Google and Search Engines, Innovation, Intellectual Property, Interviews, Media Law, Movies & Television, Politics, Social Network Websites, Technology, Web 2.0
posted by Derek Bambauer
(This post is based on a talk I gave at the Seton Hall Legislative Journal’s symposium on Bullying and the Social Media Generation. Many thanks to Frank Pasquale, Marisa Hourdajian, and Michelle Newton for the invitation, and to Jane Yakowitz and Will Creeley for a great discussion!)
New Jersey enacted the Anti-Bullying Bill of Rights (ABBR) in 2011, in part as a response to the tragic suicide of Tyler Clementi at Rutgers University. It is routinely lauded as the country’s broadest, most inclusive, and strongest anti-bullying law. That is not entirely a compliment. In this post, I make two core claims. First, the Anti-Bullying Bill of Rights has several aspects that are problematic from a First Amendment perspective – in particular, the overbreadth of its definition of prohibited conduct, the enforcement discretion afforded school personnel, and the risk of impingement upon religious and political freedoms. I argue that the legislation departs from established precedent on disruptions of the educational environment by regulating horizontal relations between students rather than vertical relations between students and the school as an institution / environment. Second, I believe we should be cautious about statutory regimes that enable government actors to sanction speech based on content. I suggest that it is difficult to distinguish, on a principled basis, between bullying (which is bad) and social sanctions that enforce norms (which are good). Moreover, anti-bullying laws risk displacing effective informal measures that emerge from peer production. Read the rest of this post »
February 21, 2012 at 10:20 pm Posted in: Anonymity, Blogging, Bright Ideas, Civil Rights, Conferences, Constitutional Law, Culture, Current Events, Cyber Civil Rights, Cyberlaw, Education, First Amendment, Media Law, Politics, Privacy (Gossip & Shaming), Psychology and Behavior, Race, Religion, Social Network Websites, Technology, Web 2.0
posted by Stanford Law Review
Our 2012 Symposium Issue, The Privacy Paradox: Privacy and Its Conflicting Values, is now available online:
- A Reasonableness Approach to Searches After the Jones GPS Tracking Case by Peter Swire (64 Stan. L. Rev. Online 57);
- Privacy in the Age of Big Data by Omer Tene & Jules Polonetsky (64 Stan. L. Rev. Online 63);
- Yes We Can (Profile You): A Brief Primer on Campaigns and Political Data by Daniel Kreiss (64 Stan. L. Rev. Online 70);
- Paving the Regulatory Road to the “Learning Health Care System” by Deven McGraw (64 Stan. L. Rev. Online 75);
- Famous for Fifteen People: Celebrity, Newsworthiness, and Fraley v. Facebook by Simon J. Frankel, Laura Brookover & Stephen Satterfield (64 Stan. L. Rev. Online 82); and
- The Right to Be Forgotten by Jeffrey Rosen (64 Stan. L. Rev. Online 88).
The text of Chief Judge Alex Kozinski’s keynote is forthcoming.
February 13, 2012 at 1:04 pm Posted in: Law Rev (Stanford), Law Rev Contents, Law School, Law School (Scholarship), Media Law, Military Law, Politics, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security), Social Network Websites, Supreme Court, Technology, Tort Law
posted by Derek Bambauer
In the spirit of the excellent colloquy here about Marvin’s thinking on First Amendment architectures, I bring up this news item: Arizona State University blocked both Web access to, and e-mail from, the change.org Web site. ASU students had begun a petition demanding that the university reduce tuition. The university essentially made three claims as to why it did so (below, in order of increasing stupidity):
- It was a technical mistake;
- Change.org was spamming ASU; and
- ASU needs to “protect the use of our limited and valuable network resources for legitimate academic, research and administrative uses.”
#1 and #2 run together. If spam is the problem, you don’t need to block access to the Web site. However, if you are concerned that students are going to read the petition, and sign it, you do need to block access to the Web site.
For #2, sorry, ASU, this isn't spam. Spam is unsolicited bulk commercial e-mail. Change.org is, allegedly, sending unsolicited political e-mail. And that's protected by the First Amendment – see, for example, the Virginia Supreme Court's analysis of that state's anti-spam law as it covered political messages. Potential political spammers have a sharp disincentive to fill recipients' inboxes – it's a sure-fire way to annoy them into opposing your position.
For #3, ASU doesn’t get to determine what academic and research uses are “legitimate.” If they throttle P2P apps, that’s fine. If they limit file sizes for attachments, no problem. But deciding that the message from Change.org is not “legitimate” is classic, and unconstitutional, viewpoint discrimination.
This looks like censorship. I think it’s more likely to be stupidity: someone in ASU’s IT department decided to block these messages as spam, and to filter outbound Web requests to the site contained within those messages. But: with great power over the network comes great responsibility. Well-intentioned constitutional violations are still unlawful. It would also help if ASU’s spokesperson simply admitted the mistake rather than engaging in idiotic justification.
As I mention in Orwell’s Armchair, public actors are increasingly important sources of Internet access. But when ASU and other public universities take on the role of ISP, they need to remember that they are not AOL: their technical decisions are constrained not merely by tech resources, but by our commitment to free speech. Let’s hope the Sun Devils cool off on the filtering…
Cross-posted at Info/Law.
February 10, 2012 at 5:10 pm Posted in: Architecture, Civil Rights, Constitutional Law, Current Events, Cyber Civil Rights, Cyberlaw, First Amendment, Politics, Social Network Websites, Technology, Web 2.0
posted by Frank Pasquale
On February 14-16, we will host an online symposium on A Legal Theory for Autonomous Artificial Agents, by Samir Chopra and Laurence White. Given the great discussions at our previous symposiums for Tim Wu’s Master Switch and Jonathan Zittrain’s Future of the Internet, I’m sure this one will be a treat. Participants will include Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian and Harry Surden. Chopra will be reading their posts and responding here, too. I discussed the book with Chopra and Grimmelmann in Brooklyn a few months ago, and I believe the audience found fascinating the many present and future scenarios raised in it. (If you’re interested in Google’s autonomous cars, drones, robots, or even the annoying little Microsoft paperclip guy, you’ll find something intriguing in the book.)
There is an introduction to the book below the fold. (Chapter 2 of the book was published in the Illinois Journal of Law, Technology and Policy, and can be found online at SSRN). We look forward to hosting the discussion!
February 8, 2012 at 10:43 am Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Contract Law & Beyond, Criminal Law, Current Events, Cyberlaw, Social Network Websites, Symposium (Autonomous Artificial Agents), Technology, Tort Law
posted by Derek Bambauer
The European Commission released a draft of its revised Data Protection Directive this morning, and Jane Yakowitz has a trenchant critique up at Forbes.com. In addition to the sharp legal analysis, her article has both a Star Wars and Robot Chicken reference, which makes it basically the perfect information law piece…
January 25, 2012 at 4:32 pm Posted in: Advertising, Architecture, Civil Rights, Consumer Protection Law, Current Events, Cyber Civil Rights, Cyberlaw, Google and Search Engines, Innovation, Politics, Privacy, Privacy (Consumer Privacy), Social Network Websites, Technology, Web 2.0
posted by Danielle Citron
As my co-blogger Gerard notes, today is SOPA protest day. Sites like Google and WordPress have censored their logos or offered a way to contact your congressperson, though they remain live. Other sites like Wikipedia, Reddit, and Craigslist have shut down, and more are set to shut down at some point today. There's lots of terrific commentary on SOPA, which is designed to tackle the problem of foreign-based websites that sell pirated movies, music, and other products – but with a heavy hand that threatens free expression and due process. The Wall Street Journal's Amy Schatz has this story and Politico has another helpful piece; The Hill's Brendan Sasso's Twitter feed has lots of terrific updates. Mark Lemley, David Levine, and David Post carefully explain why we ought to reject SOPA and the PROTECT IP Act in "Don't Break the Internet," published by Stanford Law Review Online. In the face of the protest, House Judiciary Committee Chairman Lamar Smith (R-TX) vowed to bring SOPA to a vote in his committee next month. "I am committed to continuing to work with my colleagues in the House and Senate to send a bipartisan bill to the White House that saves American jobs and protects intellectual property," he said. So, too, Senator Patrick Leahy (D-VT) pushed back against websites planning to shut down today in protest of his bill. "Much of what has been claimed about the Senate's PROTECT IP Act is flatly wrong and seems intended more to stoke fear and concern than to shed light or foster workable solutions. The PROTECT IP Act will not affect Wikipedia, will not affect reddit, and will not affect any website that has any legitimate use," Chairman Leahy said. Everyone's abuzz on the issue, and rightly so. I spoke at a panel on intermediary liability at the Congressional Internet Caucus' State of the Net conference, and everyone wanted to talk about SOPA.
I’m hoping that the black out and other shows of disapproval will convince our representatives in the House and Senate to back off the most troubling parts of the bill. As fabulous guest blogger Derek Bambauer argues, we need to bring greater care and thought to the issue of Internet censorship. Cybersecurity is at issue too, and we need to pay attention. Derek may be right that both bills may go nowhere, especially given Silicon Valley’s concerted lobbying efforts against the bills. But we will have to watch to see if Representative Smith lives up to his promise to bring SOPA back to committee and if Senator Leahy remains as committed to PROTECT IP Act in a few weeks as he is today.
January 18, 2012 at 10:11 am Posted in: Architecture, Civil Rights, Current Events, Cyber Civil Rights, Cyberlaw, First Amendment, Law Talk, Media Law, Social Network Websites, Technology, Web 2.0
posted by Danielle Citron
Bloomberg Businessweek reports on retailers' use of camera surveillance to glean intelligence from shoppers' behavior. A company called RetailNext, for instance, runs its software through a store's security camera video feed to analyze customer behavior. It describes itself as the "leader in real-time in-store monitoring, enabling retailers and manufacturers to collect, analyze and visualize in-store data." According to the company, it "uses best-in-class video analytics, on-shelf sensors, along with data from point-of-sale and other business systems, to automatically inform retailers about how people engage in their stores." RetailNext's software can integrate data from hardware such as RFID chips and motion sensors to track customers' movements. The company explains that it "tracks more than 20 million shoppers per month by collecting data from more than 15,000 sensors in retail stores." Its service apparently helps stores figure out where to place certain merchandise to boost sales. T-Mobile uses similar technology from another firm, 3VR, whose software tracks how people move around their stores, how long they stand in front of displays, and which phones they pick up and for how long. 3VR is testing facial-recognition software that can identify shoppers' gender and approximate age. Businessweek explains that the "software would give retailers a better handle on customer demographics and help them tailor promotions." What we are seeing is, according to 3VR's CEO, just "scratching the surface," as someday "you'll have the ability to measure every metric imaginable."
Indeed. Little imagination is needed to predict the future in light of our present. As Joseph Turow's important new book The Daily You: How the New Advertising Industry Is Defining Your Identity and Worth (Yale University Press) explores, the data collection and analysis of individuals is breathtaking. In the name of better, more relevant advertising and marketing efforts, companies like Acxiom have databases teeming with our demographic data (age, gender, race, ethnicity, address, income, marital status), interests, online and offline spending habits, and health status based on our purchases and online comments (diabetic, allergy sufferer, and the like). Consumers are sorted into categories such as "Corporate Clout," "Soccer and SUV," "Mortgage Woes," and "On the Edge." eXelate gathers online data of over 200 million unique individuals per month through deals with hundreds of sites: their demographics, social activities, and social networks. Advertisers can add even more data to eXelate's cookies – data from Nielsen, which includes Census Bureau data, as well as data brokers' digital dossiers. Data firms like Lotame track the comments that people leave on sites and categorize them. Now, let's consider weaving in facial recognition software and retailer cameras of companies like 3VR and RetailNext. And to really top things off, let's think about linking all of this data to cellphone location information. The surveillance of networked spaces would be totalizing.
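Mechanically, the sorting Turow describes can be as simple as rule-based bucketing over a merged dossier. A hypothetical sketch (the segment names come from the post; the rules, thresholds, and record fields are invented for illustration and do not reflect any broker's actual model):

```python
# Toy rule-based consumer segmentation over an aggregated dossier.
def segment(record):
    """Assign a consumer record to a marketing segment (invented rules)."""
    if record["income"] > 100_000 and record["title"] == "executive":
        return "Corporate Clout"
    if record["children"] > 0 and record["car"] == "SUV":
        return "Soccer and SUV"
    if record["missed_payments"] >= 3:
        return "On the Edge"
    if record["mortgage_delinquent"]:
        return "Mortgage Woes"
    return "Unclassified"

alice = {"income": 85_000, "children": 2, "car": "SUV",
         "title": "manager", "missed_payments": 0,
         "mortgage_delinquent": False}
print(segment(alice))  # Soccer and SUV
```

The point of the toy is how little it takes: a handful of merged attributes and a few if-statements determine whether a person is courted with discounts or written off.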
Turow's book exposes important costs of these developments. This post will discuss a few – hopefully, I can have Professor Turow on for a Bright Ideas feature. This sort of targeting and hyper-surveillance leaves many with far narrower options and exposes them to social discrimination. Marketers use these databases to determine if Americans are worthy "targets" or not-worth-bothering-with "waste." For the "Soccer and SUV" moms between 35 and 45 who live on the West Coast and want to buy a small car, car companies may offer serious discounts via online advertisements and e-mail. But their "On the Edge" counterparts get left in the cold with higher prices – why bother trying to attract people who don't pay their debts? All of this sorting encourages media to offer soft stories designed to meet people's interests, as secretly determined by those gathering and analyzing our networked lives. This discussion brings to mind another important read: Julie Cohen's Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press). As Professor Cohen thoughtfully explores, this sort of surveillance has a profound impact on the creative play of our everyday lives. It creates hierarchies among those watched and systematizes difference. I'll have lots more to say about Cohen's take on our networked society more generally, soon. In March, we will be hosting an online symposium on her book – much to look forward to in the new year.
posted by Frank Pasquale
Social sorting is big business. Bosses and bankers crave "predictive analytics": ways of deciding who will be the best worker, borrower, or customer. Our economy is less likely to reward someone who "builds a better mousetrap" than it is to fund a startup which will identify those most likely to buy a mousetrap. The critical resource here is data, the fossil fuel of the digital economy. Privacy advocates are digital environmentalists, worried that rapid exploitation of data either violates moral principles or sets in motion destructive processes we only vaguely understand now.*
Start-up fever fuels these concerns as new services debut and others grow in importance. For example, a leader at Lenddo, "the first credit scoring service that uses your online social network to assess credit," has called for "thousands of engineers [to work] to assess creditworthiness." We all know how well the "quants" have run Wall Street – but maybe this time will be different. His company aims to mine data derived from digital monitoring of relationships. ITWorld headlined the development "How Facebook Can Hurt Your Credit Rating" – "It's time to ditch those deadbeat friends." It also brought up the disturbing prospect of redlined portions of the "social graph."
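To make the "deadbeat friends" worry concrete, here is an entirely hypothetical sketch of the kind of model such a service might use: an individual's score is pulled toward the mean score of their social connections. The weighting, numbers, and formula are invented; Lenddo's actual methodology is not public.

```python
# Toy social-graph credit adjustment: blend a borrower's own score
# with the average score of their connections. Purely illustrative.
def social_adjusted_score(own_score, friend_scores, weight=0.2):
    """Blend own_score with friends' mean score; weight is the pull."""
    if not friend_scores:
        return own_score
    friends_mean = sum(friend_scores) / len(friend_scores)
    return (1 - weight) * own_score + weight * friends_mean

# A 700-score borrower whose friends average 500 gets dragged down
# to roughly 660:
print(round(social_adjusted_score(700, [480, 500, 520]), 1))
```

Even this toy shows the redlining mechanism: your score moves not because of anything you did, but because of whom you know.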
There’s a lot of value in such “news you can use” reporting. However, I think it misses some problematic aspects of a pervasively evaluated and scored digital world. Big data’s fans will always counter that, for every person hurt by surveillance, there’s someone else who is helped by it. Let’s leave aside, for the moment, whether the game of reputation-building is truly zero-sum, and the far more important question of whether these judgments are fair. The data-meisters’ analytics deserve scrutiny on other grounds.
Read the rest of this post »
posted by Stanford Law Review
The Stanford Law Review Online has just published a piece by Mark Lemley, David S. Levine, and David G. Post on the PROTECT IP Act and the Stop Online Piracy Act. In Don’t Break the Internet, they argue that the two bills — intended to counter online copyright and trademark infringement — “share an underlying approach and an enforcement philosophy that pose grave constitutional problems and that could have potentially disastrous consequences for the stability and security of the Internet’s addressing system, for the principle of interconnectivity that has helped drive the Internet’s extraordinary growth, and for free expression.”
These bills, and the enforcement philosophy that underlies them, represent a dramatic retreat from this country’s tradition of leadership in supporting the free exchange of information and ideas on the Internet. At a time when many foreign governments have dramatically stepped up their efforts to censor Internet communications, these bills would incorporate into U.S. law a principle more closely associated with those repressive regimes: a right to insist on the removal of content from the global Internet, regardless of where it may have originated or be located, in service of the exigencies of domestic law.
Note: Corrected typo in first paragraph.
December 19, 2011
posted by Daniel Solove
Increasingly, states and school districts are struggling over how to deal with teachers who communicate with students online via social network websites. One foolish way to address the issue is via strict bans, such as a law passed in Missouri earlier this year that attempted to bar teachers from friending students on social network websites. Such laws likely violate the First Amendment rights to freedom of speech and association, and I blogged at the Huffington Post that the Missouri law was unconstitutional. Soon thereafter, a court struck it down.
The NY Times now has an article out about the challenges in crafting social media policies for teacher-student interaction, noting that “stricter guidelines are meeting resistance from some teachers because of the increasing importance of technology as a teaching tool and of using social media to engage with students.”
There are a number of considerations that schools should think about when crafting a social media policy:
1. The policy should account for the fact that there are legitimate reasons for students and teachers to communicate online. A teacher might be related to a student (certainly a law or policy shouldn’t ban parents from friending their own children), or might be a student’s godparent, a close family friend, or otherwise connected to the family.
2. One middle-ground approach is to require parental consent whenever a teacher wants to friend a minor student online. This greater transparency will help deter cases where teachers might have inappropriate communication with minors.
3. Clear guidelines about appropriate teacher expression should be set forth, so teachers know what things will be inappropriate to say. Teachers need to learn about their legal obligations of confidentiality, as well as avoiding invasions of privacy, defamation, harassment, threats, and other problematic forms of speech.
4. Education is key. I’ve read about many cases involving improper social media use by educators, and they often stem from a lack of awareness. Teachers think they can say nearly anything and it will be protected by the First Amendment. In fact, First Amendment law gives schools considerable leeway to discipline educators for what they say, and educators can also be sued by those they write about. Educators often assume that posting something anonymously makes it okay, or that they can get away with it; but anonymity online is often a mirage, and comments can readily be traced back to the speaker. And educators often set the privacy settings on social media sites incorrectly because they don’t spend enough time learning their ins and outs. These settings are genuinely tricky; even rocket scientists have trouble figuring them out.
posted by Daniel Solove
Facebook has settled with the FTC over its change to its privacy policies back in 2009. According to the FTC complaint, as summed up by the FTC press release, Facebook engaged in a number of unfair and deceptive trade practices:
- In December 2009, Facebook changed its website so certain information that users may have designated as private – such as their Friends List – was made public. They didn’t warn users that this change was coming, or get their approval in advance.
- Facebook represented that third-party apps users installed would have access only to the user information they needed to operate. In fact, the apps could access nearly all of users’ personal data – data the apps didn’t need.
- Facebook told users they could restrict sharing of data to limited audiences – for example with “Friends Only.” In fact, selecting “Friends Only” did not prevent their information from being shared with third-party applications their friends used.
- Facebook had a “Verified Apps” program and claimed it certified the security of participating apps. It didn’t.
- Facebook promised users that it would not share their personal information with advertisers. It did.
- Facebook claimed that when users deactivated or deleted their accounts, their photos and videos would be inaccessible. But Facebook allowed access to the content, even after users had deactivated or deleted their accounts.
- Facebook claimed that it complied with the U.S.- EU Safe Harbor Framework that governs data transfer between the U.S. and the European Union. It didn’t.
The settlement, which requires auditing of Facebook for 20 years, imposes a number of requirements on the company.
posted by Danielle Citron
In June, I blogged about the dreaded question (for parents of teenagers): “Mom, can I have a Facebook profile?” At the time, we talked about its benefits and drawbacks. On the one hand, it’s a gateway to socializing that she had been missing given her late birthday: different sports leagues had Facebook groups she might need to join, and other activities would as well. On the other hand, her privacy and reputation could be jeopardized, by her own hand or by her “friends.” Facebook’s privacy settings are notoriously whimsical and, more importantly, as Steve Bellovin’s work shows, notoriously misunderstood. Setting up an account was indeed a game of chance, or, as Bob Keller notes, like giving your kid a pipe of crystal meth. We gave our thirteen-year-old the choice and told her to talk to us when she was ready to get started. The summer came and went and all was quiet.

So now, a good five months later and a good five months wiser, my kid has decided that she wants to think about getting a Facebook page again. The conversation went something like this (she did all of the talking): So I’m feeling excited about this. Facebook would let me stay in touch with my sleep-away camp friends who live all over the place, and I could friend kids that I meet from other schools in the area, at games, mixers, etc. And I am jazzed about this new close friends feature that everyone’s been talking about. This way I can share photographs only with my five best pals and I don’t have to worry. (Pause). But I really want to friend the kids from camp and want them to see what I am up to, so this close friends feature may not work. And what if those camp friends have weird friends or end up being strange themselves? I can’t de-friend them, can I, and still pal around at camp? And I don’t want other people making judgments about me based on what those not-so-close friends are up to. Will colleges see what I am doing, when it comes time?
And what if someone goes on my close friend’s computer and copies and pastes my silly remarks and they go viral, like the “Friday” girl who ended up getting death threats and being harassed? Can I put up my favorite artists? I definitely can say I like the Beatles and Elton John, but can I say Kesha? Will people think I am appropriate if I put down Kesha or Katy Perry? Some of their songs are, err, a little inappropriate.
After all of that, my kid said she needed to think about it; it all seemed so, well, complicated. That seemed just the right word: complicated. But the question seems even more tricky now than it did in June. Who is she doing this for?

Taking cues from Erving Goffman, life is a performance. Some of it is just for you: a way to develop oneself, experiment, play, and figure out who you are as much as who you are not. Much of it is for others. We perform different roles for the people in our lives: friends, parents, co-workers, coach, priest/imam/rabbi, acquaintances, and strangers. Some performances are oppressive: we cover or pass as best we can in the face of stigma and prejudice.

And we perform at a time of extensive social and political surveillance. We feel watched, and for good reason. Companies give us social influence scores. Employers, marketers, and businesses use those scores to benefit some, leaving others less favored and less fortunate. Maybe we perform online for them? Colleges look at social media profiles. (danah boyd has a great piece about a question a college asked her about a student’s MySpace page, which seemingly contradicted his college essay.) Do young people perform for them? At the same time, government monitors our online presence, searching for threats to critical infrastructure and the like. Government 2.0 social media sites may be keeping track of the stories we like, the friends we make, and the pictures we post. Who knows? Agencies aren’t promising not to watch us, so maybe being careful is smart. Are we performing for fusion centers and our government social media friends?

All of this watching brings to mind Julie Cohen’s book Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press, forthcoming 2011; see her talk here). More on that in early 2012 in our online symposium on the book.
Navigating those questions every time one posts on Facebook is bewildering, especially because we can’t really control what happens to the information posted there. A commentator on my previous post basically said that I had better get a grip on reality, that nothing I did or said could influence what she did and she would hate me anyway. I guess we just fundamentally disagree. Parenting is a huge responsibility, and lots of what my kid is mulling comes from long, long conversations we have had about being a responsible and smart digital citizen. I am looking forward to talking it through again, once she has a better idea of what she wants to do.
P.S. Sorry about the light blogging, working on my first book on cyber mobs and hate (forthcoming Harvard University Press).
H/T Susan McCarty (who helped me find the db piece), JJC
posted by Olivier Sylvain
Like Professor Zick, I am grateful for the invitation to share my view of the world with Concurring Opinions. I’d like to pick up where his post on strange expressive acts left off and, along the way, perhaps answer his question.
Flash mobs have been eliciting wide-eyed excitement for the better part of the past decade now. They were playful and glaringly pointless in their earliest manifestations; mobbers back then were content with the sheer performance art of the thing. Early proponents, meanwhile, breathlessly lauded the flash mob “movement.”
Today, the flash mob has matured into something much more complex than those early proponents prophesied. For one, flash mobs now bring together unsupported and disaffected young people of color in cities, on the one hand, and anxious and unprepared law enforcement officials on the other. A fateful mix.
In North London in early August, mobile online social networking and messaging probably helped outrage over the police shooting of a young black man morph into misanthropic madness. Race-inflected flash mob mischief hit the U.S. this summer, too. Most major metropolitan newspapers and cable news channels have run stories about young black people across the country using their idle time and fleet thumbs to organize shoplifting, beatings, and general indiscipline. This is not the first time the U.S. has seen the flash mob or something like it. (Remember the 2000 recount in Florida?) But the demographic and commercial politics of these events in particular ought to raise eyebrows.
September 5, 2011