


Stanford Law Review Online: The Privacy Paradox 2012 Symposium Issue

Stanford Law Review

Our 2012 Symposium Issue, The Privacy Paradox: Privacy and Its Conflicting Values, is now available online:

Essays

The text of Chief Judge Alex Kozinski’s keynote is forthcoming.


Tempest in Tempe: First Amendment in the Desert

In the spirit of the excellent colloquy here about Marvin’s thinking on First Amendment architectures, I bring up this news item: Arizona State University blocked both Web access to, and e-mail from, the change.org Web site. ASU students had begun a petition demanding that the university reduce tuition. The university essentially made three claims as to why it did so (below, in order of increasing stupidity):

  1. It was a technical mistake;
  2. Change.org was spamming ASU; and
  3. ASU needs to “protect the use of our limited and valuable network resources for legitimate academic, research and administrative uses.”

#1 and #2 run together. If spam is the problem, you don’t need to block access to the Web site. However, if you are concerned that students are going to read the petition, and sign it, you do need to block access to the Web site.
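To make the distinction concrete: on a typical campus network, rejecting a domain's inbound mail and blocking outbound web access to it are separate controls in separate systems, so one cannot be an accidental side effect of the other. A hypothetical sketch, using Postfix for mail filtering and Squid as the web proxy (the file paths and rules are illustrative assumptions, not ASU's actual configuration):

```
# Postfix (mail server): reject inbound mail whose sender is in the domain
# /etc/postfix/sender_access
change.org    REJECT

# Squid (web proxy): separately deny outbound requests to the same domain
acl petition_site dstdomain .change.org
http_access deny petition_site
```

An administrator dealing only with a spam complaint would have a reason to add the first rule; adding the second requires a distinct decision to keep users from reaching the site.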

For #2, sorry, ASU, this isn’t spam. Spam is unsolicited bulk commercial e-mail. Change.org is, allegedly, sending unsolicited political e-mail. And that’s protected by the First Amendment – see, for example, the Virginia Supreme Court’s analysis of that state’s anti-spam law, which covered political messages. Potential political spammers have a sharp disincentive to fill recipients’ inboxes – it’s a sure-fire way to annoy them into opposing your position.

For #3, ASU doesn’t get to determine what academic and research uses are “legitimate.” If they throttle P2P apps, that’s fine. If they limit file sizes for attachments, no problem. But deciding that the message from Change.org is not “legitimate” is classic, and unconstitutional, viewpoint discrimination.

This looks like censorship. I think it’s more likely to be stupidity: someone in ASU’s IT department decided to block these messages as spam, and to filter outbound Web requests to the site contained within those messages. But: with great power over the network comes great responsibility. Well-intentioned constitutional violations are still unlawful. It would also help if ASU’s spokesperson simply admitted the mistake rather than engaging in idiotic justification.

As I mention in Orwell’s Armchair, public actors are increasingly important sources of Internet access. But when ASU and other public universities take on the role of ISP, they need to remember that they are not AOL: their technical decisions are constrained not merely by tech resources, but by our commitment to free speech. Let’s hope the Sun Devils cool off on the filtering…

Cross-posted at Info/Law.

Symposium Next Week on “A Legal Theory for Autonomous Artificial Agents”

On February 14-16, we will host an online symposium on A Legal Theory for Autonomous Artificial Agents, by Samir Chopra and Laurence White. Given the great discussions at our previous symposiums on Tim Wu’s Master Switch and Jonathan Zittrain’s Future of the Internet, I’m sure this one will be a treat. Participants will include Ken Anderson, Ryan Calo, James Grimmelmann, Sonia Katyal, Ian Kerr, Andrea Matwyshyn, Deborah DeMott, Paul Ohm, Ugo Pagallo, Lawrence Solum, Ramesh Subramanian, and Harry Surden. Chopra will be reading the participants’ posts and responding here, too. I discussed the book with Chopra and Grimmelmann in Brooklyn a few months ago, and the audience found the many present and future scenarios it raises fascinating. (If you’re interested in Google’s autonomous cars, drones, robots, or even the annoying little Microsoft paperclip guy, you’ll find something intriguing in the book.)

There is an introduction to the book below the fold. (Chapter 2 of the book was published in the Illinois Journal of Law, Technology and Policy, and can be found online at SSRN.) We look forward to hosting the discussion!



The E.U. Data Protection Directive and Robot Chicken

The European Commission released a draft of its revised Data Protection Directive this morning, and Jane Yakowitz has a trenchant critique up at Forbes.com. In addition to the sharp legal analysis, her article has both a Star Wars and Robot Chicken reference, which makes it basically the perfect information law piece…


Supporting the Stop Online Piracy Act Protest Day

As my co-blogger Gerard notes, today is SOPA protest day. Sites like Google and WordPress have censored their logos or offered a way to contact your congressperson, though they remain live. Other sites, like Wikipedia, Reddit, and Craigslist, have shut down, and more are set to go dark at some point today. There’s lots of terrific commentary on SOPA, which is designed to tackle the problem of foreign-based websites that sell pirated movies, music, and other products – but with a heavy hand that threatens free expression and due process. The Wall Street Journal’s Amy Schatz has this story and Politico has another helpful piece; The Hill’s Brendan Sasso’s Twitter feed has lots of terrific updates. Mark Lemley, David Levine, and David Post carefully explain why we ought to reject SOPA and the PROTECT IP Act in “Don’t Break the Internet,” published by Stanford Law Review Online.

In the face of the protest, House Judiciary Committee Chairman Lamar Smith (R-TX) vowed to bring SOPA to a vote in his committee next month. “I am committed to continuing to work with my colleagues in the House and Senate to send a bipartisan bill to the White House that saves American jobs and protects intellectual property,” he said. So, too, Senator Patrick Leahy (D-VT) pushed back against websites planning to shut down today in protest of his bill. “Much of what has been claimed about the Senate’s PROTECT IP Act is flatly wrong and seems intended more to stoke fear and concern than to shed light or foster workable solutions. The PROTECT IP Act will not affect Wikipedia, will not affect reddit, and will not affect any website that has any legitimate use,” Chairman Leahy said. Everyone’s abuzz on the issue, and rightly so. I spoke on a panel on intermediary liability at the Congressional Internet Caucus’ State of the Net conference, and everyone wanted to talk about SOPA.
I’m hoping that the blackout and other shows of disapproval will convince our representatives in the House and Senate to back off the most troubling parts of the bill. As fabulous guest blogger Derek Bambauer argues, we need to bring greater care and thought to the issue of Internet censorship. Cybersecurity is at issue too, and we need to pay attention. Derek may be right that both bills will go nowhere, especially given Silicon Valley’s concerted lobbying efforts against them. But we will have to watch to see whether Representative Smith lives up to his promise to bring SOPA back to committee, and whether Senator Leahy remains as committed to the PROTECT IP Act in a few weeks as he is today.


Surveillance, For Your Benefit?

Bloomberg Businessweek reports on retailers’ use of camera surveillance to glean intelligence from shoppers’ behavior. A company called RetailNext, for instance, runs its software through a store’s security camera video feed to analyze customer behavior. It describes itself as the “leader in real-time in-store monitoring, enabling retailers and manufacturers to collect, analyze and visualize in-store data.” According to the company, it “uses best-in-class video analytics, on-shelf sensors, along with data from point-of-sale and other business systems, to automatically inform retailers about how people engage in their stores.” RetailNext’s software can integrate data from hardware such as RFID chips and motion sensors to track customers’ movements. The company explains that it “tracks more than 20 million shoppers per month by collecting data from more than 15,000 sensors in retail stores.” Its service apparently helps stores figure out where to place certain merchandise to boost sales. T-Mobile uses similar technology from another firm, 3VR, whose software tracks how people move around its stores, how long they stand in front of displays, and which phones they pick up and for how long. 3VR is testing facial-recognition software that can identify shoppers’ gender and approximate age. Businessweek explains that the “software would give retailers a better handle on customer demographics and help them tailor promotions.” What we are seeing is, according to 3VR’s CEO, just “scratching the surface,” as someday “you’ll have the ability to measure every metric imaginable.”
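The core of the tracking described above is simple bookkeeping over sensor events. A minimal sketch in Python of the kind of dwell-time computation such analytics performs; the event format, names, and data are invented for illustration and are not RetailNext’s or 3VR’s actual interfaces:

```python
from collections import defaultdict
from datetime import datetime

def dwell_times(events):
    """Compute seconds each shopper spends in each store zone.

    `events` is a list of (shopper_id, zone, timestamp, kind) tuples,
    where kind is "enter" or "exit". Returns a dict mapping
    (shopper_id, zone) to total seconds spent there.
    """
    entered = {}                 # (shopper, zone) -> time of last entry
    totals = defaultdict(float)  # (shopper, zone) -> accumulated seconds
    # Process events in time order so enters pair with later exits.
    for shopper, zone, ts, kind in sorted(events, key=lambda e: e[2]):
        key = (shopper, zone)
        if kind == "enter":
            entered[key] = ts
        elif kind == "exit" and key in entered:
            totals[key] += (ts - entered.pop(key)).total_seconds()
    return dict(totals)

# Hypothetical feed: one shopper lingers 45 seconds at a phone display.
events = [
    ("s1", "phone-display", datetime(2012, 1, 1, 10, 0, 0), "enter"),
    ("s1", "phone-display", datetime(2012, 1, 1, 10, 0, 45), "exit"),
]
print(dwell_times(events))  # {('s1', 'phone-display'): 45.0}
```

Once the per-shopper, per-zone totals exist, joining them with point-of-sale records or a facial-recognition demographic guess is a one-line merge away, which is why the CEO’s “every metric imaginable” remark is plausible rather than hyperbole.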

Indeed. Little imagination is needed to predict the future in light of our present. As Joseph Turow‘s important new book The Daily You: How the New Advertising Industry Is Defining Your Identity and Worth (Yale University Press) explores, the data collection and analysis of individuals is breathtaking in scope. In the name of better, more relevant advertising and marketing, companies like Acxiom have databases teeming with our demographic data (age, gender, race, ethnicity, address, income, marital status), interests, online and offline spending habits, and health status based on our purchases and online comments (diabetic, allergy sufferer, and the like). Consumers are sorted into categories such as “Corporate Clout,” “Soccer and SUV,” “Mortgage Woes,” and “On the Edge.” eXelate gathers online data on over 200 million unique individuals per month through deals with hundreds of sites: their demographics, social activities, and social networks. Advertisers can add even more data to eXelate’s cookies – data from Nielsen, which includes Census Bureau data, as well as data brokers’ digital dossiers. Data firms like Lotame track the comments that people leave on sites and categorize them. Now, let’s consider weaving in the facial-recognition software and retailer cameras of companies like 3VR and RetailNext. And to really top things off, let’s think about linking all of this data to cellphone location information. The surveillance of networked spaces would be totalizing.

Turow’s book exposes important costs of these developments. This post will discuss a few – hopefully, I can have Professor Turow on for a Bright Ideas feature. This sort of targeting and hyper-surveillance leaves many people with far narrower options and subjects them to social discrimination. Marketers use these databases to determine whether Americans are worthy “targets” or not-worth-bothering-with “waste.” For the “Soccer and SUV” moms between 35 and 45 who live on the West Coast and want to buy a small car, car companies may offer serious discounts via online advertisements and e-mail. But their “On the Edge” counterparts get left out in the cold with higher prices – why bother trying to attract people who don’t pay their debts? All of this sorting encourages media to offer soft stories designed to meet people’s interests, as secretly determined by those gathering and analyzing our networked lives. This discussion brings to mind another important read: Julie Cohen‘s Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press). As Professor Cohen thoughtfully explores, this sort of surveillance has a profound impact on the creative play of our everyday lives. It creates hierarchies among those watched and systematizes difference. I’ll have lots more to say about Cohen’s take on our networked society soon. In March, we will be hosting an online symposium on her book – much to look forward to in the new year.

Gamifying Control of the Scored Self

Social sorting is big business. Bosses and bankers crave “predictive analytics”: ways of deciding who will be the best worker, borrower, or customer. Our economy is less likely to reward someone who “builds a better mousetrap” than it is to fund a startup that will identify those most likely to buy a mousetrap. The critical resource here is data, the fossil fuel of the digital economy. Privacy advocates are digital environmentalists, worried that rapid exploitation of data either violates moral principles or sets in motion destructive processes we only vaguely understand now.*

Start-up fever fuels these concerns as new services debut and others grow in importance. For example, a leader at Lenddo, “the first credit scoring service that uses your online social network to assess credit,” has called for “thousands of engineers [to work] to assess creditworthiness.” We all know how well the “quants” have run Wall Street – but maybe this time will be different. His company aims to mine data derived from digital monitoring of relationships. ITWorld headlined the development “How Facebook Can Hurt Your Credit Rating” – “It’s time to ditch those deadbeat friends.” It also brought up the disturbing prospect of redlined portions of the “social graph.”

There’s a lot of value in such “news you can use” reporting. However, I think it misses some problematic aspects of a pervasively evaluated and scored digital world. Big data’s fans will always counter that, for every person hurt by surveillance, there’s someone else who is helped by it. Let’s leave aside, for the moment, whether the game of reputation-building is truly zero-sum, and the far more important question of whether these judgments are fair. The data-meisters’ analytics deserve scrutiny on other grounds.


Stanford Law Review Online: Don’t Break the Internet

Stanford Law Review

The Stanford Law Review Online has just published a piece by Mark Lemley, David S. Levine, and David G. Post on the PROTECT IP Act and the Stop Online Piracy Act. In Don’t Break the Internet, they argue that the two bills — intended to counter online copyright and trademark infringement — “share an underlying approach and an enforcement philosophy that pose grave constitutional problems and that could have potentially disastrous consequences for the stability and security of the Internet’s addressing system, for the principle of interconnectivity that has helped drive the Internet’s extraordinary growth, and for free expression.”

They write:

These bills, and the enforcement philosophy that underlies them, represent a dramatic retreat from this country’s tradition of leadership in supporting the free exchange of information and ideas on the Internet. At a time when many foreign governments have dramatically stepped up their efforts to censor Internet communications, these bills would incorporate into U.S. law a principle more closely associated with those repressive regimes: a right to insist on the removal of content from the global Internet, regardless of where it may have originated or be located, in service of the exigencies of domestic law.

Read the full article, Don’t Break the Internet by Mark Lemley, David S. Levine, and David G. Post, at the Stanford Law Review Online.

Note: Corrected typo in first paragraph.


Should Teachers Be Banned from Communicating with Students Online?

Increasingly, states and school districts are struggling over how to deal with teachers who communicate with students online via social network websites. One foolish way to address the issue is via strict bans, such as a law passed in Missouri earlier this year that attempted to ban teachers from friending students on social network websites. Such laws are likely violations of the First Amendment rights to freedom of speech and association, and I blogged at the Huffington Post that the law was unconstitutional. Soon thereafter, a court struck down the law.

The NY Times now has an article out about the challenges in crafting social media policies for teacher-student interaction, noting that “stricter guidelines are meeting resistance from some teachers because of the increasing importance of technology as a teaching tool and of using social media to engage with students.”

There are a number of considerations that schools should think about when crafting a social media policy:

1. The policy should account for the fact that there are legitimate reasons for students and teachers to communicate online.  A teacher might be related to a student, and certainly a law or policy shouldn’t ban parents from friending their children.  Or a teacher might be a student’s godparent, a close family friend, or connected to the student in some other way.

2. One middle-ground approach is to require parental consent whenever a teacher wants to friend a minor student online.  This greater transparency will address the cases where teachers might have inappropriate communication with minors.

3. Clear guidelines about appropriate teacher expression should be set forth, so teachers know what things will be inappropriate to say.  Teachers need to learn about their legal obligations of confidentiality, as well as avoiding invasions of privacy, defamation, harassment, threats, and other problematic forms of speech.

4. When teachers use social network sites in the classroom — or otherwise use blogs and online posting as a teaching device — they should exercise great care, especially when requiring minors to express themselves publicly online.  I’ve seen some class blogs where students are asked to post reactions to reading or to keep online journals.  Making students post their views and opinions to the public, especially at such a young age, strikes me as a problematic practice.  The Children’s Online Privacy Protection Act (COPPA) would protect minors under the age of 13, but teachers should be sensitive to minors 13 and older too.  No minor student should be required to post any personal information or class assignment on a publicly accessible website without the student’s consent and the parent’s consent.  And all websites that involve student personal information should have a privacy policy.

5. Education is key.  I’ve read about a lot of cases involving improper social media use by educators, and they often stem from a lack of awareness.  Teachers think they can say nearly anything and it will be protected by the First Amendment.  In fact, First Amendment law gives schools a lot of leeway in disciplining educators for what they say, and educators can also be sued by those whom they write about.  Educators often think that posting something anonymously makes it okay, or that they can get away with it — but anonymity online is often a mirage, and comments can readily be traced back to the speaker.  And educators often set the privacy settings on social media sites incorrectly, because they don’t spend enough time learning the ins and outs of those settings.  The settings are actually quite tricky — even rocket scientists have trouble figuring them out.


Facebook Settles with the FTC

Facebook has settled with the FTC over its change to its privacy policies back in 2009. According to the FTC complaint, as summed up by the FTC press release, Facebook engaged in a number of unfair and deceptive trade practices:

  • In December 2009, Facebook changed its website so certain information that users may have designated as private – such as their Friends List – was made public. They didn’t warn users that this change was coming, or get their approval in advance.
  • Facebook represented that third-party apps that users installed would have access only to user information that they needed to operate. In fact, the apps could access nearly all of users’ personal data – data the apps didn’t need.
  • Facebook told users they could restrict sharing of data to limited audiences – for example with “Friends Only.” In fact, selecting “Friends Only” did not prevent their information from being shared with third-party applications their friends used.
  • Facebook had a “Verified Apps” program and claimed it certified the security of participating apps. It didn’t.
  • Facebook promised users that it would not share their personal information with advertisers. It did.
  • Facebook claimed that when users deactivated or deleted their accounts, their photos and videos would be inaccessible. But Facebook allowed access to the content, even after users had deactivated or deleted their accounts.
  • Facebook claimed that it complied with the U.S.- EU Safe Harbor Framework that governs data transfer between the U.S. and the European Union. It didn’t.

 

The settlement, which requires auditing of Facebook for 20 years, imposes a number of requirements on the company.