Category: Privacy (Consumer Privacy)


Do We Need an Internet Ed. Class?

While I was attending the excellent privacy conference Dan Solove and Chris Hoofnagle organized in D.C. a few days ago, it occurred to me that just as one takes driver’s ed. before being allowed to drive a car, it might make sense to require an Internet education class in middle school. Driving is a key way people engage in the economy, and the Internet, especially email and social networking, is becoming at least as essential. Given all the benefits and problems of the Internet, from meeting new people and peer production to unfortunate gossip and dog-poop incidents, Internet Ed. might fill a gap that became apparent as I listened to the speakers at the conference.

Read More


Is the Computer Fraud and Abuse Act Unconstitutionally Vague?

At the National Law Journal, attorney Nick Akerman (Dorsey & Whitney) contends that the Computer Fraud and Abuse Act (CFAA) indictment of Lori Drew (background about the case is here) rests on an appropriate interpretation of the statute:

While this may be the first prosecution under the CFAA for cyberbullying, the statute neatly fits the facts of this crime. Drew is charged with violating §§ 1030(a)(2)(C), (c)(2)(B)(2) of the CFAA, which make it a felony punishable up to five years imprisonment, if one “intentionally accesses a computer without authorization . . . , and thereby obtains . . . information from any protected computer if the conduct involved an interstate . . . communication” and “the offense was committed in furtherance of any . . . tortious act [in this case intentional infliction of emotional distress] in violation of the . . . laws . . . of any State.”

There is no question that the MySpace network is a “protected” computer as that term is defined by the statute. Indeed, “[e]very cell phone and cell tower is a ‘computer’ under this statute’s definition; so is every iPod, every wireless base station in the corner coffee shop, and many another gadget.” U.S. v. Mitra, 405 F.3d 492, 495 (7th Cir. 2005). There is also no question that a violation of MySpace’s TOS provides a valid predicate for proving that the defendant acted “without authorization.” What the commentators ignored in their critique of this indictment is that the “CFAA . . . is primarily a statute imposing limits on access and enhancing control by information providers.” EF Cultural Travel B.V. v. Zefer Corp., 318 F.3d 58, 63 (1st Cir. 2003). A company “can easily spell out explicitly what is forbidden.” Id. at 63. Thus, companies have the right to post what are in effect “No Trespassing” signs that can form the basis for a criminal prosecution.

If this interpretation of the law is correct, then the law is probably unconstitutionally vague. A vague law is one that either fails to provide the kind of notice that will enable ordinary people to understand what conduct it prohibits; or authorizes or encourages arbitrary and discriminatory enforcement. The CFAA, as construed by the prosecution in the Drew case, will probably be found vague because it authorizes or encourages arbitrary and discriminatory enforcement.

Suppose I put a notice on this post that says: “No attorneys may post a comment to this blog.” Suppose Nick Akerman comes to this site, sees this post, and writes a comment that is defamatory. Under his theory, he can be prosecuted for violating the CFAA. He has “trespassed” on this site. Moreover, if a blog has a policy that it will not tolerate “rude, uncivil, or off-topic comments,” then commenters whose comments are tortious (intentional infliction of emotional distress, public disclosure of private facts, false light, defamation, etc.) can be liable for a CFAA violation. Moreover, any use of a website that goes against whatever terms the site’s operator has set forth, and that constitutes a negligence tort, is also criminal.

The problem here is that the CFAA’s applicability would be extremely broad — so broad that the cases likely to be prosecuted would be arbitrary. Since tort law is common law, and is very flexible, broad, and evolving, people would not have adequate notice about what conduct would be legal and not legal. There’s a reason why tort law is different from criminal law — we are willing to accept a lot more ambiguity and uncertainty in tort law than in criminal law, where the stakes involve potential imprisonment.

Moreover, Nick Akerman focuses only on CFAA § 1030(c)(2)(B)(2), which makes the unauthorized-access offense a felony if it was committed in furtherance of any tortious act.

The CFAA § 1030(a)(2)(C) makes it a criminal misdemeanor to “intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains . . . information from any protected computer if the conduct involved an interstate or foreign communication.” If I’m interpreting this correctly (and I don’t purport to be an expert on the CFAA), under the Drew prosecutor’s interpretation of the CFAA, any time a person violates a website’s terms of service and accesses any information from the site, there’s a criminal violation. That means that if I post on this blog a notice that says: “No attorneys may access any parts of this blog other than the front page,” and an attorney accesses any other page on my blog, then there’s a CFAA violation. Could the law possibly be this broad? I think it would require a narrowing interpretation in order to avoid problems of unconstitutional vagueness.

The CFAA strikes me as a very poorly drafted statute, and the Drew indictment demonstrates the problems with the law. Courts should either fix the CFAA interpretively by narrowing its scope or strike it down as unconstitutionally vague. What clearly cannot stand is for the law to be interpreted as the Drew prosecutor seeks to interpret it.

Hat tip: Dan Slater at the WSJ Blog


My New Book, Understanding Privacy

I am very happy to announce the publication of my new book, UNDERSTANDING PRIVACY (Harvard University Press, May 2008). There has been a longstanding struggle to understand what “privacy” means and why it is valuable. Professor Arthur Miller once wrote that privacy is “exasperatingly vague and evanescent.” In this book, I aim to develop a clear and accessible theory of privacy, one that will provide useful guidance for law and policy. From the book jacket:

Privacy is one of the most important concepts of our time, yet it is also one of the most elusive. As rapidly changing technology makes information more and more available, scholars, activists, and policymakers have struggled to define privacy, with many conceding that the task is virtually impossible.

In this concise and lucid book, Daniel J. Solove offers a comprehensive overview of the difficulties involved in discussions of privacy and ultimately provides a provocative resolution. He argues that no single definition can be workable, but rather that there are multiple forms of privacy, related to one another by family resemblances. His theory bridges cultural differences and addresses historical changes in views on privacy. Drawing on a broad array of interdisciplinary sources, Solove sets forth a framework for understanding privacy that provides clear, practical guidance for engaging with relevant issues.

Understanding Privacy will be an essential introduction to long-standing debates and an invaluable resource for crafting laws and policies about surveillance, data mining, identity theft, state involvement in reproductive and marital decisions, and other pressing contemporary matters concerning privacy.

Here’s a brief summary of Understanding Privacy. Chapter 1 (available on SSRN) introduces the basic ideas of the book. Chapter 2 builds upon my article Conceptualizing Privacy, 90 Cal. L. Rev. 1087 (2002), surveying and critiquing existing theories of privacy. Chapter 3 contains an extensive discussion (mostly new material) explaining why I chose the approach toward theorizing privacy that I did, and why I rejected many other potential alternatives. It examines how a theory of privacy should account for cultural and historical variation yet avoid being too local in perspective. This chapter also explores why a theory of privacy should avoid being too general or too contextual. I draw significantly from historical examples to illustrate my points. I also discuss why a theory of privacy shouldn’t focus on the nature of the information, the individual’s preferences, or reasonable expectations of privacy. Chapter 4 consists of new material discussing the value of privacy. Chapter 5 builds on my article, A Taxonomy of Privacy, 154 U. Pa. L. Rev. 477 (2006). I’ve updated the taxonomy in the book, and I’ve added a lot of new material about how my theory of privacy interfaces not only with US law, but with the privacy law of many other countries. Finally, Chapter 6 consists of new material exploring the consequences and applications of my theory and examining the nature of privacy harms.

Understanding Privacy is much broader than The Digital Person and The Future of Reputation. Whereas these other two books examined specific privacy problems, Understanding Privacy is a general theory of privacy, and I hope it will be relevant and useful in a wide range of issues and debates.

For more information about the book, please visit its website.


The Digital Person Free Online!

Last month, Yale University Press allowed me to put my book, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet online for free. The experiment has gone quite well. The book’s website received a big bump in traffic, with many people downloading one or more chapters. The book’s sales picked up for several weeks after it was placed online for free. Sales have now returned to about the same level as before the book went online.

I’m delighted to announce that NYU Press has allowed me to put my book, The Digital Person: Technology and Privacy in the Information Age (NYU Press, 2004) online for free.

Here’s a brief synopsis of The Digital Person from the book jacket:

Seven days a week, twenty-four hours a day, electronic databases are compiling information about you. As you surf the Internet, an unprecedented amount of your personal information is being recorded and preserved forever in the digital minds of computers. These databases create a profile of activities, interests, and preferences used to investigate backgrounds, check credit, market products, and make a wide variety of decisions affecting our lives. The creation and use of these databases–which Daniel J. Solove calls “digital dossiers”–has thus far gone largely unchecked. In this startling account of new technologies for gathering and using personal data, Solove explains why digital dossiers pose a grave threat to our privacy.

Digital dossiers impact many aspects of our lives. For example, they increase our vulnerability to identity theft, a serious crime that has been escalating at an alarming rate. Moreover, since September 11th, the government has been tapping into vast stores of information collected by businesses and using it to profile people for criminal or terrorist activity. In THE DIGITAL PERSON, Solove engages in a fascinating discussion of timely privacy issues such as spyware, web bugs, data mining, the USA-Patriot Act, and airline passenger profiling.

THE DIGITAL PERSON not only explores these problems, but provides a compelling account of how we can respond to them. Using a wide variety of sources, including history, philosophy, and literature, Solove sets forth a new understanding of what privacy is, one that is appropriate for the new challenges of the Information Age. Solove recommends how the law can be reformed to simultaneously protect our privacy and allow us to enjoy the benefits of our increasingly digital world.

Book reviews are collected here.


Ranking Banks Based on Incidents of Identity Theft

Chris Hoofnagle just released a new report entitled Measuring Identity Theft at Top Banks. In the report, he ranks the top 25 US banks according to their relative incidence of identity theft. The report is based on consumer-submitted complaints to the FTC where the victim identified an institution.

In a previous paper called Identity Theft: Making the Known Unknowns Known, Chris argued that there should be mandatory public disclosure of identity theft statistics by banks. Since financial institutions don’t currently release such data, we have no idea which institutions are more effective at reducing identity theft than others.

For his new paper, Chris made a FOIA request last year to the FTC for two years of consumer complaint data. The FTC found it too burdensome to release two years’ worth of data, so “the request was limited to three randomly-chosen months in 2006, January, March, and September. These months included data from 88,560 complaints, in which 46,262 names of institutions were identified by victims.” Chris’s paper is based on an analysis of this data.

From the abstract:

There is no reliable way for consumers, regulators, and businesses to assess the relative incidence of identity fraud at major financial institutions. This lack of information prevents more vigorous competition among institutions to protect accountholders from identity theft. As part of a multiple strategy approach to obtaining more actionable data on identity theft, the Freedom of Information Act was used to obtain complaint data submitted by victims in 2006 to the Federal Trade Commission. This complaint data identifies the institution where impostors established fraudulent accounts or affected existing accounts in the name of the victim. The data show that some institutions have a far greater incidence of identity theft than others. The data further show that the major telecommunications companies had numerous identity theft events, but a metric is lacking to compare this industry with the financial institutions.

This is a first attempt to meaningfully compare institutions on their performance in avoiding identity theft. This analysis faces several challenges that are described in the methods section. The author welcomes constructive criticism, suggestions, and comments in an effort to shine light on the identity theft problem.

This is a fantastic endeavor, as more information on how institutions are protecting against identity theft is sorely needed. Chris admits that his study has some limitations and could be improved if financial institutions would supply more information to the public. But based on the information Chris could find out, this report is quite revealing. Hopefully, it will spark more transparency from financial institutions in the future.

Here is one of many charts in the paper. The chart below shows incidents of identity theft relative to the size of each institution.

hoofnagle-rate-banks.png
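The rate metric behind a chart like this can be illustrated with a small sketch: divide each institution's complaint count by a measure of its size. The institution names, complaint counts, and deposit figures below are made-up placeholders for illustration, not the data from Hoofnagle's paper.

```python
# Sketch of a size-adjusted identity theft metric: complaints per
# $1 billion of deposits. All numbers are illustrative placeholders,
# NOT figures from the report.
complaints = {"Bank A": 1200, "Bank B": 300, "Bank C": 450}
deposits_bn = {"Bank A": 600, "Bank B": 50, "Bank C": 900}  # $ billions

def complaint_rate(name):
    """Identity theft complaints per $1 billion of deposits."""
    return complaints[name] / deposits_bn[name]

# Rank institutions by relative incidence, worst first.
ranked = sorted(complaints, key=complaint_rate, reverse=True)
for name in ranked:
    print(f"{name}: {complaint_rate(name):.2f} complaints per $1B deposits")
```

The point of normalizing by size is visible even in toy numbers: the bank with the most raw complaints is not necessarily the one with the worst relative incidence.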


Coming Back from the Dead

Lazarus had it easy. Not so for Laura Todd, who has been trying to come back from the dead for nearly a decade. According to WSMV News in Nashville:

According to government paperwork, Laura Todd has been dead off and on for eight years, and Todd said there’s no end to the complications the situation creates.

“One time when I (was) ruled dead, they canceled my health insurance because it got that far,” she said.

Todd’s struggle started with a typo at the Social Security Administration. She said the government has assured her since then that it has deleted her death record, but the problems keep cropping up.

On Wednesday, the IRS once again rejected her electronic tax return. She said she’s gone through it before.

“I will not be eligible for my refund. I’m not eligible for my rebate. I mean, I can’t do anything with it,” she said.

Channel 4’s Nancy Amons first reported about Todd’s ordeal last week, but Amons has since found out more about how common the problem is.

According to a government audit, Social Security had to resurrect more than 23,000 people in a period of less than two years. The number is the approximate equivalent to the population of Brentwood.

The audit said the lack of documentation in the Social Security computer makes it impossible for the government’s auditors to determine if the people are dead or alive.

But some of those who are alive have found more complications after their resurrection.

Illinois resident Jay Liebenow was also declared dead. He said Todd is now more vulnerable to identity theft because after someone dies, Social Security releases that person’s personal information on computer discs. He said the information is sold to anyone who wants it, like the Web site Ancestry.com.

One of the problems with modern recordkeeping is that although computers make things more efficient, they compound the effects that errors have on people’s lives. The difficulty is that the law currently does not afford people sufficient power to clean up mistakes in their records. Because information is so readily transferred between entities, an error corrected in one database has often migrated to another database before the correction. The error doesn’t die. Instead, you do.

Responsibility should be placed on every entity that maintains records to ensure that information is correct and that errors are promptly fixed. Moreover, when information is shared with others, the entity sharing it should have a duty to inform the recipients of the error, and those receiving the data should have a duty to check the source for corrections.
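The duty structure just described, correct at the source and push the fix to everyone you shared the record with, can be sketched as a toy model. The class and method names below are my own invention for illustration, not any real agency's system.

```python
# Toy model of error-correction propagation between record keepers.
# Each keeper remembers whom it shared a record with, so a correction
# cascades downstream instead of leaving stale "deceased" copies behind.
class RecordKeeper:
    def __init__(self, name):
        self.name = name
        self.records = {}     # person -> data
        self.downstream = []  # keepers we've shared records with

    def share(self, person, other):
        """Share a record and remember the recipient for later corrections."""
        other.records[person] = self.records[person]
        if other not in self.downstream:
            self.downstream.append(other)

    def correct(self, person, data):
        """Fix the record locally, then notify every downstream keeper."""
        self.records[person] = data
        for keeper in self.downstream:
            keeper.correct(person, data)  # corrections cascade recursively

ssa = RecordKeeper("SSA")
irs = RecordKeeper("IRS")
insurer = RecordKeeper("Insurer")

ssa.records["Laura Todd"] = {"deceased": True}  # the original typo
ssa.share("Laura Todd", irs)
irs.share("Laura Todd", insurer)                # error migrates downstream

ssa.correct("Laura Todd", {"deceased": False})  # fix follows the same path
print(insurer.records["Laura Todd"])
```

Without the `downstream` bookkeeping, the insurer's copy would keep Todd dead even after the SSA fixed its own database, which is essentially the failure mode the post describes.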

Right now, we’re living in a bureaucratic data hell, and that’s because there aren’t sufficient incentives for entities to be careful with the records they keep about people.

Image: The Resurrection of Lazarus by Vincent van Gogh, 1889-90, from Wikimedia Commons.


Facebook Applications: Another Privacy Concern

Recently, I’ve been complaining about Facebook’s mishaps regarding privacy. Back in 2006, Facebook sparked the ire of over 700,000 members when it launched News Feeds. In 2007, Facebook launched Beacon and Social Ads, sparking new privacy outcries. An uprising of Facebook users prompted Facebook to change its policies regarding Beacon. For more about Facebook’s recent privacy issues, see my post here.

But that’s not all. Over at CNET, Chris Soghoian reports about some severe privacy concerns with Facebook applications. An application (or “app” for short) is a program that is created by a third party that adds interesting features to one’s profile. These apps have become quite popular with Facebook users. But they come with some very serious potential dangers. Soghoian writes:

[A] new study suggests there may be a bigger problem with the applications. Many are given access to far more personal data than they need to in order to run, including data on users who never even signed up for the application. Not only does Facebook enable this, but it does little to warn users that it is even happening, and of the risk that a rogue application developer can pose. . . .

In order to install an application, a Facebook user must first agree to “allow this application to…know who I am and access my information.” Users not willing to permit the application access to all kinds of data from their profile cannot install it onto their Facebook page.

What kind of information does Facebook give the application developer access to? Practically everything. . . .

The applications don’t actually run on Facebook’s servers, but on servers owned and operated by the application developers. Whenever a Facebook user’s profile is displayed, the application servers contact Facebook, request the user’s private data, process it, and send back whatever content will be displayed to the user. As part of its terms of service, Facebook makes the developers promise to throw away any data they received from Facebook after the application content has been sent back for display to the user.

So when you use a third-party application, you must basically trust that third party to follow Facebook’s rules in good faith. In other words, Facebook users use applications at their own risk.
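The data flow Soghoian describes, in which the platform hands the developer's server the user's full profile and relies on a terms-of-service promise that the data is discarded, can be sketched in a few lines. Every function and field name here is hypothetical, not Facebook's actual API.

```python
# Toy sketch of the third-party app trust model described above: the
# platform releases the whole profile on each view, and only a promise
# prevents the developer from keeping it. All names are hypothetical.
FULL_PROFILE = {"name": "Alice", "birthday": "1/1/1980",
                "friends": ["Bob", "Carol"], "photos": ["p1.jpg"]}

def platform_sends_profile(user_installed_app):
    """The platform hands over the whole profile, not just what's needed."""
    return dict(FULL_PROFILE) if user_installed_app else None

def honest_app(profile):
    """A well-behaved app uses one field and discards the rest."""
    return f"Birthday widget for {profile['name']}"

def rogue_app(profile, hoard):
    """A rogue developer can simply retain everything it was sent."""
    hoard.append(profile)  # violates the terms of service, but nothing stops it
    return f"Birthday widget for {profile['name']}"

hoard = []
profile = platform_sends_profile(user_installed_app=True)
print(honest_app(profile))
print(rogue_app(profile, hoard))
print(f"rogue server retained {len(hoard)} full profile(s)")
```

The enforcement gap is the point: both apps return identical content to the user, so the platform cannot tell from the outside which one hoarded the data.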

But what if an application is created by some hacker in Russia? Or is designed by a creepy child molester to harvest people’s personal information? Should Facebook be doing more to protect users against the bad-apple application developers?

Soghoian notes that in many cases, applications are being given access to much more personal data than they actually need to function:

Read More


Facebook’s Beacon, Blockbuster, and the Video Privacy Protection Act


The news has been buzzing lately about Facebook’s Beacon, where participating websites share personal information with Facebook. Beacon originally had a poor notice and opt-out policy, but after significant public criticism, Facebook changed to an opt-in policy. Even under the new opt-in policy, however, the participating companies are still turning data over to Facebook, and that spells potential trouble for at least one of the 40 companies in the Beacon program — Blockbuster Video.

Over at Laboratorium, Professor James Grimmelmann (NY Law School) has an excellent post arguing that Blockbuster’s participation in Facebook’s Beacon violates the Video Privacy Protection Act (VPPA), 18 U.S.C. § 2710. James writes:

The VPPA states:

A video tape service provider who knowingly discloses, to any person, personally identifiable information concerning any consumer of such provider shall be liable….

18 U.S.C. § 2710(b)(1). The important first question is who’s a “video tape service provider.” That’s defined in paragraph (a)(4):

[T]he term “video tape service provider” means any person, engaged in the business, in or affecting interstate or foreign commerce, of rental, sale, or delivery of prerecorded video cassette tapes or similar audio visual materials. . . .

Blockbuster clearly qualifies as a video tape service provider. To the extent it transmits information to Facebook about a customer’s video purchases — no matter what Facebook ultimately does with that data (i.e. regardless of whether it appears in a person’s profile, is stored by Facebook in a database, or is deleted), Blockbuster could be liable under VPPA. The statute is an opt-in statute, requiring that the customer provide “informed written consent . . . at the time the disclosure is sought” in order for the disclosure to be permissible.

James also analyzes whether Facebook could be liable as well:

There’s the joint enterprise theory; since Facebook and Blockbuster acted together, and Blockbuster is liable, so too is Facebook. There’s a split in the VPPA caselaw as to whether liability runs only against the video tape service provider, or can run also against the person who induced the disclosure.

James concludes:

Put this all together, and the legal situation looks a bit bleak for Facebook and Blockbuster. The VPPA provides damages of $2,500 per violation, plus punitive damages and attorneys’ fees. I have no idea how many movies wound up in people’s news feeds, but it doesn’t have to be too many for the total to hurt. Class action lawyers, start your engines.
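The exposure James describes scales linearly with the number of disclosures. A back-of-the-envelope sketch, where the disclosure count is an arbitrary assumption for illustration, not a real figure:

```python
# Back-of-the-envelope VPPA exposure: the $2,500 per-violation
# liquidated damages figure quoted in the post, before punitive
# damages and attorneys' fees. The disclosure count below is an
# arbitrary assumption, not an actual number of affected users.
STATUTORY_DAMAGES = 2_500  # dollars per violation

def minimum_exposure(disclosures):
    """Floor on liability: liquidated damages times violation count."""
    return disclosures * STATUTORY_DAMAGES

print(f"10,000 disclosures -> ${minimum_exposure(10_000):,} minimum")
```

Even a modest class makes the total substantial, which is why the post suggests it "doesn't have to be too many" disclosures for the exposure to hurt.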


Facebook — the New DoubleClick?

I previously complained about Facebook’s Beacon and Social Ads, and last week Facebook appeared to back down (at least from Beacon) by changing its policy and having users opt in before their activities on other websites are broadcast on their profiles. I applauded Facebook’s change of heart.

But there are more disturbing aspects of Beacon that have not been changed. According to Macworld:

If you think that just because you have never signed up for Facebook you’re immune to the tracking and collecting of user activities outside of this popular social networking site, think again.

Facebook’s controversial Beacon ad system tracks activities from all users in its third-party partner sites, including from people who have never signed up with Facebook or who have deactivated their accounts, CA has found.

Beacon captures detailed data on what users do on these external partner sites and sends it back to Facebook along with users’ IP addresses, Stefan Berteau, senior research engineer at CA’s Threat Research Group, said Monday in an interview.

This happens even if users delete the Facebook cookie. “The Facebook Javascript [code] is still called by the affiliate site and the information is passed in,” he said. In the case of users without accounts or with deactivated accounts, the data isn’t tied to a Facebook ID, he said.

However, it is well-known that IP addresses provide a variety of information about users, and have in some cases been used to identify individuals.

The information captured by Beacon in these cases includes the addresses of Web pages visited by the user and a string with the action taken in the partner site, Berteau said. . . .

Over the weekend, Facebook confirmed that Berteau’s report on Friday was accurate, but said that it deletes the data it gets under these circumstances.

Still, Friday’s findings deepened the privacy concerns surrounding Beacon since its introduction several weeks ago. And the admission Monday added to the concerns, since it contradicted what had, until then, been the official company line about this issue.

For more, see Michael Zimmer's post.

A while back, DoubleClick generated many privacy complaints. DoubleClick used information about people's websurfing habits to target ads on various websites. Facebook's Beacon appears to be a related incarnation of the DoubleClick advertising model.
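Berteau's report describes a mechanism in which the partner site's embedded script reports the visit back to Facebook regardless of whether the visitor has an account or a cookie. A minimal server-side sketch of that behavior, with entirely hypothetical names (this is not Facebook's code), might look like this:

```python
# Hypothetical sketch of a Beacon-style tracking endpoint. The partner
# site's embedded script reports every action to the ad network's
# server; with no account cookie the event still arrives, tied only to
# the visitor's IP address. All names here are illustrative.
events = []

def beacon_endpoint(ip, page, action, account_cookie=None):
    """Record an event reported by the partner site's embedded script."""
    events.append({
        "ip": ip,                   # always captured
        "page": page,               # address of the page visited
        "action": action,           # e.g. "purchased movie tickets"
        "account": account_cookie,  # None for non-members: only the IP remains
    })

# A member and a non-member both trigger the partner site's script.
beacon_endpoint("203.0.113.7", "fandango.example/checkout",
                "purchased tickets", account_cookie="user123")
beacon_endpoint("198.51.100.9", "overstock.example/order",
                "bought a gift")  # no cookie: still logged, keyed by IP

print(f"{len(events)} events logged; "
      f"{sum(e['account'] is None for e in events)} tied only to an IP")
```

The sketch makes the privacy problem concrete: deleting the cookie changes only the `account` field, not whether the event is transmitted at all.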

Facebook is not the only one to blame with Beacon. About 40 websites participate in the Beacon program, including:

Read More


Facebook Listens and Responds

I’m quite pleased to learn that Facebook has come to a privacy epiphany. I’ve been blogging a lot lately about the privacy problems with Facebook’s new features — Beacon and Social Ads:

* Facebook’s Beacon: News Feeds All Over Again?

* The Facebook-Fandango Connection: Invasion of Privacy?

* Facebook and the Appropriation of Name or Likeness Tort

* The New Facebook Ads — Starring You: Another Privacy Debacle?

Facebook recently announced that it is changing the way it obtains people’s consent before it uses or discloses their personal information. In particular, its change in policy involves Beacon. According to the AP:

More than 40 different Web sites, including Fandango.com, Overstock.com and Blockbuster.com, had embedded Beacon in their pages to track transactions made by Facebook users.

Unless instructed otherwise, the participating sites alerted Facebook, which then notified a user’s friends within the social network about items that had been bought or products that had been reviewed.

Facebook thought the marketing feeds would help its users keep their friends better informed about their interests while also serving as “trusted referrals” that would help drive more sales to the sites using the Beacon system.

But thousands of Facebook users viewed the Beacon referrals as a betrayal of trust. Critics blasted the advertising tool as an unwelcome nuisance with flimsy privacy protections that had already exasperated and embarrassed some users.

Some users have already complained about inadvertently finding out about gifts bought for them for Christmas and Hanukkah after Beacon shared information from Overstock.com. Other users say they were unnerved when they discovered their friends had found out what movies they were watching through purchases made on Fandango.

Peter Lattman of the WSJ blog was among those caught off guard by Beacon, discovering to his dismay that Facebook had announced to his friends that he bought tickets to Bee Movie on Fandango.

According to the New York Times:

Under Beacon, when Facebook members purchase movie tickets on Fandango.com, for example, Facebook sends a notice about what movie they are seeing in the News Feed on all of their friends’ pages. If a user saves a recipe on Epicurious.com or rates travel venues on NYTimes.com, friends are also notified. There is an opt-out box that appears for a few seconds, but users complain that it is hard to find.

The New York Times story explains Facebook’s change in policy:

Read More