Category: Privacy (Consumer Privacy)


Harvard Law Review Privacy Symposium Issue

The privacy symposium issue of the Harvard Law Review is hot off the presses.  Here are the articles:

SYMPOSIUM
PRIVACY AND TECHNOLOGY
Introduction: Privacy Self-Management and the Consent Dilemma
Daniel J. Solove

What Privacy is For
Julie E. Cohen

The Dangers of Surveillance
Neil M. Richards

The EU-U.S. Privacy Collision: A Turn to Institutions and Procedures
Paul M. Schwartz

Toward a Positive Theory of Privacy Law
Lior Jacob Strahilevitz


Privacy Self-Management and the Consent Dilemma

I’m pleased to share with you my new article in the Harvard Law Review, Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013). You can download it for free on SSRN. It is a short piece (24 pages), so you can read it in one sitting.

Here are some key points in the Article:

1. The current regulatory approach for protecting privacy involves what I refer to as “privacy self-management” – the law provides people with a set of rights to enable them to decide how to weigh the costs and benefits of the collection, use, or disclosure of their information. People’s consent legitimizes nearly any form of collection, use, and disclosure of personal data. Unfortunately, privacy self-management is being asked to do work beyond its capabilities. Privacy self-management does not provide meaningful control over personal data.

2. Empirical and social science research has undermined key assumptions about how people make decisions regarding their data, assumptions that underpin and legitimize the privacy self-management model.

3. People cannot appropriately self-manage their privacy due to a series of structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses.

4. Privacy self-management addresses privacy in a series of isolated transactions guided by particular individuals. Privacy costs and benefits, however, are more appropriately assessed cumulatively and holistically — not merely at the individual level.

5. In order to advance, privacy law and policy must confront a complex and confounding dilemma with consent. Consent to collection, use, and disclosure of personal data is often not meaningful, and the most apparent solution – paternalistic measures – even more directly denies people the freedom to make consensual choices about their data.

6. The way forward involves (1) developing a coherent approach to consent, one that accounts for the social science discoveries about how people make decisions about personal data; (2) recognizing that people can engage in privacy self-management only selectively; (3) adjusting privacy law’s timing to focus on downstream uses; and (4) developing more substantive privacy rules.

The full article is here.

Cross-posted on LinkedIn.


Exponential Hacks

As All Things Digital’s Kara Swisher reports, LivingSocial experienced a significant hack the other day: over 50 million users’ email addresses, dates of birth, and encrypted passwords were leaked into the hands of Russian hackers (or so it seems). This hack comes on the heels of data breaches at LinkedIn and Zappos. That the passwords were encrypted just means that users had better change their passwords, and fast, because in time the encryption can be broken. A few years ago, I blogged about personal data leaks passing the 500 million mark. Hundreds of millions seems like child’s play today.
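Why do “encrypted” password dumps still put users at risk? Because attackers can guess passwords offline at their leisure. A minimal sketch of an offline dictionary attack, assuming a hypothetical leak of unsalted SHA-1 hashes (the usernames, passwords, and hashing scheme here are illustrative, not LivingSocial’s actual data):

```python
import hashlib

# Hypothetical leaked records: username -> unsalted SHA-1 hash of the password.
leaked = {
    "alice": hashlib.sha1(b"sunshine").hexdigest(),
    "bob": hashlib.sha1(b"letmein1").hexdigest(),
}

# A tiny wordlist; real attackers run billions of guesses on GPU rigs.
wordlist = ["password", "sunshine", "letmein1", "qwerty"]

def crack(hashes, words):
    """Offline dictionary attack: hash every guess once, then look up each leaked hash."""
    table = {hashlib.sha1(w.encode()).hexdigest(): w for w in words}
    return {user: table[h] for user, h in hashes.items() if h in table}

print(crack(leaked, wordlist))  # both weak passwords fall immediately
```

This is why changing a password quickly after a breach matters: the hashes buy time, not safety, especially for common passwords.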

This raises some important questions about what we mean when we talk about personally identifiable information (PII). Paul Schwartz and my co-blogger Dan Solove have done terrific work helping legislators devise meaningful definitions of PII in a world of reidentification. Paul Ohm is currently working on an important project providing a coherent account of sensitive information in the context of current data protection laws. Are someone’s password and date of birth sensitive information deserving special privacy protection? Beyond the obvious health, credit, and financial information, what other sorts of data do we consider sensitive, and why? Answers to these questions are crucial to companies formulating best practices, to the FTC as it continues its robust enforcement of privacy promises and its pursuit of deceptive practices, and to legislators considering private-sector privacy regulation of data brokers, as in Senator John Rockefeller’s current efforts.


“Brain Spyware”

As if we don’t have enough to worry about, now there’s spyware for your brain.  Or, there could be.  Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers. Read More


Netflix, Facebook, and Social Sharing

Just as Neil Richards’s The Perils of Social Reading, 101 Geo. L.J. 689 (2013), is out in final form, Netflix released its new social sharing features in partnership with that privacy protector, Facebook. Not that working with Google, Apple, or Microsoft would be much better. There may be things I am missing, but I don’t see how turning on this feature is wise, given that it requires you to remember not to share, which makes sharing a bit leakier than you may want.

Apparently you have to connect your Netflix account to Facebook to get the feature to work. The way it works after that link is made poses problems.

According to SlashGear, two rows appear. One, called Friends’ Favorites, tells you just that. Now, consider that the algorithm works in part from your movie ratings. So if you want to signal that odd documentaries, disturbing art films, or guilty pleasures (which may range from The Hangover to Twilight) are of interest, you should rate them highly. If you turn this feature on, are all your old ratings shared? And cool! Now everyone knows that you think March of the Penguins and Die Hard are 5 stars. The other row:

is called “Watched By Your Friends,” and it consists of movies and shows that your friends have recently watched. It provides a list of all your Facebook friends who are on Netflix, and you can cycle through individual friends to see what they recently watched. This is an unfiltered list, meaning that it shows all the movies and TV shows that your friends have agreed to share.

Of course, you can control what you share and what you don’t want to share, so if there’s a movie or TV show that you watch, but you don’t want to share it with your friends, you can simply click on the “Don’t Share This” button under each item. Netflix is rolling out the feature over the next couple of days, and the company says that all US members will have access to Netflix social by the end of the week.

Right. So imagine you forget that your viewing habits are broadcast. And what about Roku or other streaming devices? How does one ensure that the “Don’t Share” button is used before word goes out that you watched one, two, or three movies about drugs, sex, gay culture, how great guns are, and so on?

As Richards puts it, “the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy—the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition.” So too for video and really any information consumption.


New Edition of Solove & Schwartz’s Privacy Law Fundamentals: Must-Read (and Check out the Video)

Privacy leading lights Dan Solove and Paul Schwartz have recently released the 2013 edition of Privacy Law Fundamentals, a must-have for privacy practitioners, scholars, students, and really anyone who cares about privacy.

Privacy Law Fundamentals is an essential primer on the state of privacy law, capturing up-to-date developments in legislation, FTC enforcement actions, and cases here and abroad.  As Chief Privacy Officers like Intel’s David Hoffman and renowned privacy practitioners like Hogan’s Chris Wolf and Covington’s Kurt Wimmer agree, Privacy Law Fundamentals is an “essential” and “authoritative guide” on privacy law, compact and incredibly useful.  For those of you who know Dan and Paul, their work is not only incredibly wise and helpful but also dispensed in person with serious humor.  Check out this YouTube video, “Privacy Law in 60 Seconds,” to see what I mean.  I think Psy may have a run for his money in making us smile.


In Honor of Alan Westin: Privacy Trailblazer, Seer, and Changemaker

Privacy leading light Alan Westin passed away this week.  Almost fifty years ago, Westin began his trailblazing work helping us understand the dangers of surveillance technologies.  Building on the work that Warren and Brandeis started in “The Right to Privacy” in 1890, Westin published Privacy and Freedom in 1967.  A year later, he took his normative case for privacy to the trenches.  As Director of the National Academy of Sciences’ Computer Science and Engineering Board, he and a team of researchers studied governmental, commercial, and private organizations using databases to amass, use, and share personal information.  Westin’s team interviewed 55 organizations, ranging from local law enforcement to federal agencies like the Social Security Administration to direct-mail companies like R.L. Polk (a predecessor of our behavioral advertising industry).

The 1972 report, Databanks in a Free Society: Computers, Record-Keeping, and Privacy, is a masterpiece.  With 14 case studies, the report made clear the extent to which public and private entities had been building substantial computerized dossiers of people’s activities, and the risks those dossiers posed to economic livelihood, reputation, and self-determination.  It demonstrated the unrestrained nature of data collection and sharing, with driver’s license bureaus selling personal information to direct-mail companies and law enforcement sharing arrest records with local and state agencies for employment and licensing matters.  Surely influenced by Westin’s earlier work, some data collectors, like the Kansas City Police Department, talked to the team about privacy protections, suggesting the need for verification of source documents, audit logs, passwords, and discipline for improper use of data.  Westin’s report called for data collectors to adopt ethical procedures for data collection and sharing, including procedural protections such as notice and a chance to correct inaccurate or incomplete information, data minimization requirements, and sharing limits.

Westin’s work shaped the debate about the right to privacy at the dawn of our surveillance era. His change-making agenda was front and center in the Privacy Act of 1974.  In the early 1970s, nearly fifty congressional hearings and reports investigated a range of data privacy issues, including the use of census records, access to criminal history records, employers’ use of lie detector tests, and the military and law enforcement’s monitoring of political dissidents. State and federal executives spearheaded investigations of surveillance technologies, including a proposed National Databank Center.

Just as public discourse was consumed with the “data-bank problem,” the courts began to pay attention. In Whalen v. Roe, a 1977 case involving New York’s mandatory collection of prescription drug records, the Supreme Court strongly suggested that the Constitution contains a right to information privacy based on substantive due process. Although it held that the state prescription drug database did not violate the constitutional right to information privacy because it was adequately secured, the Court recognized an individual’s interest in avoiding disclosure of certain kinds of personal information. Writing for the Court, Justice Stevens noted the “threat to privacy implicit in the accumulation of vast amounts of personal information in computerized data banks or other massive government files.”  In a concurring opinion, Justice Brennan warned that the “central storage and easy accessibility of computerized data vastly increase the potential for abuse of that information, and I am not prepared to say that future developments will not demonstrate the necessity of some curb on such technology.”

What Westin underscored so long ago, and what Whalen v. Roe signaled, is that technologies used for broad, indiscriminate, and intrusive public surveillance threaten liberty interests.  Last term, in United States v. Jones, the Supreme Court signaled that these concerns have Fourth Amendment salience. Concurring opinions indicate that at least five justices have serious Fourth Amendment concerns about law enforcement’s growing surveillance capabilities. Those justices insisted that citizens have reasonable expectations of privacy in substantial quantities of personal information.  In our article “The Right to Quantitative Privacy,” David Gray and I seek to carry forward Westin’s insights (and those of Brandeis and Warren before him) into the Fourth Amendment arena, as the five concurring justices in Jones suggested.  More on that to come, but for now, let’s thank Alan Westin for his extraordinary work on the “computerized databanks” problem.

Data Brokers in the FTC’s Sights

The ethos of our age is the more data, the better, and nowhere is that more true than in the data-broker industry.  Data-broker databases contain dossiers on hundreds of millions of individuals, including their Social Security numbers, property records, criminal-justice records, car rentals, credit reports, postal and shipping records, utility bills, gaming, insurance claims, divorce records, social network profiles, online activity, and drug- and food-store records.  According to FTC Chairman Jon Leibowitz, companies like Acxiom are the “invisible cyberazzi” that follow us everywhere we go on- and offline, or, as Chris Hoofnagle has aptly called them, “Little Brothers” helping Big Brother and industry.

Data brokers are largely unbridled by regulation.  The FTC’s enforcement authority over data brokers stems from the Fair Credit Reporting Act (FCRA), which was passed in 1970 to protect the privacy and accuracy of information included in credit reports.  FCRA requires consumer reporting agencies to use reasonable procedures to ensure that entities to which they disclose sensitive consumer data have a permissible purpose for receiving that data.  Under FCRA, employers are required to inform individuals about intended adverse actions against them based on their credit reports.  Individuals get a chance to explain inaccurate or incomplete information and to contact credit-reporting agencies to dispute the information in the hope of getting it corrected.

During the past two years, the FTC has gone after a social media intelligence company and an online people search engine on the grounds that they constituted consumer reporting agencies subject to FCRA.  In June 2012, the FTC settled charges against Spokeo, an online service that compiles and sells digital dossiers on consumers to human resource professionals, job recruiters, and other businesses.  Spokeo assembles consumer data from on- and offline sources, including social media sites, to create searchable consumer profiles.  The profiles include an individual’s full name, physical address, phone number, age range, email address, hobbies, photos, ethnicity, religion, and social network activity.  The FTC alleged that Spokeo failed to adhere to FCRA, including its obligation to ensure the accuracy of consumer reports.  Ultimately, it obtained an $800,000 settlement with the company.  That’s helpful, to be sure, but given the FTC’s limited resources it may not lead to more accurate dossiers.  (It also may mean that employers will keep online intelligence in-house and thus keep their use of unreliable online information outside the reach of FCRA, as my co-blogger Frank Pasquale wrote so ably about in The Offensive Internet: Speech, Privacy, and Reputation.)

More recently, the FTC issued orders requiring nine data brokerage companies to provide the agency with information about how they collect and use data about consumers.  The agency will use the information to study privacy practices in the data broker industry.  The nine data brokers receiving orders from the FTC were (1) Acxiom, (2) Corelogic, (3) Datalogix, (4) eBureau, (5) ID Analytics, (6) Intelius, (7) Peekyou, (8) Rapleaf, and (9) Recorded Future.  In its press release, the FTC explained that it is seeking details about “the nature and sources of the consumer information the data brokers collect; how they use, maintain, and disseminate the information; and the extent to which the data brokers allow consumers to access and correct their information or to opt out of having their personal information sold.”  The FTC called on the data broker industry to improve the transparency of its practices as part of a Commission report, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers.
FTC Commissioner Julie Brill has been a tireless advocate for greater oversight of data brokers.  Here’s hoping that her efforts and those of her agency produce important reforms.


Identity Theft: Coming to Screens Near You (and Not Just the Movies)

Identity theft is now so common that we can joke about it.

Or, as Alan Alda’s character in Woody Allen’s Crimes and Misdemeanors says, “comedy is tragedy plus time.”  Time to transform tragedy into comedy, indeed.  Scanning the Privacy Rights Clearinghouse database demonstrates that reported data breaches are a daily occurrence.  Since January 1, 2013, private and public entities have reported over 20 major data breaches.  Included on the list were hospitals, universities, and businesses.  Sometimes, the most vulnerable are targeted.  For instance, on January 8, 2013, a dishonest employee of the Texas Department of Health and Human Services was arrested on suspicion of misusing client information to apply for credit cards and to receive medical care under clients’ names.  It is bad enough that automated systems erroneously take recipients of public benefits off the rolls, as my work on Technological Due Process explores.  Now systems designed to help them are destroying their medical and credit histories as well.

We have had over 600 million records breached since 2005, from approximately 3,500 reported data breaches.  Of course, those figures represent only breaches officially reported, likely because of state data breach laws, whose requirements vary and which leave much reporting discretion to entities that have little incentive to err on the side of disclosure unless legally required to do so.  So the bad news is that identity theft is prevalent, but at least we can laugh about it.


The Importance of Section 230 Immunity for Most

Why leave the safe harbor provision intact for site operators, search engines, and other online service providers that do not attempt to block offensive, indecent, or illegal activity but that, unlike cyber cesspools, by no means encourage illicit material or are principally used to host it?  If we retain that immunity, some harassment and stalking — including revenge porn — will remain online because the site operators hosting it cannot be legally required to take it down.  Why countenance that possibility?

Because of the risk of collateral censorship—blocking or filtering speech to avoid potential liability even if the speech is legally protected.  In what is often called the heckler’s veto, people may abuse their ability to complain, using the threat of liability to ensure that site operators block or remove posts for no good reason.  They might complain because they disagree with the political views expressed or dislike a poster’s disparaging tone.  Providers would be especially inclined to remove content in the face of frivolous complaints when they have little interest in keeping up the complained-about content.  Take, as an illustration, the popular news-gathering site Digg.  If faced with legal liability, it might automatically take down posts even though they involve protected speech.  The site lacks a vested interest in keeping up any particular post, given its overall goal of crowdsourcing vast quantities of news that people like.  And given the scale of its operation, it may lack the resources to hire enough people to cull through complaints and weed out the frivolous ones.

Sites like Digg differ from revenge porn sites and other cyber cesspools, whose operators have an incentive to refrain from removing complained-about content.  Cyber cesspools obtain economic benefits from hosting harassing material, which may make it worth the risk to continue doing so.  Collateral censorship is far less likely because it is in their economic interest to keep destructive material up.  As Slate reporter and cyberbullying expert Emily Bazelon has remarked, concern about the heckler’s veto gets more deference than it should in the context of revenge porn sites and other cyber cesspools.  (Read Bazelon’s important new book Sticks and Stones: Defeating the Culture of Bullying and Rediscovering the Power of Character and Empathy.)  It does not justify immunizing cyber cesspool operators from liability.

Let’s be clear about what this would mean.  Dispensing with cyber cesspools’ immunity would not mean that they would be strictly liable for user-generated content.  A legal theory would need to sanction remedies against them.  Read More