

There is no new thing under the sun

Photo: Like its namesake, the European Data Protection Directive (“DPD”), this Mercedes is old, German-designed, clunky and noisy – yet effective. [Photo: Omer Tene]

 

Old habits die hard. Policymakers on both sides of the Atlantic are engaged in a Herculean effort to reform their respective privacy frameworks. While progress has been and will continue to be made for the next year or so, there is cause for concern that at the end of the day, in the words of the prophet, “there is no new thing under the sun” (Ecclesiastes 1:9).

The United States: Self-Regulation

The United States legal framework has traditionally been a quiltwork of legislative patches covering specific sectors, such as health, financial, and children’s data. Significantly, information about individuals’ shopping habits and, more importantly, their online and mobile browsing, location and social activities, has remained largely unregulated (see the overview in my article with Jules Polonetsky, To Track or “Do Not Track”: Advancing Transparency and Individual Control in Online Behavioral Advertising). While increasingly crafty and proactive in its role as a privacy enforcer, the FTC has had to rely on the slimmest of legislative mandates, Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices.”

 

To be sure, the FTC has had impressive achievements: reaching consent decrees with Google and Facebook, both of which include 20-year privacy audits; launching a serious discussion of a “do-not-track” mechanism; establishing a global network of enforcement agencies; and more. However, there is a limit to the mileage the FTC can squeeze out of its opaque legislative mandate. Protecting consumers against “deceptive acts or practices” does not amount to protecting privacy: companies remain at liberty to state explicitly that they will do anything and everything with individuals’ data (and thus do not “deceive” anyone when they act on their promise). And prohibiting “unfair acts or practices” is as vague a legal standard as can be; in some legal systems it might even be considered anathema to fundamental principles of jurisprudence (nullum crimen sine lege). While some have heralded an emerging “common law of FTC consent decrees,” such “common law” leaves much to be desired, as it is based on non-transparent negotiations behind closed doors, resulting in short, terse orders.

 

This is why legislating the fundamental privacy principles, better known as the FIPPs (fair information practice principles), remains crucial. Without them, the FTC cannot do much more than enforce promises made in corporate privacy policies, which are widely acknowledged to be vacuous. Indeed, in its March 2012 “blueprint” for privacy protection, the White House called for legislation codifying the FIPPs (referred to by the White House as a “consumer privacy bill of rights”). Yet Washington insiders warn that the prospects of the FIPPs becoming law are slim, not only in an election year but also after the elections, absent major personnel changes in Congress.



Privacy: For the Rich or for the Poor?

Some consider the right to privacy a fundamental right for the rich, or even the rich and famous. It may be no coincidence that the landmark privacy cases in Europe feature names like Naomi Campbell, Michael Douglas, and Princess Caroline of Monaco. After all, if you lived eight-to-a-room in a shantytown in India, you would have little privacy and a lot of other problems to worry about. Viewed this way, privacy seems a matter of luxury: a right of spoiled teenagers living in six-bedroom houses (“Mom, don’t open the door without knocking”).

 

To refute this view, scholars typically point out that throughout history, totalitarian regimes targeted the right to privacy even before they targeted free speech. Without privacy, individuals are cowed by authority; they conform to societal norms and self-censor dissenting speech – or even thoughts. As Michel Foucault observed in his interpretation of Jeremy Bentham’s panopticon, the gaze has disciplinary power.

 

But I’d like to discuss an entirely different counter-argument to the privacy-for-the-rich approach. This view was recently presented at the Privacy Law Scholar Conference in a great paper by Laura Moy and Amanda Conley, both 2011 NYU law graduates. In their paper, Paying the Wealthy for Being Wealthy: The Hidden Costs of Behavioral Marketing (I love a good title!), which is not yet available online, Moy and Conley argue that retailers harvest personal information to make the poor subsidize luxury goods for the rich.

 

This might seem audacious at first, but think of it this way: through various loyalty schemes, retailers collect data about consumers’ shopping habits. Naturally, retailers are most interested in data about “high-value shoppers.” This is intuitively clear, given that that’s where the big money, low price sensitivity and broad margins are. It’s also backed by empirical evidence, which Moy and Conley reference. Retailers prefer to tend to those who buy saffron and Kobe beef rather than to those who purchase salt and turkey. To woo high-value shoppers, they offer attractive discounts and promotions – use your loyalty card to buy Beluga caviar; get a free bottle of Champagne. Yet obviously the retailers can’t take a loss on their marketing efforts. Who then pays the price of the rich shoppers’ luxury goods? You guessed it: the rest of us – with price hikes on products like bread and butter.
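To see the mechanism in miniature, here is a toy calculation; every number in it is invented for illustration (Moy and Conley's paper supplies the actual empirical evidence):

```python
# Toy illustration of the cross-subsidy argument: a promotion aimed at a
# high-value shopper is recouped through a tiny markup on staples.
# All figures below are hypothetical.

promo_cost = 40.00     # free champagne for one caviar-buying cardholder
staple_sales = 2000    # loaves of bread sold in the same week

markup_per_loaf = promo_cost / staple_sales
print(f"A ${promo_cost:.2f} promotion spread over {staple_sales} loaves "
      f"adds ${markup_per_loaf:.2f} to each one.")  # adds $0.02 per loaf
```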

 



Social Media and Chat Monitoring

Suppose a system could help alert people to online sexual predators. Many might like that. But suppose that same system could be used to look for gun purchasers, government critics, or activists of any sort; what would we say then? The tension between these possibilities is now before us. Mashable reports that Facebook and other platforms are monitoring chats to see whether criminal activity is suspected. The article focuses on the child predator use case. Words are scanned for danger signals. Then “The software pays more attention to chats between users who don’t already have a well-established connection on the site and whose profile data indicate something may be wrong, such as a wide age gap. The scanning program is also ‘smart’ — it’s taught to keep an eye out for certain phrases found in the previously obtained chat records from criminals including sexual predators.” After a flag is raised, a person decides whether to notify police.

The other uses of such a system are not discussed in the article. Yet again, we smash our heads against the walls of speech, security, and privacy. I expect some protests and some support for the move; blood may spill on old battlegrounds. Nonetheless, I think the problems the practice creates merit the fight. Given the privacy harms and the speech harms, even if “false positives” are rare in the sexual-predator realm, several questions should be sorted out as social platforms start to implement monitoring systems: why a company gets to decide whether to notify police, how the system might be co-opted for other uses, and what effect monitoring has on people’s ability to talk online.
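For the technically curious, here is a minimal sketch of the two-stage screening the article describes: scan for flagged phrases, weight the hit by relationship signals such as a wide age gap or no prior connection, and route anything over a threshold to a human reviewer. The phrase list, weights, and threshold are all invented; Facebook's actual system is not public.

```python
# Minimal sketch of the two-stage chat screening described in the article.
# Phrases, weights, and the threshold are hypothetical placeholders.

FLAGGED_PHRASES = ["meet me alone", "don't tell your parents"]  # illustrative only

def risk_score(message: str, age_gap_years: int, prior_contacts: int) -> float:
    """Crude risk score; higher means more suspicious."""
    hits = sum(phrase in message.lower() for phrase in FLAGGED_PHRASES)
    if hits == 0:
        return 0.0
    score = float(hits)
    if age_gap_years > 10:   # wide age gap: a signal the article mentions
        score *= 2.0
    if prior_contacts == 0:  # no established connection on the site
        score *= 1.5
    return score

def flag_for_review(message: str, age_gap_years: int, prior_contacts: int,
                    threshold: float = 2.0) -> bool:
    """Queue for *human* review; per the article, a person, not the
    software, decides whether to notify police."""
    return risk_score(message, age_gap_years, prior_contacts) >= threshold

# A flagged phrase from a stranger with a wide age gap crosses the threshold.
print(flag_for_review("Don't tell your parents about this", 25, 0))  # True
```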


Cool, but the privacy implications are unfortunate

Ever heard of Book Depository? It is a book store. So what? So let’s dance! Oh no, that was Caddyshack. The store has a map of what books are being bought from it, and where. It is mildly mesmerizing. It seems not such a big deal, but as I was watching, a book was purchased in Saskatchewan and someone elsewhere bought the infamous 50 Shades trilogy. They don’t seem to leave the history of the map up. Still, I think I’d be less than thrilled that my purchase was surfaced along with my location.


Big Data Brokers as Fiduciaries

In a piece entitled “You for Sale,” Sunday’s New York Times raised important concerns about the data broker industry.  Let us add some more perils and seek to reframe the debate about how to regulate Big Data.

Data brokers like Acxiom (and countless others) collect and mine a mind-boggling array of data about us, including Social Security numbers, property records, public-health data, criminal justice sources, car rentals, credit reports, postal and shipping records, utility bills, gaming, insurance claims, divorce records, online musings, browsing habits culled by behavioral advertisers, and the gold mine of drug- and food-store records. They scrape our social network activity, which with a little mining can reveal our undisclosed sexual preferences, religious affiliations, political views, and other sensitive information. They may integrate video footage of our offline shopping. With the help of facial-recognition software, data mining algorithms factor into our dossiers the over-the-counter medicines we pick up, the books we browse, and the pesticides we contemplate buying for our backyards. Our social media influence scores may make their way into the mix. Companies such as Klout measure our social media influence, usually on a scale from one to 100, using variables like the number of our social media followers, the frequency of our updates, and the number of likes, retweets, and shares we draw. What’s being tracked and analyzed about our online and offline behavior is accelerating – with no sign of slowing down and no assured way to find out.
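To make that concrete, a score built from the inputs the post lists (followers, update frequency, likes, retweets, shares) might look like the sketch below. Klout's real formula is proprietary, so the weights and log scaling here are pure invention:

```python
import math

def influence_score(followers: int, updates_per_week: float,
                    likes: int, retweets: int, shares: int) -> int:
    """Hypothetical influence score clamped to a 1-100 scale.
    The weights are invented; only the input variables come from the post."""
    raw = (math.log1p(followers)                          # dampen huge follower counts
           + 0.5 * updates_per_week                       # reward frequent updates
           + 0.1 * (likes + 2 * retweets + 2 * shares))   # engagement signals
    return max(1, min(100, round(raw)))

print(influence_score(followers=5000, updates_per_week=10,
                      likes=300, retweets=40, shares=25))  # ~56
```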

As the Times piece notes, businesses buy data-broker dossiers to classify those consumers worth pursuing and those worth ignoring (so-called “waste”). More often than not, those already in an advantaged position get better deals and gifts while the less advantaged get nothing. The Times piece rightly raised concerns about the growing inequality that such use of Big Data produces. But far more is at stake.

Government is a major client for data brokers. More than 70 fusion centers mine data-broker dossiers to detect crimes, “threats,” and “hazards.” Individuals are routinely flagged as “threats.” Such classifications make their way into the “information-sharing environment,” with access provided to local, state, and federal agencies as well as private-sector partners. Troublingly, data-broker dossiers have no quality assurance. They may include incomplete, misleading, and false data. Let’s suppose a data broker has amassed a profile on Leslie McCann. Social media scraped, information compiled, and videos scanned about “Leslie McCann” might include information about jazz artist “Les McCann” as well as information about a criminal with a similar name and age. Inaccurate Big Data has led to individuals’ erroneous inclusion on watch lists, denial of immigration applications, and loss of public benefits.
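The McCann mix-up is a classic record-linkage failure: a loose string-matching rule merges distinct people into one dossier. Here is a minimal sketch; the matching rule and threshold are hypothetical, since brokers do not disclose theirs.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity between two names, 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

MERGE_THRESHOLD = 0.80  # hypothetical; a loose cutoff invites false merges

target = "Leslie McCann"
candidates = ["Les McCann", "Lesley McCann", "Wesley McCall"]

for name in candidates:
    sim = name_similarity(target, name)
    verdict = "MERGED into dossier" if sim >= MERGE_THRESHOLD else "kept separate"
    print(f"{name!r}: similarity {sim:.2f} -> {verdict}")
# Jazz artist "Les McCann" scores above the cutoff and is wrongly
# merged into Leslie McCann's file.
```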


The Right to Data Portability (RDP) as a Per Se Anti-tying Rule

Yesterday I gave a presentation on “The Right to Data Portability: Privacy and Antitrust Analysis” at a conference at the George Mason Law School. In an earlier post here, I asked whether the proposed EU right to data portability violates antitrust law.

I think the presentation helped sharpen the antitrust concern. The presentation first develops the intuition that consumers should want a right to data portability (RDP), which is proposed in Article 18 of the EU Data Protection Regulation. RDP seems attractive, at least initially, because it might prevent consumers from getting locked into a software platform, and because it advances the existing EU right of access to one’s own data.

Turning to antitrust law, I asked how antitrust law would consider a rule that, say, prohibits an operating system from being integrated with software for a browser.  We saw those facts, of course, in the Microsoft case decided by the DC Circuit over a decade ago.  Plaintiffs asserted an illegal “tying” arrangement between Windows and IE.  The court rejected a per se rule against tying of software, because integration of software can have many benefits and innovation in software relies on developers finding new ways to put things together.  The court instead held that the rule of reason applies.

RDP, however, amounts to a per se rule against tying of software. Suppose a social network offers a networking service and integrates it with software whose features export, or decline to export, data in various formats. We have a tying product (the social network) and a tied product (the data-export module). US antitrust law has rejected a per se rule here. The proposed EU regulation essentially adopts a per se rule against that sort of tying arrangement.

Modern US and EU antitrust law seek to enhance “consumer welfare.”  If the Microsoft case is correct, then a per se rule of the sort in the Regulation quite plausibly reduces consumer welfare.  There may be other reasons to adopt RDP, as discussed in the slides (and I hope in my future writing).  RDP might advance human rights to access.  It might enhance openness more generally on the Internet.  But it quite possibly reduces consumer welfare, and that deserves careful attention.


More Bad News About Identity Theft

The crime of identity theft is on the rise, in a big way. A recently released Javelin report found that identity theft rose 13% from 2010 to 2011, with approximately 11.6 million victims in the U.S. This month’s Consumer Reports paints an even more troubling picture. In a national survey of 2,002 households, the Consumer Reports National Resource Center projected that approximately 15.9 million households experienced identity theft in the past 12 months, up almost 50% from the previous year’s study.

Another troubling finding was that almost half of the victims — 7.8 million — were notified that their personally identifiable information (PII) had been hacked or lost by a public or private organization. It has long been explained that the biggest risk of identity theft stems from people who know us or who have access to our wallets or trash. This allowed consumers to ignore reports of data breaches and hacks. That databases of our PII were prone to leaking was met with a big “so what?” So what if Zappos got hacked, exposing over 24 million users’ credit card and other personal information?

Now it is increasingly clear that insecure databases of our personal information pose serious risks of identity theft to consumers. What is in store for identity theft victims? Victims spend considerable time and money to restore their credit histories. The stain of a thief’s reckless spending can make its way into data brokers’ files, with recurring impact on the ability to get hired, rent apartments, and the like. The FTC’s recent privacy report gives some hope that we may in the future have more transparency and corrective measures with regard to data brokers. But we are not there yet, and that’s a big problem for identity theft victims.


The Right to Be Forgotten: A Criminal’s Best Friend?

By now, you’ve likely heard about the proposed EU regulation concerning the right to be forgotten. The drafters of the proposal expressed concern for social media users who have posted comments or photographs that they later regretted. Commissioner Reding explained: “If an individual no longer wants his personal data to be processed or stored by a data controller, and if there is no legitimate reason for keeping it, the data should be removed from their system.”

Proposed Article 17 provides:

[T]he data subject shall have the right to obtain from the controller the erasure of personal data relating to them and the abstention from further dissemination of such data, especially in relation to personal data which are made available by the data subject while he or she was a child, where one of the following grounds applies . . . .

Where the controller referred to in paragraph 1 has made the personal data public, it shall take all reasonable steps, including technical measures, in relation to data for the publication of which the controller is responsible, to inform third parties which are processing such data, that a data subject requests them to erase any links to, or copy or replication of that personal data. Where the controller has authorised a third party publication of personal data, the controller shall be considered responsible for that publication.

The controller shall carry out the erasure without delay, except to the extent that the retention of the personal data is necessary: (a) for exercising the right of freedom of expression in accordance with Article 80; (b) for reasons of public interest in the area of public health in accordance with Article 81; (c) for historical, statistical and scientific research purposes in accordance with Article 83; (d) for compliance with a legal obligation to retain the personal data by Union or Member State law to which the controller is subject . . . .


Hey Look at Me! I’m Reading! (Or Not): Neil Richards on Social Reading

Do you want everyone to know what book you read, film you watch, search you perform, automatically? No? Yes? Why? Why not? It is odd to me that the ideas behind the Video Privacy Protection Act have not led to a rather quick extension to other forms of intellectual consumption. But there is a debate about whether our intellectual consumption should have privacy protection, and if so, what that protection should look like. Luckily, Neil Richards has some answers. His post on Social Reading is a good read. His response to the idea that automatic sharing is wise and benefits all captures some core points:

Not so fast. The sharing of book, film, and music recommendations is important, and social networking has certainly made this easier. But a world of automatic, always-on disclosure should give us pause. What we read, watch, and listen to matter, because they are how we make up our minds about important social issues – in a very real sense, they’re how we make sense of the world.

What’s at stake is something I call “intellectual privacy” – the idea that records of our reading and movie watching deserve special protection compared to other kinds of personal information. The films we watch, the books we read, and the web sites we visit are essential to the ways we try to understand the world we live in. Intellectual privacy protects our ability to think for ourselves, without worrying that other people might judge us based on what we read. It allows us to explore ideas that other people might not approve of, and to figure out our politics, sexuality, and personal values, among other things. It lets us watch or read whatever we want without fear of embarrassment or being outed. This is the case whether we’re reading communist, gay teen, or anti-globalization books; or visiting web sites about abortion, gun control, or cancer; or watching videos of pornography, or documentaries by Michael Moore, or even “The Hangover 2.”

And before you go off and say Neil doesn’t get “it,” whatever “it” may be, note that he is making a good distinction: “when we share – when we speak – we should do so consciously and deliberately, not automatically and unconsciously. Because of the constitutional magnitude of these values, our social, technological, professional, and legal norms should support rather than undermine our intellectual privacy.”

I easily recommend reading the full post. For those interested in a little more on the topic, the full paper is forthcoming in Georgetown Law Journal and available here. And, if you don’t know Neil Richards’ work (SSRN), you should. Even if you disagree with him, Neil’s writing is of that rare sort where you are better off by reading it. The clean style and sharp ideas force one to engage and think, and thus they also allow one to call out problems so that understanding moves forward. (See Orwell, Politics and the English Language). Enjoy.


Why I Don’t Teach the Privacy Torts in My Privacy Law Class

(Partial disclaimer — I do teach the privacy torts for part of one class, just so the students realize how narrow they are.)

I was talking the other day with Chris Hoofnagle, a co-founder of the Privacy Law Scholars Conference and someone I respect very much. He and I have both recently taught Privacy Law using the text by Dan Solove and Paul Schwartz. After the intro chapter, the text has a humongous chapter 2 about the privacy torts, such as intrusion upon seclusion, false light, public disclosure of private facts, and so on. Chris and other profs I have spoken with find that the chapter takes weeks to teach.

I skip that chapter entirely. In talking with Chris, I began to articulate why.  It has to do with my philosophy of what the modern privacy enterprise is about.

For me, the modern project of information privacy is pervasively about IT systems. There are lots of times we allow personal information to flow. There are lots of times where it’s a bad idea. We build our collection and dissemination systems in highly computerized form, trying to gain the advantages while minimizing the risks. Alan Westin got it right when he called his 1970s book “Databanks in a Free Society.” It’s about the data.

Privacy torts aren’t about the data. They usually address individualized revelations in one-of-a-kind settings. Importantly, the reasonableness test in tort is a lousy match for whether an IT system is well designed. Torts have not done well at building privacy into IT systems, nor have they been of much use in other IT system issues, such as deciding whether an IT system is unreasonably insecure or suing software manufacturers under products liability law. IT systems are complex and evolve rapidly, and are a terrible match for the common sense of a jury trying to decide if the defendant did some particular thing wrong.

When privacy torts don’t work, we substitute regulatory systems, such as HIPAA or Gramm-Leach-Bliley.  To make up for the failures of the intrusion tort, we create the Do Not Call list and telemarketing sales rules that precisely define how much intrusion the marketer can make into our time at home with the family.

A second reason for skipping the privacy torts is that the First Amendment has rendered unconstitutional a wide range of the practices that the privacy torts might otherwise have evolved to address.  Lots of intrusive publication about an individual is considered “newsworthy” and thus protected speech.  The Europeans have narrower free speech rights, so they have somewhat more room to give legal effect to intrusion and public revelation claims.

It’s about the data. Tort law has almost nothing to say about what data should flow in IT systems. So I skip the privacy torts.

Other profs might have other goals.  But I expect to keep skipping chapter 2.