Category: Privacy (Consumer Privacy)

The Right to Data Portability (RDP) as a Per Se Anti-tying Rule

Yesterday I gave a presentation on “The Right to Data Portability: Privacy and Antitrust Analysis” at a conference at the George Mason Law School. In an earlier post here, I asked whether the proposed EU right to data portability violates antitrust law.

I think the presentation helped sharpen the antitrust concern. The presentation first develops the intuition that consumers should want a right to data portability (RDP), which is proposed in Article 18 of the EU Data Protection Regulation. RDP seems attractive, at least initially, because it might prevent consumers from getting locked into a software platform, and because it advances the existing EU right of access to one's own data.

Turning to antitrust law, I asked how antitrust law would treat a rule that, say, prohibits an operating system from being integrated with browser software. We saw those facts, of course, in the Microsoft case decided by the D.C. Circuit over a decade ago. Plaintiffs asserted an illegal “tying” arrangement between Windows and IE. The court rejected a per se rule against tying of software, because integration of software can have many benefits and because innovation in software relies on developers finding new ways to put things together. The court instead held that the rule of reason applies.

RDP, however, amounts to a per se rule against tying of software. Suppose a social network offers a networking service and integrates it with software that has various features for exporting (or not exporting) data in various formats. We have a tying product (the social network) and a tied product (the module governing data export). US antitrust law has rejected a per se rule here. The proposed EU regulation essentially adopts a per se rule against that sort of tying arrangement.

Modern US and EU antitrust law seek to enhance “consumer welfare.”  If the Microsoft case is correct, then a per se rule of the sort in the Regulation quite plausibly reduces consumer welfare.  There may be other reasons to adopt RDP, as discussed in the slides (and I hope in my future writing).  RDP might advance human rights to access.  It might enhance openness more generally on the Internet.  But it quite possibly reduces consumer welfare, and that deserves careful attention.

More Bad News About Identity Theft

The crime of identity theft is on the rise, in a big way. A recently released Javelin report found that identity theft rose 13% from 2010 to 2011, with approximately 11.6 million victims in the U.S. This month's Consumer Reports paints an even more troubling picture. In a national survey of 2,002 households, the Consumer Reports National Research Center projected that approximately 15.9 million households experienced identity theft in the past 12 months, up almost 50% from the previous year's study.

Another troubling finding was that almost half of the victims, 7.8 million, were notified that their personally identifiable information (PII) had been hacked or lost by a public or private organization. It has long been explained that the biggest risk of identity theft stems from people who know us or who have access to our wallets or trash. That framing allowed consumers to ignore reports of data breaches and hacks. News that databases of our PII were prone to leaking was met with a big “so what?” So what if Zappos got hacked, exposing over 24 million users' credit card and other personal information?

Now, it is increasingly clear that insecure databases of our personal information pose serious risks of identity theft to consumers. What is in store for identity theft victims? Victims spend considerable time and money to restore their credit histories. The stain of a thief's reckless spending can make its way into data brokers' files, with recurring impact on the ability to get hired, rent an apartment, and the like. The FTC's recent privacy report gives some hope that we may in the future have more transparency and corrective measures with regard to data brokers. But we are not there yet, and that is a big problem for identity theft victims.

The Right to Be Forgotten: A Criminal’s Best Friend?

By now, you've likely heard about the proposed EU regulation concerning the right to be forgotten. The drafters of the proposal expressed concern for social media users who have posted comments or photographs that they later regretted. Commissioner Reding explained: “If an individual no longer wants his personal data to be processed or stored by a data controller, and if there is no legitimate reason for keeping it, the data should be removed from their system.”

Proposed Article 17 provides:

[T]he data subject shall have the right to obtain from the controller the erasure of personal data relating to them and the abstention from further dissemination of such data, especially in relation to personal data which are made available by the data subject while he or she was a child, where one of the following grounds applies . . . .

Where the controller referred to in paragraph 1 has made the personal data public, it shall take all reasonable steps, including technical measures, in relation to data for the publication of which the controller is responsible, to inform third parties which are processing such data, that a data subject requests them to erase any links to, or copy or replication of that personal data. Where the controller has authorised a third party publication of personal data, the controller shall be considered responsible for that publication.

The controller shall carry out the erasure without delay, except to the extent that the retention of the personal data is necessary: (a) for exercising the right of freedom of expression in accordance with Article 80; (b) for reasons of public interest in the area of public health in accordance with Article 81; (c) for historical, statistical and scientific research purposes in accordance with Article 83; (d) for compliance with a legal obligation to retain the personal data by Union or Member State law to which the controller is subject . . . .

Hey Look at Me! I’m Reading! (Or Not) Neil Richards on Social Reading

Do you want everyone to know, automatically, what book you read, what film you watch, what search you perform? No? Yes? Why? Why not? It seems odd to me that the ideas behind the Video Privacy Protection Act have not prompted a rather quick extension to these questions. But there is a debate about whether our intellectual consumption should have privacy protection and, if so, what that protection should look like. Luckily, Neil Richards has some answers. His post on Social Reading is a good read. His response to the idea that automatic sharing is wise and benefits everyone captures some core points:

Not so fast. The sharing of book, film, and music recommendations is important, and social networking has certainly made this easier. But a world of automatic, always-on disclosure should give us pause. What we read, watch, and listen to matter, because they are how we make up our minds about important social issues – in a very real sense, they’re how we make sense of the world.

What’s at stake is something I call “intellectual privacy” – the idea that records of our reading and movie watching deserve special protection compared to other kinds of personal information. The films we watch, the books we read, and the web sites we visit are essential to the ways we try to understand the world we live in. Intellectual privacy protects our ability to think for ourselves, without worrying that other people might judge us based on what we read. It allows us to explore ideas that other people might not approve of, and to figure out our politics, sexuality, and personal values, among other things. It lets us watch or read whatever we want without fear of embarrassment or being outed. This is the case whether we’re reading communist, gay teen, or anti-globalization books; or visiting web sites about abortion, gun control, or cancer; or watching videos of pornography, or documentaries by Michael Moore, or even “The Hangover 2.”

And before you go off and say Neil doesn't get “it,” whatever “it” may be, note that he is making a good distinction: “when we share – when we speak – we should do so consciously and deliberately, not automatically and unconsciously. Because of the constitutional magnitude of these values, our social, technological, professional, and legal norms should support rather than undermine our intellectual privacy.”

I readily recommend reading the full post. For those interested in a little more on the topic, the full paper is forthcoming in the Georgetown Law Journal and available here. And, if you don't know Neil Richards' work (SSRN), you should. Even if you disagree with him, Neil's writing is of that rare sort that leaves you better off for having read it. The clean style and sharp ideas force one to engage and think, and thus they also allow one to call out problems so that understanding moves forward. (See Orwell, Politics and the English Language.) Enjoy.

Why I Don’t Teach the Privacy Torts in My Privacy Law Class

(Partial disclaimer — I do teach the privacy torts for part of one class, just so the students realize how narrow they are.)

I was talking the other day with Chris Hoofnagle, a co-founder of the Privacy Law Scholars Conference and someone I respect very much. He and I have both recently taught Privacy Law using the text by Dan Solove and Paul Schwartz. After the intro chapter, the text has a humongous chapter 2 about the privacy torts, such as intrusion upon seclusion, false light, public disclosure of private facts, and so on. Chris and other profs I have spoken with find that the chapter takes weeks to teach.

I skip that chapter entirely. In talking with Chris, I began to articulate why.  It has to do with my philosophy of what the modern privacy enterprise is about.

For me, the modern project of information privacy is pervasively about IT systems. There are lots of times we allow personal information to flow. There are lots of times where it's a bad idea. We build our collection and dissemination systems in highly computerized form, trying to gain the advantages while minimizing the risks. Alan Westin got it right when he called his 1970s book “Databanks in a Free Society.” It's about the data.

Privacy torts aren't about the data. They usually involve individualized revelations in one-of-a-kind settings. Importantly, the reasonableness test in tort is a lousy match for whether an IT system is well designed. Torts have not done well at building privacy into IT systems, nor have they been of much use in other IT system issues, such as deciding whether an IT system is unreasonably insecure or whether software manufacturers should be liable under products liability law. IT systems are complex and evolve rapidly, and they are a terrible match for the common sense of a jury trying to decide whether the defendant did some particular thing wrong.

When privacy torts don’t work, we substitute regulatory systems, such as HIPAA or Gramm-Leach-Bliley.  To make up for the failures of the intrusion tort, we create the Do Not Call list and telemarketing sales rules that precisely define how much intrusion the marketer can make into our time at home with the family.

A second reason for skipping the privacy torts is that the First Amendment has rendered unconstitutional much of the liability that the privacy torts might otherwise have evolved to impose. Lots of intrusive publication about an individual is considered “newsworthy” and thus protected speech. The Europeans have narrower free speech rights, so they have somewhat more room to give legal effect to intrusion and public disclosure claims.

It's about the data. Tort law has almost nothing to say about what data should flow in IT systems. So I skip the privacy torts.

Other profs might have other goals. But I expect to keep skipping chapter 2.

Stanford Law Review Online: The Dead Past

Stanford Law Review

The Stanford Law Review Online has just published Chief Judge Alex Kozinski’s Keynote from our 2012 Symposium, The Dead Past. Chief Judge Kozinski discusses the privacy implications of our increasingly digitized world and our role as a society in shaping the law:

I must start out with a confession: When it comes to technology, I’m what you might call a troglodyte. I don’t own a Kindle or an iPad or an iPhone or a Blackberry. I don’t have an avatar or even voicemail. I don’t text.

I don’t reject technology altogether: I do have a typewriter—an electric one, with a ball. But I do think that technology can be a dangerous thing because it changes the way we do things and the way we think about things; and sometimes it changes our own perception of who we are and what we’re about. And by the time we realize it, we find we’re living in a different world with different assumptions about such fundamental things as property and privacy and dignity. And by then, it’s too late to turn back the clock.

He concludes:

Judges, legislators and law enforcement officials live in the real world. The opinions they write, the legislation they pass, the intrusions they dare engage in—all of these reflect an explicit or implicit judgment about the degree of privacy we can reasonably expect by living in our society. In a world where employers monitor the computer communications of their employees, law enforcement officers find it easy to demand that internet service providers give up information on the web-browsing habits of their subscribers. In a world where people post up-to-the-minute location information through Facebook Places or Foursquare, the police may feel justified in attaching a GPS to your car. In a world where people tweet about their sexual experiences and eager thousands read about them the morning after, it may well be reasonable for law enforcement, in pursuit of terrorists and criminals, to spy with high-powered binoculars through people’s bedroom windows or put concealed cameras in public restrooms. In a world where you can listen to people shouting lurid descriptions of their gall-bladder operations into their cell phones, it may well be reasonable to ask telephone companies or even doctors for access to their customer records. If we the people don’t consider our own privacy terribly valuable, we cannot count on government—with its many legitimate worries about law-breaking and security—to guard it for us.

Which is to say that the concerns that have been raised about the erosion of our right to privacy are, indeed, legitimate, but misdirected. The danger here is not Big Brother; the government, and especially Congress, have been commendably restrained, all things considered. The danger comes from a different source altogether. In the immortal words of Pogo: “We have met the enemy and he is us.”

Read the full article, The Dead Past by Alex Kozinski, at the Stanford Law Review Online.

Banning Forced Disclosure of Social Network Passwords and the Polygraph Precedent

The Maryland General Assembly has just become the first state legislature to vote to ban employers from requiring employees to reveal their Facebook or other social network passwords. Other states are considering similar bills, and Senators Schumer and Blumenthal are pushing the idea in Congress.

As often happens in privacy debates, there are concerns from industry that well-intentioned laws will have dire consequences — Really Dangerous People might get into positions of trust, so we need to permit employers to force their employees to open up their Facebook accounts to their bosses.

Also, as often happens in privacy debates, people breathlessly debate the issue as though it is completely new and unprecedented.

We do have a precedent, however.  In 1988, Congress enacted the Employee Polygraph Protection Act  (EPPA).  The EPPA says that employers don’t get to know everything an employee is thinking.  Polygraphs are flat-out banned in almost all employment settings.  The law was signed by President Reagan, after Secretary of State George Shultz threatened to resign rather than take one.

The ideas behind the EPPA and the new Maryland bill are similar: employees have a private realm where they can think and be a person, outside the surveillance of the employer. Imagine taking a polygraph while your boss asks what you really think about him or her. Imagine your social networking activities if your boss got to read your private messages and impromptu thoughts.

For private sector employers, the EPPA has quite narrow exceptions, such as for counter-intelligence, armored car personnel, and employees who are suspected of causing economic loss.  That list of exceptions can be a useful baseline to consider for social network passwords.

In summary: there is longstanding, bipartisan support for blocking this sort of intrusion into employees' private lives. The social networks themselves support this ban on employers requiring the passwords. I think we should, too.

The Buzzword of the Year: “Multistakeholder”

Greetings to Concurring Opinion readers. I thank the editors for inviting me to guest blog. I am looking forward to the opportunity to write more informally than I have done for a long time. I am out of the administration, and don’t have to go through the painful process of “clearing” every statement. And I am focusing on researching and writing rather than having clients. So the comments are just my own.

I suspect I’ll be writing about quite a range of privacy and tech issues. Many of my blog-sized musings will likely be about the European Union proposed Data Protection Regulation, and the contemporaneous flowering of privacy policy at the Federal Trade Commission and in the Administration.

From the latter, I propose “multistakeholder” as the buzzword of the year so far. (“Context” is a close second, which I may discuss another time.) The Department of Commerce has received public comments on what should be done in the privacy multistakeholder process. (My own comment focused on the importance of defining “de-identified” information.)

Separately, the administration has been emphasizing the importance of multistakeholder processes for Internet governance, such as in a speech by Larry Strickling, Administrator of the National Telecommunications and Information Administration.

Here's a try at making sense of this buzzword. On the privacy side, my view is that “multistakeholder” is mostly a substitute for the old term “self regulation.” Self regulation was the organizing theme when the U.S. negotiated the Safe Harbor privacy agreement with the EU in 2000. Barbara Wellbery (who lamentably is no longer with us) used “self regulation” repeatedly to explain the U.S. approach. The term accurately describes the legal regime under Section 5 of the FTC Act: an entity (all by itself) makes a promise, and then the promise is legally enforceable by others. As I have written since the mid-1990s, this self-regulatory approach can be better than other approaches, depending on the context.

The term “self regulation,” however, has taken on a bad odor. Many European regulators consider “self regulation” the theme of the Safe Harbor, which they consider weaker than it should have been. Many privacy advocates have also justifiably said that the term puts too much emphasis on the “self,” the company that decides what promises to make.

Enter, stage left, the new term: “multistakeholder.” The term directly addresses the advocates' concern. Advocates should be in the room, along with regulators, entities from affected industries, and perhaps a lot of other stakeholders. It's not “self regulation” by a “selfish” company. It is instead a process that includes the range of players whose interests should be considered.

I am comfortable with the new term “multistakeholder” replacing the old “self regulation.” The two are different in that the new term includes more of those affected. They are the same, however, in that both stand in contrast to top-down regulation by the government. Depending on the facts, multistakeholder may be better, or worse, than the government alternative.

Shifting to Internet governance, “multistakeholder” is a term that resonates with the bottom-up processes that led to the spectacular flowering of the Internet. Examples include organizations such as the Internet Engineering Task Force and the World Wide Web Consortium. Somehow, almost miraculously, the Web grew in twenty years from a tiny community to one numbering in the billions.

The term “multi-stakeholder” is featured in the important OECD Council Recommendation on Principles for Internet Policy Making, garnering 13 mentions in 10 pages. As I hope to discuss in a future blog post, this bottom-up process contrasts sharply with efforts, led by countries including Russia and China, to have the International Telecommunication Union play a major role in Internet governance. Emma Llansó at CDT has explained what is at stake. I am extremely skeptical about an expanded ITU role.

So: administration support for the “multistakeholder process” in both privacy and Internet governance. Similar in hoping that bottom-up beats top-down regulation. Different, I suspect, in how well the bottom-up approach has done historically. The IETF and the W3C have quite likely earned a grade in the A range for what they have achieved in Internet governance. I doubt that many people would give an A overall to industry self-regulation in the privacy area.

Reason to be cautious. The same word can work differently in different settings.

Dockets and Data Breach Litigation

Alessandro Acquisti, Sasha Romanosky, and I have a new draft up on SSRN, Empirical Analysis of Data Breach Litigation.  Sasha, who’s really led the charge on this paper, has presented it at many venues, but this draft is much improved (and is the first public version).  From the abstract:

In recent years, a large number of data breaches have resulted in lawsuits in which individuals seek redress for alleged harm resulting from an organization losing or compromising their personal information. Currently, however, very little is known about those lawsuits. Which types of breaches are litigated, which are not? Which lawsuits settle, or are dismissed? Using a unique database of manually-collected lawsuits from PACER, we analyze the court dockets of over 230 federal data breach lawsuits from 2000 to 2010. We use binary outcome regressions to investigate two research questions: Which data breaches are being litigated in federal court? Which data breach lawsuits are settling? Our results suggest that the odds of a firm being sued in federal court are 3.5 times greater when individuals suffer financial harm, but over 6 times lower when the firm provides free credit monitoring following the breach. We also find that defendants settle 30% more often when plaintiffs allege financial loss from a data breach, or when faced with a certified class action suit. While the compromise of financial information appears to lead to more federal litigation, it does not seem to increase a plaintiff’s chance of a settlement. Instead, compromise of medical information is more strongly correlated with settlement.
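
For readers who don't spend time with binary outcome regressions, here is a rough sketch, my own gloss rather than anything drawn from the paper, of how figures like “3.5 times greater” are conventionally read as odds ratios in a logistic model (the covariate names below are purely illustrative):

```latex
% Illustrative sketch only: a logistic (binary outcome) model of whether
% a breach results in a federal lawsuit. Covariate names are hypothetical.
\[
  \log\!\left(\frac{\Pr(\text{sued})}{1 - \Pr(\text{sued})}\right)
  = \beta_0 + \beta_1\,\mathit{FinancialHarm} + \beta_2\,\mathit{CreditMonitoring} + \cdots
\]
% Exponentiating a coefficient gives the odds ratio for that covariate.
% "Odds 3.5 times greater" when individuals suffer financial harm means
\[
  e^{\beta_1} \approx 3.5,
\]
% holding the other covariates fixed; odds "over 6 times lower" when the
% firm offers free credit monitoring means e^{\beta_2} < 1/6.
```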

A few thoughts follow after the jump.

Stanford Law Review Online: The Privacy Paradox 2012 Symposium Issue

Stanford Law Review

Our 2012 Symposium Issue, The Privacy Paradox: Privacy and Its Conflicting Values, is now available online:

Essays

The text of Chief Judge Alex Kozinski’s keynote is forthcoming.