Author: Peter Swire

Does the EU Right to Data Portability Violate Antitrust Law?

On May 16 I am going to speak on a panel on privacy and antitrust at George Mason Law School.  Back in 2007, I testified on one issue concerning privacy and antitrust: how privacy can be a non-price aspect of competition.  Now I think I’ve found another way the two fields are related, one that as far as I can tell has not received any real analysis to date.

Article 18 of the EU draft privacy Regulation sets forth a new Right of Data Portability: “The data subject shall have the right, where personal data are processed by electronic means and in a structured and commonly used format, to obtain from the controller a copy of data undergoing processing in an electronic and structured format which is commonly used and allows for further use by the data subject.”  In the future, the Commission will specify what electronic formats count as “structured and commonly used,” as well as technical standards for how the data controller shares the data with the individual data subject.

This Right of Data Portability feels similar to Google’s Data Liberation Front, whose “singular goal is to make it easier for users to move their data in and out of Google products.”  It is one time that the Google philosophy of open data flows converges with the EU’s support for individuals’ rights over their data.

I’m wondering, though, whether a mandated right of data portability makes any sense in light of antitrust law.  I think antitrust scholars would look at this issue as a vertical restraint, with the two markets being, say, a social network service and a property right for the customer to remove data.  Similarly, antitrust scholars could see a tying arrangement, where the (popular) social network service is “tied” to the (unpopular) expense to the consumer in removing data.

Modern antitrust analysis, in both Brussels and DC, has become highly suspicious of government intervention concerning tying and other vertical restraints.  Quite strong showings of market power are generally needed for the government to intervene, and the Regulation’s Right of Data Portability would exist generally and not based on that kind of showing of market power.

In the utilitarian calculus of U.S. antitrust law, my instinct is that the Right to Data Portability would be seen as welfare-reducing.  I think that would be true under European competition law as well.

In response, a supporter of the Right to Data Portability could say that the right here should trump the utility loss.  A Data Liberation supporter could emphasize that the proposal advances what I have called “data empowerment” as well as data protection, and so there is an additional rights-based argument to ignore antitrust law.

These are my initial musings.  I welcome comments as I prepare for the May panel.

Why I Don’t Teach the Privacy Torts in My Privacy Law Class

(Partial disclaimer — I do teach the privacy torts for part of one class, just so the students realize how narrow they are.)

I was talking the other day with Chris Hoofnagle, a co-founder of the Privacy Law Scholars Conference and someone I respect very much.  He and I have both recently taught Privacy Law using the text by Dan Solove and Paul Schwartz. After the intro chapter, the text has a humongous chapter 2 about the privacy torts, such as intrusion on seclusion, false light, public revelation of private facts, and so on.  Chris and other profs I have spoken with find that the chapter takes weeks to teach.

I skip that chapter entirely. In talking with Chris, I began to articulate why.  It has to do with my philosophy of what the modern privacy enterprise is about.

For me, the modern project about information privacy is pervasively about IT systems.  There are lots of times we allow personal information to flow.  There are lots of times where it’s a bad idea.  We build our collection and dissemination systems in highly computerized form, trying to gain the advantages while minimizing the risks.  Alan Westin got it right when he called his 1970s book “Databanks in a Free Society.”  It’s about the data.

Privacy torts aren’t about the data.  They usually involve individualized revelations in one-of-a-kind settings.  Importantly, the reasonableness test in tort is a lousy match for whether an IT system is well designed.  Torts have not done well at building privacy into IT systems, nor have they been of much use in other IT system issues, such as deciding whether an IT system is unreasonably insecure or suing software manufacturers under products liability law.  IT systems are complex and evolve rapidly, and are a terrible match with the common sense of a jury trying to decide if the defendant did some particular thing wrong.

When privacy torts don’t work, we substitute regulatory systems, such as HIPAA or Gramm-Leach-Bliley.  To make up for the failures of the intrusion tort, we create the Do Not Call list and telemarketing sales rules that precisely define how much intrusion the marketer can make into our time at home with the family.

A second reason for skipping the privacy torts is that the First Amendment has rendered unconstitutional a wide range of the practices that the privacy torts might otherwise have evolved to address.  Lots of intrusive publication about an individual is considered “newsworthy” and thus protected speech.  The Europeans have narrower free speech rights, so they have somewhat more room to give legal effect to intrusion and public revelation claims.

It’s about the data.  Tort law has almost nothing to say about what data should flow in IT systems.  So I skip the privacy torts.

Other profs might have other goals.  But I expect to keep skipping chapter 2.

Banning Forced Disclosure of Social Network Passwords and the Polygraph Precedent

The Maryland General Assembly has just become the first state legislature to vote to ban employers from requiring employees to reveal their Facebook or other social network passwords.  Other states are considering similar bills, and Senators Schumer and Blumenthal are pushing the idea in Congress.

As often happens in privacy debates, there are concerns from industry that well-intentioned laws will have dire consequences — Really Dangerous People might get into positions of trust, so we need to permit employers to force their employees to open up their Facebook accounts to their bosses.

Also, as often happens in privacy debates, people breathlessly debate the issue as though it is completely new and unprecedented.

We do have a precedent, however.  In 1988, Congress enacted the Employee Polygraph Protection Act (EPPA).  The EPPA says that employers don’t get to know everything an employee is thinking.  Polygraphs are flat-out banned in almost all employment settings.  The law was signed by President Reagan, after Secretary of State George Shultz threatened to resign rather than take one.

The ideas behind the EPPA and the new Maryland bill are similar — employees have a private realm where they can think and be a person, outside of the surveillance of the employer.  Imagine a polygraph exam in which your boss asked what you really thought of him or her.  Imagine your social networking activities if your boss got to read your private messages and impromptu thoughts.

For private sector employers, the EPPA has quite narrow exceptions, such as for counter-intelligence, armored car personnel, and employees who are suspected of causing economic loss.  That list of exceptions can be a useful baseline to consider for social network passwords.

In summary — longstanding and bipartisan support to block this sort of intrusion into employees’ private lives.  The social networks themselves support this ban on having employers require the passwords.  I think we should, too.

An Unanswered Question in the Generally Correct Opposition to a Big ITU Role in the Internet

I strongly agree with the bipartisan consensus in the U.S. that the International Telecommunication Union (ITU) should not gain new governance powers over the Internet. This coming December, there will be a major ITU conference in Dubai, and there have been concerns that it could bring significant changes to the underlying ITU treaty.

From talking with people involved in the issue, my sense is that the risk of bad changes has subsided considerably. An administration memorandum from January discusses the progress made in the past year in fending off damaging proposals.  Republican FCC Commissioner Robert McDowell recently published an excellent discussion of why those proposals would be bad.  (McDowell erred, however, when he gratuitously and incorrectly criticized the administration for not addressing the issue).  Civil society writers including Emma Llansó of CDT and Sophia Bekele concur.

In talking recently with one U.S. government official, however, I heard one issue concerning the ITU and a possible UN role that has not been well addressed.  Many developing countries look to the UN for technical assistance and best practices.  These countries are facing a range of legal and policy issues on topics that have been the subject of legislation in the U.S. and elsewhere: anti-spam, cybersecurity, phishing, domain name trademark disputes, data privacy, and so on.  If you are working on these issues for Ghana or Sri Lanka, where do you get that technical assistance about the Internet?

That seems like a good-faith question.  Anybody have a good answer?

The Buzzword of the Year: “Multistakeholder”

Greetings to Concurring Opinions readers. I thank the editors for inviting me to guest blog. I am looking forward to the opportunity to write more informally than I have done for a long time. I am out of the administration, and don’t have to go through the painful process of “clearing” every statement. And I am focusing on researching and writing rather than having clients. So the comments are just my own.

I suspect I’ll be writing about quite a range of privacy and tech issues. Many of my blog-sized musings will likely be about the European Union proposed Data Protection Regulation, and the contemporaneous flowering of privacy policy at the Federal Trade Commission and in the Administration.

From the latter, I propose “multistakeholder” as the buzzword of the year so far. (“Context” is a close second, which I may discuss another time.) The Department of Commerce has received public comments on what should be done in the privacy multistakeholder process. (My own comment focused on the importance of defining “de-identified” information.)

Separately, the administration has been emphasizing the importance of multistakeholder processes for Internet governance, such as in a speech by Larry Strickling, Administrator of the National Telecommunications and Information Administration.

Here’s a try at making sense of this buzzword. On the privacy side, my view is that “multistakeholder” is mostly a substitute for the old term “self regulation.” Self regulation was the organizing theme when the U.S. negotiated the Safe Harbor privacy agreement with the EU in 2000. Barbara Wellbery (who lamentably is no longer with us) used “self regulation” repeatedly to explain the U.S. approach. The term accurately describes the legal regime under Section 5 of the FTC Act – an entity (all by itself) makes a promise, and then it’s legally enforceable by others. As I have written since the mid-1990s, this self regulatory approach can be better than other approaches, depending on the context.

The term “self regulation,” however, has taken on a bad odor. Many European regulators consider “self regulation” as the theme of the Safe Harbor, which they consider weaker than it should have been. Many privacy advocates have also justifiably said that the term puts too much emphasis on the “self,” the company that decides what promises to make.

Enter stage left with the new term, “multistakeholder.” The term directly addresses the advocates’ issue. Advocates should be in the room, along with regulators, entities from affected industries, and perhaps a lot of other stakeholders. It’s not “self regulation” by a “selfish” company. It is instead a process that includes the range of players whose interests should be considered.

I am comfortable with the new term “multistakeholder” for the old “self regulation.” The two are different in the way that the new term includes more of those affected. They are the same, however, because they stand in contrast to top-down regulation by the government. Depending on the facts, multistakeholder may be better, or worse, than the government alternative.

Shifting to Internet governance, “multistakeholder” is a term that resonates with the bottom-up processes that led to the spectacular flowering of the Internet. Examples include organizations such as the Internet Engineering Task Force and the World Wide Web Consortium. Somehow, almost miraculously, the Web grew in twenty years from a tiny community to one numbering in the billions.

The term “multi-stakeholder” is featured in the important OECD Council Recommendation On Principles for Internet Policy Making, garnering 13 mentions in 10 pages. As I hope to discuss in a future blog post, this bottom-up process contrasts sharply with efforts, led by countries including Russia and China, to have the International Telecommunication Union play a major role in Internet governance. Emma Llansó at CDT has explained what is at stake. I am extremely skeptical about an expanded ITU role.

So, administration support for a “multistakeholder process” in both privacy and Internet governance. Similar in hoping that bottom-up beats top-down regulation. Different, I suspect, in how well the bottom-up has done historically. The IETF and the W3C have quite likely earned a grade in the A range for what they have achieved in Internet governance. I doubt that many people would give an A overall to industry self-regulation in the privacy area.

Reason to be cautious. The same word can work differently in different settings.