
Reining in the Data Brokers

I’ve been alarmed by data brokers’ ever-expanding troves of personal information for some time. My book outlines the problem, explaining how misuse of data undermines equal opportunity. I think extant legal approaches, which focus on notice and consent, put too much of a burden on consumers. This NYT opinion piece sketches an alternative approach:

[D]ata miners, brokers and resellers have now taken creepy classification to a whole new level. They have created lists of victims of sexual assault, and lists of people with sexually transmitted diseases. Lists of people who have Alzheimer’s, dementia and AIDS. Lists of the impotent and the depressed.

***

Privacy protections in other areas of the law can and should be extended to cover consumer data. The Health Insurance Portability and Accountability Act, or Hipaa, obliges doctors and hospitals to give patients access to their records. The Fair Credit Reporting Act gives loan and job applicants, among others, a right to access, correct and annotate files maintained by credit reporting agencies.

It is time to modernize these laws by applying them to all companies that peddle sensitive personal information. If the laws cover only a narrow range of entities, they may as well be dead letters. For example, protections in Hipaa don’t govern the “health profiles” that are compiled and traded by data brokers, which can learn a great deal about our health even without access to medical records.

There’s more in the piece online, but given the space constraints, I couldn’t go into all the details the book provides. I hope everyone enjoys the opinion piece, and that it whets appetites for the book!

The Right to be Forgotten: Not an Easy Question

I’ve previously written on regulation of European data processing here. I’ll be presenting on the “right to be forgotten” (RtbF) in Chicago this spring, and I’ll be writing a series of posts here to prepare for that lecture.

Julia Powles offers an excellent summary of the right in question. As she explains, the European Court of Justice (ECJ) has ruled that, “in some circumstances—notably, where personal information online is inaccurate, inadequate, irrelevant, or excessive in relation to data-processing purposes—links should be removed from Google’s search index.” The Costeja case, which led to this ruling, involved Google’s prominent display of results relating to the plaintiff’s financial history.

Unfortunately, some US commentators’ views are rapidly congealing toward a reflexively rejectionist position when it comes to such regulation of search engine results, despite the Fair Credit Reporting Act’s extensive regulation of consumer reporting agencies in very similar situations. Jeffrey Toobin’s recent article mentions some of these positions. For example, Jules Polonetsky says, “The decision will go down in history as one of the most significant mistakes that Court has ever made.” I disagree, and I think the opposite result would itself have been far more troubling.

Internet regulation must recognize the power of certain dominant firms to shape impressions of individuals. Their reputational impact can be extraordinarily misleading and malicious, and the potential for harm is only growing as hacking becomes more widespread. Consider the following possibility: What if a massive theft of medical records occurs, the records are made public, and then shared virally among different websites? Are the critics of the RtbF really willing to just shrug and say, “Well, they’re true facts and the later-publishing websites weren’t in on the hack, so leave them up”? And in the case of future intimate photo hacks, do we simply let firms keep the photos available in perpetuity?


Advice on How to Enter the Privacy Profession

Over at LinkedIn, I have a long post with advice on how law students can enter the privacy profession. I hope the post can serve as a useful guide to students who want to pursue careers in privacy.

The privacy law field is growing dramatically, and demand for privacy lawyers is high. I think that many in the academy who don’t follow privacy law, cyberlaw, or law and technology might not realize what’s going on: the field is booming.

The International Association of Privacy Professionals (IAPP), the field’s primary association, has been growing by about 30% each year.  It now has more than 17,000 members.  And this is only a subset of privacy professionals, as many privacy officials in healthcare aren’t members of IAPP and instead are members of the American Health Information Management Association (AHIMA) or the Health Care Compliance Association (HCCA).

There remains a bottleneck at the entry point to the field, but it can be overcome. Once you’re in, opportunities are plentiful and you can rise quickly. I’ve been trying to push for solutions that make entry into the field easier, and this is an ongoing project of mine.

If you have students who are interested in entering the privacy law profession, please share my post with them.  I hope it will help.

How We’ll Know the Wikimedia Foundation is Serious About a Right to Remember

The “right to be forgotten” ruling in Europe has provoked a firestorm of protest from internet behemoths and some civil libertarians.* Few seem very familiar with classic privacy laws that govern automated data systems. Characteristic rhetoric comes from the Wikimedia Foundation:

The foundation which operates Wikipedia has issued new criticism of the “right to be forgotten” ruling, calling it “unforgivable censorship.” Speaking at the announcement of the Wikimedia Foundation’s first-ever transparency report in London, Wikipedia founder Jimmy Wales said the public had the “right to remember”.

I’m skeptical of this line of reasoning. But let’s take it at face value for now. How far should the right to remember extend? Consider the importance of automated ranking and rating systems in daily life: in contexts ranging from credit scores to terrorism risk assessments to Google search rankings. Do we have a “right to remember” all of these—to, say, fully review the record of automated processing years (or even decades) after it happens?

If the Wikimedia Foundation is serious about advocating a right to remember, it will apply the right to the key internet companies organizing online life for us. I’m not saying “open up all the algorithms now”—I respect the commercial rationale for trade secrecy. But years or decades after the key decisions are made, the value of the algorithms fades. The data involved could be anonymized. And just as Assange’s and Snowden’s revelations have been filtered through trusted intermediaries to protect vital interests, so too could an archive of Google or Facebook or Amazon ranking and rating decisions be limited to qualified researchers or journalists. Surely public knowledge about how exactly Google ranked and annotated Holocaust denial sites is at least as important as the right of a search engine to, say, distribute hacked medical records or credit card numbers.

So here’s my invitation to Lila Tretikov, Jimmy Wales, and Geoff Brigham: join me in calling for Google to commit to releasing a record of its decisions and data processing to an archive run by a third party, so future historians can understand how one of the most important companies in the world made decisions about how it ordered information. This is simply a bid to ensure the preservation of (and access to) critical parts of our cultural, political, and economic history. Indeed, one of the first items I’d like to explore is exactly how Wikipedia itself was ranked so highly by Google at critical points in its history. Historians of Wikipedia deserve to know details about that part of its story. Don’t they have a right to remember?

*For more background, please note: we’ve recently hosted several excellent posts on the European Court of Justice’s interpretation of relevant directives. Though often called a “right to be forgotten,” the ruling in the Google Spain case might better be characterized as the application of due process, privacy, and anti-discrimination norms to automated data processing.


Privacy and Data Security Harms


I recently wrote a series of posts on LinkedIn exploring privacy and data security harms. I thought I’d share them here, so I’m re-posting all four together in one rather long post.

I. PRIVACY AND DATA SECURITY VIOLATIONS: WHAT’S THE HARM?

“It’s just a flesh wound.”

Monty Python and the Holy Grail

Suppose your personal data is lost, stolen, improperly disclosed, or improperly used. Are you harmed?

Suppose a company violates its privacy policy and improperly shares your data with another company. Does this cause a harm?

In most cases, courts say no. This is the case even when a company is acting negligently or recklessly. No harm, no foul.

Strong Arguments on Both Sides

Some argue that courts are ignoring serious harms caused when data is not properly protected and used.

Yet others view the harm as trivial or non-existent. For example, given the vast number of records compromised in data breaches, the odds that any one instance will result in identity theft or fraud are quite low.


What’s ailing the right to be forgotten (and some thoughts on how to fix it)

The European Court of Justice’s recent “right to be forgotten” ruling is going through growing pains.  “A politician, a pedophile and a would-be killer are among the people who have already asked Google to remove links to information about their pasts.”  Add to that list former Merrill Lynch executive Stan O’Neal, who requested that Google hide links to an unflattering BBC News article about him.

All told, Google “has removed tens of thousands of links—possibly more than 100,000—from its European search results,” encompassing removal requests from 91,000 individuals (apparently about 50% of all requests are granted).  The company has been pulled into discussions with EU regulators about its implementation of the rules, with one regulator opining that the current system “undermines the right to be forgotten.”

The list of questions EU officials recently sent Google suggests they are more or less in the dark about the way providers are applying the ECJ’s ruling.  Meanwhile, European companies like forget.me are looking to reap a profit from the uncertainty surrounding the application of these new rules.  The quote at the end of the Times article sums up the current state of affairs:

“No one really knows what the criteria is,” he said, in reference to Google’s response to people’s online requests. “So far, we’re getting a lot of noes. It’s a complete no man’s land.”

What (if anything) went wrong? As I’ll argue* below, a major flaw in the current implementation is that it puts the initial adjudication of right to be forgotten decisions in the hands of search engine providers, rather than representatives of the public interest.  This process leads to a lack of transparency and potential conflicts of interest in implementing what may otherwise be sound policy.

The EU could address these problems by reforming the current procedures to limit search engine providers’ discretion in day-to-day right to be forgotten determinations.  Inspiration for such an alternative can be found in other areas of law regulating the conduct of third party service providers, including the procedures for takedown of copyright-infringing content under the DMCA and those governing law enforcement requests for online emails.

I’ll get into more detail about the current implementation of the right to be forgotten and some possible alternatives after the jump.


Carrie Goldberg: IT’S CLEAR: CREATING AMATEUR PORN WITHOUT A PARTICIPANT’S KNOWLEDGE IS ILLEGAL IN NY

This post is by Carrie Goldberg, the founding attorney at C. A. Goldberg, PLLC in Brooklyn, New York, a firm focused on litigation relating to electronic sexual privacy invasions. She is a volunteer attorney at The Cyber Civil Rights Initiative and its End Revenge Porn campaign.

Earlier this year, the New York City tabloids and “Saturday Night Live” poked fun at a story about a handsome former Wall Street financial advisor who, after being indicted for recording himself having sex without the women’s permission, blamed the taping on his hyper-vigilant “doggie cam.”

Last week the story re-emerged with an interview with two of the three 30-something-year-old victims, who complained that they had been wrongly portrayed by the media and the defendant’s high-profile criminal team as jealous stalkers, when in reality their energetic efforts to reach him came only after they discovered the videos and centered on begging him to destroy them. The humiliation sustained during the ongoing criminal process, such as being forced to view the sex videos alongside the jurors, is palpable.

Many New Yorkers may be unaware that recording yourself having sex without the other person’s knowledge constitutes a sex crime in the state (NY Penal § 250.45) and also breaches our federal video voyeurism laws (18 USCA § 1801). With the proliferation of smartphones and tablets enabling people to secretly videotape sexual encounters – including apps that allow for stealth recording – this law is increasingly violated. The harm to victims is palpable and real. It’s deeply humiliating to be turned into an object of pornography without consent.

In 2003, then-Governor George E. Pataki signed New York’s unlawful surveillance statute, known as Stephanie’s Law, making it illegal to use a device to secretly record or broadcast a person undressing or having sex when that person has a reasonable expectation of privacy. The statute is named for Stephanie Fuller, whose landlord taped her using a camera hidden in the smoke detector above her bed.

Facebook’s Hidden Persuaders

Major internet platforms are constantly trying new things out on users in order to improve their interfaces. Perhaps they’re interested in changing their users, too. Consider this account of Facebook’s manipulation of its newsfeed:

If you were feeling glum in January 2012, it might not have been you. Facebook ran an experiment on 689,003 users to see if it could manipulate their emotions. One experimental group had stories with positive words like “love” and “nice” filtered out of their News Feeds; another experimental group had stories with negative words like “hurt” and “nasty” filtered out. And indeed, people who saw fewer positive posts created fewer of their own. Facebook made them sad for a psych experiment.
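For readers curious about the mechanics, here is a minimal sketch of the kind of sentiment-keyword filtering the study describes. The word lists, feed structure, and removal rate below are invented for illustration (the published study reportedly relied on the LIWC lexicon); none of this reflects Facebook’s actual, proprietary code.

```python
import random

# Illustrative word lists only; the real experiment reportedly used
# the LIWC lexicon, and Facebook's actual pipeline is unknown.
POSITIVE_WORDS = {"love", "nice", "great", "happy"}
NEGATIVE_WORDS = {"hurt", "nasty", "awful", "sad"}

def contains_any(text, words):
    """True if any (lightly normalized) token of `text` is in `words`."""
    return any(tok.strip(".,!?") in words for tok in text.lower().split())

def filter_feed(stories, suppress, removal_rate=0.5):
    """Withhold a share of stories containing words from `suppress`."""
    kept = []
    for story in stories:
        if contains_any(story, suppress) and random.random() < removal_rate:
            continue  # this story never reaches the user's feed
        kept.append(story)
    return kept

feed = [
    "What a great day, I love this park!",
    "Feeling sad and hurt after the game.",
    "Lunch was fine, nothing special.",
]

# One experimental group sees fewer positive stories;
# the other sees fewer negative ones.
print(filter_feed(feed, suppress=POSITIVE_WORDS))
print(filter_feed(feed, suppress=NEGATIVE_WORDS))
```

The point of the sketch is how little machinery the manipulation requires: a few lines of filtering, applied at scale, measurably shifted what hundreds of thousands of people subsequently wrote.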

James Grimmelmann suggests some potential legal and ethical pitfalls. Julie Cohen has dissected the larger political economy of modulation. For now, I’d just like to present a subtle shift in Silicon Valley rhetoric:

c. 2008: “How dare you suggest we’d manipulate our users! What a paranoid view.”
c. 2014: “Of course we manipulate users! That’s how we optimize time-on-machine.”

There are many cards in the denialists’ deck. An earlier Facebook-inspired study warns of “greater spikes in global emotion that could generate increased volatility in everything from political systems to financial markets.” Perhaps social networks will take on the dampening of inconvenient emotions as a public service. For a few glimpses of the road ahead, take a look at Bernard Harcourt (on Zunzuneo), Jonathan Zittrain, Robert Epstein, and N. Katherine Hayles.


The U.S. Supreme Court’s 4th Amendment and Cell Phone Case and Its Implications for the Third Party Doctrine

Today, the U.S. Supreme Court handed down a decision on two cases involving the police searching cell phones incident to arrest. The Court held 9-0 in an opinion written by Chief Justice Roberts that the Fourth Amendment requires a warrant to search a cell phone even after a person is placed under arrest.

The two cases are Riley v. California and United States v. Wurie, and they were decided in the same opinion under the title Riley v. California. The Court must have chosen to name the case after Riley to make things hard for criminal procedure experts, as there is a famous Fourth Amendment case called Florida v. Riley, 488 U.S. 445 (1989), which will now create confusion whenever someone refers to the “Riley case.”

Fourth Amendment Warrants

As a general rule, the government must obtain a warrant before engaging in a search. A warrant is an authorization by an independent judge or magistrate, given to law enforcement officials after they properly justify their reason for conducting the search. There must be probable cause to search — a reasonable belief that the search will turn up evidence of a crime. The warrant requirement is one of the key protections of privacy because it ensures that the police can’t just search on a whim or a hunch. They must have a justified basis to search, and that basis must be proven before an independent decisionmaker (the judge or magistrate).

The Search Incident to Arrest Exception

But there are dozens of exceptions where government officials don’t need a warrant to conduct a search. One of these exceptions is the search incident to arrest. This exception allows police officers to search property on or near a person who has been arrested. In Chimel v. California, 395 U.S. 752 (1969), the Supreme Court held that the police could search the area within an arrestee’s immediate control. The rationale was that waiting to get a warrant might put police officers in danger if arrestees had dangerous items hidden on them, or might give arrestees time to destroy evidence. In United States v. Robinson, 414 U.S. 218 (1973), the Court held that there doesn’t need to be an identifiable danger in any specific case in order to justify a search incident to arrest. Police can engage in such a search as a categorical rule.

What About Searching Cell Phones Incident to Arrest?

In today’s Riley case, the Court examined whether the police are allowed to search data on a cell phone incident to arrest without first obtaining a warrant. The Court held that cell phone searches should be treated differently from typical searches incident to arrest because cell phones contain so much data and present a greater invasion of privacy than more limited searches for physical objects: “Cell phones, however, place vast quantities of personal information literally in the hands of individuals. A search of the information on a cell phone bears little resemblance to the type of brief physical search considered in Robinson.”


The data retention judgment, the Irish Facebook case, and the future of EU data transfer regulation

On April 8 the Court of Justice of the European Union (CJEU) announced its judgment in Joined Cases C-293/12 and C-594/12 Digital Rights Ireland. Based on EU fundamental rights law, the Court invalidated the EU Data Retention Directive, which obliged telecommunications service providers and Internet service providers in the EU to retain telecommunications metadata and make it available to European law enforcement authorities under certain circumstances. The case illustrates both the key role that the EU Charter of Fundamental Rights plays in EU data protection law, and the CJEU’s seeming lack of interest in the impact of its recent data protection rulings on other fundamental rights. In addition, the recent referral to the CJEU by an Irish court of a case involving data transfers by Facebook under the EU-US Safe Harbor holds the potential to further tighten EU rules for data transfers, and to reduce the possibility of EU-wide harmonization in this area.

In considering the implications of Digital Rights Ireland for the regulation of international data transfers, I would like to focus on a passage occurring towards the end of the judgment, where the Court criticizes the Data Retention Directive as follows (paragraph 68):

“[I]t should be added that that directive does not require the data in question to be retained within the European Union, with the result that it cannot be held that the control, explicitly required by Article 8(3) of the Charter, by an independent authority of compliance with the requirements of protection and security, as referred to in the two previous paragraphs, is fully ensured. Such a control, carried out on the basis of EU law, is an essential component of the protection of individuals with regard to the processing of personal data…”

This statement caught many observers by surprise. The CJEU is famous for the concise and self-referential style of its opinions, and the case revolved around the legality of the Directive in general, not around whether data stored under it could be transferred outside the EU. This issue was also not raised in the submission of the case to the Court, and first surfaced in the advisory opinion issued by one of the Court’s advocates-general prior to the judgment (see paragraph 78 of that Opinion).

In US constitutional law, the question “does the constitution follow the flag?” generally arises in the context of whether the Fourth Amendment to the US Constitution applies to government activity overseas (e.g., when US law enforcement abducts a fugitive abroad and brings him back to the US). In the context discussed here, the question is rather whether EU data protection law applies to personal data as they are transferred outside the EU, i.e., “whether the EU flag follows EU data”. As I explained in my book on the regulation of transborder data flows that was published last year by Oxford University Press, in many cases EU data protection law remains applicable to personal data transferred to other regions. For example, in introducing its proposed reform of EU data protection law, the European Commission stated in 2012 that one of its key purposes is to “ensure a level of protection for data transferred out of the EU similar to that within the EU”.

EU data protection law is based on constitutional provisions protecting fundamental rights (e.g., Article 8 of the EU Charter of Fundamental Rights), and the CJEU has emphasized in cases involving the independence of the data protection authorities (DPAs) in Austria, Germany, and Hungary that control of data processing by an independent DPA is an essential element of the fundamental right to data protection (without ever discussing independent supervision in the context of data processing outside the EU). In light of those previous cases, the logical consequence of the Court’s statement in Digital Rights Ireland would seem to be that fundamental rights law requires oversight of data processing by the DPAs also with regard to the data of EU individuals that are transferred to other regions.

This conclusion raises a number of questions. For example, how can it be reconciled with the fact that the enforcement jurisdiction of the DPAs ends at the borders of their respective EU Member States (see Article 28 of the EU Data Protection Directive 95/46)? If supervision by the EU DPAs already extends by operation of law to the storage of EU data in other regions, then why do certain EU legal mechanisms additionally force the parties to data transfers to explicitly accept the extraterritorial regulatory authority of the DPAs (e.g., Clause 5(e) of the EU standard contractual clauses of 2010)? And how does the Court’s statement fit with its 2003 Lindqvist judgment, where it held that EU data protection law should not be interpreted to apply to the entire Internet (see paragraph 69 of that judgment)? The offhand way in which the Court referred to DPA supervision over data processing outside the EU in the Digital Rights Ireland judgment gives the impression that it was unaware of, or uninterested in, such questions.

On June 18 the Irish High Court referred a case to the CJEU that may develop further its line of thought in the Digital Rights Ireland judgment. The High Court’s judgment in Schrems v. Data Protection Commissioner involved a challenge by Austrian student Max Schrems to the transfer of personal data to the US by Facebook under the Safe Harbor. The High Court announced that it would refer to the CJEU the questions of whether the European Commission’s adequacy decision of 2000 creating the Safe Harbor should be re-evaluated in light of the Charter of Fundamental Rights and widespread access to data by US law enforcement, and of whether the individual DPAs should be allowed to determine whether the Safe Harbor provides adequate protection (see paragraphs 71 and 84). The linkage between the two cases is evidenced by the Irish High Court’s frequent citation of Digital Rights Ireland, and by the CJEU’s conclusion that interference with the right to data protection caused by widespread data retention for law enforcement purposes without notice being given to individuals was “particularly serious” (see paragraph 37 of Digital Rights Ireland and paragraph 44 of Schrems v. Data Protection Commissioner). The High Court also criticized the Safe Harbor and the system of oversight of law enforcement data access in the US as failing to provide oversight “carried out on European soil” (paragraph 62), which seems inspired by paragraph 68 of the Digital Rights Ireland judgment.

The Irish referral to the CJEU also holds implications for the possibility of harmonized EU rules regarding international data transfers. If each DPA is allowed to override Commission adequacy decisions based on its individual view of what the Charter of Fundamental Rights requires, then there would be no point to such decisions in the first place (and the current disagreement over the “one stop shop” in the context of the proposed EU General Data Protection Regulation shows the difficulty of reaching agreement on pan-European rules where fundamental rights are at stake). One also wonders whether other data transfer mechanisms beyond the Safe Harbor could be at risk (e.g., standard contractual clauses, binding corporate rules, etc.), given that they too allow data to be turned over to non-EU law enforcement authorities. The proposed EU General Data Protection Regulation could eliminate some of these risks, but its passage is still uncertain, and the interpretation by the Court of the role of the Charter of Fundamental Rights would still be relevant under it. Whatever the CJEU eventually decides, it seems inevitable that the case will result in a tightening of EU rules on international data transfers.

The referral by the Irish High Court also raises the question (which the High Court did not address) of how other important fundamental rights, such as freedom of expression and the right to communicate internationally (meaning, in essence, the freedom to communicate on the Internet), should be balanced with the right to data protection. In its recent jurisprudence, the CJEU seems to regard data protection as a “super right” that takes precedence over others; thus, in its recent judgment in the case C-131/12 Google Spain v. AEPD and Mario Costeja Gonzalez involving the “right to be forgotten”, the Court never even refers to Article 11 of the Charter of Fundamental Rights, which protects freedom of expression and the right to “receive and impart information and ideas without interference by public authority and regardless of frontiers”. In its zeal to protect personal data transferred outside the EU, the CJEU should not forget that, as it has stated in the past, data protection is not an absolute right and must be considered in relation to its function in society (see, for example, Joined Cases C-92/09 and C-93/09 Volker und Markus Schecke, paragraph 48), and that there must be some territorial limit to EU data protection law if it is not to become a system of universal application that applies to the entire world (as the Court held in Lindqvist). Thus, there is an urgent need for an authoritative and dispassionate analysis of the territorial limits of EU data protection law, and of how a balance can be struck between data protection and other fundamental rights, guidance which unfortunately the CJEU seems unwilling to provide.