Author: Omer Tene

Privacy’s midlife crisis

Privacy law is suffering from a midlife crisis. Despite well-recognized tectonic shifts in the socio-technological-business arena, the privacy framework continues to stumble along like an aging protagonist in a rejuvenated cast. The framework’s fundamental concepts are outdated; its goals and justifications are in need of reassessment; and yet existing reform processes remain preoccupied with corporate organizational measures, which yield questionable benefits to individuals’ privacy rights. At best, the current framework strains to keep up with new developments; at worst, it has become irrelevant.

More than three decades have passed since the introduction of the OECD Privacy Guidelines; and 15 years since the EU Data Protection Directive was put in place and the “notice and choice” approach gained credence in the United States. This period has seen a surge in the value of personal information for governments, businesses and society at large. Innovations and breakthroughs, particularly in information technologies, have transformed business models and affected individuals’ lives in previously unimaginable ways. Not only technologies but also individuals’ engagement with the data economy has radically changed. Individuals now proactively disseminate large amounts of personal information online via platform service providers, which act as facilitators rather than initiators of data flows. Data transfers, once understood as point-to-point transmissions, have become ubiquitous, geographically indeterminate, and typically “residing” in the cloud.

In a new article titled Privacy law’s midlife crisis: A critical assessment of the second wave of global privacy laws, which will be presented at the upcoming Ohio State Law Journal Symposium on The Second Wave of Global Privacy Protection, I address the challenges posed to the existing privacy framework by three main socio-technological-business shifts: the surge in big data and analytics; the social networking revolution; and the migration of personal data processing to the cloud. The term big data refers to the ability of organizations to collect, store and analyze previously unimaginable amounts of unstructured information in order to find patterns and correlations and draw useful conclusions. Social networking services have reshaped the relationship between individuals and organizations. Those creating, storing, using, and disseminating personal information are no longer just organizations but also geographically dispersed individuals who post photos, submit ratings, and share their location online. The term cloud computing encompasses (at least) three distinct models of utilizing computing resources through a network – software, platform and infrastructure as a service. The advantages of cloud computing abound and include reduced cost, increased reliability, scalability, and security; however, the processing of personal information in the cloud poses new risks to privacy.

PETs, Law and Surveillance

In Europe, privacy is considered a fundamental human right. Article 8 of the European Convention on Human Rights (ECHR) limits the power of the state to interfere in citizens’ privacy, “except such as is in accordance with the law and is necessary in a democratic society”. Privacy is also granted constitutional protection in the Fourth Amendment to the United States Constitution. Both the ECHR and the US Constitution establish the right to privacy as freedom from government surveillance (I’ll call this “constitutional privacy”). Over the past 40 years, a specific framework has emerged to protect informational privacy (see here and here and here and here); yet this framework (“information privacy”) provides little protection against surveillance by either government or private sector organizations. Indeed, the information privacy framework presumes that a data controller (i.e., a government or business organization collecting, storing and using personal data) is a trusted party, essentially acting as a steward of individual rights. In doing so, it overlooks the fact that organizations often have strong incentives to subject individuals to persistent surveillance; to monetize individuals’ data; and to maximize information collection, storage and use.

More on government access to private sector data

Last week I blogged here about a comprehensive survey on systematic government access to private sector data, which will be published in the next issue of International Data Privacy Law, an Oxford University Press law journal edited by Christopher Kuner. Several readers have asked whether the results of the survey are available online. Well, now they are – even before publication of the special issue. The project, which was organized by Fred Cate and Jim Dempsey and supported by The Privacy Projects, covered government access laws in Australia, Canada, China, Germany, Israel, Japan, the United Kingdom and the United States.

Peter Swire’s thought-provoking piece on the increased importance of government access to the cloud in an age of encrypted communications appears here. Also see the special issue’s editorial, by Fred, Jim and Ira Rubinstein.

 

On systematic government access to private sector data

The Sixth Circuit Court of Appeals recently decided in United States v. Skinner that the police do not need a warrant to obtain GPS location data for mobile phones. The decision, based on the holding of the Supreme Court in US v. Jones, highlights the need for a comprehensive reform of rules on government access to non-content communications information (“communications data”). Once consisting of only a list of phone numbers dialed by a customer (a “pen register”), communications data have become rife with personal information, including location, clickstream, social contacts and more.

To a non-American, the US v. Jones ruling is truly astounding in its narrow scope. Clearly, the Justices aimed to sidestep the obvious question of expectation of privacy in public spaces. The Court did hold that the attachment of a GPS tracking device to a vehicle and its use to monitor the vehicle’s movements constitutes a Fourth Amendment “search”. But it based its holding not on the persistent surveillance of the suspect’s movements but rather on a “trespass to chattels” inflicted when a government agent ever-so-slightly touched the suspect’s vehicle to attach the tracking device. In the opinion of the Court, it was the clearly insignificant “occupation of property” (touching a car!) rather than the obviously weighty location tracking that triggered constitutional protection.

Suffice it to say that, to an outside observer, the property infringement appears to have been a side issue in both Jones and Skinner. The main issue, of course, is government power to remotely access information about an individual’s life, which is increasingly stored by third parties in the cloud. In most past cases – and certainly in present and future ones – there is little need to trespass on an individual’s property in order to monitor her every move. Our lives are increasingly mediated by technology. Numerous third parties possess volumes of information about our finances, health, online endeavors, geographical movements, etc. For effective surveillance, the government typically just needs to ask.

This is why an upcoming issue of International Data Privacy Law (IDPL) (an Oxford University Press law journal), which is devoted to systematic government access to private sector data, is so timely and important. The special issue covers rules on government access in multiple jurisdictions, including the US, UK, Germany, Israel, Japan, China, India, Australia and Canada.

Big Data for All

Much has been written over the past couple of years about “big data” (See, for example, here and here and here). In a new article, Big Data for All: Privacy and User Control in the Age of Analytics, which will be published in the Northwestern Journal of Technology and Intellectual Property, Jules Polonetsky and I try to reconcile the inherent tension between big data business models and individual privacy rights. We argue that going forward, organizations should provide individuals with practical, easy-to-use access to their information, so they can become active participants in the data economy. In addition, organizations should be required to be transparent about the decisional criteria underlying their data processing activities.

The term “big data” refers to advances in data mining and the massive increase in computing power and data storage capacity, which have expanded by orders of magnitude the scope of information available for organizations. Data are now available for analysis in raw form, escaping the confines of structured databases and enhancing researchers’ abilities to identify correlations and conceive of new, unanticipated uses for existing information. In addition, the increasing number of people, devices, and sensors that are now connected by digital networks has revolutionized the ability to generate, communicate, share, and access data.

Data creates enormous value for the world economy, driving innovation, productivity, efficiency and growth. In the article, we flesh out some compelling use cases for big data analysis. Consider, for example, a group of medical researchers who were able to parse out a harmful side effect of a combination of medications, which were used daily by millions of Americans, by analyzing massive amounts of online search queries. Or scientists who analyze mobile phone communications to better understand the needs of people who live in settlements or slums in developing countries.

On Reverse Engineering Privacy Law

Michael Birnhack, a professor at Tel Aviv University Faculty of Law, is one of the leading thinkers about privacy and data protection today (for some of his previous work see here and here and here; he’s also written a deep, thoughtful, innovative book in Hebrew about the theory of privacy. See here). In a new article, Reverse Engineering Informational Privacy Law, which is about to be published in the Yale Journal of Law & Technology, Birnhack sets out to unearth the technological underpinnings of the EU Data Protection Directive (DPD). The DPD, enacted in 1995 and currently undergoing a thorough review, is surely the world’s most influential data privacy instrument. It has been heralded by proponents as “technology neutral” – a recipe for longevity in a world marked by rapid technological change. Alas, Birnhack unveils the highly technology-specific fundamentals of the DPD, thereby putting its continued relevance into doubt.

 

The first part of Birnhack’s article analyzes what technological neutrality of a legal framework means and why it is a sought-after trait. He posits that the idea behind it is simple: “the law should not name, specify or describe a particular technology, but rather speak in broader terms that can encompass more than one technology and hopefully, would cover future technologies that are not yet known at the time of legislation.” One big advantage is flexibility (the law can apply to a broad, continuously shifting set of technologies); consider the continued viability of the tech-neutral Fourth Amendment versus the obviously archaic nature of the tech-specific ECPA. Another advantage is the promotion of innovation; tech-specific legislation can lock in a specific technology, thereby stifling innovation.

 

Birnhack continues by creating a typology of tech-related legislation. He examines factors such as whether the law regulates technology as a means or as an end; whether it actively promotes, passively permits or directly restricts technology; at which level of abstraction it relates to technology; and who is put in charge of regulation. Throughout the discussion, Birnhack’s broad, rich expertise in everything law and technology is evident; his examples range from copyright and patent law to nuclear non-proliferation.

Privacy, Masks and Religion

Basking & masking. In China, where a suntan is negatively stigmatized, beachgoers wear masks.

One of the most significant developments for privacy law over the past few years has been the rapid erosion of privacy in public. As recently as a decade ago, we benefitted from a fair degree of de facto privacy when walking the streets of a city or navigating a shopping mall. To be sure, we were in plain sight; someone could have seen and followed us; and we would certainly be noticed if we took off our clothes. After all, a public space was always less private than a home. Yet with the notable exception of celebrities, we would have generally benefitted from a fair degree of anonymity or obscurity. A great deal of effort, such as surveillance by a private investigator or team of FBI agents, was required to reverse that. [This, by the way, isn’t a post about US v. Jones, which I will write about later].

 

Now, with mobile tracking devices always on in our pockets; with GPS enabled cars; surveillance cameras linked to facial recognition technologies; smart signage (billboards that target passersby based on their gender, age, or eventually identity); and devices with embedded RFID chips – privacy in public is becoming a remnant of the past.

 

Location tracking is already a powerful tool in the hands of both law enforcement and private businesses, offering a wide array of localized services from restaurant recommendations to traffic reports. Ambient social location apps, such as Glancee and Banjo, are increasingly popular, creating social contexts based on users’ location and enabling users to meet and interact.

 

Facial recognition is becoming more prevalent. This technology too can be used by law enforcement for surveillance or by businesses to analyze certain characteristics of their customers, such as their age, gender or mood (facial detection), or to outright identify them (facial recognition). One such service, which was recently tested, allows individuals to check in to a location on Facebook through facial scanning.

 

Essentially, our face is becoming equivalent to a cookie, the ubiquitous online tracking device. Yet unlike cookies, faces are difficult to erase. And while cellular phones could in theory be left at home, we very rarely travel without them. How will individuals react to a world in which all traces of privacy in public are lost?

There is no new thing under the sun

Photo: Like its namesake, the European Data Protection Directive (“DPD”), this Mercedes is old, German-designed, clunky and noisy – yet effective. [Photo: Omer Tene]

 

Old habits die hard. Policymakers on both sides of the Atlantic are engaged in a Herculean effort to reform their respective privacy frameworks. While progress has been and will continue to be made for the next year or so, there is cause for concern that at the end of the day, in the words of the prophet, “there is no new thing under the sun” (Ecclesiastes 1:9).

The United States: Self Regulation

The United States legal framework has traditionally been a quiltwork of legislative patches covering specific sectors, such as health, financial, and children’s data. Significantly, information about individuals’ shopping habits and, more importantly, online and mobile browsing, location and social activities, has remained largely unregulated (see overview in my article with Jules Polonetsky, To Track or “Do Not Track”: Advancing Transparency and Individual Control in Online Behavioral Advertising). While increasingly crafty and proactive in its role as a privacy enforcer, the FTC has had to rely on the slimmest of legislative mandates, Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices”.

 

To be sure, the FTC has had impressive achievements: reaching consent decrees with Google and Facebook, both of which include 20-year privacy audits; launching a serious discussion of a “do-not-track” mechanism; establishing a global network of enforcement agencies; and more. However, there is a limit to the mileage that the FTC can squeeze out of its opaque legislative mandate. Protecting consumers against “deceptive acts or practices” does not amount to protecting privacy: companies remain at liberty to explicitly state that they will do anything and everything with individuals’ data (and thus do not “deceive” anyone when they act on their promise). And prohibiting “unfair acts or practices” is as vague a legal standard as can be; in fact, in some legal systems it might be considered anathema to fundamental principles of jurisprudence (nullum crimen sine lege). While some have heralded an emerging “common law of FTC consent decrees”, such “common law” leaves much to be desired, as it is based on non-transparent negotiations behind closed doors, resulting in short, terse orders.

 

This is why legislating the fundamental privacy principles, better known as the FIPPs (fair information practice principles), remains crucial. Without them, the FTC cannot do much more than enforce promises made in corporate privacy policies, which are largely acknowledged to be vacuous. Indeed, in its March 2012 “blueprint” for privacy protection, the White House called for legislation codifying the FIPPs (referred to by the White House as a “consumer privacy bill of rights”). Yet Washington insiders warn that the prospects of the FIPPs becoming law are slim, not only in an election year, but also after the elections, without major personnel changes in Congress.

Privacy: For the Rich or for the Poor?

Some consider the right to privacy a fundamental right for the rich, or even the rich and famous. It may be no coincidence that the landmark privacy cases in Europe feature names like Naomi Campbell, Michael Douglas, and Princess Caroline of Monaco. After all, if you lived eight-to-a-room in a shantytown in India, you would have little privacy and a lot of other problems to worry about. When viewed this way, privacy seems to be a matter of luxury; a right of spoiled teenagers living in six bedroom houses (“Mom, don’t open the door without knocking”).

 

To refute this view, scholars typically point out that throughout history, totalitarian regimes targeted the right to privacy even before they did free speech. Without privacy, individuals are cowed by authority, conform to societal norms, and self-censor dissenting speech – or even thoughts. As Michel Foucault observed in his interpretation of Jeremy Bentham’s panopticon, the gaze has disciplinary power.

 

But I’d like to discuss an entirely different counter-argument to the privacy-for-the-rich approach. This view was recently presented at the Privacy Law Scholar Conference in a great paper by Laura Moy and Amanda Conley, both 2011 NYU law graduates. In their paper, Paying the Wealthy for Being Wealthy: The Hidden Costs of Behavioral Marketing (I love a good title!), which is not yet available online, Moy and Conley argue that retailers harvest personal information to make the poor subsidize luxury goods for the rich.

 

This might seem audacious at first, but think of it this way: through various loyalty schemes, retailers collect data about consumers’ shopping habits. Naturally, retailers are most interested in data about “high value shoppers.” This is intuitively clear, given that that’s where the big money, low price sensitivity and broad margins are. It’s also backed by empirical evidence, which Moy and Conley reference. Retailers prefer to tend to those who buy saffron and Kobe Beef rather than to those who purchase salt and turkey. To woo the high value shoppers, they offer attractive discounts and promotions – use your loyalty card to buy Beluga caviar; get a free bottle of Champagne. Yet obviously the retailers can’t take a loss for their marketing efforts. Who then pays the price of the rich shoppers’ luxury goods? You guessed it, the rest of us – with price hikes on products like bread and butter.

 
