
Big Data for All

Much has been written over the past couple of years about “big data” (see, for example, here and here and here). In a new article, Big Data for All: Privacy and User Control in the Age of Analytics, which will be published in the Northwestern Journal of Technology and Intellectual Property, Jules Polonetsky and I try to reconcile the inherent tension between big data business models and individual privacy rights. We argue that going forward, organizations should provide individuals with practical, easy-to-use access to their information, so they can become active participants in the data economy. In addition, organizations should be required to be transparent about the decisional criteria underlying their data processing activities.

The term “big data” refers to advances in data mining and the massive increase in computing power and data storage capacity, which have expanded by orders of magnitude the scope of information available to organizations. Data are now available for analysis in raw form, escaping the confines of structured databases and enhancing researchers’ abilities to identify correlations and conceive of new, unanticipated uses for existing information. In addition, the increasing number of people, devices, and sensors that are now connected by digital networks has revolutionized the ability to generate, communicate, share, and access data.

Data create enormous value for the world economy, driving innovation, productivity, efficiency, and growth. In the article, we flesh out some compelling use cases for big data analysis. Consider, for example, the medical researchers who, by analyzing massive numbers of online search queries, were able to identify a harmful side effect of a drug combination taken daily by millions of Americans. Or the scientists who analyze mobile phone communications to better understand the needs of people who live in settlements or slums in developing countries.
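To make the first example concrete, here is a toy sketch of that kind of query-log analysis; the queries, drug names, and symptom below are invented, and the published research of course used web-scale logs and far more careful statistics:

```python
# Toy co-occurrence analysis over (invented) search queries: does a pair
# of drugs appear alongside a symptom more often than either drug alone?
queries = [
    "drugA headache",
    "drugB muscle pain",
    "drugA drugB high blood sugar",
    "drugA insomnia",
    "drugB dosage",
    "drugA drugB thirsty high blood sugar",
]

SYMPTOM = "high blood sugar"

def symptom_rate(*drugs: str) -> float:
    """Share of queries mentioning all the given drugs that also mention the symptom."""
    hits = [q for q in queries if all(d in q for d in drugs)]
    return sum(SYMPTOM in q for q in hits) / len(hits) if hits else 0.0

print(symptom_rate("drugA"))           # 0.5
print(symptom_rate("drugB"))           # 0.5
print(symptom_rate("drugA", "drugB"))  # 1.0: the pair co-occurs with the symptom
                                       # far more often than either drug alone,
                                       # the kind of signal worth a closer look
```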


On Reverse Engineering Privacy Law

Michael Birnhack, a professor at Tel Aviv University Faculty of Law, is one of the leading thinkers about privacy and data protection today (for some of his previous work see here and here and here; he has also written a deep, thoughtful, innovative book in Hebrew about the theory of privacy; see here). In a new article, Reverse Engineering Informational Privacy Law, which is about to be published in the Yale Journal of Law & Technology, Birnhack sets out to unearth the technological underpinnings of the EU Data Protection Directive (DPD). The DPD, enacted in 1995 and currently undergoing thorough review, is surely the world’s most influential data privacy instrument. It has been heralded by proponents as “technology neutral” – a recipe for longevity in a world marked by rapid technological change. Alas, Birnhack unveils the highly technology-specific fundamentals of the DPD, thereby casting doubt on its continued relevance.

The first part of Birnhack’s article analyzes what technological neutrality of a legal framework means and why it’s a sought-after trait. He posits that the idea behind it is simple: “the law should not name, specify or describe a particular technology, but rather speak in broader terms that can encompass more than one technology and hopefully, would cover future technologies that are not yet known at the time of legislation.” One big advantage is flexibility (the law can apply to a broad, continuously shifting set of technologies); consider the continued viability of the tech-neutral Fourth Amendment versus the obviously archaic nature of the tech-specific ECPA. Another advantage is the promotion of innovation; tech-specific legislation can lock in a specific technology, thereby stifling innovation.

Birnhack continues by creating a typology of tech-related legislation. He examines factors such as whether the law regulates technology as a means or as an end; whether it actively promotes, passively permits or directly restricts technology; at which level of abstraction it relates to technology; and who is put in charge of regulation. Throughout the discussion, Birnhack’s broad, rich expertise in everything law and technology is evident; his examples range from copyright and patent law to nuclear non-proliferation.

Read More

0

Privacy, Masks and Religion

Photo: Basking & masking. In China, where a suntan is stigmatized, beachgoers wear masks.

One of the most significant developments for privacy law over the past few years has been the rapid erosion of privacy in public. As recently as a decade ago, we benefitted from a fair degree of de facto privacy when walking the streets of a city or navigating a shopping mall. To be sure, we were in plain sight; someone could have seen and followed us; and we would certainly be noticed if we took off our clothes. After all, a public space was always less private than a home. Yet with the notable exception of celebrities, we would have generally benefitted from a fair degree of anonymity or obscurity. A great deal of effort, such as surveillance by a private investigator or team of FBI agents, was required to reverse that. [This, by the way, isn’t a post about US v. Jones, which I will write about later].


Now – with mobile tracking devices always on in our pockets; with GPS-enabled cars; surveillance cameras linked to facial recognition technologies; smart signage (billboards that target passersby based on their gender, age, or eventually identity); and devices with embedded RFID chips – privacy in public is becoming a remnant of the past.

Location tracking is already a powerful tool in the hands of both law enforcement and private businesses, offering a wide array of localized services from restaurant recommendations to traffic reports. Ambient social location apps, such as Glancee and Banjo, are increasingly popular, creating social contexts based on users’ location and enabling users to meet and interact.


Facial recognition is becoming more prevalent. This technology too can be used by law enforcement for surveillance, or by businesses to analyze certain characteristics of their customers, such as their age, gender, or mood (facial detection), or to outright identify them (facial recognition). One such service, which was recently tested, allows individuals to check in to a location on Facebook through facial scanning.
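The technical gap between the two is worth seeing. Below is a minimal sketch of the detection half, assuming the open-source OpenCV library and a hypothetical image file; recognition would require a further step, matching each detected face against a database of known identities:

```python
import cv2  # pip install opencv-python

# Load OpenCV's bundled frontal-face Haar cascade detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("storefront_camera_frame.jpg")  # hypothetical camera frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Facial *detection*: locate face regions; no identity is involved.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"detected {len(faces)} face(s)")

for (x, y, w, h) in faces:
    face_crop = gray[y:y + h, x:x + w]
    # A detection-only system stops here (or classifies age/gender/mood).
    # Facial *recognition* would go further: compute an embedding of
    # face_crop and match it against stored embeddings of known people.
```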


Essentially, our face is becoming equivalent to a cookie, the ubiquitous online tracking device. Yet unlike cookies, faces are difficult to erase. And while cellular phones could in theory be left at home, we very rarely travel without them. How will individuals react to a world in which all traces of privacy in public are lost?
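The analogy is precise in a technical sense. Here is a minimal sketch, using only the Python standard library and a hypothetical cookie name, of how a tracking cookie works: tag the browser once with a unique ID, then recognize it on every later request. Substitute a faceprint for the ID and the mechanics are the same, except that a cookie can be deleted:

```python
import uuid
from http.cookies import SimpleCookie

profiles = {}  # server-side dossiers, keyed by tracking ID

def handle_request(cookie_header: str) -> str:
    """Return the Set-Cookie header; tag new visitors, recognize old ones."""
    cookies = SimpleCookie(cookie_header)
    if "uid" in cookies:
        uid = cookies["uid"].value            # returning visitor: recognized
    else:
        uid = uuid.uuid4().hex                # first visit: tag the browser
        profiles[uid] = {"visits": 0}
    profiles[uid]["visits"] += 1
    return f"uid={uid}; Max-Age=31536000"     # persist for a year

first = handle_request("")                    # no cookie yet: gets tagged
uid = first.split(";")[0].split("=")[1]
handle_request(f"uid={uid}")                  # recognized on the next visit
assert profiles[uid]["visits"] == 2
```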


There is no new thing under the sun

Photo: Like its namesake, the European Data Protection Directive (“DPD”), this Mercedes is old, German-designed, clunky and noisy – yet effective. [Photo: Omer Tene]

Old habits die hard. Policymakers on both sides of the Atlantic are engaged in a Herculean effort to reform their respective privacy frameworks. While progress has been and will continue to be made for the next year or so, there is cause for concern that at the end of the day, in the words of the prophet, “there is no new thing under the sun” (Ecclesiastes 1:9).

The United States: Self Regulation

The United States legal framework has traditionally been a quiltwork of legislative patches covering specific sectors, such as health, financial, and children’s data. Significantly, information about individuals’ shopping habits and, more importantly, their online and mobile browsing, location, and social activities has remained largely unregulated (see overview in my article with Jules Polonetsky, To Track or “Do Not Track”: Advancing Transparency and Individual Control in Online Behavioral Advertising). While increasingly crafty and proactive in its role as a privacy enforcer, the FTC has had to rely on the slimmest of legislative mandates: Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices.”

To be sure, the FTC has had impressive achievements: reaching consent decrees with Google and Facebook, both of which include 20-year privacy audits; launching a serious discussion of a “do-not-track” mechanism; establishing a global network of enforcement agencies; and more. However, there is a limit to the mileage that the FTC can squeeze out of its opaque legislative mandate. Protecting consumers against “deceptive acts or practices” does not amount to protecting privacy: companies remain at liberty to explicitly state that they will do anything and everything with individuals’ data (and thus do not “deceive” anyone when they act on their promise). And prohibiting “unfair acts or practices” is as vague a legal standard as can be; in fact, in some legal systems it might be considered anathema to fundamental principles of jurisprudence (nullum crimen sine lege). While some have heralded an emerging “common law of FTC consent decrees,” such “common law” leaves much to be desired, as it is based on non-transparent negotiations behind closed doors, resulting in short, terse orders.

This is why legislating the fundamental privacy principles, better known as the FIPPs (fair information practice principles), remains crucial. Without them, the FTC cannot do much more than enforce promises made in corporate privacy policies, which are largely acknowledged to be vacuous. Indeed, in its March 2012 “blueprint” for privacy protection, the White House called for legislation codifying the FIPPs (referred to by the White House as a “consumer privacy bill of rights”). Yet Washington insiders warn that the prospects of the FIPPs becoming law are slim, not only in an election year, but also after the elections, without major personnel changes in Congress.


Privacy: For the Rich or for the Poor?

Some consider the right to privacy a fundamental right for the rich, or even the rich and famous. It may be no coincidence that the landmark privacy cases in Europe feature names like Naomi Campbell, Michael Douglas, and Princess Caroline of Monaco. After all, if you lived eight-to-a-room in a shantytown in India, you would have little privacy and a lot of other problems to worry about. Viewed this way, privacy seems a matter of luxury; a right of spoiled teenagers living in six-bedroom houses (“Mom, don’t open the door without knocking”).

To refute this view, scholars typically point out that throughout history, totalitarian regimes have targeted the right to privacy even before targeting free speech. Without privacy, individuals are cowed by authority, conform to societal norms, and self-censor dissenting speech – or even thoughts. As Michel Foucault observed in his interpretation of Jeremy Bentham’s panopticon, the gaze has disciplinary power.

But I’d like to discuss an entirely different counter-argument to the privacy-for-the-rich approach. This view was recently presented at the Privacy Law Scholars Conference in a great paper by Laura Moy and Amanda Conley, both 2011 NYU law graduates. In their paper, Paying the Wealthy for Being Wealthy: The Hidden Costs of Behavioral Marketing (I love a good title!), which is not yet available online, Moy and Conley argue that retailers harvest personal information to make the poor subsidize luxury goods for the rich.

This might seem audacious at first, but think of it this way: through various loyalty schemes, retailers collect data about consumers’ shopping habits. Naturally, retailers are most interested in data about “high value shoppers.” This is intuitively clear, given that that’s where the big money, low price sensitivity, and wide margins are. It’s also backed by empirical evidence, which Moy and Conley reference. Retailers prefer to tend to those who buy saffron and Kobe beef rather than to those who purchase salt and turkey. To woo the high value shoppers, they offer attractive discounts and promotions – use your loyalty card to buy Beluga caviar, get a free bottle of Champagne. Yet obviously the retailers can’t take a loss on their marketing efforts. Who then pays the price of the rich shoppers’ luxury goods? You guessed it: the rest of us – with price hikes on products like bread and butter.
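The arithmetic of the cross-subsidy is easy to sketch; the numbers below are invented purely for illustration:

```python
# Invented numbers, purely to illustrate Moy & Conley's cross-subsidy point.
high_value_shoppers = 1_000        # the loyalty-card "champagne" segment
promo_cost_per_shopper = 20.00     # retailer-funded discount, in dollars

other_shoppers = 50_000            # everyone else
staple_units_each = 10             # loaves of bread bought in the period

promo_budget = high_value_shoppers * promo_cost_per_shopper  # $20,000

# Price hike on staples needed to recoup the promotions:
hike_per_unit = promo_budget / (other_shoppers * staple_units_each)
print(f"${hike_per_unit:.2f} per loaf")  # $0.04: pennies on bread pay for champagne
```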



The Vanishing Distinction Between Real-time and Historical Location Data

A congressional inquiry, which recently revealed that cell phone carriers disclose a huge amount of subscriber information to the government, has increased the concern that Big Brother tracks our cell phones. The New York Times reported that, in 2011, carriers responded to 1.3 million law enforcement demands for cell phone subscriber information, including text messages and location information. Because each request can acquire information on multiple people, law enforcement agencies have clearly obtained such information about many more of us than could possibly be worthy of suspicion. Representative Markey, who spearheaded the inquiry, has followed up with a thorough letter to Attorney General Holder that asks how the Justice Department could possibly protect privacy and civil liberties while acquiring such a massive amount of information.

Among many important questions, Representative Markey’s letter asks whether the DOJ continues to legally differentiate between historical (produced from carrier records) and real-time (produced after an order is issued) cell-site location information, and what legal standard the DOJ meets for each (or both). Traditionally, courts have accorded less protection to historical location data, which I have criticized as a matter of Fourth Amendment law in my amicus briefs and in my scholarship. The government’s applications for historical data in the Fifth Circuit case, which is currently considering whether agents seeking historical location data must obtain a warrant, provide additional evidence that the distinction between real-time and historical location data makes no sense.

Some background. Under the current legal rules for location acquisition by law enforcement, which are complex, confusing, and contested, law enforcement agents have generally been permitted to acquire historical location data without establishing probable cause and obtaining a warrant. Instead, they have had to demonstrate that the records are relevant to a law enforcement investigation, which can dramatically widen the scope of an inquiry beyond those actually suspected of criminal activity and yield the large number of disclosures that the recent congressional inquiry revealed. Generally, prospective (real-time) location information has required a higher standard, often a warrant based on probable cause, which has made it more burdensome to acquire and therefore more protected against excessive disclosure.

Some commentators and judges have questioned whether historical location data should be available on an easier-to-satisfy standard, positing the hypothetical that law enforcement agents could wait just a short amount of time for real-time information to become a record, and then request it under the lower standard. Doing so would clearly be an end run around both the applicable statute (ECPA) and the Fourth Amendment, which arguably accord less protection to historical information because it is stored as an ordinary business record, not because of the fortuity that it is stored for a short period of time.

It turns out that this hypothetical is more than just the product of concerned people’s imagination. The three applications in the Fifth Circuit case requested that stored records be created on an ongoing basis. For example, just after a paragraph that requests “historical cell-site information… for the sixty (60) days prior” to the order, one application requests “For the Target Device, after receipt and storage, records of other information… provided to the United States on a continuous basis contemporaneous with” the start or end of a call, or during a call if that information is available. The other two applications clarify that “after receipt and storage” is “intended to ensure that the information” requested “is first captured and recorded by the provider before being sent.” In other words, the government is asking the carrier to create stored records and then send them on as soon as they are stored.

To be clear, only one of the three applications sought a relevance-based court order alone to obtain the continuously-created stored data. That court order, used for historical data, has never been deemed sufficient for forward-looking data (which the continuously-created data surely is, since it would be generated after the order was issued). The other two applications used a standard less than probable cause but more than a mere relevance order. It is not clear whether the request for forward-looking data under the historical standard was an inadvertent mistake or an attempt to mislead. But applications in other cases have asked much more clearly for forward-looking prospective data, without requiring that the data be momentarily stored. Why would the applications in this case request temporary storage if not, at least in part, to encourage the judge considering the application to grant it on a lower standard?

I am optimistic that the DOJ’s response to Representative Markey’s letter will yield important information about current DOJ practices and will further spur reform. In the meantime, the government’s current practice of using this intrusive tool to gather too much information about too many people cries out for formal legal restraint. Congress should enact a law requiring a warrant based on probable cause for all location data. It should not codify a meaningless distinction between historical and real-time data that further confuses judges and encourages manipulative behavior by the government.


The Right to Data Portability (RDP) as a Per Se Anti-tying Rule

Yesterday I gave a presentation on “The Right to Data Portability: Privacy and Antitrust Analysis” at a conference at the George Mason Law School. In an earlier post here, I asked whether the proposed EU right to data portability violates antitrust law.

I think the presentation helped sharpen the antitrust concern.  The presentation first develops the intuition that consumers should want a right to data portability (RDP), which is proposed in Article 18 of the EU Data Protection Regulation.  RDP seems attractive, at least initially, because it might prevent consumers from getting locked into a software platform, and because it advances the existing EU right of access to one’s own data.
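For concreteness, here is a minimal sketch of what a portability export might look like; the schema is hypothetical, since the draft Regulation speaks only of an electronic, structured, and commonly used format:

```python
import json

# Hypothetical record of what a social network holds on one user.
user_record = {
    "user_id": "u-12345",
    "profile": {"name": "Alice Example", "joined": "2010-05-01"},
    "posts": [{"date": "2012-06-20", "text": "Hello, world"}],
    "contacts": ["u-67890", "u-24680"],
}

def export_user_data(record: dict) -> str:
    """Serialize the user's data so a competing service could import it."""
    return json.dumps(record, indent=2)

print(export_user_data(user_record))
```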

Turning to antitrust law, I asked how antitrust law would consider a rule that, say, prohibits an operating system from being integrated with software for a browser.  We saw those facts, of course, in the Microsoft case decided by the DC Circuit over a decade ago.  Plaintiffs asserted an illegal “tying” arrangement between Windows and IE.  The court rejected a per se rule against tying of software, because integration of software can have many benefits and innovation in software relies on developers finding new ways to put things together.  The court instead held that the rule of reason applies.

RDP, however, amounts to a per se rule against tying of software.  Suppose a social network offers a networking service and integrates that with software that has various features for exporting or not exporting data in various formats.  We have the tying product (social network) and the tied product (module for export or not of data).  US antitrust law has rejected a per se rule here.  The EU proposed regulation essentially adopts a per se rule against that sort of tying arrangement.

Modern US and EU antitrust law seek to enhance “consumer welfare.”  If the Microsoft case is correct, then a per se rule of the sort in the Regulation quite plausibly reduces consumer welfare.  There may be other reasons to adopt RDP, as discussed in the slides (and I hope in my future writing).  RDP might advance human rights to access.  It might enhance openness more generally on the Internet.  But it quite possibly reduces consumer welfare, and that deserves careful attention.


Stanford Law Review Online: How the War on Drugs Distorts Privacy Law


The Stanford Law Review Online has just published an Essay by Jane Yakowitz Bambauer entitled How the War on Drugs Distorts Privacy Law. Professor Yakowitz analyzes the opportunity the Supreme Court has to rewrite certain privacy standards in Florida v. Jardines:

The U.S. Supreme Court will soon determine whether a trained narcotics dog’s sniff at the front door of a home constitutes a Fourth Amendment search. The case, Florida v. Jardines, has privacy scholars abuzz because it presents two possible shifts in Fourth Amendment jurisprudence. First, the Court might expand the physical spaces rationale from Justice Scalia’s majority opinion in United States v. Jones. A favorable outcome for Mr. Jardines could reinforce that the home is a formidable privacy fortress, protecting all information from government detection unless that information is visible to the human eye.

Alternatively, and more sensibly, the Court may choose to revisit its previous dog sniff cases, United States v. Place and Illinois v. Caballes. This precedent has shielded dog sniffs from constitutional scrutiny by finding that sniffs of luggage and a car, respectively, did not constitute searches. Their logic is straightforward: since a sniff “discloses only the presence or absence of narcotics, a contraband item,” a search incident to a dog’s alert cannot offend reasonable expectations of privacy. Of course, the logical flaw is equally obvious: police dogs often alert when drugs are not present, resulting in unnecessary suspicionless searches.

She concludes:

Jardines offers the Court an opportunity to carefully assess a mode of policing that subjects all constituents to the burdens of investigation and punishment, not just the “suspicious.” Today, drug-sniffing dogs are unique law enforcement tools that can be used without either individualized suspicion or a “special needs” checkpoint. Given their haphazard deployment and erratic performance, police dogs deserve the skepticism many scholars and courts have expressed. But the wrong reasoning in Jardines could fix indefinitely an assumption that police technologies and civil liberties are always at odds. This would be unfortunate. New technologies have the potential to be what dogs never were—accurate and fair. Explosive detecting systems may eventually meet the standards for this test, and DNA-matching and pattern-based data mining offer more than mere hypothetical promise. Responsible use of these emerging techniques requires more transparency and even application than police departments are accustomed to, but decrease in law enforcement discretion is its own achievement. With luck, the Court will find a search in Jardines while avoiding a rule that reflexively hampers the use of new technologies.

Read the full article, How the War on Drugs Distorts Privacy Law by Jane Yakowitz Bambauer, at the Stanford Law Review Online.


The Buzzword of the Year: “Multistakeholder”

Greetings to Concurring Opinions readers. I thank the editors for inviting me to guest blog. I am looking forward to the opportunity to write more informally than I have done for a long time. I am out of the administration, and don’t have to go through the painful process of “clearing” every statement. And I am focusing on researching and writing rather than having clients. So the comments are just my own.

I suspect I’ll be writing about quite a range of privacy and tech issues. Many of my blog-sized musings will likely be about the European Union proposed Data Protection Regulation, and the contemporaneous flowering of privacy policy at the Federal Trade Commission and in the Administration.

From the latter, I propose “multistakeholder” as the buzzword of the year so far. (“Context” is a close second, which I may discuss another time.) The Department of Commerce has received public comments on what should be done in the privacy multistakeholder process. (My own comment focused on the importance of defining “de-identified” information.)

Separately, the administration has been emphasizing the importance of multistakeholder processes for Internet governance, such as in a speech by Larry Strickling, Administrator of the National Telecommunications and Information Administration.

Here’s a try at making sense of this buzzword. On the privacy side, my view is that “multistakeholder” is mostly a substitute for the old term “self regulation.” Self regulation was the organizing theme when the U.S. negotiated the Safe Harbor privacy agreement with the EU in 2000. Barbara Wellbery (who lamentably is no longer with us) used “self regulation” repeatedly to explain the U.S. approach. The term accurately describes the legal regime under Section 5 of the FTC Act – an entity (all by itself) makes a promise, and then it’s legally enforceable by others. As I have written since the mid-1990s, this self regulatory approach can be better than other approaches, depending on the context.

The term “self regulation,” however, has taken on a bad odor. Many European regulators regard “self regulation” as the theme of the Safe Harbor, which they consider weaker than it should have been. Many privacy advocates have also justifiably said that the term puts too much emphasis on the “self,” the company that decides what promises to make.

Enter, stage left, the new term: “multistakeholder.” The term directly addresses the advocates’ issue. Advocates should be in the room, along with regulators, entities from affected industries, and perhaps a lot of other stakeholders. It’s not “self regulation” by a “selfish” company. It is instead a process that includes the range of players whose interests should be considered.

I am comfortable with the new term “multistakeholder” for the old “self regulation.” The two are different in that the new term includes more of those affected. They are the same, however, in that both stand in contrast to top-down regulation by the government. Depending on the facts, multistakeholder may be better, or worse, than the government alternative.

Shifting to Internet governance, “multistakeholder” is a term that resonates with the bottom-up processes that led to the spectacular flowering of the Internet. Examples include organizations such as the Internet Engineering Task Force and the World Wide Web Consortium. Somehow, almost miraculously, the Web grew in twenty years from a tiny community to one numbering in the billions.

The term “multi-stakeholder” is featured in the important OECD Council Recommendation On Principles for Internet Policy Making, garnering 13 mentions in 10 pages. As I hope to discuss in a future blog post, this bottom-up process contrasts sharply with efforts, led by countries including Russia and China, to have the International Telecommunications Union play a major role in Internet governance. Emma Llansó at CDT has explained what is at stake. I am extremely skeptical about an expanded ITU role.

So, administration support for a “multistakeholder process” in both privacy and Internet governance. Similar in hoping that bottom-up beats top-down regulation. Different, I suspect, in how well the bottom-up approach has done historically. The IETF and the W3C have quite likely earned a grade in the A range for what they have achieved in Internet governance. I doubt that many people would give an A overall to industry self-regulation in the privacy area.

Reason to be cautious. The same word can work differently in different settings.


The Wake Forest Law Review Online: “The Myth of Perfection”


The Wake Forest Law Review Online has published an essay on internet privacy, online censorship and intellectual property rights: The Myth of Perfection by Derek E. Bambauer.

In The Myth of Perfection, Derek Bambauer explores the impact of the pursuit of perfection on internet privacy, online censorship, and intellectual property protection. Bambauer argues that the “obsession” with perfection may threaten innovation and detract from more pressing privacy concerns. Ultimately, Bambauer concludes that in place of perfection, “we should adopt the more realistic, and helpful, conclusion that often good enough is . . . good enough.”

Preferred citation:

Derek E. Bambauer, The Myth of Perfection, 2 Wake Forest L. Rev. Online 22 (2012), http://wakeforestlawreview.com/the-myth-of-perfection.