Opportunities and Roadblocks Along the Electronic Silk Road

Last week, Foreign Affairs posted a note about my book, The Electronic Silk Road, on its Facebook page. In the comments, some clever wag asked, “Didn’t the FBI shut this down a few weeks ago?” In other venues as well, as I have shared portions of my book across the web, individuals around the world have written back, sometimes applauding and at other times challenging my claims. My writing itself has journeyed across the world: when I adapted part of a chapter as “How Censorship Hurts Chinese Internet Companies” for The Atlantic, the China Daily republished it. The Financial Times published its review of the book in both English and Chinese.

Even these posts involved international trade. Much of this activity took place on websites, from Facebook to The Atlantic to the Financial Times, each earning revenue in part from cross-border advertising (even the government-owned China Daily is apparently under pressure to increase advertising). In the second quarter of 2013, for example, Facebook earned the majority of its revenues outside the United States: $995 million of a total of $1,813 million, or 55 percent.

But this trade also brought communication, with ideas and critiques circulating around the world. The old silk roads similarly were passages not only for goods but also for knowledge. They helped shape our world, not only materially but spiritually, just as the mix of commerce and communication on the Electronic Silk Road will reshape the world to come.

Who Is The More Active Privacy Enforcer: FTC or OCR?

Those who follow FTC privacy activities are already aware of the hype surrounding the FTC’s enforcement actions. For years, American businesses and the Department of Commerce have loudly touted the FTC as a privacy enforcer equivalent to the EU Data Protection Authorities. The Commission is routinely cited as providing the enforcement mechanism for commercial privacy self-regulatory activities, for the EU-US Safe Harbor Framework, and for the Department of Commerce-sponsored multistakeholder process. American business and the Commerce Department have exhausted themselves in international privacy forums promoting the virtues of FTC privacy enforcement.

I want to put FTC privacy activities into perspective by comparing the FTC with the Office for Civil Rights (OCR) at the Department of Health and Human Services. OCR enforces health privacy and security standards under the Health Insurance Portability and Accountability Act (HIPAA).

Let’s begin with the FTC’s statistics. The Commission maintains a webpage, http://business.ftc.gov/legal-resources/8/35, with information on all of its cases since 1997. I’ve found that the link does not always work properly, and I can’t reach some pages to confirm everything I would like to, but I am sure enough of the basics to make these comments.

The Commission reports 153 cases from 1997 through February 2013. That’s roughly fifteen years, with an average of about ten cases a year. The number of cases for 2012, the last full year, was 24, well above the fifteen-year average; the Commission has clearly stepped up its privacy and security enforcement of late. I haven’t reviewed the quality or significance of the cases brought, just the number.
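For readers who want to check the arithmetic, here is a quick back-of-the-envelope calculation. The case counts come from the FTC page cited above; the year count is my own rounding:

```python
# Back-of-the-envelope check of the FTC case counts cited above.
total_cases = 153             # cases reported, 1997 through February 2013
full_years = 2012 - 1997 + 1  # 16 full years; the post rounds to "roughly fifteen"
cases_2012 = 24               # the last full year on the FTC's list

avg_per_year = total_cases / full_years
print(f"average cases per year: {avg_per_year:.1f}")                     # ~9.6, i.e. about ten
print(f"2012 as multiple of average: {cases_2012 / avg_per_year:.1f}x")  # ~2.5x
```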

The FTC and the New Common Law of Privacy

I recently posted a draft of my new article, The FTC and the New Common Law of Privacy (with Professor Woodrow Hartzog).

One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite more than fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States – more so than nearly any privacy statute and any common law tort.

In this article, we contend that the FTC’s privacy jurisprudence is the functional equivalent to a body of common law, and we examine it as such. The article explores the following issues:

  • Why did the FTC, and not contract law, come to dominate the enforcement of privacy policies?
  • Why, despite more than 15 years of FTC enforcement, have there been hardly any resulting judicial decisions?
  • Why has FTC enforcement had such a profound effect on company behavior given the very small penalties?
  • Can FTC jurisprudence evolve into a comprehensive regulatory regime for privacy?

The claims we make in this article include:

  • The common view of FTC jurisprudence as thin — as merely enforcing privacy promises — is misguided. The FTC’s privacy jurisprudence is actually quite thick, and it has come to serve as the functional equivalent to a body of common law.
  • The foundations exist in FTC jurisprudence to develop a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, that extends far beyond privacy policies, and that involves substantive rules that exist independently from a company’s privacy representations.

You can download the article draft here on SSRN.

Brave New World of Biometric Identification

Professor Margaret Hu’s important new article, “Biometric ID Cybersurveillance” (Indiana Law Journal), carefully and chillingly lays out federal and state governments’ increasing use of biometrics for identification and other purposes. These efforts are poised to lead to a national biometric ID with centralized databases of our iris, face, and fingerprint data. Such multimodal biometric IDs ostensibly provide greater security against fraud than our current de facto identifier, the Social Security number. As Professor Hu lays out, biometrics are, and increasingly will be, gatekeepers to the right to vote, work, fly, drive, and cross our borders. Professor Hu explains that the FBI’s Next Generation Identification (NGI) project will institute:

a comprehensive, centralized, and technologically interoperable biometric database that spans across military and national security agencies, as well as all other state and federal government agencies. Once complete, NGI will strive to centralize whatever biometric data is available on all citizens and noncitizens in the United States and abroad, including information on fingerprints, DNA, iris scans, voice recognition, and facial recognition data captured through digitalized photos, such as U.S. passport photos and REAL ID driver’s licenses. The NGI Interstate Photo System, for instance, aims to aggregate digital photos from not only federal, state, and local law enforcement, but also digital photos from private businesses, social networking sites, government agencies, and foreign and international entities, as well as acquaintances, friends, and family members.

Such a comprehensive biometric database would surely be accessed and used by our network of fusion centers and other hubs of our domestic surveillance apparatus that Frank Pasquale and I wrote about here.

Biometric ID cybersurveillance might be used to assign risk assessment scores and to take action based on those scores. In a chilling passage, Professor Hu describes one such proposed program:

FAST is currently under testing by DHS and has been described in press reports as a “precrime” program. If implemented, FAST will purportedly rely upon complex statistical algorithms that can aggregate data from multiple databases in an attempt to “predict” future criminal or terrorist acts, most likely through stealth cybersurveillance and covert data monitoring of ordinary citizens. The FAST program purports to assess whether an individual might pose a “precrime” threat through the capture of a range of data, including biometric data. In other words, FAST attempts to infer the security threat risk of future criminals and terrorists through data analysis.

Under FAST, biometric-based physiological and behavioral cues are captured through the following types of biometric data: body and eye movements, eye blink rate and pupil variation, body heat changes, and breathing patterns. Biometric-based linguistic cues include the capture of the following types of biometric data: voice pitch changes, alterations in rhythm, and changes in intonations of speech. Documents released by DHS indicate that individuals could be arrested and face other serious consequences based upon statistical algorithms and predictive analytical assessments. Specifically, projected consequences of FAST ‘can range from none to being temporarily detained to deportation, prison, or death.’
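DHS has not published FAST’s actual models, so any concrete illustration is necessarily speculative. Purely as a hypothetical sketch of the kind of weighted multi-cue scoring the passage describes, with every cue name, weight, and threshold invented for illustration:

```python
# Hypothetical sketch only: DHS has not disclosed FAST's actual algorithms.
# All cue names, weights, and thresholds below are invented for illustration.

# Physiological, behavioral, and linguistic cues of the kind the article lists,
# each assumed to be normalized to a 0.0-1.0 "anomaly" score by an upstream sensor.
cues = {
    "eye_blink_rate":    0.7,
    "pupil_variation":   0.4,
    "body_heat_change":  0.2,
    "breathing_pattern": 0.6,
    "voice_pitch_shift": 0.5,
}

# Invented weights standing in for a statistical model trained elsewhere.
weights = {
    "eye_blink_rate":    0.3,
    "pupil_variation":   0.1,
    "body_heat_change":  0.2,
    "breathing_pattern": 0.2,
    "voice_pitch_shift": 0.2,
}

risk_score = sum(cues[c] * weights[c] for c in cues)

THRESHOLD = 0.5  # an arbitrary cutoff; where it sits decides who gets flagged
if risk_score >= THRESHOLD:
    print(f"score={risk_score:.2f}: flag for secondary screening")
else:
    print(f"score={risk_score:.2f}: no action")
```

Even this toy version makes the worry concrete: real-world consequences would turn on opaque weights and an arbitrary numeric threshold rather than on any human judgment.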

Data mining of our biometrics to predict criminal and terrorist activity, which is then used as a basis for government decision making about our liberty? If this comes to fruition, technological due process would certainly be required.

Professor Hu calls for the Fourth Amendment to evolve to meet the challenge of 24/7 biometric surveillance technologies. David Gray and I hope to answer that call in our article “The Right to Quantitative Privacy” (forthcoming, Minnesota Law Review). Rather than asking how much information is gathered in a particular case, we argue that Fourth Amendment interests in quantitative privacy demand that we focus on how information is gathered. In our view, the threshold Fourth Amendment question should be whether a technology has the capacity to facilitate broad and indiscriminate surveillance that intrudes upon reasonable expectations of quantitative privacy by raising the specter of a surveillance state if its deployment and use are left to the unfettered discretion of government. If it does not, then the Fourth Amendment imposes no limitations on law enforcement’s use of that technology, regardless of how much information officers gather against a particular target in a particular case. By contrast, if it does threaten reasonable expectations of quantitative privacy, then the government’s use of that technology amounts to a “search” and must be subjected to the crucible of Fourth Amendment reasonableness, including judicially enforced constraints on law enforcement’s discretion.

Predictive Policing and Technological Due Process

Police departments have increasingly been crunching data to identify criminal hot spots and to allocate policing resources accordingly. Predictive policing has been around for a while without raising too many alarms; given the daily proof that we live in a surveillance state, such policing seems downright quaint. Putting more police on the beat where crime is likely is smart, and in such cases the software is not making predictive adjudications about particular individuals. But might governmental systems someday assign us risk ratings, predicting whether we are likely to commit crime? We certainly live in a scoring society. The private sector is madly scoring us: individuals are denied the ability to open bank accounts; they are identified as strong potential hires (or not); they are deemed “waste” unworthy of special advertising deals; and so on. Private actors don’t owe us any process, at least as far as the Constitution is concerned. If governmental systems, however, make decisions about our property (perhaps licenses denied due to a poor risk score), liberty (watch-list designations leading to intrusions), or life (who knows, with drones in the picture), due process concerns would be implicated.

What about systems aimed at predicting high-crime locations, not particular people? Do those systems raise the sorts of concerns I’ve discussed as Technological Due Process? A recent NPR story asked whether algorithmic predictions about high-risk locations can form the basis of a stop and frisk: if someone is in a predicted hot zone, can that fact alone amount to reasonable suspicion? During the NPR segment, law professor Andrew Guthrie Ferguson discussed the possibility that the computer’s prediction about a location may inform an officer’s thinking; an officer might credit the prediction and view everyone in that zone differently. Concerns about automation bias are real. Humans defer to automated systems on the assumption that a computer’s judgment is more trustworthy given its neutrality and expertise. But fallible human beings build the algorithms, investing them with bias, and the systems may be filled with incomplete and erroneous information. Given this reality, police departments would be wise to train officers about automation bias; such training has proven effective in other contexts. In the longer term, pre-commitments to training would help avoid unconstitutional stops and wasted resources. The constitutional question of the reasonableness of a stop and frisk would of course be addressed at the retail level, but it would be worth providing wholesale protections to avoid wasting police time on unwarranted stops and arrests.
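To make the place-versus-person distinction concrete, here is a minimal sketch of hot-spot prediction in the spirit described above: bucket past incidents into grid cells and flag the highest-count cells. The data, cell size, and threshold are all invented; real systems use far more elaborate statistical models:

```python
from collections import Counter

# Hypothetical incident records: (x, y) coordinates of past crimes.
incidents = [(1, 1), (1, 2), (1, 1), (5, 5), (1, 2), (1, 1), (8, 3)]

CELL_SIZE = 2  # bucket coordinates into a coarse grid

def cell(point):
    """Map a coordinate pair to its grid cell."""
    x, y = point
    return (x // CELL_SIZE, y // CELL_SIZE)

counts = Counter(cell(p) for p in incidents)

# Flag cells whose historical count meets an (arbitrary) threshold.
HOT_THRESHOLD = 3
hot_cells = [c for c, n in counts.items() if n >= HOT_THRESHOLD]
print("Hot cells:", hot_cells)  # predictions about places, not people
```

Nothing in this sketch scores any individual; the constitutional question arises when an officer treats mere presence in a flagged cell as suspicion about a particular person.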

H/T: Thanks to guest blogger Ryan Calo for drawing my attention to the NPR story.

What Is Personally Identifiable Information (PII)? Finding Common Ground in the EU and US

This post was co-authored by Professor Paul Schwartz.

We recently released a draft of our new essay, Reconciling Personal Information in the European Union and the United States, and we want to highlight some of its main points here.

The privacy law of the United States (US) and European Union (EU) differs in many fundamental ways, greatly complicating commerce between the US and EU.  At the broadest level, US privacy law focuses on redressing consumer harm and balancing privacy with efficient commercial transactions.  In the EU, privacy is hailed as a fundamental right that trumps other interests.  The result is that EU privacy protections are much more restrictive on the use and transfer of personal data than US privacy law.

Numerous attempts have been made to bridge the gap between US and EU privacy law, but a very large initial hurdle stands in the way: the two bodies of law can’t even agree on the scope of protection, let alone its substance. The scope of privacy laws turns on the definition of “personally identifiable information” (PII). If there is PII, privacy laws apply; if PII is absent, they do not.

In the US, the law provides multiple definitions of PII, most focusing on whether the information pertains to an identified person. In contrast, the EU has a single definition of personal data that encompasses all information identifiable to a person. Even if the data alone cannot be linked to a specific individual, if it is reasonably possible to use the data in combination with other information to identify a person, then the data is PII.

In the essay, we argue that both the US and EU approaches to defining PII are flawed, and we contend that a tiered approach to the concept of PII can bridge the differences between them.
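To illustrate the intuition behind a tiered approach in deliberately oversimplified form, consider the following sketch. The three tiers track the essay’s distinction among identified, identifiable, and non-identifiable data, but the boolean classification logic is invented for illustration; the real analysis is contextual:

```python
from enum import Enum

class PIITier(Enum):
    IDENTIFIED = "identified"              # data already linked to a specific person
    IDENTIFIABLE = "identifiable"          # linkage is reasonably possible
    NON_IDENTIFIABLE = "non-identifiable"  # only a remote risk of linkage

def classify(directly_linked: bool, linkage_reasonably_possible: bool) -> PIITier:
    """Toy tiered classifier; the real analysis is contextual, not boolean."""
    if directly_linked:
        return PIITier.IDENTIFIED
    if linkage_reasonably_possible:
        return PIITier.IDENTIFIABLE
    return PIITier.NON_IDENTIFIABLE

# Under a tiered approach, obligations scale with the tier rather than
# switching entirely on or off at a single PII boundary.
print(classify(directly_linked=False, linkage_reasonably_possible=True))
# PIITier.IDENTIFIABLE -> some protections apply, fewer than for IDENTIFIED
```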

Prism and Its Relationship to Clouds, Security, Jurisdiction, and Privacy

In January I wrote a piece, “Beyond Data Location: Data Security in the 21st Century,” for Communications of the ACM. It laid out the current facts about data security (basic point: moving data often helps security) and how they clash with jurisdictional needs and interests. As part of that essay I wrote:

A key hurdle is identifying when any government may demand data. Transparent policies and possibly treaties could help better identify and govern under what circumstances a country may demand data from another. Countries might work with local industry to create data security and data breach laws with real teeth as a way to signal that poor data security has consequences. Countries should also provide more room for companies to challenge requests and reveal them so the global market has a better sense of what is being sought, which countries respect data protection laws, and which do not. Such changes would allow companies to compete based not only on their security systems but their willingness to defend customer interests. In return companies and computer scientists will likely have to design systems with an eye toward the ability to respond to government requests when those requests are proper. Such solutions may involve ways to tag data as coming from a citizen of a particular country. Here, issues of privacy and freedom arise, because the more one can tag and trace data, the more one can use it for surveillance. This possibility shows why increased transparency is needed, for at the very least it would allow citizens to object to pacts between governments and companies that tread on individual rights.

Prism shows just how much a new balance is needed. There are many areas to sort through to reach that balance, too many to explore in a blog post. But as I argued in the essay, the way to proceed is to pull in engineers (not just industry ones), law enforcement, civil society groups, and, oh yes, lawyers to look at what can be done to address the current imbalance.
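As a purely illustrative sketch of the data-tagging idea raised in the excerpt, and of why it cuts both ways, imagine stored records carrying provenance metadata along these lines (the field names and the rule are invented):

```python
from dataclasses import dataclass

@dataclass
class TaggedRecord:
    """Hypothetical provenance tag attached to a stored record."""
    payload: bytes
    origin_country: str   # lets a provider assess which government may demand the data
    collected_under: str  # e.g., the legal basis or terms under which it was collected

record = TaggedRecord(
    payload=b"...",
    origin_country="DE",
    collected_under="user-consent",
)

# The same tag that lets a provider resist an improper request...
if record.origin_country != "US":
    print("a US demand may require treaty or mutual-legal-assistance process")
# ...also makes the data easier to sort, trace, and surveil,
# which is exactly the privacy tension flagged in the excerpt.
```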

Employers and Schools that Demand Account Passwords and the Future of Cloud Privacy

In 2012, the media erupted with news about employers demanding that employees provide their social media passwords so the employers could access their accounts. This news took many people by surprise, and it set off a firestorm of public outrage. It even sparked a significant legislative response in the states.

I thought that the practice of demanding passwords was so outrageous that it couldn’t be very common. What kind of company or organization would actually do this? I thought it was a fringe practice done by a few small companies without much awareness of privacy law.

But Bradley Shear, an attorney who has focused extensively on the issue, opened my eyes to the fact that the practice is much more prevalent than I had imagined, and it is an issue that has very important implications as we move more of our personal data to the Cloud.

The Widespread Hunger for Access

Employers are not the only ones demanding social media passwords – schools are doing so too, especially athletic departments in higher education, many of which engage in extensive monitoring of the online activities of student athletes. Some require students to turn over passwords, install special software and apps, or friend coaches on Facebook and other sites. According to an article in USA Today: “As a condition of participating in sports, the schools require athletes to agree to monitoring software being placed on their social media accounts. This software emails alerts to coaches whenever athletes use a word that could embarrass the student, the university or tarnish their images on services such as Twitter, Facebook, YouTube and MySpace.”
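The monitoring USA Today describes amounts to keyword matching against a watch list. Here is a minimal sketch of how such software plausibly works; the word list and the alert mechanism are invented, since vendors’ actual implementations are not public:

```python
# Hypothetical sketch of watch-list monitoring of the kind USA Today describes.
# The flagged words and the alert action are invented for illustration.

WATCH_LIST = {"party", "beer", "fight"}  # invented example terms

def scan_post(author: str, text: str) -> None:
    """Raise an email-style alert when a post contains a watch-listed word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    hits = WATCH_LIST & words
    if hits:
        print(f"ALERT to coach: {author} used {sorted(hits)}")

scan_post("athlete42", "Great party last night!")
# ALERT to coach: athlete42 used ['party']
```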

Not only are colleges and universities engaging in the practice, but K-12 schools are doing so as well. An MSNBC article discusses a parent’s outrage over school officials demanding access to her 13-year-old daughter’s Facebook account. According to the mother, “The whole family is exposed in this. . . . Some families communicate through Facebook. What if her aunt was going through a divorce or had an illness? And now there’s these anonymous people reading through this information.”

In addition to private sector employers and schools, public sector employers such as state government agencies are demanding access to online accounts. According to another MSNBC article: “In Maryland, job seekers applying to the state’s Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through posts, friends, photos and anything else that might be found behind the privacy wall.”

Harvard Law Review Privacy Symposium Issue

The privacy symposium issue of the Harvard Law Review is hot off the presses.  Here are the articles:

SYMPOSIUM
PRIVACY AND TECHNOLOGY
Introduction: Privacy Self-Management and the Consent Dilemma
Daniel J. Solove

What Privacy Is For
Julie E. Cohen

The Dangers of Surveillance
Neil M. Richards

The EU-U.S. Privacy Collision: A Turn to Institutions and Procedures
Paul M. Schwartz

Toward a Positive Theory of Privacy Law
Lior Jacob Strahilevitz

Privacy Self-Management and the Consent Dilemma

I’m pleased to share with you my new article in the Harvard Law Review, Privacy Self-Management and the Consent Dilemma, 126 Harvard Law Review 1880 (2013). You can download it for free on SSRN. It is a short piece (24 pages), so you can read it in one sitting.

Here are some key points in the Article:

1. The current regulatory approach for protecting privacy involves what I refer to as “privacy self-management” – the law provides people with a set of rights to enable them to decide how to weigh the costs and benefits of the collection, use, or disclosure of their information. People’s consent legitimizes nearly any form of collection, use, and disclosure of personal data. Unfortunately, privacy self-management is being asked to do work beyond its capabilities. Privacy self-management does not provide meaningful control over personal data.

2. Empirical and social science research has undermined key assumptions about how people make decisions regarding their data, assumptions that underpin and legitimize the privacy self-management model.

3. People cannot appropriately self-manage their privacy due to a series of structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses.

4. Privacy self-management addresses privacy in a series of isolated transactions guided by particular individuals. Privacy costs and benefits, however, are more appropriately assessed cumulatively and holistically — not merely at the individual level.

5. In order to advance, privacy law and policy must confront a complex and confounding dilemma with consent. Consent to collection, use, and disclosure of personal data is often not meaningful, and the most apparent solution – paternalistic measures – even more directly denies people the freedom to make consensual choices about their data.

6. The way forward involves (1) developing a coherent approach to consent, one that accounts for the social science discoveries about how people make decisions about personal data; (2) recognizing that people can engage in privacy self-management only selectively; (3) adjusting privacy law’s timing to focus on downstream uses; and (4) developing more substantive privacy rules.

The full article is here.

Cross-posted on LinkedIn.