

4 Points About the Target Breach and Data Security

There seems to be a surge in data security attacks lately. First came news of the Target attack. Then Neiman Marcus. Then the U.S. Courts. Then Michaels. Here are four points to consider about data security:

1. Beware of fraudsters engaging in post-breach fraud.

After the Target breach, fraudsters sent out fake emails purporting to be from Target about the breach, trying to trick people into providing personal data. It can be hard to distinguish a real email from the breached organization from a fake one sent by fraudsters. People are more likely to fall prey to a phishing scheme after a breach because they are anxious and want to take steps to protect themselves. Post-breach trickery is now a growing fraud technique, and people must be educated about it and be on guard.

2. Credit card fraud and identity theft are not the same.

The news media often conflates credit card fraud with identity theft. Although there is one point of overlap, for the most part they are very different. Credit card fraud involving the improper use of credit card data can be stopped when the card is cancelled and replaced. Identity theft differs because it involves the use of personal information such as a Social Security number, birth date, and other data that cannot readily be changed. It is thus much harder to stop identity theft. The point of overlap is when an identity thief uses a person’s data to obtain a credit card. But when a credit card is lost or stolen, or when credit card data is leaked or improperly accessed, this is credit card fraud, not identity theft.

3. Data breaches cause harm.

What’s the harm when data is leaked? This question has confounded courts, which often don’t recognize a harm. If your credit card is just cancelled and replaced, and you don’t pay anything, are you harmed? If your data is leaked but you don’t suffer identity theft, are you harmed? I believe that there is a harm. One harm of credit card fraud is that it can take a long time to replace all the credit card information in various accounts. People have card data on file with countless businesses and organizations for automatic charges and other transactions. Replacing all this data can be a major chore. People’s time has a price. That price varies, but it is rarely zero.



10 Reasons Why Privacy Matters

Why does privacy matter? Courts and commentators often struggle to articulate why privacy is valuable, seeing privacy violations as mere slight annoyances. But privacy matters much more than that. Here are 10 reasons why privacy matters.

1. Limit on Power

Privacy is a limit on government power, as well as the power of private sector companies. The more someone knows about us, the more power they can have over us. Personal data is used to make very important decisions in our lives. It can be used to affect our reputations, to influence our decisions, and to shape our behavior. It can be used as a tool to exercise control over us. And in the wrong hands, personal data can be used to cause us great harm.

2. Respect for Individuals

Privacy is about respecting individuals. If a person has a reasonable desire to keep something private, it is disrespectful to ignore that person’s wishes without a compelling reason to do so. Of course, the desire for privacy can conflict with important values, so privacy may not always win out in the balance. Sometimes people’s desires for privacy are just brushed aside because of a view that the harm in doing so is trivial. Even if this doesn’t cause major injury, it demonstrates a lack of respect for that person. In a sense it is saying: “I care about my interests, but I don’t care about yours.”

3. Reputation Management

Privacy enables people to manage their reputations. How we are judged by others affects our opportunities, friendships, and overall well-being. Although we can’t have complete control over our reputations, we must have some ability to protect our reputations from being unfairly harmed. Protecting reputation depends on protecting against not only falsehoods but also certain truths. Knowing private details about people’s lives doesn’t necessarily lead to more accurate judgment about people. People judge badly, they judge in haste, they judge out of context, they judge without hearing the whole story, and they judge with hypocrisy. Privacy helps people protect themselves from these troublesome judgments.



With Great Power Comes Great Responsibility

In a sentence, Anupam Chander’s The Electronic Silk Road contains the good, the bad and the ugly of the modern interconnected and globalized world.

How often do we use terms like “network” and “global”? In Professor Chander’s book you may find not only their meanings, but also the legal, economic, and ethical implications these terms carry today.

It’s well known that we are facing a revolution, despite Bill Gates’s recent remark that “The internet is not going to save the world.” I partly agree with Mr. Gates. The internet probably will not save the world, but it has certainly already changed the world as we know it, making possible the opportunities that are well described in The Electronic Silk Road.

However, I would like to use my spot in this Symposium not to write about the wonders of Trade 2.0, but to share some concerns that, as a privacy scholar, I have.

The problem is well known, and it concerns the risks posed by big data companies, which base their business model on profiling consumers in order to sell advertising or additional services to other companies.

“[T]he more the network provider knows about you, the more it can earn,” writes Chander. And as V. Mayer-Schönberger and K. Cukier note in their recent book Big Data, the risks related to the “dark side” of big data concern not just the privacy of individuals, but also how those data are processed, including the “possibility of using big data predictions about people to judge and punish them even before they’ve acted.”

This is, perhaps, both the good and the bad of big data companies as the modern caravans of the electronic silk road: they carry enormous amounts of information, and that information can be used, or rather processed, for so many different purposes that we cannot imagine what will happen tomorrow. Not only is the risk of global surveillance around the corner (on this topic I suggest reading the excellent post by D. K. Citron and D. Gray, Addressing the Harm of Total Surveillance: A Reply to Professor Neil Richards), but so is the risk of a dictatorship of data.

Such circumstances, as Professor Solove writes in his book Nothing to Hide, “[…] not only frustrate the individual by creating a sense of helplessness and powerlessness, they also affect social structure by altering the kind of relationships people have with the institutions that make important decisions about their lives.”

Thus, I believe that privacy and data protection may be the real challenge for the electronic silk road.

Professor Chander’s book is full of examples of the misuse of data (see the section “Yahoo! in China”), the problem of protecting sensitive data shared across the world (see the section “Boston Brahmins and Bangalore Doctors”), and the problems social networks pose for users’ privacy (see Chapter 5, “Facebookistan”).

But Professor Chander also sees the possible benefits of big data analysis (see the section “Predictions and Predilections”), for example in healthcare; thus it is important to find a way to regulate the unstoppable flow of data across the world.

In such a complex debate about a right that carries different meanings and definitions around the world (what counts as “privacy” or “personal data” differs among the USA, Canada, Europe, and China, for example), I find the recipe suggested by Anupam Chander very interesting.

First of all, we have to embrace some ground principles that serve both providers and lawmakers and policymakers: 1) do no evil; 2) technology is neutral; 3) cyberspace needs a dematerialized architecture.

Using these principles, it will be easy to follow Professor Chander’s fundamental rule: “harmonization where possible, glocalization where necessary”.

A practical implementation of this rule, as described in Chapter 8, would accommodate the differing views of data privacy in highly liberal and highly repressive regimes, setting glocalization (global services adapting to local rules) against deregulation in the former and the “do no evil” principle against oppression in the latter.

This seems reasonable to me, and at the end of my “journey” through Professor Chander’s book, I want to thank him for giving us some fascinating, but above all usable, theories for the forthcoming international cyberlaw.


Opportunities and Roadblocks Along the Electronic Silk Road

Last week, Foreign Affairs posted a note about my book, The Electronic Silk Road, on its Facebook page. In the comments, some clever wag asked, “Didn’t the FBI shut this down a few weeks ago?” In other venues as well, as I have shared portions of my book across the web, individuals across the world have written back, sometimes applauding and at other times challenging my claims. My writing itself has journeyed across the world: when I adapted part of a chapter as “How Censorship Hurts Chinese Internet Companies” for The Atlantic, the China Daily republished it. The Financial Times published its review of the book in both English and Chinese.

Even these posts involved international trade. Much of this activity took place on websites, from Facebook to The Atlantic to the Financial Times, each earning revenue in part from cross-border advertising (even the government-owned China Daily is apparently under pressure to increase advertising). In the second quarter of 2013, for example, Facebook earned the majority of its revenues outside the United States: $995 million out of a total of $1,813 million, or 55 percent of revenues.
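The arithmetic is easy to verify. Here is a minimal sketch using only the figures quoted above (the variable names are mine):

```python
# Check the international share of Facebook's Q2 2013 revenue,
# using the figures quoted above (in millions of USD).
total_revenue = 1813
international_revenue = 995

share = international_revenue / total_revenue
print(f"International share: {share:.1%}")  # -> International share: 54.9%, i.e. about 55 percent
```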

But this trade also brought communication, with ideas and critiques circulated around the world. The old silk roads similarly were passages not only for goods but also for knowledge. They helped shape our world, not only materially but spiritually, just as the mix of commerce and communication on the Electronic Silk Road will reshape the world to come.



Who Is The More Active Privacy Enforcer: FTC or OCR?

Those who follow FTC privacy activities are already aware of the hype that surrounds the FTC’s enforcement actions.  For years, American businesses and the Department of Commerce have loudly touted the FTC as a privacy enforcer equivalent to EU Data Protection Authorities.  The Commission is routinely cited as providing the enforcement mechanism for commercial privacy self-regulatory activities, for the EU-US Safe Harbor Framework, and for the Department of Commerce-sponsored multistakeholder process.  American business and the Commerce Department have exhausted themselves in international privacy forums promoting the virtues of FTC privacy enforcement.

I want to put FTC privacy activities into perspective by comparing the FTC with the Office for Civil Rights (OCR) at the Department of Health and Human Services.  OCR enforces health privacy and security standards under the Health Insurance Portability and Accountability Act (HIPAA).

Let’s begin with the FTC’s statistics.  The Commission maintains a webpage with information on all of its cases since 1997: http://business.ftc.gov/legal-resources/8/35.  I’ve found that the link does not always work properly.  I can’t reach some pages to confirm everything I would like to, but I am sure enough of the basics to make these comments.

The Commission reports 153 cases from 1997 through February 2013.  That’s roughly 15 years, with an average of about ten cases a year.  The number of cases for 2012, the last full year, was 24, much higher than the fifteen-year average.  The Commission clearly stepped up its privacy and security enforcement activities of late.  I haven’t reviewed the quality or significance of the cases brought, just the number.
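As a back-of-the-envelope check on those figures, here is a minimal sketch using only the numbers reported above:

```python
# Sanity-check the FTC case counts quoted above.
total_cases = 153   # cases reported from 1997 through February 2013
years = 15          # roughly fifteen years of enforcement
cases_2012 = 24     # cases brought in 2012, the last full year

average = total_cases / years
print(f"Average: about {average:.1f} cases per year")       # -> about 10.2 cases per year
print(f"2012 was {cases_2012 / average:.1f}x the average")  # -> about 2.4x the average
```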



The FTC and the New Common Law of Privacy

I recently posted a draft of my new article, The FTC and the New Common Law of Privacy (with Professor Woodrow Hartzog).

One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite more than fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States – more so than nearly any privacy statute and any common law tort.

In this article, we contend that the FTC’s privacy jurisprudence is the functional equivalent to a body of common law, and we examine it as such. The article explores the following issues:

  • Why did the FTC, and not contract law, come to dominate the enforcement of privacy policies?
  • Why, despite more than 15 years of FTC enforcement, have there been hardly any resulting judicial decisions?
  • Why has FTC enforcement had such a profound effect on company behavior given the very small penalties?
  • Can FTC jurisprudence evolve into a comprehensive regulatory regime for privacy?

The claims we make in this article include:

  • The common view of FTC jurisprudence as thin — as merely enforcing privacy promises — is misguided. The FTC’s privacy jurisprudence is actually quite thick, and it has come to serve as the functional equivalent to a body of common law.
  • The foundations exist in FTC jurisprudence to develop a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, that extends far beyond privacy policies, and that involves substantive rules that exist independently from a company’s privacy representations.

You can download the article draft here on SSRN.


Brave New World of Biometric Identification

Professor Margaret Hu’s important new article, “Biometric ID Cybersurveillance” (Indiana Law Journal), carefully and chillingly lays out federal and state governments’ increasing use of biometrics for identification and other purposes. These efforts are poised to lead to a national biometric ID with centralized databases of our iris, face, and fingerprint data. Such multimodal biometric IDs ostensibly provide greater security against fraud than our current de facto identifier, the Social Security number. As Professor Hu lays out, biometrics are, and soon will be, gatekeepers to the right to vote, work, fly, drive, and cross our borders. Professor Hu explains that the FBI’s Next Generation Identification (NGI) project will institute:

a comprehensive, centralized, and technologically interoperable biometric database that spans across military and national security agencies, as well as all other state and federal government agencies. Once complete, NGI will strive to centralize whatever biometric data is available on all citizens and noncitizens in the United States and abroad, including information on fingerprints, DNA, iris scans, voice recognition, and facial recognition data captured through digitalized photos, such as U.S. passport photos and REAL ID driver’s licenses. The NGI Interstate Photo System, for instance, aims to aggregate digital photos from not only federal, state, and local law enforcement, but also digital photos from private businesses, social networking sites, government agencies, and foreign and international entities, as well as acquaintances, friends, and family members.

Such a comprehensive biometric database would surely be accessed and used by our network of fusion centers and other hubs of our domestic surveillance apparatus that Frank Pasquale and I wrote about here.

Biometric ID cybersurveillance might be used to assign risk assessment scores and to take action based on those scores. In a chilling passage, Professor Hu describes one such proposed program:

FAST is currently under testing by DHS and has been described in press reports as a “precrime” program. If implemented, FAST will purportedly rely upon complex statistical algorithms that can aggregate data from multiple databases in an attempt to “predict” future criminal or terrorist acts, most likely through stealth cybersurveillance and covert data monitoring of ordinary citizens. The FAST program purports to assess whether an individual might pose a “precrime” threat through the capture of a range of data, including biometric data. In other words, FAST attempts to infer the security threat risk of future criminals and terrorists through data analysis.

Under FAST, biometric-based physiological and behavioral cues are captured through the following types of biometric data: body and eye movements, eye blink rate and pupil variation, body heat changes, and breathing patterns. Biometric-based linguistic cues include the capture of the following types of biometric data: voice pitch changes, alterations in rhythm, and changes in intonations of speech. Documents released by DHS indicate that individuals could be arrested and face other serious consequences based upon statistical algorithms and predictive analytical assessments. Specifically, projected consequences of FAST ‘can range from none to being temporarily detained to deportation, prison, or death.’

Data mining of our biometrics to predict criminal and terrorist activity, which is then used as a basis for government decision making about our liberty? If this comes to fruition, technological due process would certainly be required.

Professor Hu calls for the Fourth Amendment to evolve to meet the challenge of 24/7 biometric surveillance technologies. David Gray and I hope to answer Professor Hu’s call in our article “The Right to Quantitative Privacy” (forthcoming, Minnesota Law Review). Rather than asking how much information is gathered in a particular case, we argue that Fourth Amendment interests in quantitative privacy demand that we focus on how information is gathered. In our view, the threshold Fourth Amendment question should be whether a technology has the capacity to facilitate broad and indiscriminate surveillance that intrudes upon reasonable expectations of quantitative privacy by raising the specter of a surveillance state if deployment and use of that technology are left to the unfettered discretion of government. If it does not, then the Fourth Amendment imposes no limitations on law enforcement’s use of that technology, regardless of how much information officers gather against a particular target in a particular case. By contrast, if it does threaten reasonable expectations of quantitative privacy, then the government’s use of that technology amounts to a “search” and must be subjected to the crucible of Fourth Amendment reasonableness, including judicially enforced constraints on law enforcement’s discretion.


Predictive Policing and Technological Due Process

Police departments have been increasingly crunching data to identify criminal hot spots and to allocate policing resources to address them. Predictive policing has been around for a while without raising too many alarms. Given the daily proof that we live in a surveillance state, such policing seems downright quaint. Putting more police on the beat to address likely crime is smart. In such cases, software is not making predictive adjudications about particular individuals.

Might someday governmental systems assign us risk ratings, predicting whether we are likely to commit crime? We certainly live in a scoring society. The private sector is madly scoring us. Individuals are denied the ability to open up bank accounts; they are identified as strong potential hires (or not); they are deemed “waste” not worthy of special advertising deals; and so on. Private actors don’t owe us any process, at least as far as the Constitution is concerned. On the other hand, if governmental systems make decisions about our property (perhaps licenses denied due to a poor scoring risk), liberty (watch list designations leading to liberty intrusions), and life (who knows with drones in the picture), due process concerns would be implicated.

What about systems aimed at predicting high-crime locations, not particular people? Do those systems raise the sorts of concerns I’ve discussed as Technological Due Process? A recent NPR story asked whether algorithmic predictions about high-risk locations can form the basis of a stop and frisk. If someone is in a hot zone, can that fact alone amount to reasonable suspicion for a stop? During the NPR segment, law professor Andrew Guthrie Ferguson talked about the possibility that the computer’s prediction about the location may inform an officer’s thinking. An officer might credit the computer’s prediction and view everyone in a particular zone a different way. Concerns about automation bias are real. Humans defer to systems: surely a computer’s judgment is more trustworthy given its neutrality and expertise? Fallible human beings, however, build the algorithms, investing them with bias, and the systems may be filled with incomplete and erroneous information. Given the reality of automation bias, police departments would be wise to train officers about it; such training has proven effective in other contexts. In the longer term, making pre-commitments to training would help avoid unconstitutional stops and wasted resources. The constitutional question of the reasonableness of a stop and frisk would of course be addressed at the retail level, but it would be worth providing wholesale protections to avoid wasting police time on unwarranted stops and arrests.
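To make the mechanism concrete, here is a deliberately naive sketch of hot-spot prediction; the incident records and grid are hypothetical, and real systems are far more sophisticated. The point it illustrates is that predictions inherit whatever gaps and biases the input data carry:

```python
from collections import Counter

# Hypothetical incident records: (x, y) grid cells where past incidents
# were *reported*. Note that "reported" is not "occurred": cells that are
# already policed more heavily generate more records, so the model can
# end up amplifying the patterns in its own inputs.
incidents = [(2, 3), (2, 3), (2, 3), (5, 1), (2, 3), (5, 1), (0, 0)]

def hot_spots(records, top_k=2):
    """Rank grid cells by historical incident count and return the top_k."""
    return Counter(records).most_common(top_k)

for cell, count in hot_spots(incidents):
    print(f"cell {cell}: {count} past incidents -> flagged as a hot spot")
# cell (2, 3): 4 past incidents -> flagged as a hot spot
# cell (5, 1): 2 past incidents -> flagged as a hot spot
```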

H/T: Thanks to guest blogger Ryan Calo for drawing my attention to the NPR story.


What Is Personally Identifiable Information (PII)? Finding Common Ground in the EU and US

This post was co-authored by Professor Paul Schwartz.

We recently released a draft of our new essay, Reconciling Personal Information in the European Union and the United States, and we want to highlight some of its main points here.

The privacy law of the United States (US) and European Union (EU) differs in many fundamental ways, greatly complicating commerce between the US and EU.  At the broadest level, US privacy law focuses on redressing consumer harm and balancing privacy with efficient commercial transactions.  In the EU, privacy is hailed as a fundamental right that trumps other interests.  The result is that EU privacy protections are much more restrictive on the use and transfer of personal data than US privacy law.

Numerous attempts have been made to bridge the gap between US and EU privacy law, but a very large initial hurdle stands in the way.  The two bodies of law can’t even agree on the scope of protection, let alone the substance of the protections.  The scope of privacy laws turns on the definition of “personally identifiable information” (PII).  If there is PII, privacy laws apply.  If PII is absent, privacy laws do not apply.

In the US, the law provides multiple definitions of PII, most focusing on whether the information pertains to an identified person.  In contrast, in the EU, there is a single definition of personal data to encompass all information identifiable to a person.  Even if the data alone cannot be linked to a specific individual, if it is reasonably possible to use the data in combination with other information to identify a person, then the data is PII.
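The EU’s “identifiable in combination with other information” standard is easy to illustrate. In the minimal sketch below (all records are hypothetical), a record containing no name is linked to a public roster through quasi-identifiers, ZIP code and birth date, and the individual is re-identified; under the EU definition, the “anonymous” record was personal data all along:

```python
# A hypothetical "de-identified" record: no name, but it still carries
# quasi-identifiers (ZIP code and birth date).
record = {"zip": "20742", "birth_date": "1975-03-14", "diagnosis": "..."}

# A hypothetical outside dataset, e.g. a public roster.
roster = [
    {"name": "Alice Example", "zip": "20742", "birth_date": "1975-03-14"},
    {"name": "Bob Example",   "zip": "90210", "birth_date": "1980-07-02"},
]

def reidentify(rec, people):
    """Match a nameless record to named people via shared quasi-identifiers."""
    return [p for p in people
            if p["zip"] == rec["zip"] and p["birth_date"] == rec["birth_date"]]

print(reidentify(record, roster))
# -> [{'name': 'Alice Example', ...}] : the record identified a person after all
```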

In our essay, Reconciling Personal Information in the European Union and the United States, we argue that both the US and EU approaches to defining PII are flawed.  We also contend that a tiered approach to the concept of PII can bridge the differences between the US and EU approaches.



Prism and Its Relationship to Clouds, Security, Jurisdiction, and Privacy

In January I wrote a piece, “Beyond Data Location: Data Security in the 21st Century,” for Communications of the ACM. I went into the current facts about data security (basic point: data moving often helps security) and how they clash with jurisdiction needs and interests. As part of that essay I wrote:

A key hurdle is identifying when any government may demand data. Transparent policies and possibly treaties could help better identify and govern under what circumstances a country may demand data from another. Countries might work with local industry to create data security and data breach laws with real teeth as a way to signal that poor data security has consequences. Countries should also provide more room for companies to challenge requests and reveal them so the global market has a better sense of what is being sought, which countries respect data protection laws, and which do not. Such changes would allow companies to compete based not only on their security systems but their willingness to defend customer interests. In return companies and computer scientists will likely have to design systems with an eye toward the ability to respond to government requests when those requests are proper. Such solutions may involve ways to tag data as coming from a citizen of a particular country. Here, issues of privacy and freedom arise, because the more one can tag and trace data, the more one can use it for surveillance. This possibility shows why increased transparency is needed, for at the very least it would allow citizens to object to pacts between governments and companies that tread on individual rights.
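The “tag data” idea in that passage can be sketched in a few lines. This is a toy model, with invented names and a deliberately naive policy, meant only to show the mechanism and the tension: the same provenance tag that enables a lawful-process check also makes the data easier to trace.

```python
from dataclasses import dataclass

@dataclass
class TaggedRecord:
    payload: bytes
    origin_country: str  # provenance tag, e.g. an ISO 3166 country code

def may_disclose(record: TaggedRecord, requesting_country: str) -> bool:
    """Naive policy: honor a government request only from the record's
    country of origin. A real policy would turn on treaties, court orders,
    and the company's ability to challenge or reveal the request."""
    return record.origin_country == requesting_country

record = TaggedRecord(payload=b"...", origin_country="DE")
print(may_disclose(record, "US"))  # -> False: request falls outside the toy policy
print(may_disclose(record, "DE"))  # -> True
```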

Prism shows just how much a new balance is needed. There are many areas to sort through to reach that balance, too many to explore in a blog post. But as I argued in the essay, I think that pulling in engineers (not just industry ones), law enforcement, civil society groups, and, oh yes, lawyers to look at what can be done to address the current imbalance is the way to proceed.