Archive for the ‘Privacy (Consumer Privacy)’ Category
posted by Pierluigi Perri
In a sentence, Anupam Chander’s The Electronic Silk Road contains the good, the bad and the ugly of the modern interconnected and globalized world.
How many times do we use terms like “network” and “global”? In Professor Chander’s book you may find not only their meanings, but also the legal, economic, and ethical implications these terms carry today.
It’s well known that we are facing a revolution, despite Bill Gates’ recent remark that “The internet is not going to save the world.” I partly agree with Mr. Gates. The internet probably will not save the world, but it has certainly already changed the world as we know it, making possible the opportunities that are well described in The Electronic Silk Road.
However, I would like to use my spot in this Symposium not to write about the wonders of Trade 2.0, but to share some concerns that I have as a privacy scholar.
The problem is well known and is connected to the risks posed by big data companies, which base their business model on consumer profiling, selling advertising or additional services to other companies.
“[T]he more the network provider knows about you, the more it can earn,” writes Chander, and as noted by V. Mayer-Schönberger and K. Cukier in their recent book Big Data, the risks related to the “dark side” of big data concern not just the privacy of individuals, but also the processing of those data, with the “possibility of using big data predictions about people to judge and punish them even before they’ve acted.”
This is, probably, the good and the bad of big data companies as modern caravans of the electronic silk road: they carry a great deal of information, and that information can be used, or better processed, for so many different purposes that we cannot imagine what will happen tomorrow. Not only is the risk of global surveillance around the corner (on this topic I suggest reading the great post by D. K. Citron and D. Gray, Addressing the Harm of Total Surveillance: A Reply to Professor Neil Richards), but so is the risk of a dictatorship of data.
Such circumstances, as Professor Solove writes in his book Nothing to Hide, “[…] not only frustrate the individual by creating a sense of helplessness and powerlessness, they also affect social structure by altering the kind of relationships people have with the institutions that make important decisions about their lives.”
Thus, I believe that privacy and data protection could be the real challenge for the electronic silk road.
Professor Chander’s book is full of examples of the misuse of data (see the section “Yahoo! in China”), the problem of protecting sensitive data shared across the world (see the section “Boston Brahmins and Bangalore Doctors”), and the privacy problems that social networks pose for users (see Chapter 5, “Facebookistan”).
But Professor Chander also sees the possible benefits of big data analysis (see the section “Predictions and Predilections”), for example in healthcare; it is therefore important to find a way to regulate the unstoppable flow of data across the world.
In such a complex debate about a right that takes on different meanings and definitions across the world (what counts as “privacy” or “personal data” differs among the USA, Canada, Europe, and China, for example), I find the recipe suggested by Anupam Chander very interesting.
First of all, we have to embrace some ground principles that serve both providers and law and policy makers: 1) do no evil; 2) technology is neutral; 3) cyberspace needs a dematerialized architecture.
With these principles in place, it will be easy to follow Professor Chander’s fundamental rule: “harmonization where possible, glocalization where necessary.”
A practical implementation of this rule, as described in Chapter 8, can accommodate the different views of data privacy in highly liberal and highly repressive regimes, setting glocalization (global services adapting to local rules) against deregulation in the highly liberal regimes, and the “do no evil” principle against oppression in the highly repressive ones.
This seems reasonable to me, and at the end of my “journey” in Professor Chander’s book, I want to thank him for giving us some fascinating, but above all usable, theories for the forthcoming international cyberlaw.
posted by Anupam Chander
Last week, Foreign Affairs posted a note about my book, The Electronic Silk Road, on its Facebook page. In the comments, some clever wag asked, “Didn’t the FBI shut this down a few weeks ago?” In other venues as well, as I have shared portions of my book across the web, individuals across the world have written back, sometimes applauding and at other times challenging my claims. My writing itself has journeyed across the world–when I adapted part of a chapter as “How Censorship Hurts Chinese Internet Companies” for The Atlantic, the China Daily republished it. The Financial Times published its review of the book in both English and Chinese.
Even these posts involved international trade. Much of this activity took place on websites—from Facebook, to The Atlantic, and the Financial Times—each of them earning revenue in part from cross-border advertising (even the government-owned China Daily is apparently under pressure to increase advertising). In the second quarter of 2013, for example, Facebook earned the majority of its revenues outside the United States–$995 million out of a total of $1,813 million, or 55 percent of revenues.
But this trade also brought communication—with ideas and critiques circulated around the world. The old silk roads similarly were passages not only for goods, but for knowledge. They helped shape our world, not only materially, but spiritually, just as the mix of commerce and communication on the Electronic Silk Road will reshape the world to come.
posted by Robert Gellman
Those who follow FTC privacy activities are already aware of the hype that surrounds the FTC’s enforcement actions. For years, American businesses and the Department of Commerce have loudly touted the FTC as a privacy enforcer equivalent to EU Data Protection Authorities. The Commission is routinely cited as providing the enforcement mechanism for commercial privacy self-regulatory activities, for the EU-US Safe Harbor Framework, and for the Department of Commerce-sponsored Multistakeholder process. American business and the Commerce Department have exhausted themselves in international privacy forums promoting the virtues of FTC privacy enforcement.
I want to put FTC privacy activities into perspective by comparing the FTC with the Office for Civil Rights (OCR) at the Department of Health and Human Services. OCR enforces health privacy and security standards based on the Health Insurance Portability and Accountability Act (HIPAA).
Let’s begin with the FTC’s statistics. The Commission maintains a webpage with information on all of its cases since 1997, at http://business.ftc.gov/legal-resources/8/35. I’ve found that the link does not always work consistently or properly, and I can’t reach some pages to confirm everything I would like to, but I am sure enough of the basics to make these comments.
The Commission reports 153 cases from 1997 through February 2013. That’s roughly fifteen years, an average of about ten cases a year. The number of cases for 2012, the last full year, was 24, much higher than the fifteen-year average. The Commission has clearly stepped up its privacy and security enforcement activities of late. I haven’t reviewed the quality or significance of the cases brought, just the number.
posted by Daniel Solove
One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite more than fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States – more so than nearly any privacy statute and any common law tort.
In this article, we contend that the FTC’s privacy jurisprudence is the functional equivalent to a body of common law, and we examine it as such. The article explores the following issues:
- Why did the FTC, and not contract law, come to dominate the enforcement of privacy policies?
- Why, despite more than 15 years of FTC enforcement, have there been hardly any resulting judicial decisions?
- Why has FTC enforcement had such a profound effect on company behavior given the very small penalties?
- Can FTC jurisprudence evolve into a comprehensive regulatory regime for privacy?
The claims we make in this article include:
- The common view of FTC jurisprudence as thin — as merely enforcing privacy promises — is misguided. The FTC’s privacy jurisprudence is actually quite thick, and it has come to serve as the functional equivalent to a body of common law.
- The foundations exist in FTC jurisprudence to develop a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, that extends far beyond privacy policies, and that involves substantive rules that exist independently from a company’s privacy representations.
posted by Danielle Citron
Professor Margaret Hu’s important new article, “Biometric ID Cybersurveillance” (Indiana Law Journal), carefully and chillingly lays out federal and state governments’ increasing use of biometrics for identification and other purposes. These efforts are poised to lead to a national biometric ID with centralized databases of our iris, face, and fingerprint data. Such multimodal biometric IDs ostensibly provide greater security against fraud than our current de facto identifier, the Social Security number. As Professor Hu lays out, biometrics are, and soon will be, gatekeepers to the right to vote, work, fly, drive, and cross our borders. Professor Hu explains that the FBI’s Next Generation Identification project will institute:
a comprehensive, centralized, and technologically interoperable biometric database that spans across military and national security agencies, as well as all other state and federal government agencies. Once complete, NGI will strive to centralize whatever biometric data is available on all citizens and noncitizens in the United States and abroad, including information on fingerprints, DNA, iris scans, voice recognition, and facial recognition data captured through digitalized photos, such as U.S. passport photos and REAL ID driver’s licenses. The NGI Interstate Photo System, for instance, aims to aggregate digital photos from not only federal, state, and local law enforcement, but also digital photos from private businesses, social networking sites, government agencies, and foreign and international entities, as well as acquaintances, friends, and family members.
Such a comprehensive biometric database would surely be accessed and used by our network of fusion centers and other hubs of our domestic surveillance apparatus that Frank Pasquale and I wrote about here.
Biometric ID cybersurveillance might be used to assign risk assessment scores and to take action based on those scores. In a chilling passage, Professor Hu describes one such proposed program:
FAST is currently under testing by DHS and has been described in press reports as a “precrime” program. If implemented, FAST will purportedly rely upon complex statistical algorithms that can aggregate data from multiple databases in an attempt to “predict” future criminal or terrorist acts, most likely through stealth cybersurveillance and covert data monitoring of ordinary citizens. The FAST program purports to assess whether an individual might pose a “precrime” threat through the capture of a range of data, including biometric data. In other words, FAST attempts to infer the security threat risk of future criminals and terrorists through data analysis.
Under FAST, biometric-based physiological and behavioral cues are captured through the following types of biometric data: body and eye movements, eye blink rate and pupil variation, body heat changes, and breathing patterns. Biometric-based linguistic cues include the capture of the following types of biometric data: voice pitch changes, alterations in rhythm, and changes in intonations of speech. Documents released by DHS indicate that individuals could be arrested and face other serious consequences based upon statistical algorithms and predictive analytical assessments. Specifically, projected consequences of FAST ‘can range from none to being temporarily detained to deportation, prison, or death.’
Data mining of our biometrics to predict criminal and terrorist activity, which is then used as a basis for government decision making about our liberty? If this comes to fruition, technological due process would certainly be required.
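Neither DHS nor Professor Hu’s article publishes FAST’s actual model, but the kind of aggregation the quoted passages describe can be sketched in a few lines. In this purely hypothetical illustration (every feature name, weight, and threshold below is invented, not drawn from any DHS document), physiological “cues” are combined into a single score that is then mapped onto escalating consequences:

```python
# Hypothetical sketch only: FAST's real algorithm is not public.
# Weighted aggregation of normalized biometric "cues" into a risk score.

FEATURE_WEIGHTS = {                      # invented weights
    "eye_blink_rate_deviation": 0.3,     # deviation from a population baseline
    "body_heat_change": 0.2,
    "breathing_irregularity": 0.2,
    "voice_pitch_variation": 0.3,
}

def risk_score(cues: dict) -> float:
    """Combine cue readings normalized to 0.0-1.0 into one score."""
    return sum(FEATURE_WEIGHTS[name] * cues.get(name, 0.0)
               for name in FEATURE_WEIGHTS)

def consequence(score: float) -> str:
    """Map the score onto escalating outcomes (thresholds invented)."""
    if score < 0.4:
        return "no action"
    if score < 0.7:
        return "secondary screening / temporary detention"
    return "referral for investigation"

# An anxious traveler with fast blinking and a shaky voice:
print(consequence(risk_score({"eye_blink_rate_deviation": 0.9,
                              "voice_pitch_variation": 0.8})))
```

Even this toy version makes the due process problem concrete: ordinary physiological variation can trip a threshold, and the weights and cutoffs that decide the outcome are invisible to the person being scored.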
Professor Hu calls for the Fourth Amendment to evolve to meet the challenge of 24/7 biometric surveillance technologies. David Gray and I hopefully answer Professor Hu’s request in our article “The Right to Quantitative Privacy” (forthcoming Minnesota Law Review). Rather than asking how much information is gathered in a particular case, we argue that Fourth Amendment interests in quantitative privacy demand that we focus on how information is gathered. In our view, the threshold Fourth Amendment question should be whether a technology has the capacity to facilitate broad and indiscriminate surveillance that intrudes upon reasonable expectations of quantitative privacy by raising the specter of a surveillance state if deployment and use of that technology is left to the unfettered discretion of government. If it does not, then the Fourth Amendment imposes no limitations on law enforcement’s use of that technology, regardless of how much information officers gather against a particular target in a particular case. By contrast, if it does threaten reasonable expectations of quantitative privacy, then the government’s use of that technology amounts to a “search,” and must be subjected to the crucible of Fourth Amendment reasonableness, including judicially enforced constraints on law enforcement’s discretion.
posted by Danielle Citron
Police departments have been increasingly crunching data to identify criminal hot spots and to allocate policing resources to address them. Predictive policing has been around for a while without raising too many alarms. Given the daily proof that we live in a surveillance state, such policing seems downright quaint. Putting more police on the beat to address likely crime is smart. In such cases, software is not making predictive adjudications about particular individuals. Might someday governmental systems assign us risk ratings, predicting whether we are likely to commit crime? We certainly live in a scoring society. The private sector is madly scoring us. Individuals are denied the ability to open up bank accounts; they are identified as strong potential hires (or not); they are deemed “waste” not worthy of special advertising deals; and so on. Private actors don’t owe us any process, at least as far as the Constitution is concerned. On the other hand, if governmental systems make decisions about our property (perhaps licenses denied due to a poor scoring risk), liberty (watch list designations leading to liberty intrusions), and life (who knows with drones in the picture), due process concerns would be implicated.
What about systems aimed at predicting high-crime locations, not particular people? Do those systems raise the sorts of concerns I’ve discussed as Technological Due Process? A recent NPR story asked whether algorithmic predictions about high-risk locations can form the basis of a stop and frisk. If someone is in a hot zone, can that very fact amount to reasonable suspicion to stop someone in that zone? During the NPR segment, law professor Andrew Guthrie Ferguson talked about the possibility that the computer’s prediction about the location may inform an officer’s thinking. An officer might credit the computer’s prediction and view everyone in a particular zone differently. Concerns about automation bias are real. Humans defer to systems: surely a computer’s judgment is more trustworthy, given its neutrality and expertise? Fallible human beings, however, build the algorithms, investing them with bias, and the systems may be filled with incomplete and erroneous information. Given the reality of automation bias, police departments would be wise to train officers about it; such training has proven effective in other contexts. In the longer term, making pre-commitments to training would help avoid unconstitutional stops and wasted resources. The constitutional question of the reasonableness of a stop and frisk would of course be addressed on a retail level, but it would be worth providing wholesale protections to avoid wasting police time on unwarranted stops and arrests.
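At their simplest, the location predictions discussed in the NPR story are spatial aggregations of past incident reports. Here is a minimal sketch of a grid-based “hot zone” flag; the coordinates, cell size, and threshold are all invented for illustration:

```python
from collections import Counter

CELL_SIZE = 0.01       # grid resolution in degrees (roughly 1 km); invented
HOT_THRESHOLD = 3      # incidents per cell before it is flagged; invented

def cell(lat: float, lon: float) -> tuple:
    """Snap a coordinate to its grid cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def hot_zones(incidents: list) -> set:
    """Flag cells whose historical incident count meets the threshold."""
    counts = Counter(cell(lat, lon) for lat, lon in incidents)
    return {c for c, n in counts.items() if n >= HOT_THRESHOLD}

# Invented incident history as (lat, lon) pairs
history = [(39.290, -76.610), (39.290, -76.610),
           (39.291, -76.611), (39.350, -76.700)]
zones = hot_zones(history)
print(cell(39.290, -76.610) in zones)   # True: that cell is a "hot zone"
```

The sketch makes the constitutional point visible: the flag attaches to a grid cell, not to any person in it, so a “hot zone” hit says nothing individualized about whoever happens to be standing there.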
H/T: Thanks to guest blogger Ryan Calo for drawing my attention to the NPR story.
posted by Daniel Solove
This post was co-authored by Professor Paul Schwartz.
We recently released a draft of our new essay, Reconciling Personal Information in the European Union and the United States, and we want to highlight some of its main points here.
The privacy law of the United States (US) and European Union (EU) differs in many fundamental ways, greatly complicating commerce between the US and EU. At the broadest level, US privacy law focuses on redressing consumer harm and balancing privacy with efficient commercial transactions. In the EU, privacy is hailed as a fundamental right that trumps other interests. The result is that EU privacy protections are much more restrictive on the use and transfer of personal data than US privacy law.
Numerous attempts have been made to bridge the gap between US and EU privacy law, but a very large initial hurdle stands in the way: the two bodies of law cannot even agree on the scope of protection, let alone the substance of the protections. The scope of protection of privacy laws turns on the definition of “personally identifiable information” (PII). If there is PII, privacy laws apply. If PII is absent, privacy laws do not apply.
In the US, the law provides multiple definitions of PII, most focusing on whether the information pertains to an identified person. In contrast, the EU has a single definition of personal data that encompasses all information identifiable to a person. Even if the data alone cannot be linked to a specific individual, if it is reasonably possible to use the data in combination with other information to identify a person, then the data is PII.
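The EU’s “combination” test is easy to see in operation. In this minimal sketch (all records invented), a dataset stripped of names is matched against a public roster on three quasi-identifiers; because the combination singles out one person, the “anonymous” row is, on the EU approach, personal data:

```python
# Invented data: a "de-identified" dataset and a public roster.
deidentified = [
    {"zip": "20037", "birth_date": "1961-07-31", "sex": "F", "diagnosis": "..."},
]
public_roster = [
    {"name": "Jane Doe", "zip": "20037", "birth_date": "1961-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "20052", "birth_date": "1975-01-02", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def reidentify(record: dict, roster: list) -> list:
    """Names of roster entries matching on every quasi-identifier."""
    return [p["name"] for p in roster
            if all(p[k] == record[k] for k in QUASI_IDENTIFIERS)]

for row in deidentified:
    print(reidentify(row, public_roster))   # ['Jane Doe']
```

ZIP code, birth date, and sex are the classic trio here: Latanya Sweeney famously showed that the combination uniquely identifies a large majority of Americans.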
In our essay, Reconciling Personal Information in the European Union and the United States, we argue that both the US and EU approaches to defining PII are flawed. We also contend that a tiered approach to the concept of PII can bridge the differences between the US and EU approaches.
posted by Deven Desai
In January I wrote a piece, “Beyond Data Location: Data Security in the 21st Century,” for Communications of the ACM. I went into the current facts about data security (basic point: data moving often helps security) and how they clash with jurisdiction needs and interests. As part of that essay I wrote:
A key hurdle is identifying when any government may demand data. Transparent policies and possibly treaties could help better identify and govern under what circumstances a country may demand data from another. Countries might work with local industry to create data security and data breach laws with real teeth as a way to signal that poor data security has consequences. Countries should also provide more room for companies to challenge requests and reveal them so the global market has a better sense of what is being sought, which countries respect data protection laws, and which do not. Such changes would allow companies to compete based not only on their security systems but their willingness to defend customer interests. In return companies and computer scientists will likely have to design systems with an eye toward the ability to respond to government requests when those requests are proper. Such solutions may involve ways to tag data as coming from a citizen of a particular country. Here, issues of privacy and freedom arise, because the more one can tag and trace data, the more one can use it for surveillance. This possibility shows why increased transparency is needed, for at the very least it would allow citizens to object to pacts between governments and companies that tread on individual rights.
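The tagging idea in the passage above could, at its simplest, mean attaching provenance metadata to each record so that a government request can be scoped to that government’s own citizens. A minimal sketch (all field names and records invented) that also shows why the idea cuts both ways:

```python
from dataclasses import dataclass

@dataclass
class TaggedRecord:
    payload: bytes
    origin_country: str        # provenance tag, e.g. an ISO 3166 code
    subject_is_citizen: bool

def scope_request(records, requesting_country: str):
    """Return only records the requesting country may properly demand.

    The same tag that scopes a lawful request also lets anyone holding
    the data sort people by nationality - the surveillance risk the
    essay flags.
    """
    return [r for r in records
            if r.origin_country == requesting_country and r.subject_is_citizen]

store = [TaggedRecord(b"...", "DE", True),
         TaggedRecord(b"...", "US", False)]
print(scope_request(store, "DE"))   # only the German citizen's record
```

This is exactly the trade-off the essay names: the more reliably data can be tagged and traced, the easier both lawful scoping and surveillance become.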
Prism shows just how much a new balance is needed. There are many areas to sort through to reach that balance, too many to explore in a blog post. But as I argued in the essay, I think the way to proceed is to pull in engineers (not just industry ones), law enforcement, civil society groups, and, oh yes, lawyers to look at what can be done to address the current imbalance.
posted by Daniel Solove
In 2012, the media erupted with news about employers demanding employees provide them with their social media passwords so the employers could access their accounts. This news took many people by surprise, and it set off a firestorm of public outrage. It even sparked a significant legislative response in the states.
I thought that the practice of demanding passwords was so outrageous that it couldn’t be very common. What kind of company or organization would actually do this? I thought it was a fringe practice done by a few small companies without much awareness of privacy law.
But Bradley Shear, an attorney who has focused extensively on the issue, opened my eyes to the fact that the practice is much more prevalent than I had imagined, and it is an issue that has very important implications as we move more of our personal data to the Cloud.
The Widespread Hunger for Access
Employers are not the only ones demanding social media passwords – schools are doing so too, especially athletic departments in higher education, many of which engage in extensive monitoring of the online activities of student athletes. Some require students to turn over passwords, install special software and apps, or friend coaches on Facebook and other sites. According to an article in USA Today: “As a condition of participating in sports, the schools require athletes to agree to monitoring software being placed on their social media accounts. This software emails alerts to coaches whenever athletes use a word that could embarrass the student, the university or tarnish their images on services such as Twitter, Facebook, YouTube and MySpace.”
Not only are colleges and universities engaging in the practice, but K-12 schools are doing so as well. An MSNBC article discusses a parent’s outrage over school officials demanding access to her 13-year-old daughter’s Facebook account. According to the mother, “The whole family is exposed in this. . . . Some families communicate through Facebook. What if her aunt was going through a divorce or had an illness? And now there’s these anonymous people reading through this information.”
In addition to private sector employers and schools, public sector employers such as state government agencies are demanding access to online accounts. According to another MSNBC article: “In Maryland, job seekers applying to the state’s Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through posts, friends, photos and anything else that might be found behind the privacy wall.”
posted by Daniel Solove
The privacy symposium issue of the Harvard Law Review is hot off the presses. Here are the articles:
PRIVACY AND TECHNOLOGY
Introduction: Privacy Self-Management and the Consent Dilemma
Daniel J. Solove
What Privacy Is For
Julie E. Cohen
The Dangers of Surveillance
Neil M. Richards
The EU-U.S. Privacy Collision: A Turn to Institutions and Procedures
Paul M. Schwartz
Toward a Positive Theory of Privacy Law
Lior Jacob Strahilevitz
posted by Daniel Solove
I’m pleased to share with you my new article in Harvard Law Review entitled Privacy Self-Management and the Consent Dilemma, 126 Harvard Law Review 1880 (2013). You can download it for free on SSRN. This is a short piece (24 pages) so you can read it in one sitting.
Here are some key points in the Article:
1. The current regulatory approach for protecting privacy involves what I refer to as “privacy self-management” – the law provides people with a set of rights to enable them to decide how to weigh the costs and benefits of the collection, use, or disclosure of their information. People’s consent legitimizes nearly any form of collection, use, and disclosure of personal data. Unfortunately, privacy self-management is being asked to do work beyond its capabilities. Privacy self-management does not provide meaningful control over personal data.
2. Empirical and social science research has undermined key assumptions about how people make decisions regarding their data, assumptions that underpin and legitimize the privacy self-management model.
3. People cannot appropriately self-manage their privacy due to a series of structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses.
4. Privacy self-management addresses privacy in a series of isolated transactions guided by particular individuals. Privacy costs and benefits, however, are more appropriately assessed cumulatively and holistically — not merely at the individual level.
5. In order to advance, privacy law and policy must confront a complex and confounding dilemma with consent. Consent to collection, use, and disclosure of personal data is often not meaningful, and the most apparent solution – paternalistic measures – even more directly denies people the freedom to make consensual choices about their data.
6. The way forward involves (1) developing a coherent approach to consent, one that accounts for the social science discoveries about how people make decisions about personal data; (2) recognizing that people can engage in privacy self-management only selectively; (3) adjusting privacy law’s timing to focus on downstream uses; and (4) developing more substantive privacy rules.
The full article is here.
Cross-posted on LinkedIn.
posted by Danielle Citron
As All Things Digital’s Kara Swisher reports, LivingSocial experienced a significant hack the other day: over 50 million users’ email addresses, dates of birth, and encrypted passwords were leaked into the hands of Russian hackers (or so it seems). This hack comes on the heels of data breaches at LinkedIn and Zappos. That the passwords were encrypted just means that users had better change their passwords, and fast, because in time the encryption can be broken. A few years ago, I blogged about leaked personal records passing the 500 million mark. Hundreds of millions seems like child’s play today.
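Why does “encrypted” offer only temporary comfort? Breached password stores are typically hashed rather than encrypted (LivingSocial reportedly used salted SHA-1), and a hash protects a password only until someone guesses it. A minimal sketch of the dictionary attack that makes prompt password changes the right advice (the salt, hash, and wordlist below are all invented):

```python
import hashlib

def hash_password(password: str, salt: str) -> str:
    """Salted SHA-1, roughly the scheme LivingSocial reportedly used."""
    return hashlib.sha1((salt + password).encode()).hexdigest()

# Invented leaked record: the attacker holds both the salt and the hash.
salt = "x9f2"
leaked_hash = hash_password("sunshine1", salt)

# Dictionary attack: hash common passwords until one matches the leak.
wordlist = ["123456", "password", "letmein", "sunshine1"]
for guess in wordlist:
    if hash_password(guess, salt) == leaked_hash:
        print("cracked:", guess)
        break
```

Fast hashes like SHA-1 can be tried billions of times per second on commodity hardware, so any password on a common wordlist falls quickly; that, not any flaw in the “encryption” itself, is why speed matters when changing passwords.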
This raises some important questions about what we mean when we talk about personally identifiable information (PII). Paul Schwartz and my co-blogger Dan Solove have done terrific work helping legislators devise meaningful definitions of PII in a world of reidentification. Paul Ohm is currently working on an important project providing a coherent account of sensitive information in the context of current data protection laws. Are someone’s password and date of birth sensitive information deserving special privacy protection? Beyond the obvious health, credit, and financial information, what other sorts of data do we consider sensitive, and why? Answers to these questions are crucial to companies formulating best practices, to the FTC as it continues its robust enforcement of privacy promises and its pursuit of deceptive practices, and to legislators considering private-sector privacy regulation of data brokers, as in Senator John Rockefeller’s current efforts.
posted by Ryan Calo
As if we don’t have enough to worry about, now there’s spyware for your brain. Or, there could be. Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers.
posted by Deven Desai
Just as Neil Richards’s The Perils of Social Reading (101 Georgetown Law Journal 689 (2013)) is out in final form, Netflix has released its new social sharing features in partnership with that privacy protector, Facebook. Not that working with Google, Apple, or Microsoft would be much better. There may be things I am missing, but I don’t see how turning on this feature is wise, given that it seems to require you to remember not to share, making sharing a bit leakier than you may want.
Apparently you have to connect your Netflix account to Facebook to get the feature to work. The way it works after that link is made poses problems.
According to SlashGear, two rows appear. One, called “Friends’ Favorites,” tells you just that. Now, consider that the algorithm works in part by your rating movies. So if you want to signal that odd documentaries, disturbing art movies, or guilty pleasures (these may range from The Hangover to Twilight) are of interest, you should rate them highly. If you turn this on, are all your old ratings shared? And cool! Now everyone knows that you think March of the Penguins and Die Hard are 5 stars. The other row:
is called “Watched By Your Friends,” and it consists of movies and shows that your friends have recently watched. It provides a list of all your Facebook friends who are on Netflix, and you can cycle through individual friends to see what they recently watched. This is an unfiltered list, meaning that it shows all the movies and TV shows that your friends have agreed to share.
Of course, you can control what you share and what you don’t want to share, so if there’s a movie or TV show that you watch, but you don’t want to share it with your friends, you can simply click on the “Don’t Share This” button under each item. Netflix is rolling out the feature over the next couple of days, and the company says that all US members will have access to Netflix social by the end of the week.
Right. So imagine you forget that your viewing habits are broadcast. And what about Roku or other streaming devices? How does one ensure that the “Don’t Share” button is used before word goes out that you watched one, two, or three movies about drugs, sex, gay culture, how great guns are, etc.?
As Richards puts it, “the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy—the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition.” So too for video and really any information consumption.
posted by Danielle Citron
Privacy leading lights Dan Solove and Paul Schwartz have recently released the 2013 edition of Privacy Law Fundamentals, a must-have for privacy practitioners, scholars, students, and really anyone who cares about privacy.
Privacy Law Fundamentals is an essential primer on the state of privacy law, capturing up-to-date developments in legislation, FTC enforcement actions, and cases here and abroad. Chief Privacy Officers like Intel’s David Hoffman and renowned privacy practitioners like Hogan’s Chris Wolf and Covington’s Kurt Wimmer agree that Privacy Law Fundamentals is an “essential” and “authoritative guide” to privacy law, compact and incredibly useful. For those of you who know Dan and Paul, their work is not only incredibly wise and helpful but also dispensed in person with serious humor. Check out this YouTube video, “Privacy Law in 60 Seconds,” to see what I mean. I think that Psy may have a run for his money on making us smile.
posted by Danielle Citron
Privacy leading light Alan Westin passed away this week. Almost fifty years ago, Westin began his trailblazing work helping us understand the dangers of surveillance technologies. Building on the work that Warren and Brandeis started in “The Right to Privacy” in 1890, Westin published Privacy and Freedom in 1967. A year later, he took his normative case for privacy to the trenches. As Director of the National Academy of Sciences’ Computer Science and Engineering Board, he and a team of researchers studied governmental, commercial, and private organizations using databases to amass, use, and share personal information. Westin’s team interviewed 55 organizations, from local law enforcement and federal agencies like the Social Security Administration to direct-mail companies like R.L. Polk (a predecessor to our behavioral advertising industry).
The 1972 report, Databanks in a Free Society: Computers, Record-Keeping, and Privacy, is a masterpiece. With 14 case studies, the report made clear the extent to which public and private entities had been building substantial computerized dossiers of people’s activities and the risks to economic livelihood, reputation, and self-determination. It demonstrated the unrestrained nature of data collection and sharing, with driver’s license bureaus selling personal information to direct-mail companies and law enforcement sharing arrest records with local and state agencies for employment and licensing matters. Surely influenced by Westin’s earlier work, some data collectors, like the Kansas City Police Department, talked to the team about privacy protections, suggesting the need for verification of source documents, audit logs, passwords, and discipline for improper use of data. Westin’s report called for data collectors to adopt ethical procedures for data collection and sharing, including procedural protections such as notice and chance to correct inaccurate or incomplete information, data minimization requirements, and sharing limits.
Westin’s work shaped the debate about the right to privacy at the dawn of our surveillance era. His change-making agenda was front and center in the Privacy Act of 1974. In the early 1970s, nearly fifty congressional hearings and reports investigated a range of data privacy issues, including the use of census records, access to criminal history records, employers’ use of lie detector tests, and the military and law enforcement’s monitoring of political dissidents. State and federal executives spearheaded investigations of surveillance technologies, including a proposed National Databank Center.
Just as public discourse was consumed with the “data-bank problem,” the courts began to pay attention. In Whalen v. Roe, a 1977 case involving New York’s mandatory collection of prescription drug records, the Supreme Court strongly suggested that the Constitution contains a right to information privacy based on substantive due process. Although it held that the state prescription drug database did not violate the constitutional right to information privacy because it was adequately secured, the Court recognized an individual’s interest in avoiding disclosure of certain kinds of personal information. Writing for the Court, Justice Stevens noted the “threat to privacy implicit in the accumulation of vast amounts of personal information in computerized data banks or other massive government files.” In a concurring opinion, Justice Brennan warned that the “central storage and easy accessibility of computerized data vastly increase the potential for abuse of that information, and I am not prepared to say that future developments will not demonstrate the necessity of some curb on such technology.”
What Westin underscored so long ago, and what Whalen v. Roe signaled, is that technologies used for broad, indiscriminate, and intrusive public surveillance threaten liberty interests. Last term, in United States v. Jones, the Supreme Court signaled that these concerns have Fourth Amendment salience. Concurring opinions indicate that at least five justices have serious Fourth Amendment concerns about law enforcement’s growing surveillance capabilities. Those justices insisted that citizens have reasonable expectations of privacy in substantial quantities of personal information. In our article “The Right to Quantitative Privacy,” David Gray and I seek to carry forward Westin’s insights (and those of Brandeis and Warren before him) into the Fourth Amendment arena, as the five concurring justices in Jones suggested. More on that to come, but for now, let’s thank Alan Westin for his extraordinary work on the “computerized databanks” problem.
posted by Danielle Citron
The ethos of our age is the more data, the better, and nowhere is that more true than in the data-broker industry. Data-broker databases contain dossiers on hundreds of millions of individuals, including their Social Security numbers, property records, criminal-justice records, car rentals, credit reports, postal and shipping records, utility bills, gaming, insurance claims, divorce records, social network profiles, online activity, and drug- and food-store records. According to FTC Chairman Jon Leibowitz, companies like Acxiom are the “invisible cyberazzi” that follow us around everywhere we go on- and offline, or, as Chris Hoofnagle has aptly called them, “Little Brothers” helping Big Brother and industry.

Data brokers are largely unbridled by regulation. The FTC’s enforcement authority over data brokers stems from the Fair Credit Reporting Act (FCRA), which was passed in 1970 to protect the privacy and accuracy of information included in credit reports. FCRA requires consumer reporting agencies to use reasonable procedures to ensure that entities to which they disclose sensitive consumer data have a permissible purpose for receiving that data. Under FCRA, employers are required to inform individuals about intended adverse actions against them based on their credit reports. Individuals get a chance to explain inaccurate or incomplete information and to contact credit-reporting agencies to dispute the information in the hopes of getting it corrected.

During the past two years, the FTC has gone after a social media intelligence company and an online people search engine on the grounds that they constituted consumer reporting agencies subject to FCRA. In June 2012, the FTC settled charges against Spokeo, an online service that compiles and sells digital dossiers on consumers to human resource professionals, job recruiters, and other businesses. Spokeo assembles consumer data from on- and offline sources, including social media sites, to create searchable consumer profiles. The profiles include an individual’s full name, physical address, phone number, age range, email address, hobbies, photos, ethnicity, religion, and social network activity. The FTC alleged that Spokeo failed to adhere to FCRA, including its obligation to ensure the accuracy of consumer reports. Ultimately, it obtained an $800,000 settlement with the company. That’s helpful, to be sure, but given the FTC’s limited resources it may not lead to more accurate dossiers. (It also may mean that employers will keep online intelligence in-house, putting their use of unreliable online information outside the reach of FCRA, as my co-blogger Frank Pasquale wrote so ably about in The Offensive Internet: Speech, Privacy, and Reputation.)

More recently, the FTC issued orders requiring nine data brokerage companies to provide the agency with information about how they collect and use data about consumers. The agency will use the information to study privacy practices in the data broker industry. The nine data brokers receiving orders from the FTC were (1) Acxiom, (2) Corelogic, (3) Datalogix, (4) eBureau, (5) ID Analytics, (6) Intelius, (7) Peekyou, (8) Rapleaf, and (9) Recorded Future.
In its press release, the FTC explained that it is seeking details about “the nature and sources of the consumer information the data brokers collect; how they use, maintain, and disseminate the information; and the extent to which the data brokers allow consumers to access and correct their information or to opt out of having their personal information sold.” The FTC called on the data broker industry to improve the transparency of its practices as part of a Commission report, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers. FTC Commissioner Julie Brill has been a tireless advocate for greater oversight of data brokers; here’s hoping that her efforts and those of her agency produce important reforms.
posted by Danielle Citron
Identity theft is now so common that we can joke about it.
Or, as Alan Alda’s character in Woody Allen’s Crimes and Misdemeanors says, “comedy is tragedy plus time.” Time to transform tragedy into comedy, indeed. Scanning the Privacy Rights Clearinghouse database demonstrates that reported data breaches are a daily occurrence. Since January 1, 2013, private and public entities have reported over 20 major data breaches. Included on the list were hospitals, universities, and businesses. Sometimes, the most vulnerable are targeted. For instance, on January 8, 2013, a dishonest employee of the Texas Department of Health and Human Services was arrested on suspicion of misusing client information to apply for credit cards and to receive medical care under clients’ names. Bad enough that automated systems erroneously take recipients of public benefits off the rolls, as my work on Technological Due Process explores. Now those meant to help them are destroying their medical and credit histories as well.
We have had over 600 million records breached since 2005, from approximately 3,500 reported data breaches. Of course, those figures reflect only breaches officially reported, likely because of state data breach laws, whose requirements vary and which leave much reporting discretion to entities that have little incentive to err on the side of disclosure unless legally required to do so. So the bad news is that identity theft is prevalent, but at least we can laugh about it.
posted by Danielle Citron
Why leave the safe harbor provision intact for site operators, search engines, and other online service providers that do not attempt to block offensive, indecent, or illegal activity, but that by no means encourage, or are principally used to host, illicit material as cyber cesspools do? If we retain that immunity, some harassment and stalking — including revenge porn — will remain online, because site operators hosting such material cannot be legally required to take it down. Why countenance that possibility?
Because of the risk of collateral censorship—blocking or filtering speech to avoid potential liability even if the speech is legally protected. In what is often called the heckler’s veto, people may abuse their ability to complain, using the threat of liability to ensure that site operators block or remove posts for no good reason. They might complain because they disagree with the political views expressed or dislike the posters’ disparaging tone. Providers would be especially inclined to remove content in the face of frivolous complaints where they have little interest in keeping up the complained-about content. Take, as an illustration, the popular newsgathering site Digg. If faced with legal liability, it might automatically take down posts even though they involve protected speech. The newsgathering site lacks a vested interest in keeping up any particular post, given its overall goal of crowdsourcing vast quantities of news that people like. Given the scale of its operation, it may lack the resources to hire enough people to cull through complaints and weed out frivolous ones.
Sites like Digg differ from revenge porn sites and other cyber cesspools, whose operators have an incentive to refrain from removing complained-about content such as revenge porn and the like. Cyber cesspools obtain economic benefits by hosting harassing material, which may make it worth the risk to continue doing so. Collateral censorship is far less likely—because it is in their economic interest to keep up destructive material. As Slate reporter and cyberbullying expert Emily Bazelon has remarked, concerns about the heckler’s veto get more deference than they should in the context of revenge porn sites and other cyber cesspools. (Read Bazelon’s important new book Sticks and Stones: Defeating the Culture of Bullying and Rediscovering the Power of Character and Empathy.) Those concerns do not justify immunizing cyber cesspool operators from liability.
Let’s be clear about what this would mean. Dispensing with cyber cesspools’ immunity would not mean that they would be strictly liable for user-generated content. A legal theory would need to sanction remedies against them.
posted by Daniel Solove
For my privacy and security training company, TeachPrivacy, I recently created this 2-minute comical cartoon vignette to teach about the importance of privacy and apps. No login is required. Click the link above or the image below to see the video.