Archive for the ‘Privacy (ID Theft)’ Category
posted by Daniel Solove
One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite more than fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States – more so than nearly any privacy statute and any common law tort.
In this article, we contend that the FTC’s privacy jurisprudence is the functional equivalent to a body of common law, and we examine it as such. The article explores the following issues:
- Why did the FTC, and not contract law, come to dominate the enforcement of privacy policies?
- Why, despite more than 15 years of FTC enforcement, have there been hardly any resulting judicial decisions?
- Why has FTC enforcement had such a profound effect on company behavior given the very small penalties?
- Can FTC jurisprudence evolve into a comprehensive regulatory regime for privacy?
The claims we make in this article include:
- The common view of FTC jurisprudence as thin — as merely enforcing privacy promises — is misguided. The FTC’s privacy jurisprudence is actually quite thick, and it has come to serve as the functional equivalent to a body of common law.
- The foundations exist in FTC jurisprudence to develop a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, that extends far beyond privacy policies, and that involves substantive rules that exist independently from a company’s privacy representations.
August 20, 2013 at 12:02 pm Posted in: Administrative Law, Articles and Books, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Technology
posted by Danielle Citron
Professor Margaret Hu’s important new article, “Biometric ID Cybersurveillance” (Indiana Law Journal), carefully and chillingly lays out federal and state governments’ increasing use of biometrics for identification and other purposes. These efforts are poised to lead to a national biometric ID with centralized databases of our iris, facial, and fingerprint data. Such multimodal biometric IDs ostensibly provide greater security against fraud than our current de facto identifier, the Social Security number. As Professor Hu lays out, biometrics are, or soon will be, gatekeepers to the right to vote, work, fly, drive, and cross our borders. Professor Hu explains that the FBI’s Next Generation Identification project will institute:
a comprehensive, centralized, and technologically interoperable biometric database that spans across military and national security agencies, as well as all other state and federal government agencies. Once complete, NGI will strive to centralize whatever biometric data is available on all citizens and noncitizens in the United States and abroad, including information on fingerprints, DNA, iris scans, voice recognition, and facial recognition data captured through digitalized photos, such as U.S. passport photos and REAL ID driver’s licenses. The NGI Interstate Photo System, for instance, aims to aggregate digital photos from not only federal, state, and local law enforcement, but also digital photos from private businesses, social networking sites, government agencies, and foreign and international entities, as well as acquaintances, friends, and family members.
Such a comprehensive biometric database would surely be accessed and used by our network of fusion centers and other hubs of our domestic surveillance apparatus that Frank Pasquale and I wrote about here.
Biometric ID cybersurveillance might be used to assign risk assessment scores and to take action based on those scores. In a chilling passage, Professor Hu describes one such proposed program:
FAST is currently under testing by DHS and has been described in press reports as a “precrime” program. If implemented, FAST will purportedly rely upon complex statistical algorithms that can aggregate data from multiple databases in an attempt to “predict” future criminal or terrorist acts, most likely through stealth cybersurveillance and covert data monitoring of ordinary citizens. The FAST program purports to assess whether an individual might pose a “precrime” threat through the capture of a range of data, including biometric data. In other words, FAST attempts to infer the security threat risk of future criminals and terrorists through data analysis.
Under FAST, biometric-based physiological and behavioral cues are captured through the following types of biometric data: body and eye movements, eye blink rate and pupil variation, body heat changes, and breathing patterns. Biometric-based linguistic cues include the capture of the following types of biometric data: voice pitch changes, alterations in rhythm, and changes in intonations of speech. Documents released by DHS indicate that individuals could be arrested and face other serious consequences based upon statistical algorithms and predictive analytical assessments. Specifically, projected consequences of FAST ‘can range from none to being temporarily detained to deportation, prison, or death.’
Data mining of our biometrics to predict criminal and terrorist activity, which is then used as a basis for government decision making about our liberty? If this comes to fruition, technological due process would certainly be required.
Professor Hu calls for the Fourth Amendment to evolve to meet the challenge of 24/7 biometric surveillance technologies. David Gray and I hope to answer that call in our article “The Right to Quantitative Privacy” (forthcoming Minnesota Law Review). Rather than asking how much information is gathered in a particular case, we argue that Fourth Amendment interests in quantitative privacy demand that we focus on how information is gathered. In our view, the threshold Fourth Amendment question should be whether a technology has the capacity to facilitate broad and indiscriminate surveillance that intrudes upon reasonable expectations of quantitative privacy by raising the specter of a surveillance state if deployment and use of that technology is left to the unfettered discretion of government. If it does not, then the Fourth Amendment imposes no limitations on law enforcement’s use of that technology, regardless of how much information officers gather against a particular target in a particular case. By contrast, if it does threaten reasonable expectations of quantitative privacy, then the government’s use of that technology amounts to a “search,” and must be subjected to the crucible of Fourth Amendment reasonableness, including judicially enforced constraints on law enforcement’s discretion.
posted by Ryan Calo
As if we don’t have enough to worry about, now there’s spyware for your brain. Or, there could be. Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers.
April 14, 2013 at 12:57 am Posted in: Bioethics, Civil Rights, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Technology, Uncategorized
posted by Danielle Citron
Privacy leading lights Dan Solove and Paul Schwartz have recently released the 2013 edition of Privacy Law Fundamentals, a must-have for privacy practitioners, scholars, students, and really anyone who cares about privacy.
Privacy Law Fundamentals is an essential primer on the state of privacy law, capturing up-to-date developments in legislation, FTC enforcement actions, and cases here and abroad. Chief Privacy Officers like Intel’s David Hoffman and renowned privacy practitioners like Hogan’s Chris Wolf and Covington’s Kurt Wimmer agree that Privacy Law Fundamentals is an “essential” and “authoritative guide” to privacy law, compact and incredibly useful. For those of you who know Dan and Paul, their work is not only incredibly wise and helpful but also dispensed in person with serious humor. Check out this YouTube video, “Privacy Law in 60 Seconds,” to see what I mean. Psy may have some competition when it comes to making us smile.
March 8, 2013 at 8:42 am Posted in: Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security)
posted by Dave Hoffman
My co-author Sasha Romanosky asks me to post the following:
I am involved in a research project that examines state laws affecting the flow of personal information in some way. This information could relate to patients, employees, financial or retail customers, or even just individuals. And by “flow” we are interested in laws that affect the collection, use, storage, sale, sharing, disclosure, or even destruction of this information.
For example, some state laws require that companies notify you when your personal information has been hacked, while other state laws require notice if the firm plans to sell your information. In addition, laws in other states restrict the sale of personal health information; enable law enforcement to track cell phone usage without a warrant; or prohibit the collection of a customer’s zip code during a credit card purchase.

Given the huge variation among states in their information laws, we would like to ask readers of Concurring Opinions to help us collect examples of such laws. You are welcome to either post a response to this blog entry or reply to me directly at sromanos at cmu dot edu.
Sasha is a good guy, and a really careful researcher. Let’s help him!
September 10, 2012 at 9:58 am Posted in: Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security)
posted by Deven Desai
Researcher Mark Nixon at the University of Southampton “believes that using photos of individual ears matched against a comparative database could be as distinctive a form of identification as fingerprints.”
According to the University’s news site the claim is that: “Using ears for identification has clear advantages over other kinds of biometric identification, as, once developed, the ear changes little throughout a person’s life. This provides a cradle-to-grave method of identification.”
OK, so they are not taking ears. The method involves cameras, scans, and techniques you may know from facial recognition. This article has a little more detail. As an A.I. system it is probably pretty cool. Still, it sounds odd enough that I wonder whether this work has considered the whole piercing and large-gauge trend. I can imagine security that now requires removing ear decorations regardless of what they are made of. Also, if ear scans are really used for less invasive ID, will wearing earmuffs be cause to think someone is hiding something, or should we remember that folks get cold? For the sci-fi inclined, I bet that a movie will entail cutting off an ear for identification, just as past films have involved cutting off fingers and hands to fake an identity.
posted by Danielle Citron
In a piece entitled “You for Sale,” Sunday’s New York Times raised important concerns about the data broker industry. Let us add some more perils and seek to reframe the debate about how to regulate Big Data.
Data brokers like Acxiom (and countless others) collect and mine a mind-boggling array of data about us, including Social Security numbers, property records, public-health data, criminal justice sources, car rentals, credit reports, postal and shipping records, utility bills, gaming, insurance claims, divorce records, online musings, browsing habits culled by behavioral advertisers, and the gold mine of drug- and food-store records. They scrape our social network activity, which with a little mining can reveal our undisclosed sexual preferences, religious affiliations, political views, and other sensitive information. They may integrate video footage of our offline shopping. With the help of facial-recognition software, data mining algorithms factor into our dossiers the over-the-counter medicines we pick up, the books we browse, and the pesticides we contemplate buying for our backyards. Our social media influence scores may make their way into the mix. Companies, such as Klout, measure our social media influence, usually on a scale from one to 100. They use variables like the number of our social media followers, frequency of updates, and number of likes, retweets, and shares. What’s being tracked and analyzed about our online and offline behavior is accelerating – with no sign of slowing down and no assured way to find out.
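To make the influence-scoring idea concrete, here is a minimal sketch of how a Klout-style score might be computed from the variables mentioned above. Every weight, the log-scaling, and the squashing function are my own illustrative assumptions; Klout's actual algorithm was proprietary and far more elaborate.

```python
import math

def influence_score(followers, updates_per_week, likes, retweets, shares):
    """Toy social media influence score on a 1-100 scale.

    The weights below are hypothetical choices for illustration,
    not any real scoring company's formula.
    """
    # Log-scale raw counts so one celebrity account doesn't dwarf everyone else.
    reach = math.log1p(followers)
    activity = math.log1p(updates_per_week)
    engagement = math.log1p(likes + 2 * retweets + 3 * shares)

    raw = 0.4 * reach + 0.2 * activity + 0.4 * engagement
    # Squash the unbounded raw score into the 1-100 range.
    return max(1, min(100, round(raw * 100 / (raw + 5))))
```

On these made-up weights, a dormant account scores 1, no account can exceed 100, and more followers or engagement always moves the score up — which is roughly the behavior such scores are described as having.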
As the Times piece notes, businesses buy data-broker dossiers to classify those consumers worth pursuing and those worth ignoring (so-called “waste”). More often those already in an advantaged position get better deals and gifts while the less advantaged get nothing. The Times piece rightly raised concerns about the growing inequality that such use of Big Data produces. But far more is at stake.
Government is a major client for data brokers. More than 70 fusion centers mine data-broker dossiers to detect crimes, “threats,” and “hazards.” Individuals are routinely flagged as “threats.” Such classifications make their way into the “information-sharing environment,” with access provided to local, state, and federal agencies as well as private-sector partners. Troublingly, data-broker dossiers have no quality assurance. They may include incomplete, misleading, and false data. Suppose a data broker has amassed a profile on Leslie McCann. Social media scraped, information compiled, and videos scanned about “Leslie McCann” might include information about jazz artist “Les McCann” as well as information about a criminal with a similar name and age. Inaccurate Big Data has led to individuals’ erroneous inclusion on watch lists, denial of immigration applications, and loss of public benefits.
June 19, 2012 at 5:08 pm Posted in: Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security)
posted by Deven Desai
The Boston Phoenix has an article about what Facebook coughs up when a subpoena is sent to the company. The paper came across the material as it worked on an article called Hunting the Craigslist Killer. The issues that come to mind for me are:
1. Privacy after death? In my article Property, Persona, and Preservation, which uses the question of who owns email after death, I argue that privacy after death isn’t tenable. The release of information after someone dies (from ZDNet: “the man committed suicide, which meant the police didn’t care if the Facebook document was published elsewhere, after robbing two women and murdering a third”) brings up a question Dan Solove and I have debated: what about those connected to the dead person? The facts here matter.
2. What are reasons to redact or not release information? Key facts about redaction and public records complicate the question of death and privacy. I’m assuming the person has no privacy after death. But his or her papers may reveal information about those connected to the dead person. In this case the police did not redact, but the paper did. Sort of.
This document was publicly released by Boston Police as part of the case file. In other case documents, the police have clearly redacted sensitive information. And while the police were evidently comfortable releasing Markoff’s unredacted Facebook subpoena, we weren’t. Markoff may be dead, but the very-much-alive friends in his friend list were not subpoenaed, and yet their full names and Facebook ID’s were part of the document. So we took the additional step of redacting as much identifying information as we could — knowing that any redaction we performed would be imperfect, but believing that there’s a strong argument for distributing this, not only for its value in illustrating the Markoff case, but as a rare window into the shadowy process by which Facebook deals with law enforcement.
As the comments noted and the explanation admits, the IDs and other information of the living are arguably in greater need of protection. It may be that the police needed all the information for their case, but why release it to the public?
Obvious Closing: As we put more into the world, it will come back in ways we had not imagined. I doubt that bright-line rules will ever work in this space. But it seems to me that some sort of best practices informed by research (think Lior Strahilevitz’s A Social Networks Theory of Privacy) could allow for reasonable, useful privacy practices. The hardest part for law and society in general is that this area (information-related law) is not likely to be stable for some time. That said, the insane early domain name law (yes, someone could think that megacorpsucks.com is sponsored by megacorp) corrected itself in about ten years. Perhaps privacy and information practices will reach an equilibrium that allows the law to stabilize. Until then, practices, businesses, science, and the law will twirl around each other as society sorts out what balance makes sense (until something messes with that moment).
posted by Dave Hoffman
Alessandro Acquisti, Sasha Romanosky, and I have a new draft up on SSRN, Empirical Analysis of Data Breach Litigation. Sasha, who’s really led the charge on this paper, has presented it at many venues, but this draft is much improved (and is the first public version). From the abstract:
In recent years, a large number of data breaches have resulted in lawsuits in which individuals seek redress for alleged harm resulting from an organization losing or compromising their personal information. Currently, however, very little is known about those lawsuits. Which types of breaches are litigated, which are not? Which lawsuits settle, or are dismissed? Using a unique database of manually-collected lawsuits from PACER, we analyze the court dockets of over 230 federal data breach lawsuits from 2000 to 2010. We use binary outcome regressions to investigate two research questions: Which data breaches are being litigated in federal court? Which data breach lawsuits are settling? Our results suggest that the odds of a firm being sued in federal court are 3.5 times greater when individuals suffer financial harm, but over 6 times lower when the firm provides free credit monitoring following the breach. We also find that defendants settle 30% more often when plaintiffs allege financial loss from a data breach, or when faced with a certified class action suit. While the compromise of financial information appears to lead to more federal litigation, it does not seem to increase a plaintiff’s chance of a settlement. Instead, compromise of medical information is more strongly correlated with settlement.
A few thoughts follow after the jump.
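For readers less familiar with binary outcome regressions: the “odds 3.5 times greater” and “over 6 times lower” figures in the abstract are odds ratios, i.e., exponentiated logit coefficients. A quick sketch of the arithmetic — the coefficient values below are reverse-engineered from the reported ratios purely for illustration, not the paper’s actual estimates:

```python
import math

def odds_ratio(beta):
    """In a logit model, a coefficient beta corresponds to an odds ratio of exp(beta)."""
    return math.exp(beta)

def combined_odds(base_odds, *betas):
    """Baseline odds multiplied by the odds ratio of each active covariate."""
    odds = base_odds
    for beta in betas:
        odds *= math.exp(beta)
    return odds

# Illustrative coefficients chosen only to match the reported odds ratios.
beta_financial_harm = math.log(3.5)      # odds of suit ~3.5x greater
beta_credit_monitoring = -math.log(6.0)  # odds of suit ~6x lower
```

So on this toy reading, a breach involving both financial harm and free credit monitoring would face odds of suit about 3.5/6 ≈ 0.58 times the baseline — the two effects multiply in odds space.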
February 19, 2012 at 1:33 pm Posted in: Economic Analysis of Law, Empirical Analysis of Law, Privacy, Privacy (Consumer Privacy), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical)
posted by Daniel Solove
Here’s a list of notable privacy books published in 2011.
|Saul Levmore & Martha Nussbaum, eds., The Offensive Internet (Harvard 2011)
This is a great collection of essays about the clash of free speech and privacy online. I have a book chapter in this volume along with Martha Nussbaum, Cass Sunstein, Brian Leiter, Danielle Citron, Frank Pasquale, Geoffrey Stone, and many others.
|Daniel J. Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security (Yale 2011)
Nothing to Hide “succinctly and persuasively debunks the arguments that have contributed to privacy’s demise, including the canard that if you have nothing to hide, you have nothing to fear from surveillance. Privacy, he reminds us, is an essential aspect of human existence, and of a healthy liberal democracy—a right that protects the innocent, not just the guilty.” — David Cole, New York Review of Books
|Jeff Jarvis, Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live (Simon & Schuster 2011)
I strongly disagree with a lot of what Jarvis says, but the book is certainly provocative and engaging.
|Daniel J. Solove & Paul M. Schwartz, Privacy Law Fundamentals (IAPP 2011)
“A key resource for busy professional practitioners. Solove and Schwartz have succeeded in distilling the fundamentals of privacy law in a manner accessible to a broad audience.” – Jules Polonetsky, Future of Privacy Forum
|Eli Pariser, The Filter Bubble (Penguin 2011)
An interesting critique of the personalization of the Internet. We often don’t see the Internet directly, but through tinted goggles designed by others who determine what we want to see.
|Siva Vaidhyanathan, The Googlization of Everything (U. California 2011)
A vigorous critique of Google and other companies that shape the Internet. With regard to privacy, Vaidhyanathan explains how social media and other companies encourage people’s sharing of information through their architecture — and often confound people in their ability to control their reputation.
|Susan Landau, Surveillance or Security? The Risk Posed by New Wiretapping Technologies (MIT 2011)
A compelling argument for how designing technologies around surveillance capabilities will undermine rather than promote security.
|Kevin Mitnick, Ghost in the Wires (Little Brown 2011)
A fascinating account of the exploits of Kevin Mitnick, the famous ex-hacker who inspired War Games. His tales are quite engaging, and he demonstrates that hacking is often not just about technical wizardry but old-fashioned con-artistry.
|Matt Ivester, lol . . . OMG! (CreateSpace 2011)
Ivester created Juicy Campus, the notorious college gossip website. After the site’s demise, Ivester changed his views about online gossip, recognizing the problems with Juicy Campus and the harms it caused. In this book, he offers thoughtful advice for students about what they post online.
|Joseph Epstein, Gossip: The Untrivial Pursuit (Houghton Mifflin Harcourt 2011)
A short engaging book that is filled with interesting stories and quotes about gossip. Highly literate, this book aims to expose gossip’s bad and good sides, and how new media are transforming gossip in troublesome ways.
|Anita Allen, Unpopular Privacy (Oxford 2011)
My blurb: “We live in a world of increasing exposure, and privacy is increasingly imperiled by the torrent of information being released online. In this powerful book, Anita Allen examines when the law should mandate privacy and when it shouldn’t. With nuance and thoughtfulness, Allen bravely tackles some of the toughest questions about privacy law — those involving the appropriate level of legal paternalism. Unpopular Privacy is lively, engaging, and provocative. It is filled with vivid examples, complex and fascinating issues, and thought-provoking ideas.”
|Frederick Lane, Cybertraps for the Young (NTI Upstream 2011)
A great overview of the various problems the Internet poses for children such as cyberbullying and sexting. This book is a very accessible overview for parents.
|Clare Sullivan, Digital Identity (University of Adelaide Press 2011)
Australian scholar Clare Sullivan explores the rise of “digital identity,” which is used for engaging in various transactions. Instead of arguing against systematized identification, she sees the future as heading inevitably in that direction and proposes a robust set of rights individuals should have over such identities. This is a thoughtful and pragmatic book, with a great discussion of Australian, UK, and EU law.
December 29, 2011 at 11:12 pm Posted in: Articles and Books, Book Reviews, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical)
posted by Daniel Solove
The new edition of my casebook, Information Privacy Law (4th edition) (with Paul M. Schwartz) is hot off the presses. And there’s a new edition of my casebook, Privacy, Information, and Technology (3rd edition) (with Paul M. Schwartz). Copies should be sent out to adopters very soon. If you’re interested in adopting the book and are having any difficulties getting a hold of a copy, please let me know.
You also might be interested in my concise guide to privacy law, also with Paul Schwartz, entitled Privacy Law Fundamentals. This short book was published earlier this year. You can order it on Amazon or via IAPP. It might make for a useful reference tool for students.
December 13, 2011 at 1:31 am Posted in: Articles and Books, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security)
posted by Daniel Solove
Professor Paul Schwartz (Berkeley School of Law) and I recently published a new book, PRIVACY LAW FUNDAMENTALS. This book is a distilled guide to the essential elements of U.S. data privacy law. In an easily-digestible format, the book covers core concepts, key laws, and leading cases.
The book explains the major provisions of all the significant privacy statutes, regulations, and cases, including state privacy laws and FTC enforcement actions. It provides numerous charts and tables summarizing the privacy statutes (e.g., statutes with private rights of action, preemption, and liquidated damages, among other things). Topics covered include: the media, domestic law enforcement, national security, government records, health and genetic data, financial information, consumer data and business records, government access to private sector records, data security law, school privacy, employment privacy, and international privacy law.
This book provides a concise yet comprehensive overview of the field of privacy law for those who do not want to labor through lengthy treatises. Paul and I worked hard to keep it under 200 pages — our goal was to include a lot of information yet do so as succinctly as possible. PRIVACY LAW FUNDAMENTALS is written for those who want a handy reference, a bird’s eye view of the field, or a primer for courses in privacy law.
We wrote this book to be a useful reference for practitioners — ideally, a book they’d keep at the corner of their desks or in their briefcases.
We also think it can serve as a useful study aid for students taking privacy law courses.
You can check it out here, where you can download the table of contents.
March 21, 2011 at 12:44 am Posted in: Articles and Books, Book Reviews, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security)
posted by Sasha Romanosky
In previous posts (here and here), I suggested that analytical modeling can be useful to better understand data breaches, information disclosure laws and the costs to both companies and individuals because of these laws. I’d like to now expand on those ideas.
To be clear, there are many kinds of models and modeling approaches but what I’m interested in is the economic analysis of tort law. For those not aware, this approach is concerned with the cost of accidents to an injurer and a victim and it analyzes how various policy rules (typically regulation or liability) can minimize the sum of those costs.
The way I’ve come to interpret and apply models (e.g., mathematical equations) is to illustrate how agents’ incentives change under different policy interventions. For example, if companies are forced to notify consumers of a data breach, will they be induced to spend more or less money protecting consumer data? Will individuals take more or less care once notified? Will these actions together increase or decrease overall social costs?
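As a deliberately stylized example of the kind of model this means, here is the textbook accident-cost setup: total social cost is the cost of precaution plus expected harm, and the optimal level of care minimizes their sum. The functional forms below (linear precaution cost, exponentially decaying breach probability) and all the numbers are my own illustrative assumptions, not drawn from any of the work discussed here.

```python
import math

def social_cost(care, harm=100.0, unit_cost=2.0, decay=0.1):
    """Total cost = cost of precaution + probability of breach * harm.

    Linear precaution cost and exponentially decaying breach
    probability are illustrative assumptions, not empirical claims.
    """
    breach_probability = math.exp(-decay * care)
    return unit_cost * care + breach_probability * harm

def optimal_care(harm=100.0, unit_cost=2.0, decay=0.1):
    """Grid-search the care level that minimizes total cost."""
    levels = [c / 10 for c in range(0, 1001)]  # 0.0, 0.1, ..., 100.0
    return min(levels, key=lambda c: social_cost(c, harm, unit_cost, decay))
```

The policy point falls out immediately: if a firm internalizes only part of the harm (say half, because victims bear the rest and cannot recover it), its privately optimal care is lower — in this sketch, `optimal_care(harm=50.0)` is well below `optimal_care(harm=100.0)`. Notification duties and liability rules can then be read as ways of pushing the harm a firm internalizes back toward the social figure.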
posted by Sasha Romanosky
In addition to empirical work on data breaches and breach disclosure laws, I’ve also become very interested in data breach litigation. While plaintiffs have seen very little success with legal actions brought against companies that suffer data breaches, I still believe there is some very interesting empirical work that can be done regarding these lawsuits.
In a recent post, Daniel Solove cited a paper by Andrew Serwin (found here) who described in great detail the legal theories and statutes that plaintiffs use when bringing legal actions against companies that suffer data breaches. It isn’t my purpose to repeat that work, but rather to identify an interesting pattern that appears to have emerged over the past 5 to 10 years of privacy breach litigation. Special thanks to Paul Bond of Reed Smith LLP who first brought this to my attention.
Category 1: You lost my data, now I will sue you.
This first category is characterized by what is classically considered a data breach: plaintiffs suing a company simply because their personally identifiable information (PII) was lost, stolen, or improperly disposed of. For example, ChoicePoint, TJX, Hannaford, Heartland, etc. Plaintiffs claim that this disclosure of data has harmed or will harm them, and that they are justified in seeking relief for actual fraud losses, monitoring costs, future expected loss, or emotional distress. Plaintiffs bring these actions under many kinds of tort and contract theories, but generally lose because they’re unable to prove a harm that’s legally recognized (as we discuss further below). The defining characteristic of this category is that the burden lies with the alleged victims to show they were harmed in a legally meaningful way.
December 13, 2010 at 12:22 pm Posted in: Cyberlaw, Economic Analysis of Law, Empirical Analysis of Law, Legal Theory, Privacy (Consumer Privacy), Privacy (ID Theft), Uncategorized
posted by Sasha Romanosky
Thanks so much to Danielle and Concurring Opinions for inviting me to blog. This is an exciting opportunity and I look forward to sharing my thoughts with you. Hopefully you will find these posts interesting.
There are many policy interventions that legislators can impose to reduce harms caused by one party to another. Two that are very often compared are safety regulations (mandated standards) and liability. They lend themselves well to comparison because they’re generally employed on either side of some harmful event (e.g. data breach or toxic spill): ex ante regulations are applied before the harm, and ex post liability is applied after the harm.
A third approach, one that we might consider ‘sitting between’ regulation and liability, is information disclosure (e.g., data breach disclosure, or security breach notification, laws). I’d like to take a few paragraphs to compare these alternatives with regard to data breaches and privacy harms.
December 6, 2010 at 11:51 am Posted in: Behavioral Law and Economics, Consumer Protection Law, Cyberlaw, Economic Analysis of Law, Legal Theory, Privacy (Consumer Privacy), Privacy (ID Theft), Uncategorized
posted by Daniel Solove
Here’s a list of notable privacy books published in 2010.
This list contains a few books published late in 2009 that I missed on the 2009 list.
Adam D. Moore, Privacy Rights: Moral and Legal Foundations (Penn. St. U. Press 2010)
My blurb: “Privacy Rights is a lucid and compelling examination of the right to privacy. Adam Moore provides a theoretically rich and trenchant account of how to reconcile privacy with competing interests such as free speech, workplace productivity, and security.”
Cass Sunstein, On Rumors (Farrar, Straus and Giroux 2009)
A very short essay on the damage wrought by false online rumors and a discussion of how and why such rumors spiral out of control, such as the phenomena of social cascades and group polarization. It is worth reading, but quite short for a book (only 88 pages of primary text, in a very small volume the size of a paperback).
Stewart Baker, Skating on Stilts: Why We Aren’t Stopping Tomorrow’s Terrorism (Hoover Institution Press 2010)
A provocative argument for stronger security protections and a vigorous attack on privacy. The arguments against privacy are often glib and dismissive, but the book is worth reading for Baker’s extensive personal experience dealing with the issues.
Christena Nippert-Eng, Islands of Privacy (U. Chicago 2010)
A fascinating sociological account of people’s attitudes toward privacy and their behaviors with regard to preserving it. It contains numerous interviews, quoted copiously, of people in their own voices discussing how they conceal their secrets. Engaging and compelling reading.
Hal Niedzviecki, The Peep Diaries: How We’re Learning to Love Watching Ourselves and Our Neighbors (City Lights Press 2009)
This book is an extended essay on self-exposure online. It is filled with many interesting anecdotes. The book has a journalistic style and raises observations and questions more than it proposes solutions or policies. The “notes” at the end consist only of a brief bibliography for each chapter, and there are no indications of which facts in the book came from which particular sources — a pet peeve of mine.
Bill Bryson, At Home: A Short History of Private Life (Doubleday 2010)
An extensive history of the home, which as I’ve explored in some of my own writings, plays an important role in the history of privacy. Bryson’s narrative reads well, but he only supplies a bibliography at the end — no endnotes or indications of the sources of particular facts and details. I find this practice to be quite problematic for a work of history.
Shane Harris, The Watchers: The Rise of America’s Surveillance State (Penguin 2010)
An engaging narrative that chronicles the surveillance and security measures the United States undertook after 9/11. Filled with interesting facts, the book reads like a story.
Robin D. Barnes, Outrageous Invasions: Celebrities’ Private Lives, Media, and the Law (Oxford 2010)
There are some very interesting parts of this book, but it at times seems like a grab bag of topics relating to celebrities and its central argument could use more development. Nevertheless, it is worth reading because it discusses some interesting cases and explores comparative legal perspectives on the issues.
David Kirkpatrick, The Facebook Effect (Simon & Schuster 2010)
A fascinating account of the rise of Facebook. There are times when Kirkpatrick seems too sympathetic to Mark Zuckerberg and Facebook, but overall, this book is illuminating and engaging.
Viktor Mayer-Schonberger, Delete: The Virtue of Forgetting in the Digital Age (Princeton 2009)
An interesting discussion of the “right to be forgotten.” Some of the ground in this book appears to be already well-trodden, but Mayer-Schonberger’s keen insights on data retention and destruction make it a worthy addition to the literature.
December 6, 2010 at 10:33 am Posted in: Articles and Books, Book Reviews, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (National Security)
posted by Daniel Solove
People believe that privacy violations should be punished — and quite stringently. There are interesting survey results in a new report by Chris Hoofnagle, Jennifer King, Su Li, and Joseph Turow, How Different are Young Adults from Older Adults When it Comes to Information Privacy Attitudes and Policies?
The report focuses primarily on comparing the attitudes of the young with older people and concluding that there isn’t much of a divergence. I blogged about it here.
There is other data in the report worth talking about that I fear will be lost in the headlines about how the young are similar to the old. And this data is quite interesting:
“If a company purchases or uses someone’s personal information illegally, about how much—if anything—do you think that company should be fined?”
Respondents were given choices of $100, $500, $1000, $2500, and more than $2500. A majority of people of all ages (69%) said the fine should be greater than $2500.
“Beyond a fine, companies that use a person’s information illegally might be punished in other ways. Which ONE of the following ways to punish companies do you think is most important?”
“The company should be put out of business.” — 18%
“The company should fund efforts to help people protect privacy.” — 38%
“Executives who are responsible should face jail time.” — 35%
“The company should not be punished in any of those ways.” — 3%
“It depends.” — 2%
“Don’t know/refused.” — 4%
posted by Daniel Solove
Professor James Grimmelmann likes to shop at Kohl’s. So much so that he applied for credit at Kohl’s. And he got it.
The problem is that James Grimmelmann didn’t really apply for anything. It was an identity thief.
Grimmelmann was a participant in Chris Hoofnagle‘s study about identity theft. In a really eye-opening paper, Internalizing Identity Theft, 2010 UCLA J. of L. & Tech (forthcoming), Hoofnagle concludes that one of the main reasons identity theft happens is that companies let it happen. It is an economic decision.
Back in 1981, in the famous case involving an accident caused by a defect in the Ford Pinto, it came to light that Ford knew about the design defect but ignored it, calculating that paying damages in lawsuits would cost less than fixing the flaw.
Hoofnagle illustrates that the same phenomenon is happening with identity theft. Companies grant credit carelessly because it is cheap to do so. Much of the losses are sloughed off on victims because the companies aren’t forced to internalize them.
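Hoofnagle's economic logic can be sketched in a few lines of arithmetic. The figures below are hypothetical, invented purely for illustration (none come from his paper): the point is only that when most fraud losses are externalized onto victims, careless credit granting can be cheaper for the grantor than careful screening, and the ranking flips once the grantor must internalize the full loss.

```python
# Hypothetical sketch of the externalized-cost logic Hoofnagle describes.
# All figures are invented for illustration; none come from the paper.

def expected_cost(screening_cost, fraud_rate, loss_per_fraud, internalized_share):
    """Expected cost per credit application, from the grantor's point of view.

    internalized_share: the fraction of each fraud loss the grantor actually
    bears; the remainder falls on victims (i.e., is externalized).
    """
    return screening_cost + fraud_rate * loss_per_fraud * internalized_share

# Careful screening: $10 per application, cutting fraud to 0.1%.
careful = expected_cost(10.00, 0.001, 5000, internalized_share=0.2)
# Careless screening: nearly free, but fraud rises to 1%.
careless = expected_cost(0.50, 0.01, 5000, internalized_share=0.2)

# With only 20% of losses internalized, carelessness is cheaper for the grantor.
print(careful, careless)

# If the grantor bore 100% of the loss, careful screening would win instead.
print(expected_cost(10.00, 0.001, 5000, 1.0),
      expected_cost(0.50, 0.01, 5000, 1.0))
```

Under these made-up numbers, careless screening costs the grantor about $10.50 per application versus $11 for careful screening, so carelessness is the "rational" choice; it stops being rational only when the full fraud loss lands on the grantor. That is exactly the internalization point a liability rule is meant to enforce.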
To illustrate how sloppy the granting of credit is, Hoofnagle supplies a copy of the Kohl’s credit application Grimmelmann’s identity thief used.
Notice how many errors are on the application. There’s a ton of missing information. And Grimmelmann’s name is even spelled incorrectly — though, in all fairness, it’s got way too many m’s and n’s to keep track of.
In his paper, Hoofnagle examines several case studies in addition to Grimmelmann’s to show how many obvious red flags in credit applications are ignored.
Hoofnagle demonstrates that identity theft is a product not of carelessness on the part of the credit industry, but of planned carelessness — more akin to intentional decisions than to foolish blunders.
posted by Daniel Solove
The Wall Street Journal reports the theft of 3.3 million student loan records, including Social Security numbers:
Company and federal officials said they believed last week’s theft of identity data on 3.3 million people with student loans was the largest-ever breach of such information and could affect as many as 5% of all federal student-loan borrowers.
Names, addresses, Social Security numbers and other personal data on borrowers were stolen from the St. Paul, Minn., headquarters of Educational Credit Management Corp., a nonprofit guarantor of federal student loans, during the weekend of March 20-21, according to the company.
ECMC said the stolen information was on a portable media device. “It was simple, old-fashioned theft,” said ECMC spokesman Paul Kelash. “It was not a hacker incident.”
What is particularly frustrating is that the records were stored on a portable media device. Given all the incidents in which data has been stolen from flash drives and laptops, one would think that companies would learn that storing millions of records on such devices isn’t a wise thing to do.
Since 2005, we’ve been hearing about a barrage of data security breaches. As the WSJ states:
All told, more than 347 million records containing sensitive information have been compromised in the U.S. since 2005, according to the Privacy Rights Clearinghouse, a nonprofit consumer group.
The problem is that despite all the attention data security has been receiving, it’s not getting any better. Skulls remain thick, and we keep learning of data security breaches that really shouldn’t be happening anymore. At some point, the excuse “Oops! We made a blunder” shouldn’t cut it.
It is unfortunate that data security isn’t getting much better and that the number and extent of data breaches aren’t diminishing. It is really problematic that we see the same types of bad security practices again and again. These trends suggest that we need stronger laws against bad data security practices.