Archive for the ‘Privacy (Medical)’ Category
posted by Danielle Citron
As All Things Digital's Kara Swisher reports, LivingSocial experienced a significant hack the other day: over 50 million users' email addresses, dates of birth, and encrypted passwords were leaked into the hands of Russian hackers (or so it seems). This hack comes on the heels of data breaches at LinkedIn and Zappos. That the passwords were encrypted just means that users had better change their passwords, and fast, because in time the encryption can be broken. A few years ago, I blogged about leaked personal data passing the 500 million mark. Hundreds of millions seems like child's play today.
This raises some important questions about what we mean when we talk about personally identifiable information (PII). Paul Schwartz and my co-blogger Dan Solove have done terrific work helping legislators devise meaningful definitions of PII in a world of reidentification. Paul Ohm is currently working on an important project providing a coherent account of sensitive information in the context of current data protection laws. Are someone's password and date of birth sensitive information deserving special privacy protection? Beyond the obvious health, credit, and financial information, what other sorts of data do we consider sensitive, and why? Answers to these questions are crucial to companies formulating best practices, to the FTC as it continues its robust enforcement of privacy promises and its pursuit of deceptive practices, and to legislators considering private-sector privacy regulation of data brokers, as in Senator John Rockefeller's current efforts.
posted by Ryan Calo
As if we don’t have enough to worry about, now there’s spyware for your brain. Or, there could be. Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers. Read the rest of this post »
April 14, 2013 at 12:57 am Posted in: Bioethics, Civil Rights, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Technology, Uncategorized Print This Post One Comment
posted by Taunya Banks
In 1995 Gunther von Hagens presented his Body Worlds exhibit, described as a collection of real human bodies that have been “plastinated” to prevent their decay and make them more malleable. Some of these plastinated bodies were cut open to reveal their inner organs and then positioned in lifelike poses. The exhibit toured the world and was wildly popular.
Body Worlds also generated some criticism. Canadian social scientist Lawrence Burns argued that "some aspects of the exhibit violated human dignity." (Am. J. Bioethics 7(4): 12-23 (2007)) Although the exhibit was touted as an educational experience, Burns and others worried that the bodies were being used as "resources to make money from the voyeurism of the general public." A key concern was that the bodies were denied burial and that this was a dignitary affront. Burns conceded, however, that the concept of human dignity as applied to deceased individuals is unclear.
I started to think about whether there is dignity after death and, if so, what its parameters are, when I read a news article in the New Haven Register about the skeleton of an enslaved man that was being studied by the anthropology faculty and students at Quinnipiac University prior to burial.
The enslaved man, who died in 1798 (slavery was not abolished in Connecticut until 1848), was named Fortune. At the time of his death, Fortune was the human chattel of a Waterbury, Connecticut, physician who, upon Fortune's death, boiled his body to remove the flesh, keeping the skeleton to study human anatomy. Fortune's body remained unburied and was on display as late as 1970 at the Mattatuck Museum, where it was housed until recently. Read the rest of this post »
posted by Danielle Citron
Privacy leading lights Dan Solove and Paul Schwartz have recently released the 2013 edition of Privacy Law Fundamentals, a must-have for privacy practitioners, scholars, students, and really anyone who cares about privacy.
Privacy Law Fundamentals is an essential primer on the state of privacy law, capturing up-to-date developments in legislation, FTC enforcement actions, and cases here and abroad. As Chief Privacy Officers like Intel's David Hoffman and renowned privacy practitioners like Hogan's Chris Wolf and Covington's Kurt Wimmer agree, Privacy Law Fundamentals is an "essential" and "authoritative guide" to privacy law, compact and incredibly useful. For those of you who know Dan and Paul, their work is not only incredibly wise and helpful but also dispensed in person with serious humor. Check out this YouTube video, "Privacy Law in 60 Seconds," to see what I mean. I think Psy may have a run for his money when it comes to making us smile.
March 8, 2013 at 8:42 am Posted in: Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security) Print This Post 4 Comments
posted by Gaia Bernstein
Egg and sperm donations are an integral part of the infertility industry. The donors are usually young men and women who donate relying on the promise of anonymity. This is the norm in the United States. But, internationally, things are changing. A growing number of countries have prohibited egg and sperm donor anonymity. This usually means that when a child conceived by egg or sperm donation reaches the age of eighteen, he or she can receive the identifying information of the donor and meet the genetic parent.
An expanding movement of commentators is advocating a shift in the United States to an open identity model, which would prohibit anonymity. In fact, last year, Washington state adopted the first modified open identity statute in the United States. Faced with calls for the removal of anonymity, an obvious cause for concern is how prohibitions on anonymity would affect people's willingness to donate eggs and sperm. Supporters of prohibitions on anonymity argue that they cause only short-term shortages in egg and sperm supplies. However, in a study I published in 2010, I showed that unfortunately that does not seem to be the case. My study examined three jurisdictions that prohibited donor gamete anonymity: Sweden, Victoria (an Australian state), and the United Kingdom. It showed that all these jurisdictions suffer dire shortages in donor gametes accompanied by long waiting lists. The study concluded that although prohibitions on anonymity were not the sole cause of the shortages, these prohibitions definitely played a role in their creation.
In a new article, titled "Unintended Consequences: Prohibitions on Gamete Donor Anonymity and the Fragile Practice of Surrogacy," I examine the potential effect of adopting prohibitions on anonymity in the United States on the practice of surrogacy. Surrogacy has not been part of the international debate on donor gamete anonymity. But the situation in the United States is different. Unlike in most foreign jurisdictions that adopted prohibitions on anonymity, in the United States the practice of surrogacy is particularly reliant on donor eggs because of the unique legal regime governing surrogacy here. Generally, there are two types of surrogacy arrangements: traditional surrogacy and gestational surrogacy. In a traditional surrogacy arrangement the surrogate's eggs are used and she is the genetic mother of the child, while in gestational surrogacy the intended mother's eggs or a donor's eggs are used and the surrogate is not the genetic mother of the conceived child. Most U.S. states that expressly allow surrogacy provide legal certainty only to gestational surrogacy, which relies heavily on donor eggs, while leaving traditional surrogacy in a legal limbo. Without legal certainty, the intended parents may not be the legal parents of the conceived child; instead, the surrogate and even her husband may become the legal parents. Infertility practitioners also endorse the legal preference for gestational surrogacy for psychological reasons, believing that a surrogate who is not genetically related to the baby is less likely to change her mind and refuse to hand over the baby.
The adoption of prohibitions on anonymity in the United States could destabilize the practice of surrogacy in a way that did not occur in other countries that adopted these prohibitions. If, as has happened elsewhere, prohibitions on anonymity play a role in creating shortages in donor egg supplies in the United States, this could affect the practice of surrogacy in two ways. Individuals seeking surrogacy may need to resort to traditional surrogacy, which does not rely on donor eggs, with the accompanying legal uncertainty. Alternatively, those deterred by the uncertainty enveloping traditional surrogacy may refrain from seeking surrogacy altogether, resulting in a significant contraction of the practice of surrogacy in the United States. These potential complications suggest that those supporting the adoption of prohibitions on anonymity in the United States should consider these changes with great caution and think beyond the traditional debate about the privacy of the donors, the privacy and procreational interests of the intended parents, the best interests of the children, and the direct effect on gamete supplies.
December 21, 2012 at 10:42 am Tags: egg donor anonymity, Family Law, Health Law, infertility, reproductive technologies, sperm donor anonymity, surrogacy Posted in: Family Law, Health Law, Privacy, Privacy (Medical), Technology, Uncategorized Print This Post No Comments
posted by UCLA Law Review
Volume 60, Issue 1 (October 2012)
November 2, 2012 at 7:11 pm Posted in: Constitutional Law, Criminal Law, Criminal Procedure, Evidence Law, Intellectual Property, International & Comparative Law, Law Rev (UCLA), Privacy, Privacy (Medical) Print This Post No Comments
posted by Omer Tene
Much has been written over the past couple of years about "big data" (see, for example, here, here, and here). In a new article, Big Data for All: Privacy and User Control in the Age of Analytics, which will be published in the Northwestern Journal of Technology and Intellectual Property, Jules Polonetsky and I try to reconcile the inherent tension between big data business models and individual privacy rights. We argue that, going forward, organizations should provide individuals with practical, easy-to-use access to their information, so they can become active participants in the data economy. In addition, organizations should be required to be transparent about the decisional criteria underlying their data processing activities.
The term “big data” refers to advances in data mining and the massive increase in computing power and data storage capacity, which have expanded by orders of magnitude the scope of information available for organizations. Data are now available for analysis in raw form, escaping the confines of structured databases and enhancing researchers’ abilities to identify correlations and conceive of new, unanticipated uses for existing information. In addition, the increasing number of people, devices, and sensors that are now connected by digital networks has revolutionized the ability to generate, communicate, share, and access data.
Data creates enormous value for the world economy, driving innovation, productivity, efficiency, and growth. In the article, we flesh out some compelling use cases for big data analysis. Consider, for example, a group of medical researchers who were able to parse out a harmful side effect of a combination of medications used daily by millions of Americans by analyzing massive amounts of online search queries. Or scientists who analyze mobile phone communications to better understand the needs of people who live in settlements or slums in developing countries.
September 20, 2012 at 4:28 am Tags: analytics, big data, data protection, Privacy Posted in: Consumer Protection Law, Cyberlaw, Privacy, Privacy (Consumer Privacy), Privacy (Medical), Technology, Uncategorized Print This Post 3 Comments
posted by Dave Hoffman
My co-author Sasha Romanosky asks me to post the following:
I am involved in a research project that examines state laws affecting the flow of personal information in some way. This information could relate to patients, employees, financial or retail customers, or even just individuals. And by “flow” we are interested in laws that affect the collection, use, storage, sale, sharing, disclosure, or even destruction of this information.
For example, some state laws require that companies notify you when your personal information has been hacked, while other state laws require notice if the firm plans to sell your information. In addition, laws in other states restrict the sale of personal health information; enable law enforcement to track cell phone usage without a warrant; or prohibit the collection of a customer's zip code during a credit card purchase.
Given the huge variation among states in their information laws, we would like to ask readers of Concurring Opinions to help us collect examples of such laws. You are welcome to either post a response to this blog entry or reply to me directly at sromanos at cmu dot edu.
Sasha is a good guy, and a really careful researcher. Let’s help him!
September 10, 2012 at 9:58 am Posted in: Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security) Print This Post 3 Comments
Re-Identification Risks and Myths, Superusers and Super Stories (Part II: Superusers and Super Stories)
posted by Daniel Barth-Jones
The Myth of the Superuser: Toward Accurate Assessment of Unrealized Possibilities
In a recent Concurring Opinions blog post, I provided a critical re-examination of the famous re-identification of Massachusetts Governor William Weld's health information as recounted by Paul Ohm in his 2010 paper "Broken Promises of Privacy," and exposed a fatal flaw, the "Myth of the Perfect Population Register," which poses a serious challenge to all re-identification attacks.
In part 2 of this essay, I address the broader issue of how privacy law scholars and policy-makers should evaluate scenarios presented as motivators for potential privacy regulations. Fortunately, in earlier work Professor Ohm has written another very compelling and astute paper from which we can draw useful guidance for such approaches. In that paper, Ohm cautions public policy makers to beware of the "Myth of the Superuser." Ohm's point with regard to this mythical "Superuser" – just substitute "Data Intruder" for our interests here – is not that such Superusers do not exist. Ohm isn't even trying to imply that the considerable skills needed to facilitate their attacks are mythical. Rather, Ohm is making the point that by inappropriately conflating the rare and anecdotal accomplishments of notorious hackers with the actions of typical users, we unwittingly form highly distorted views of the normative behavior under consideration for regulatory control. This misdirected focus leads to poorly constructed public policy and unintended consequences. It's not hard to see that extremely important parallels exist here with the "Myth of the Perfect Population Register." The inability of most data intruders to construct accurate and complete population registers capable of supporting re-identification attacks has wide-reaching implications. The most important implication is how seriously we should take claims about the "astonishing ease" of re-identification. As I've written in a previous paper co-authored with University of Arizona Law Professor Jane Bambauer Yakowitz, "…de-anonymization attacks do not scale well because of the challenges of determining the characteristics of the general population. Each attack must be customized to the particular de-identified database and to the population as it existed at the time of the data-collection. This is likely to be feasible only for small populations under unusual conditions."
For this very same reason, oft-repeated apprehensions that evolving re-identification risks arising from new data sources like Facebook or new re-identification technologies will rapidly outpace our ability to recognize and respond with effective de-identification methods are simply unfounded. It is not the case that re-identification methods can be easily automated and rapidly spread via the Internet, as some have mistakenly asserted. The Myth of the Perfect Population Register assures us that confident re-identifications will always require labor-intensive efforts spent building and confirming high-quality, time-specific population registers. Re-identification lacks the easy transmission and transferability associated with computer viruses or other computer security vulnerabilities. It will never become the domain of hacker "script kiddies" because of the competing "limits of human bandwidth" discussed in Ohm's Superuser paper. Even with considerable computer assistance with the requisite data management, there simply isn't enough human time and effort to track, disambiguate, and verify the ocean of messy data required to clearly re-identify individuals in large populations – at least when proper de-identification methods have already made the chance of success very small. Careful consideration of Ohm's Superuser arguments, coupled with the Myth of the Perfect Population Register, leads us to the conclusion that re-identification attempts will continue to be expensive and time-consuming to conduct, require serious data management and statistical skills to execute, rarely be successful when data has been properly de-identified, and, most importantly, almost always turn out to be ultimately uncertain as to whether any purported re-identifications have actually been correct.
posted by Daniel Barth-Jones
In a recent Health Affairs blog article, I provide a critical re-examination of the famous re-identification of Massachusetts Governor William Weld's health information. This re-identification attack was popularized by recently appointed FTC Senior Privacy Adviser Paul Ohm in his 2010 paper "Broken Promises of Privacy." Ohm's paper provides a gripping account of Latanya Sweeney's re-identification of Weld's health insurance data using a Cambridge, MA voter list. The Weld attack has been frequently cited, echoing Ohm's claim that computer scientists can purportedly identify individuals within de-identified data with "astonishing ease."
However, the voter list supposedly used to "re-identify" Weld contained only 54,000 residents, and Cambridge demographics at the time of the re-identification attempt show that the population was nearly 100,000 persons. So the linkage between the data sources could not have provided definitive evidence of re-identification. The findings from this critical re-examination indicate that Weld was quite likely re-identifiable only by virtue of his having been a public figure experiencing a well-publicized hospitalization, rather than there being any actual certainty to his purported re-identification via the Cambridge voter data. His "shooting-fish-in-a-barrel" re-identification had several important advantages that would not have existed for any random re-identification target. It is clear from the statistics for this famous re-identification attack that the purported method of voter list linkage could not have definitively re-identified Weld and, while the odds were somewhat better than a coin-flip, they fell quite short of the certainty implied by the term "re-identification."
The full detail of this methodological flaw underlying the famous Weld/Cambridge re-identification attack is available in my recently released paper. This fatal flaw, the inability to confirm that Weld was indeed the only man within his ZIP code with his birthdate, exposes the critical logic underlying all re-identification attacks. Re-identification attacks require confirmation that purportedly "re-identified" individuals are the only person within both the sample data set being attacked and the larger population possessing a particular set of combined "quasi-identifier" characteristics.
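To make the arithmetic concrete, here is a toy calculation of my own (not from the paper, and resting on the simplifying assumption that each resident appears on the voter list independently): being the unique match on a register covering only about 54% of the population says little about uniqueness in the population itself.

```python
# A toy model (editor's illustration, not from Barth-Jones's paper).
# Assumption: each Cambridge resident appears on the voter list
# independently, with probability equal to the list's coverage.
voter_list_size = 54_000    # residents on the Cambridge voter list
population_size = 100_000   # approximate Cambridge population at the time
coverage = voter_list_size / population_size  # ~0.54

def p_looks_unique_on_register(k_sharing: int, coverage: float) -> float:
    """Probability that a target who shares quasi-identifiers
    (ZIP + birthdate + sex) with k_sharing people in the population
    nonetheless appears unique on the register -- i.e., all
    k_sharing - 1 others happen to be absent from it."""
    return (1 - coverage) ** (k_sharing - 1)

for k in (1, 2, 3):
    # roughly 1.0, 0.46, 0.21 for k = 1, 2, 3
    print(k, round(p_looks_unique_on_register(k, coverage), 3))
```

Even a single unobserved "twin" sharing Weld's quasi-identifiers would leave a 46% chance that he looked unique on the register while not being unique in the population, which echoes the post's "somewhat better than a coin-flip" characterization.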
posted by Danielle Citron
In a piece entitled “You for Sale,” Sunday’s New York Times raised important concerns about the data broker industry. Let us add some more perils and seek to reframe the debate about how to regulate Big Data.
Data brokers like Acxiom (and countless others) collect and mine a mind-boggling array of data about us, including Social Security numbers, property records, public-health data, criminal justice sources, car rentals, credit reports, postal and shipping records, utility bills, gaming, insurance claims, divorce records, online musings, browsing habits culled by behavioral advertisers, and the gold mine of drug- and food-store records. They scrape our social network activity, which with a little mining can reveal our undisclosed sexual preferences, religious affiliations, political views, and other sensitive information. They may integrate video footage of our offline shopping. With the help of facial-recognition software, data mining algorithms factor into our dossiers the over-the-counter medicines we pick up, the books we browse, and the pesticides we contemplate buying for our backyards. Our social media influence scores may make their way into the mix. Companies such as Klout measure our social media influence, usually on a scale from 1 to 100, using variables like the number of our social media followers, the frequency of our updates, and the number of likes, retweets, and shares. What's being tracked and analyzed about our online and offline behavior is accelerating – with no sign of slowing down and no assured way for us to find out what has been collected.
As the Times piece notes, businesses buy data-broker dossiers to classify those consumers worth pursuing and those worth ignoring (so-called "waste"). More often than not, those already in an advantaged position get better deals and gifts while the less advantaged get nothing. The Times piece rightly raised concerns about the growing inequality that such uses of Big Data produce. But far more is at stake.
Government is a major client for data brokers. More than 70 fusion centers mine data-broker dossiers to detect crimes, "threats," and "hazards." Individuals are routinely flagged as "threats." Such classifications make their way into the "information-sharing environment," with access provided to local, state, and federal agencies as well as private-sector partners. Troublingly, data-broker dossiers have no quality assurance. They may include incomplete, misleading, and false data. Let's suppose a data broker has amassed a profile on Leslie McCann. The social media scraped, information compiled, and videos scanned about "Leslie McCann" might include information about jazz artist Les McCann as well as about a criminal with a similar name and age. Inaccurate Big Data has led to individuals' erroneous inclusion on watch lists, denial of immigration applications, and loss of public benefits. Read the rest of this post »
June 19, 2012 at 5:08 pm Posted in: Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security) Print This Post 2 Comments
posted by Dave Hoffman
Alessandro Acquisti, Sasha Romanosky, and I have a new draft up on SSRN, Empirical Analysis of Data Breach Litigation. Sasha, who’s really led the charge on this paper, has presented it at many venues, but this draft is much improved (and is the first public version). From the abstract:
In recent years, a large number of data breaches have resulted in lawsuits in which individuals seek redress for alleged harm resulting from an organization losing or compromising their personal information. Currently, however, very little is known about those lawsuits. Which types of breaches are litigated, which are not? Which lawsuits settle, or are dismissed? Using a unique database of manually-collected lawsuits from PACER, we analyze the court dockets of over 230 federal data breach lawsuits from 2000 to 2010. We use binary outcome regressions to investigate two research questions: Which data breaches are being litigated in federal court? Which data breach lawsuits are settling? Our results suggest that the odds of a firm being sued in federal court are 3.5 times greater when individuals suffer financial harm, but over 6 times lower when the firm provides free credit monitoring following the breach. We also find that defendants settle 30% more often when plaintiffs allege financial loss from a data breach, or when faced with a certified class action suit. While the compromise of financial information appears to lead to more federal litigation, it does not seem to increase a plaintiff’s chance of a settlement. Instead, compromise of medical information is more strongly correlated with settlement.
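The abstract's headline numbers are odds ratios from the binary outcome (logistic) regressions. As a rough, single-variable illustration (with invented counts, not the study's data), an odds ratio reduces to a comparison of the odds of suit between two groups:

```python
# Toy illustration of an odds ratio (editor's sketch; counts are
# invented and are NOT the study's data). An odds ratio of 3.5 means
# the odds of a lawsuit are 3.5 times greater in one group.
def odds_ratio(sued_a: int, not_sued_a: int,
               sued_b: int, not_sued_b: int) -> float:
    """Odds ratio comparing group A (e.g., breaches causing financial
    harm) to group B (breaches without financial harm)."""
    return (sued_a / not_sued_a) / (sued_b / not_sued_b)

# Hypothetical: 70 of 170 harm breaches litigated vs. 20 of 170 others.
print(round(odds_ratio(70, 100, 20, 150), 2))  # prints 5.25
```

The regressions in the paper estimate each multiplier while adjusting for the other covariates simultaneously, but the interpretation of a reported figure like "3.5 times greater" is the same.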
A few thoughts follow after the jump.
February 19, 2012 at 1:33 pm Posted in: Economic Analysis of Law, Empirical Analysis of Law, Privacy, Privacy (Consumer Privacy), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical) Print This Post No Comments
posted by Stanford Law Review
Our 2012 Symposium Issue, The Privacy Paradox: Privacy and Its Conflicting Values, is now available online:
- A Reasonableness Approach to Searches After the Jones GPS Tracking Case by Peter Swire (64 Stan. L. Rev. Online 57);
- Privacy in the Age of Big Data by Omer Tene & Jules Polonetsky (64 Stan. L. Rev. Online 63);
- Yes We Can (Profile You): A Brief Primer on Campaigns and Political Data by Daniel Kreiss (64 Stan. L. Rev. Online 70);
- Paving the Regulatory Road to the “Learning Health Care System” by Deven McGraw (64 Stan. L. Rev. Online 75);
- Famous for Fifteen People: Celebrity, Newsworthiness, and Fraley v. Facebook by Simon J. Frankel, Laura Brookover & Stephen Satterfield (64 Stan. L. Rev. Online 82); and
- The Right to Be Forgotten by Jeffrey Rosen (64 Stan. L. Rev. Online 88).
The text of Chief Judge Alex Kozinski’s keynote is forthcoming.
February 13, 2012 at 1:04 pm Posted in: Law Rev (Stanford), Law Rev Contents, Law School, Law School (Scholarship), Media Law, Military Law, Politics, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security), Social Network Websites, Supreme Court, Technology, Tort Law Print This Post No Comments
posted by Daniel Solove
The tape of the frantic 911 call from actress Demi Moore’s Beverly Hills home Monday night is out and, reports CBS News national correspondent Lee Cowan, the scene sounds a lot more dire than her publicist had let on.
After Moore was rushed to the hospital, a statement said she'd be seeking professional help for exhaustion and her overall health.
“The 911 tape really indicates that this is a much more serious situation than we were first led to believe,” says US Weekly’s Melanie Bromley. “We’ve been told it’s exhaustion that she’s suffering from, but you can tell from the tape that there’s a very desperate situation there. She’s having convulsions and she’s almost losing consciousness. It’s a very scary tape to listen to.”
Why is this public? Many 911 calls, like the one from Demi Moore's home, involve requests for medical treatment. Typically, whenever any doctor, nurse, or healthcare professional learns information about a person, it is stringently protected. A healthcare provider who breaches medical confidentiality can face ethical charges as well as legal liability under the breach-of-confidentiality tort. In addition, there may be HIPAA violations if the healthcare provider is HIPAA-regulated. 911 call centers are not HIPAA-regulated, but the operators are in a special position of trust and are often providing healthcare advice (and calling for healthcare services such as ambulances). If the call from Demi Moore's home had been to a hospital or a doctor or any other type of healthcare provider, public disclosure of the call would be forbidden. Why isn't a 911 call seen in the same light?
As I pointed out in my earlier post about the issue, I believe the release of 911 call transcripts to the public violates the constitutional right to information privacy. The cases generally recognize strong privacy rights whenever health information is involved. States with laws, policies, or practices that infringe upon the constitutional right to information privacy might be liable in a Section 1983 suit. I have not seen such a suit yet, but it is about time something sparked states to rethink their policies about making these calls public.
The rationale for making the calls public is to provide transparency about the responsiveness of 911 call centers. But this can be done in other ways without violating the privacy of individuals. The main use of the Demi Moore call being public is to serve as grist for the media to learn about her problems. This doesn’t make the 911 system safer or better; it just makes the tabloids sell faster.
posted by Daniel Solove
A new report by the Ponemon Institute reveals some startling statistics about data security in healthcare:
The frequency of data breaches among organizations in this study has increased 32 percent from the previous year. In fact, 96 percent of all healthcare providers say they have had at least one data breach in the last two years. Most of these were due to employee mistakes and sloppiness—49 percent of respondents in this study cite lost or stolen computing devices and 41 percent note unintentional employee action. Another disturbing cause is third-party error, including business associates, according to 46 percent of participants.
There is a lot more alarming information in the report.
Self-interest alert: I provide privacy and data security programs to healthcare institutions.
posted by Daniel Solove
An increasing problem arises when medical personnel post details about patients on their social media websites. From the Daily News:
Providence Holy Cross Medical Center officials are investigating an employee who allegedly posted a patient’s medical information on his Facebook page, apparently to make fun of the woman and her medical condition.
According to a printout of the Facebook page obtained by the Daily News, the employee displayed a photo of a medical record listing the woman’s name and the date she was admitted, and posted the comment: “Funny but this patient came in to cure her VD and get birth control.”
Providence officials said the employee was provided by a staffing agency.
An interesting fact in this article is that most healthcare institutions lack policies for employee use of social media:
Only about a third of all hospitals are believed to have specific policies in place regarding patient information and social media sites, such as Facebook and Twitter, according to published reports.
I expect this to change in the next few years.
Hat Tip: Pogo Was Right
posted by Daniel Solove
Here’s a list of notable privacy books published in 2011.
Saul Levmore & Martha Nussbaum, eds., The Offensive Internet (Harvard 2011)
This is a great collection of essays about the clash of free speech and privacy online. I have a book chapter in this volume along with Martha Nussbaum, Cass Sunstein, Brian Leiter, Danielle Citron, Frank Pasquale, Geoffrey Stone, and many others.
Daniel J. Solove, Nothing to Hide: The False Tradeoff Between Privacy and Security (Yale 2011)
Nothing to Hide “succinctly and persuasively debunks the arguments that have contributed to privacy’s demise, including the canard that if you have nothing to hide, you have nothing to fear from surveillance. Privacy, he reminds us, is an essential aspect of human existence, and of a healthy liberal democracy—a right that protects the innocent, not just the guilty.” — David Cole, New York Review of Books
Jeff Jarvis, Public Parts: How Sharing in the Digital Age Improves the Way We Work and Live (Simon & Schuster 2011)
I strongly disagree with a lot of what Jarvis says, but the book is certainly provocative and engaging.
Daniel J. Solove & Paul M. Schwartz, Privacy Law Fundamentals (IAPP 2011)
“A key resource for busy professional practitioners. Solove and Schwartz have succeeded in distilling the fundamentals of privacy law in a manner accessible to a broad audience.” – Jules Polonetsky, Future of Privacy Forum
Eli Pariser, The Filter Bubble (Penguin 2011)
An interesting critique of the personalization of the Internet. We often don’t see the Internet directly, but through tinted goggles designed by others who determine what we want to see.
Siva Vaidhyanathan, The Googlization of Everything (U. California 2011)
A vigorous critique of Google and other companies that shape the Internet. With regard to privacy, Vaidhyanathan explains how social media and other companies encourage people’s sharing of information through their architecture — and often confound people in their ability to control their reputation.
Susan Landau, Surveillance or Security? The Risk Posed by New Wiretapping Technologies (MIT 2011)
A compelling argument for how designing technologies around surveillance capabilities will undermine rather than promote security.
Kevin Mitnick, Ghost in the Wires (Little Brown 2011)
A fascinating account of the exploits of Kevin Mitnick, the famous ex-hacker who inspired WarGames. His tales are quite engaging, and he demonstrates that hacking is often not just about technical wizardry but old-fashioned con-artistry.
Matt Ivester, lol . . . OMG! (CreateSpace 2011)
Ivester created Juicy Campus, the notorious college gossip website. After the site’s demise, Ivester changed his views about online gossip, recognizing the problems with Juicy Campus and the harms it caused. In this book, he offers thoughtful advice for students about what they post online.
Joseph Epstein, Gossip: The Untrivial Pursuit (Houghton Mifflin Harcourt 2011)
A short, engaging book filled with interesting stories and quotes about gossip. Highly literate, it aims to expose gossip’s good and bad sides, and to show how new media are transforming gossip in troublesome ways.
Anita Allen, Unpopular Privacy (Oxford 2011)
My blurb: “We live in a world of increasing exposure, and privacy is increasingly imperiled by the torrent of information being released online. In this powerful book, Anita Allen examines when the law should mandate privacy and when it shouldn’t. With nuance and thoughtfulness, Allen bravely tackles some of the toughest questions about privacy law — those involving the appropriate level of legal paternalism. Unpopular Privacy is lively, engaging, and provocative. It is filled with vivid examples, complex and fascinating issues, and thought-provoking ideas.”
Frederick Lane, Cybertraps for the Young (NTI Upstream 2011)
A great overview of the various problems the Internet poses for children, such as cyberbullying and sexting, written to be accessible to parents.
Clare Sullivan, Digital Identity (University of Adelaide Press 2011)
Australian scholar Clare Sullivan explores the rise of “digital identity,” which is used for engaging in various transactions. Instead of arguing against systematized identification, she sees the future as heading inevitably in that direction and proposes a robust set of rights individuals should have over such identities. This is a thoughtful and pragmatic book, with a great discussion of Australian, UK, and EU law.
December 29, 2011 at 11:12 pm Posted in: Articles and Books, Book Reviews, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical)
posted by Danielle Citron
Bloomberg Businessweek reports on retailers’ use of camera surveillance to glean intelligence from shoppers’ behavior. A company called RetailNext, for instance, runs its software through a store’s security camera video feed to analyze customer behavior. It describes itself as the “leader in real-time in-store monitoring, enabling retailers and manufacturers to collect, analyze and visualize in-store data.” According to the company, it “uses best-in-class video analytics, on-shelf sensors, along with data from point-of-sale and other business systems, to automatically inform retailers about how people engage in their stores.” RetailNext’s software can integrate data from hardware such as RFID chips and motion sensors to track customers’ movements. The company explains that it “tracks more than 20 million shoppers per month by collecting data from more than 15,000 sensors in retail stores.” Its service apparently helps stores figure out where to place certain merchandise to boost sales. T-Mobile uses similar technology from another firm, 3VR, whose software tracks how people move around their stores, how long they stand in front of displays, and which phones they pick up and for how long. 3VR is testing facial-recognition software that can identify shoppers’ gender and approximate age. Businessweek explains that the “software would give retailers a better handle on customer demographics and help them tailor promotions.” What we are seeing is, according to 3VR’s CEO, just “scratching the surface,” as someday “you’ll have the ability to measure every metric imaginable.”
Indeed. Little imagination is needed to predict the future in light of our present. As Joseph Turow‘s important new book The Daily You: How the New Advertising Industry Is Defining Your Identity and Worth (Yale University Press) explores, the scale of data collection and analysis of individuals is breathtaking. In the name of better, more relevant advertising and marketing efforts, companies like Acxiom have databases teeming with our demographic data (age, gender, race, ethnicity, address, income, marital status), interests, online and offline spending habits, and health status based on our purchases and online comments (diabetic, allergy sufferer, and the like). Consumers are sorted into categories such as “Corporate Clout,” “Soccer and SUV,” “Mortgage Woes,” and “On the Edge.” eXelate gathers online data of over 200 million unique individuals per month through deals with hundreds of sites: their demographics, social activities, and social networks. Advertisers can add even more data to eXelate’s cookies: data from Nielsen, which includes Census Bureau data, as well as data brokers’ digital dossiers. Data firms like Lotame track the comments that people leave on sites and categorize them. Now, let’s consider weaving in the facial-recognition software and retailer cameras of companies like 3VR and RetailNext. And to really top things off, let’s think about linking all of this data to cellphone location information. The surveillance of networked spaces would be totalizing.
Turow’s book exposes important costs of these developments. This post will discuss a few; I hope to have Professor Turow on for a Bright Ideas feature. This sort of targeting and hyper-surveillance leaves many with far narrower options and exposes them to social discrimination. Marketers use these databases to determine whether Americans are worthy “targets” or not-worth-bothering-with “waste.” For the “Soccer and SUV” moms between 35 and 45 who live on the West Coast and want to buy a small car, car companies may offer serious discounts via online advertisements and e-mail. But their “On the Edge” counterparts get left out in the cold with higher prices; why bother trying to attract people who don’t pay their debts? All of this sorting encourages media to offer soft stories designed to meet people’s interests, as secretly determined by those gathering and analyzing our networked lives. This discussion brings to mind another important read: Julie Cohen‘s Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press). As Professor Cohen thoughtfully explores, this sort of surveillance has a profound impact on the creative play of our everyday lives. It creates hierarchies among those watched and systematizes difference. I’ll have much more to say soon about Cohen’s take on our networked society more generally. In March, we will be hosting an online symposium on her book–much to look forward to in the new year.
posted by Daniel Solove
The new edition of my casebook, Information Privacy Law (4th edition) (with Paul M. Schwartz), is hot off the presses. And there’s a new edition of my casebook, Privacy, Information, and Technology (3rd edition) (with Paul M. Schwartz). Copies should be sent out to adopters very soon. If you’re interested in adopting the book and are having any difficulty getting hold of a copy, please let me know.
You also might be interested in my concise guide to privacy law, also with Paul Schwartz, entitled Privacy Law Fundamentals. This short book was published earlier this year. You can order it on Amazon or via IAPP. It might make for a useful reference tool for students.
December 13, 2011 at 1:31 am Posted in: Articles and Books, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security)
posted by Daniel Solove
My article, The PII Problem: Privacy and a New Concept of Personally Identifiable Information (with Professor Paul Schwartz), is now out in print. You can download the final published version from SSRN. Here’s the abstract:
Personally identifiable information (PII) is one of the most central concepts in information privacy regulation. The scope of privacy laws typically turns on whether PII is involved. The basic assumption behind the applicable laws is that if PII is not involved, then there can be no privacy harm. At the same time, there is no uniform definition of PII in information privacy law. Moreover, computer science has shown that in many circumstances non-PII can be linked to individuals, and that de-identified data can be re-identified. PII and non-PII are thus not immutable categories, and there is a risk that information deemed non-PII at one time can be transformed into PII at a later juncture. Due to the malleable nature of what constitutes PII, some commentators have even suggested that PII be abandoned as the mechanism by which to define the boundaries of privacy law.
In this Article, we argue that although the current approaches to PII are flawed, the concept of PII should not be abandoned. We develop a new approach called “PII 2.0,” which accounts for PII’s malleability. Based upon a standard rather than a rule, PII 2.0 utilizes a continuum of risk of identification. PII 2.0 regulates information that relates to either an “identified” or “identifiable” individual, and it establishes different requirements for each category. To illustrate this theory, we use the example of regulating behavioral marketing to adults and children. We show how existing approaches to PII impede the effective regulation of behavioral marketing, and how PII 2.0 would resolve these problems.