Tagged: Privacy


Better Stories, Better Laws, Better Culture

I first happened across Julie Cohen’s work around two years ago, when I started researching privacy concerns related to Amazon.com’s e-reading device, the Kindle.  Law professor Jessica Litman and free software doyen Richard Stallman had both talked about a “right to read,” but never was this concept placed on so sure a legal footing as it was in Cohen’s 1996 essay, “A Right to Read Anonymously.”  Her piece helped me to understand the illiberal tendencies of the Kindle and other leading commercial e-readers, which are (and I’m pleased more people are coming to understand this) data gatherers as much as they are appliances for delivering and consuming texts of various kinds.

Truth be told, while my engagement with Cohen’s “Right to Read Anonymously” essay proved productive for this particular project, it also provoked a broader philosophical crisis in my work.  The move into rights discourse was a major departure — a ticket, if you will, into the world of liberal political and legal theory.  Many there welcomed me with open arms, despite the awkwardness with which I shouldered an unfamiliar brand of baggage trademarked under the name “Possessive Individualism.”  One good soul did manage to ask about the implications of my venturing forth into a notion of selfhood vested in the concept of private property.  I couldn’t muster much of an answer beyond suggesting, sheepishly, that it was something I needed to work through.

It’s difficult and even problematic to divine back-story based on a single text.  Still, having read Cohen’s latest, Configuring the Networked Self, I suspect that she may have undergone a crisis not unlike my own.  The sixteen years spanning “A Right to Read Anonymously” and Configuring the Networked Self are enormous.  I mean that less in terms of the time frame (during which Cohen was highly productive, let’s be clear) than in terms of the refinement in the thinking.  Between 1996 and 2012 you see the emergence of a confident, postliberal thinker.  This is someone who, confronted with the complexities of everyday life in highly technologized societies, now sees possessive individualism for what it is: a reductive management strategy, one whose conception of society seems more appropriate to describing life on a preschool playground than it does to forms of interaction mediated by the likes of Facebook, Google, Twitter, Apple, and Amazon.

In this, Configuring the Networked Self is an extraordinary work of synthesis, drawing together a diverse array of fields and literatures: legal studies in its many guises, especially its critical variants; science and technology studies; human–computer interaction; phenomenology; post-structuralist philosophy; anthropology; American studies; and surely more.  More to the point, it’s an unusually generous example of scholarly work, given Cohen’s ability to see in and draw out of this material its very best contributions.

I’m tempted to characterize the book as a work of cultural studies given the central role the categories culture and everyday life play in the text, although I’m not sure Cohen would have chosen that identification herself.  I say this not only because of the book’s serious challenges to liberalism, but also because of the sophisticated way in which Cohen situates the cultural realm.

This is more than just a way of saying she takes culture seriously.  Many legal scholars have taken culture seriously, especially those interested in questions of privacy and intellectual property, which are two of Cohen’s foremost concerns.  What sets Configuring the Networked Self apart from the vast majority of culturally inflected legal scholarship is her unwillingness to take for granted the definition — you might even say, “being” — of the category, culture.  Consider this passage, for example, where she discusses Lawrence Lessig’s pathbreaking book Code and Other Laws of Cyberspace:

The four-part Code framework…cannot take us where we need to go.  An account of regulation emerging from the Newtonian interaction of code, law, market, and norms [i.e., culture] is far too simple regarding both instrumentalities and effects.  The architectures of control now coalescing around issues of copyright and security signal systemic realignments in the ordering of vast sectors of activity both inside and outside markets, in response to asserted needs that are both economic and societal.  (chap. 7, p. 24)

What Cohen is asking us to do here is to see culture not as a domain distinct from the legal, or the technological, or the economic, which is to say, something to be acted upon (regulated) by one or more of these adjacent spheres.  This liberal-instrumental (“Newtonian”) view may have been appropriate in an earlier historical moment, but not today.  Instead, she is urging us to see how these categories are increasingly embedded in one another and how, then, the boundaries separating the one from the other have grown increasingly diffuse and therefore difficult to manage.

The implications of this view are compelling, especially where law and culture are concerned.  The psychologist Abraham Maslow once said, “it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”  In the old, liberal view, one wielded the law in precisely this way — as a blunt instrument.  Cohen, for her part, still appreciates how the law’s “resolute pragmatism” offers an antidote to despair (chap. 1, p. 20), but her analysis of the “ordinary routines and rhythms of everyday practice” in and around networked culture leads her to a subtler conclusion (chap. 1, p. 21).  She writes: “practice does not need to wait for an official version of culture to lead the way….We need stories that remind people how meaning emerges from the uncontrolled and unexpected — stories that highlight the importance of cultural play and of spaces and contexts within which play occurs” (chap. 10, p. 1).

It’s not enough, then, to regulate with a delicate hand and then “punt to culture,” as one attorney memorably put it in an anthropological study of the free software movement.  Instead, Cohen seems to be suggesting that we treat legal discourse itself as a form of storytelling, one akin to poetry, prose, or any number of other types of everyday cultural practice.  Important though they may be, law and jurisprudence are but one means for narrating a society, or for arriving at its self-understandings and range of acceptable behaviors.

Indeed, we’re only as good as the stories we tell ourselves.  This much Jaron Lanier, one of the participants in this week’s symposium, suggested in his recent book, You Are Not a Gadget.  There he showed how the metaphorics of desktops and filing, generative though they may be, have nonetheless limited the imaginativeness of computer interface design.  We deserve computers that are both functionally richer and experientially more robust, he insists, and to achieve that we need to start telling more sophisticated stories about the relationship of digital technologies and the human body.  Lousy stories, in short, make for lousy technologies.

Cohen arrives at an analogous conclusion.  Liberalism, generative though it may be, has nonetheless limited our ability to conceive of the relationships among law, culture, technology, and markets.  They are all in one another and of one another.  And until we can figure out how to narrate that complexity, we’ll be at a loss to know how to live ethically, or at the very least mindfully, in a densely interconnected and information-rich world.  Lousy stories make for lousy laws and ultimately, then, for lousy understandings of culture.

The purposes of Configuring the Networked Self are many, no doubt.  For those of us working in the twilight zone of law, culture, and technology, it is a touchstone for how to navigate postliberal life with greater grasp — intellectually, experientially, and argumentatively.  It is, in other words, an important first chapter in a better story about ordinary life in a high-tech world.


Stanford Law Review Online: The Drone as Privacy Catalyst

Stanford Law Review

The Stanford Law Review Online has just published a piece by M. Ryan Calo discussing the privacy implications of drone use within the United States. In The Drone as Privacy Catalyst, Calo argues that domestic use of drones for surveillance will go forward largely unimpeded by current privacy law, but that the “visceral jolt” caused by witnessing these drones hovering above our cities might serve as a catalyst and finally “drag privacy law into the twenty-first century.”

Calo writes:

In short, drones like those in widespread military use today will tomorrow be used by police, scientists, newspapers, hobbyists, and others here at home. And privacy law will not have much to say about it. Privacy advocates will. As with previous emerging technologies, advocates will argue that drones threaten our dwindling individual and collective privacy. But unlike the debates of recent decades, I think these arguments will gain serious traction among courts, regulators, and the general public.

Read the full article, The Drone as Privacy Catalyst by M. Ryan Calo, at the Stanford Law Review Online.


Unraveling Privacy as Corporate Strategy

The biometric technologies firm Hoyos (previously Global Rainmakers Inc.) recently announced plans to test a massive deployment of iris scanners in León, Mexico, a city of over a million people. It expects to install thousands of the devices, some capable of picking out fifty people per minute even at regular walking speeds. At first the project will focus on law enforcement and improving security checkpoints, but within three years the plan calls for integrating iris scanning into most commercial locations. Entry to stores or malls, access to an ATM, use of public transportation, paying with credit, and many other identity-related transactions will occur through iris scanning and recognition. (For more details, see Singularity’s post with videos.) Hoyos has the backing to make this happen: on October 12th it also announced new investment of over $40M to fund its growth.

There are obviously lots of interesting privacy- and tech-related issues here. I’ll focus on one: the company’s roll-out strategy is explicitly premised on the unraveling of privacy created by the negative inferences and stigma that will attach to those who choose not to participate. Criminals will automatically be scanned and entered into the database upon conviction. Jeff Carter, Chief Development Officer at Hoyos, expects law-abiding citizens to participate as well, however. Some will do so for convenience, he says, and then he expects everyone to follow: “When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in.” (For the full interview, see Fast Company’s post on the project.)

In a forthcoming article, I’ve written at length about the unraveling effect and why it now poses a serious threat to privacy. This biometric deployment is one of many examples, but it most explicitly illustrates that unraveling has moved beyond unexpected consequence to become corporate strategy.



On the Colloquy: The Credit Crisis, Refusal-to-Deal, Procreation & the Constitution, and Open Records vs. Death-Related Privacy Rights


This summer started off with a three-part series from Professor Olufunmilayo B. Arewa looking at the credit crisis and possible changes that would focus on averting future market failures, rather than continuing to create regulations that only address past ones.  Part I of Prof. Arewa’s series looks at the failure of risk management within the financial industry.  Part II analyzes the regulatory failures that contributed to the credit crisis, as well as potential reforms.  Part III concludes by addressing recent legislation and whether it will actually help solve these very real problems.

Next, Professors Alan Devlin and Michael Jacobs take on an issue at the “heart of a highly divisive, international debate over the proper application of antitrust laws” – what should be done when a dominant firm refuses to share its intellectual property, even at monopoly prices.

Professor Carter Dillard then discusses the circumstances in which it may be morally, and possibly even legally, permissible for a state to intervene and prohibit procreation.

Rounding out the summer was Professor Clay Calvert’s article looking at journalists’ use of open records laws and death-related privacy rights.  Calvert questions whether journalists have a responsibility beyond simply reporting dying words and graphic images.  He concludes that, at the very least, journalists should consider the impact their reporting has on surviving family members.


How Useful is Facebook Users’ Information?

A lot has been written about Facebook and its users’ loss of privacy. In fact, for some, Facebook and loss of privacy have become synonymous. A major fear involves the use of Facebook users’ personal information by information aggregators, who will use the data to target the sale of products.  I do not intend to contest here that Facebook users disclose a lot of personal information. But I want to look at how accurate the information that Facebook users reveal actually is.

When people surf the Internet, their personal information, browsing history, and searches are collected by cookies. As I have written, people tend to disregard these privacy threats at least partly due to their lack of visibility. Even those who know that their information can be collected by cookies tend to forget it as they use the Internet on a daily basis.  As a result, the information collected by cookies reveals relatively true preferences. Cookies will reveal embarrassing or secret facts, such as visits to pornography sites or to medical sites to investigate a worrying medical condition.

But Facebook is different. Facebook users are constantly aware they are being viewed. True, they may not be thinking about the companies that may eventually aggregate the information. But they are surely thinking of the hundreds of friends who will be reading their status updates, examining their favorite books, favorite movies, and linked websites. Facebook users “package” themselves. They present themselves to the world the way they want to be perceived. Their real preferences and tastes may be somewhat or even completely different from those they present on Facebook. A criminal law professor may stock her Facebook library collection with legal theory books, while in fact in her spare time she is an avid purchaser and reader of chick-lit books. A twenty-year-old college student may want to appear cool by placing links to trendy music, although his real passion remains collecting Star Wars figures.

Some information on Facebook, such as date of birth or marital status, is less likely to be misrepresented by users and provides rich ground for data mining.  But Facebook users’ “packaging” raises two issues. Companies seeking to target consumers with products they actually want to purchase may find Facebook information less useful than believed. And from a privacy perspective, it is not merely the disclosure of true personal information that we should be concerned about, but also the creation of false or misleading individual profiles by data-mining companies, which can eventually change the information and consumption options available to these Facebook users.


The Havasupai Indians, Genetic Research and the Problem of Informed Consent

Researchers can gain significant genetic information by studying indigenous and preferably isolated populations. Although both researchers and indigenous populations can gain from such collaboration, the two groups often do not see eye to eye.  This was the case with the collaboration between the Havasupai Indians and researchers from Arizona State University, which resulted in a long legal fight. The Havasupai Indians were suffering from a high prevalence of diabetes and agreed to give blood samples for genetic research on diabetes. The members of the tribe were infuriated when they found out later that their blood samples had been used for other purposes, among them genetic research on schizophrenia.

The New York Times reported yesterday that this conflict resulted in a settlement in which Arizona State University agreed to pay $700,000 to the tribe members and also to return the blood samples. The Havasupai Indians’ main legal claim was a violation of informed consent. Informed consent requires that patients and research subjects receive full information that will enable them to decide whether to adopt a certain medical treatment plan or participate in research. Here, the Havasupai Indians argued that the informed consent principle was violated because they were told that their blood samples would be used for one purpose while, in fact, they were used for another.

No doubt, the Havasupai Indians’ informed consent argument resulted in their victorious settlement. But the harder question is whether the informed consent principle can feasibly be applied in the area of genetics.  Genetic information is not just individual information; it also provides information about groups and families. For example, assume there is a tribe in which some members agree to participate in genetic research investigating manic depression.  Other members of the tribe refuse because they are concerned that a result showing a prevalent genetic mutation for manic depression among them could stigmatize them and even lead to discrimination against the tribe. The researchers collect samples only from the members of the group who agree to the research. But the results still provide genetic information on all members of the tribe, even those who refused to participate, because of their genetic connection to those who participated.

The result in the Havasupai settlement cannot, then, be seen as a victory for the principle of informed consent in the area of genetics. Restricting genetic researchers to using samples only for the purpose for which they were collected only partly resolves the informed consent problem. The group nature of genetic information makes the application of informed consent to genetic research much more complicated than that.


23andMe – Has GINA Failed to Live Up to its Promise?

23andMe is a genetic testing Internet site that offers testing for over 100 genetic diseases and traits, as well as ancestry testing. Many viewed 23andMe as the vehicle that would bring genetic testing to the masses. It was promoted by “spit parties,” in which attendees spat into a test tube to have their saliva analyzed to produce their genetic profile. Yet recently the New York Times reported that, two and a half years after it commenced service, 23andMe has not attained its expected popularity. The report tied 23andMe’s lack of popularity to the limited usefulness of genetic information — genetic science’s inability to predict with certainty that a person is going to get sick.

And true, genetic science is all about probabilities. A genetic test can rarely predict with 100% certainty that a person will develop a disease. I doubt, however, that this limitation is holding 23andMe back. Unfortunately, people are not very good at understanding the statistical results of genetic testing.  If anything, a woman who is told that she has a 60% chance of getting breast cancer is likely to dismiss the actual statistics and believe she is going to get sick. It is quite unlikely that people decided not to use 23andMe because of the low probabilities that accompany many genetic tests’ results.

Instead, fears of genetic discrimination likely played an important role in 23andMe’s failure to popularize genetic testing. People are afraid that if they undergo genetic testing and receive positive results they may lose their health insurance or their employment. As I have documented, these fears prevail although empirical data shows that genetic discrimination is in fact rare. Consequently, many individuals are inhibited by genetic discrimination concerns and choose not to undergo genetic testing.

Recently, Congress enacted a relatively comprehensive federal law against genetic discrimination – the Genetic Information Nondiscrimination Act of 2008 (GINA). An important goal in legislating GINA was to alleviate fears of genetic discrimination. It was hoped that the enactment of a comprehensive federal law would provide a sense of protection and reduce genetic discrimination anxiety.  The failure of 23andMe to attain widespread popularity indicates that, at least so far, GINA has not been as successful as was hoped in quieting fears and encouraging the use of genetic testing technology.


Seeing With Your Tongue: No Really

Not much law here, yet. Researchers have taken theoretical work begun decades ago and developed a “brain port,” a device that uses technology to allow people to reorganize how they process sensory data. In the example below, blind people are able to see images. The device takes visual input, processes it, and sends impulses to a pad that sits on the user’s tongue, and then the person is able to see some images. It takes quite a bit of training, and in some cases folks have been able to use the device such that they actually re-train the brain and can reduce their use of the device. Yes, in a sense, they have “rewired” their brains. This advance is just cool. The video also explains that the advances in this field trace to Professor Paul Bach-y-Rita, who apparently had to overcome a fair amount of resistance in his fields of neurobiology and rehabilitation, because he was challenging many accepted beliefs regarding the way the brain works and more (all hail Kuhn). Will the law become involved in this area? It probably already is, insofar as patents and copyright are being used to govern the technology. In addition, as I have noted before, the advances in embedded or sensory-enhancing devices raise numerous questions regarding privacy, the ownership of data, bioethics, and research ethics. So welcome to the future and take a look at the video. It really is amazing and wonderful that scientists have made these breakthroughs. At the very least, anyone questioning how basic research can lead to unforeseen benefits should pause after seeing this work.


Cyber Civil Rights vs Privacy in the “Skanks in NYC” case

As Dan rightly notes, the recent court order unmasking the anonymous author of the “Skanks in NYC” blog raises serious privacy concerns. He elaborates on those concerns in his post, arguing that the court used too low a standard, that the lawsuit may have been frivolous, and that anonymity needs greater protection. Dan links to CyberSLAPP, an EFF project that combats abusive lawsuits seeking to unmask anonymous critics of corporations or public figures.

CyberSLAPP’s site contains a spirited defense of a right of anonymous criticism which reads, in part:

Why is anonymous speech important?

There are a wide variety of reasons why people choose to speak anonymously. Many use anonymity to make criticisms that are difficult to state openly to their boss, for example, or the principal of their children’s school. The Internet has become a place where persons who might otherwise be stigmatized or embarrassed can gather and share information and support: victims of violence, cancer patients, AIDS sufferers, child abuse and spousal abuse survivors, for example. They use newsgroups, Web sites, chat rooms, message boards, and other services to share sensitive and personal information anonymously without fear of embarrassment or harm. Some police departments run phone services that allow anonymous reporting of crimes; it is only a matter of time before such services are available on the Internet. Anonymity also allows “whistleblowers” reporting on government or company abuses to bring important safety issues to light without fear of stigma or retaliation. And human rights workers and citizens of repressive regimes around the world who want to share information or just tell their stories frequently depend on staying anonymous, sometimes for their very lives.

Is anonymous speech a right?

Yes. Anonymous speech is presumptively protected by the First Amendment to the Constitution. Anonymous pamphleteering played an important role for the Founding Fathers, including James Madison, Alexander Hamilton, and John Jay, whose Federalist Papers were first published anonymously. And the Supreme Court has consistently backed up that tradition, ruling, for example, that an Ohio law requiring authors to put their names on campaign literature was a violation of the First Amendment. Indeed, the Supreme Court has ruled that protecting anonymous speech has the same purpose as the First Amendment itself: to “protect unpopular individuals from retaliation and their ideas from suppression.”

Of course, any sensible person would be opposed to silencing today’s James Madisons or Alexander Hamiltons. Is this really the correct analogy here, though? Is Skanks in NYC like the Federalist Papers?


New Developments in Cryptography and Privacy

According to Help Net Security, Craig Gentry, a researcher at IBM, appears to have found a way to allow “the deep and unlimited analysis of encrypted information – data that has been intentionally scrambled – without sacrificing confidentiality.” The solution involves an “ideal lattice.” I’ll leave the explanation of all the math to the math/computer science folks. As the Help Net article notes, the solution seems to offer great advantages for anyone providing cloud computing:

computer vendors storing the confidential, electronic data of others will be able to fully analyze data on their clients’ behalf without expensive interaction with the client, and without seeing any of the private data. With Gentry’s technique, the analysis of encrypted information can yield the same detailed results as if the original data was fully visible to all.

It all sounds wonderful. One could keep data encrypted and let others mine it while maintaining anonymity or privacy. Yet something seemed odd to me. So I did what lawyers do: I called someone who knew more about computer science and asked for some help. That person explained that, yes, this could mean one could query an encrypted database without decrypting the data. The example to consider is a database of book purchases. One could ask how many people bought both book A and book B and see that result without ever seeing what a specific person purchased. Great, right? Not so fast.
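Gentry’s scheme is “fully” homomorphic, meaning arbitrary computation on ciphertexts; but the book-purchase tally just described needs only a much older and simpler property, additive homomorphism, which the Paillier cryptosystem (1999) already provides. The Python sketch below is purely illustrative, with toy key sizes no one should use for real cryptography: each buyer encrypts a 1 if they bought both books, a 0 otherwise, and the server adds the flags up without decrypting any individual record.

```python
import math
import random

def keygen(p=10007, q=10009):
    """Toy Paillier key generation (p, q are tiny primes; real keys are huge)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)             # valid because we use generator g = n + 1
    return (n,), (n, lam, mu)        # (public key), (private key)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be invertible mod n
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

def h_add(pub, c1, c2):
    """Multiplying ciphertexts adds the underlying plaintexts."""
    (n,) = pub
    return c1 * c2 % (n * n)

# Each user encrypts a 1 if they bought both book A and book B, else a 0.
pub, priv = keygen()
flags = [1, 0, 1, 1, 0, 1]
ciphertexts = [encrypt(pub, f) for f in flags]

# The server tallies without ever decrypting an individual record.
total = ciphertexts[0]
for c in ciphertexts[1:]:
    total = h_add(pub, total, c)

print(decrypt(priv, total))          # prints 4: four buyers of both books
```

Only the holder of the private key learns the total; the tallying server sees nothing but ciphertexts. Gentry’s breakthrough extends this idea from addition alone to arbitrary analysis.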

As this person reminded me, with other sources of information one can figure out what a specific person did. That reminded me of the AOL debacle. With a little work, people were able to figure out who the anonymous subjects were.
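The AOL episode is an instance of what privacy researchers call a linkage attack: records stripped of names can still be re-identified by joining the fields that remain to some outside data source. A toy sketch, with all names and records invented for illustration:

```python
# "Anonymized" table: names removed, but quasi-identifiers (ZIP code,
# birth year, sex) remain alongside the sensitive field.
anonymized = [
    {"zip": "02138", "birth_year": 1975, "sex": "F", "query": "rare disease X"},
    {"zip": "90210", "birth_year": 1982, "sex": "M", "query": "divorce lawyer"},
]

# Public directory (think voter rolls) with the same quasi-identifiers.
public_directory = [
    {"name": "Alice Adams", "zip": "02138", "birth_year": 1975, "sex": "F"},
    {"name": "Bob Brown",   "zip": "90210", "birth_year": 1982, "sex": "M"},
    {"name": "Carol Clark", "zip": "60614", "birth_year": 1990, "sex": "F"},
]

def link(anon_rows, directory, keys=("zip", "birth_year", "sex")):
    """Re-identify anonymized rows whose quasi-identifiers match exactly
    one directory entry."""
    matches = []
    for row in anon_rows:
        hits = [d for d in directory
                if all(d[k] == row[k] for k in keys)]
        if len(hits) == 1:           # unique match => re-identified
            matches.append((hits[0]["name"], row["query"]))
    return matches

print(link(anonymized, public_directory))
# prints [('Alice Adams', 'rare disease X'), ('Bob Brown', 'divorce lawyer')]
```

Neither table is sensitive on its own; the harm comes from the join, which is why “we removed the names” is rarely a sufficient answer.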

All of which highlights that privacy is not binary. The cluster of information and the ability to analyze it seems often, if not always, to lead to problems about the use of information. So if this breakthrough allows a company or the government to claim that we should remain calm and all is well, we may want to remain calm but show how all may not be well. A few regulations about the use of the data, even if it is supposedly anonymous, might allow the beneficial aspects of the solution to thrive while limiting the harms that can occur.

Image: WikiCommons
By: Gwenda; License: Public Domain
(My apologies to CS folks if the image does not match the breakthrough’s area of encryption)