Category: Privacy


EU and US data privacy rights: six degrees of separation

The EU and the US have often engaged in a “tit for tat” exchange with regard to their respective systems of privacy protection. For example, EU academics have criticized US law as reflecting a “civil rights” approach that only affords data privacy rights to its own citizens, whereas US commentators have argued that privacy protection in the EU is less effective than its status as a fundamental right would suggest.

I am convinced that neither the EU nor the US properly understands the other's approach to data privacy. This is not surprising, given that a sophisticated understanding of the two legal systems requires language skills and comparative legal knowledge that few people have on either side of the Atlantic. The close cultural and historical ties between the EU and the US may also make mutual understanding more difficult, since concepts that seem similar on the surface may actually be quite different in reality.

I like to think of the difference between the EU and US concepts of data privacy rights as reflecting the differing epistemological views of the rationalist philosophers (e.g., Descartes) versus those of the empiricists (e.g., Hume and Locke) who influenced development of the legal systems in Europe and the US. EU data protection law derives normative rules based mainly on reason and deduction (as do the rationalists), while US privacy law bases legal rules more on evidence drawn from experience (like the empiricists). It is thus no surprise that the law and economics approach that is so influential in US jurisprudence is largely unknown in EU data protection law, while the more dogmatic, conceptual approach of EU law would seem strange to many US lawyers. An illustration is provided by the recent judgment of the Court of Justice of the European Union dealing with the “right to be forgotten” (C-131/12 Google Spain v AEPD and Mario Costeja Gonzalez), where the Court’s argumentation was largely self-referential and it took little notice of the practical implications of its judgment.

Here is a brief discussion of six important areas of difference between data privacy law in the EU and US, with a particular focus on their systems of constitutional rights:

Omnibus vs sectoral approach: The EU has an overarching legal framework for data privacy that covers all areas of data processing, based on EU constitutional law (e.g. the EU Charter of Fundamental Rights), the European Convention on Human Rights, the EU Data Protection Directive, national law, and other sources. In the US, there is no single legal source protecting data privacy at all levels, and legal regulation operates more at a sectoral level (e.g., focusing on specific areas such as children’s privacy, bank data etc).

Constitutional rights as the preferred method of protection: The US Supreme Court has interpreted the US Constitution to create a constitutional right to privacy in certain circumstances. However, from a US viewpoint, constitutional rights are only one vehicle to protect data privacy. Commentators have described the strengths of the US system for privacy protection as comprising a myriad of factors, including “an emergent privacy profession replete with a rapidly expanding body of knowledge, training, certification, conferences, publications, web-tools and professional development; self regulatory initiatives; civil society engagement; academic programs with rich, multidisciplinary research agendas; formidable privacy practices in leading law and accounting firms; privacy seals; peaking interest by the national press; robust enforcement by Federal and State regulators, and individual and class litigation”. In contrast, in the EU the key factor underlying data protection is its status as a fundamental right (see, e.g., Article 1 of the EU General Data Protection Regulation proposed by the European Commission in 2012).

Different conceptions of rights: In the US, a constitutional right must by definition derive from the US Constitution, while in the EU, fundamental rights are considered “general principles of law” that apply to all human beings within EU jurisdiction even if they do not derive from a specific constitutional source. The concept of fundamental rights in the EU is thus broader and more universal than that of constitutional rights in the US.

Positive and negative rights: In the US, privacy is generally protected as a “negative” right that obliges the government to refrain from taking actions that would violate constitutional rights. In the EU the state also has a constitutional obligation to affirmatively protect privacy rights (see the next point below).

Requirement of state action: US law protects constitutional rights only against government action, while in the EU the state also has a duty under certain circumstances to protect the privacy of individuals against violations by nongovernmental actors. An example from outside the area of privacy is provided by the decisions of the European Court of Human Rights (ECHR) in Case of Z and Others v. United Kingdom (2001) and the US Supreme Court in DeShaney v. Winnebago County (1989). Both cases involved the issue of whether the state has a duty under constitutional law to protect a child against abuse by its parents; in essence, the ECHR answered “yes” and the US Supreme Court answered “no”.

Requirement of “harm”: In the EU, the processing of personal data is generally prohibited absent a legal basis, and the CJEU has ruled that a data protection violation does not depend on “whether the information communicated is of a sensitive character or whether the persons concerned have been inconvenienced in any way” (para. 75 of the Rechnungshof case of 2003). In the US data processing is generally allowed unless it causes some harm or is otherwise restricted by law.

The EU and US systems of privacy rights have each developed in a democratic system of government based on the rule of law, and have been shaped by unique cultural and historical factors, so there is little point in debating which one is “better”. However, the fact that the two systems are anchored in their constitutional frameworks does not mean that practical measures cannot be found to bridge some of the differences between them; I am part of a group (the EU-US “Privacy Bridges” project) that is trying to do just that. The two systems may also influence each other and grow closer together over time. For example, the call for enactment of a “consumer privacy bill of rights” in the framework for protection of consumer privacy released by the White House in February 2012 seems to have been inspired in part by the status of data protection as a fundamental right in EU law.

The central role played by constitutional factors in the EU and US systems of data privacy rights means it is essential that more attention be given to the study of privacy law from a comparative constitutional perspective. For example, why is there so little opportunity in US law schools to study EU data protection law, and vice versa? Efforts must be increased on both sides of the Atlantic to better understand each other's systems for protecting data privacy rights.


The right to be forgotten and the global reach of EU data protection law

It is a pleasure to be a guest blogger on Concurring Opinions during the month of June. I will be discussing issues and developments relating to European data protection and privacy law, from an international perspective.

Let me begin with a recent case of the Court of Justice of the European Union (CJEU) that has received a great deal of attention. In its judgment of May 13 in the case C-131/12 Google Spain v AEPD and Mario Costeja Gonzalez, the Court recognized a “right to be forgotten” with regard to Internet search engine results based on the EU Data Protection Directive 95/46. This judgment by the highest court in the EU demonstrates that, while it is understandable that data protection law be construed broadly so that individuals are not deprived of protection, it is also necessary to specify some boundaries to define when it does not apply, if EU data protection law is not to become a kind of global law applicable to the entire Internet.

I have already summarized the case elsewhere, and here will only deal with its international jurisdictional aspects. It involved a claim brought by an individual in Spain against both the US parent company Google Inc, and its subsidiary Google Spain. The latter company, which has separate legal personality in Spain, acts as a commercial agent for the Google group in that country, in particular with regard to the sale of online advertising on the search engine web site www.google.com operated by Google Inc. via its servers in California.

The CJEU applied EU data protection law to the Google search engine under Article 4(1)(a) of the Directive, based on its finding that Google Spain was “inextricably linked” to the activities of Google Inc. by virtue of its sale of advertising space on the search engine site provided by Google Inc, even though Google Spain had no direct involvement in running the search engine. In short, the Court found that data processing by the search engine was “carried out in the context of the activities of an establishment of the controller” (i.e., Google Spain).

Since the Court applied EU law based on the activities of Google Spain, it did not discuss the circumstances under which EU data protection law can be applied to processing by data controllers established outside the EU under Article 4(1)(c) of the Directive (see paragraph 61 of the judgment), though the Court did emphasize the broad territorial applicability of EU data protection law (paragraph 54). Since the right to be forgotten has effect on search engines operated from computers located outside the EU, I consider this to be a case of extraterritorial jurisdiction (or extraterritorial application of EU law: I am aware of the distinction between applicable law and jurisdiction, but will use “jurisdiction” here as a shorthand to refer to both).

The Court did not limit its holding to claims brought by EU individuals, or to search engines operated under specific domains. An individual seeking to assert a right under the Directive need not be a citizen of an EU Member State, or have any particular connection with the EU, as long as the act of data processing on which his or her claim is based is subject to EU data protection law under Article 4. The Directive states that EU data protection law applies regardless of an individual’s nationality or residence (see Recital 2), and it is widely recognized that it may apply to entities outside the EU.

Thus, it seems that there would be no impediment under EU law, for example, to a Chinese citizen in China who uses a US-based Internet search engine with a subsidiary in the EU asserting the right to be forgotten against the EU subsidiary with regard to results generated by the search engine (note that Article 3(2) of the proposed EU General Data Protection Regulation would limit the possibility of asserting the right to be forgotten by individuals without any connection to the EU, since the application of EU data protection law would be limited to “data subjects residing in the Union”). Since only the US entity running the search engine would have the power to amend the search results, in effect the Chinese individual would be using EU data protection law as a vehicle to bring a claim against the US entity. The judgment therefore potentially applies EU data protection law to the entire Internet, a situation that was not foreseen when the Directive was enacted (as noted by the Court in paragraphs 69-70 of its 2003 Lindqvist judgment). It could lead to forum shopping and “right to be forgotten tourism” by individuals from around the world (much as UK libel laws have led to criticisms of “libel tourism”).

It is likely that the judgment will be interpreted more restrictively than this. For example, the UK Information Commissioner’s Office has announced that it will focus on “concerns linked to clear evidence of damage and distress to individuals” in enforcing the right to be forgotten. However, if one takes the position that Article 16 of the Treaty on the Functioning of the European Union (TFEU) has direct effect, then the ability of individual DPAs to limit the judgment to situations where some “damage or distress” has occurred seems legally doubtful (see paragraph 96, where the Court remarked that the right to be forgotten applies regardless of whether inclusion of an individual’s name in search results “causes prejudice”). Google has also recently announced a procedure for individuals to remove their names from search results under certain circumstances, and the way that online services deal with implementation of the judgment will be crucial in determining its territorial scope in practice.

In any event, the Court’s lack of concern with the territorial application of the judgment demonstrates an inward-looking attitude that fails to take into account the global nature of the Internet. It also increases the need for enactment of the proposed Regulation, in order to provide some territorial limits to the right to be forgotten.


Tribune of the People

Yesterday, the Boston Globe published my piece proposing the creation of a new national office dedicated to the protection of civil and human rights. I wanted to give a little more context to the idea here, beyond what the op-ed format allowed.

The basic idea is that we need a single national figure to instantiate rights and defend them consistently. For a variety of reasons, our existing political-legal structure fails to do this robustly and consistently. Enforcement of civil and human rights is fractured among multiple bodies with narrow mandates (U.S. Department of Justice, U.S. Commission on Civil Rights), all of which are captured by party politics. Those in the trenches know how much a general commitment to rights, along with which rights to promote, can vary wildly depending on which party controls the White House. Amicus briefs offer only an ad hoc solution, because such writings are driven by interest group concerns, which can be quite distorting, and don’t carry the kind of institutional weight that government briefs do (if they are read at all by judges, as opposed to their clerks). All of these factors reinforce the idiosyncratic way in which relevant law, including international and comparative law, is presented to jurists.

Historically, presidential agendas have at times aligned with the goal of promoting civil or human rights. But case study after case study underscores how challenging this can be. The bureaucratic politics, party dynamics, and reputational hurdles can be daunting to navigate for anyone who might want a president to take vigorous action on behalf of individual rights.

The idea I have proposed is adapted from one presented by a group of experts based at the University of Chicago in the immediate post-World War II period. At the time, the group–led by the visionary Robert Maynard Hutchins (Chancellor of the University of Chicago and former Dean of Yale Law School) and the fiery Giuseppe A. Borgese (professor of Italian literature)–hoped to inspire the creation of a world constitution. Many later found the overall project too utopian. But whatever one thinks of such strong internationalist proposals, the project allowed Americans to reflect deeply on what ailed American constitutional self-governance.

Perhaps the most penetrating critique that emerged from the working group’s many meetings involved separation of powers. They believed Americans had become slavish followers of Montesquieu, by insisting that institutional functions had to be strictly distinguished in the name of ensuring political liberty. But strict separation was a disaster: American politics had been consumed by paralyzing party politics and bureaucratic dysfunction, utterly incapable of dealing with urgent problems. Members of the Chicago group turned separation of powers orthodoxy on its head by offering reforms that retained some measure of institutional distinctiveness, but also dramatically increased the overlap of functions.  For example, they thought it wise to give a president explicit constitutional authority to initiate legislation and to serve as Chief Justice of the Supreme Court.

These mid-century reformers felt comfortable injecting greater energy into government in part because they had a strong belief in rights. The Tribune of the People idea encapsulates that commitment, as it was intended to be an office charged with defending “the natural and civil rights of individuals and groups against violation or neglect” by government. The Chicago group tried to design an office that would “neither be a duplicate or retainer of the President in office, a Vice-President in disguise, nor his systematic heckler and rival.”  A Tribune should be “truly the spokesman for real minorities, not the exponent of a second party.”

In a sense, other countries heeded this call, while Americans have largely forgotten the conversation. Today, there are a number of analogues worth studying. Countries that have a national figure dedicated to the enforcement of rights include Albania, Argentina, Armenia, Azerbaijan, Bulgaria, Colombia, Costa Rica, Estonia, France, Guatemala, Norway, Peru, Poland, Portugal, and Serbia. Each of those countries has a Defender of Rights, Commissioner for Human Rights, or Chancellor of Justice. There exists a U.N. High Commissioner for Human Rights, who recently weighed in on Oklahoma’s bungled execution by lethal injection but has no real power to influence rights development here.

So it seems it is well past the time to consider whether we are doing all that we can institutionally to protect civil and human rights.

 


The FTC and the New Common Law of Privacy

I’m pleased to announce that my article with Professor Woodrow Hartzog, The FTC and the New Common Law of Privacy, 114 Colum. L. Rev. 583 (2014), is now out in print.  You can download the final published version at SSRN.  Here’s the abstract:

One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite over fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States — more so than nearly any privacy statute or any common law tort.

In this Article, we contend that the FTC’s privacy jurisprudence is functionally equivalent to a body of common law, and we examine it as such. We explore how and why the FTC, and not contract law, came to dominate the enforcement of privacy policies. A common view of the FTC’s privacy jurisprudence is that it is thin, merely focusing on enforcing privacy promises. In contrast, a deeper look at the principles that emerge from FTC privacy “common law” demonstrates that the FTC’s privacy jurisprudence is quite thick. The FTC has codified certain norms and best practices and has developed some baseline privacy protections. Standards have become so specific they resemble rules. We contend that the foundations exist to develop this “common law” into a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, extends far beyond privacy policies, and involves a full suite of substantive rules that exist independently from a company’s privacy representations.


FTC v. Wyndham

The case has been quite long in the making. The opinion has been eagerly anticipated in privacy and data security circles. Fifteen years of regulatory actions have been hanging in the balance. We have waited and waited for the decision, and it has finally arrived.

The case is FTC v. Wyndham, and it is round one to the Federal Trade Commission (FTC).

Some Quick Background

For the past 15 years, the FTC has been one of the leading regulators of data security. It has brought actions against companies that fail to provide common security safeguards on personal data. The FTC has claimed that inadequate data security violates the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” In many cases, the FTC has alleged that inadequate data security is deceptive because it contradicts promises made in privacy policies that companies will protect people’s data with “good,” “adequate,” or “reasonable” security measures. And in a number of cases, the FTC has charged that inadequate data security is unfair because it creates actual or likely harm to consumers that is not reasonably avoidable and isn’t outweighed by other benefits.

For more background about the FTC’s privacy and data security enforcement, please see my article with Professor Woodrow Hartzog: The FTC and the New Common Law of Privacy, 114 Colum. L. Rev. 583 (2014). The article has just come out in print, and the final published version can be downloaded for free here.

Thus far, when faced with an FTC data security complaint, companies have settled. But finally one company, Wyndham Worldwide Corporation, challenged the FTC. A duel has been raging in court. The battle has been one of gigantic proportions because so much is at stake: Wyndham has raised fundamental challenges to the FTC’s power to regulate data security under the FTC Act.

The Court’s Opinion and Some Thoughts

1. The FTC’s Unfairness Authority

Wyndham argued that because Congress enacted several data security laws to regulate specific industries (FCRA, GLBA, HIPAA, COPPA), Congress did not intend for the FTC to be able to regulate data security more generally under FTC Act unfairness. The court rejected this argument, holding that “subsequent data-security legislation seems to complement—not preclude—the FTC’s authority.”

This holding seems quite reasonable, as the FTC Act was a very broad grant of authority to the FTC to regulate for consumer protection for most industries.



Facebook Privacy Dinosaur


I have yet to see it “in the wild,” but media outlets are reporting that Facebook has created a Privacy Dinosaur—a little helper that checks in on users in real-time to help ensure that they understand who will see their update or post.   Whether you think of this as “visceral notice,” a privacy “nudge,” or “obscurity by design,” suffice it to say that this development will be of interest to many a privacy scholar.


Schneier on the NSA, Google, Facebook Connection But What About Phones?

Bruce Schneier argues that we should not be fooled by Google, Facebook, and other companies that decry the recent NSA data grabs, because the nature of the Internet is surveillance; but what about phone companies? The press has jumped on the Obama administration’s forthcoming plan that

would end its systematic collection of data about Americans’ calling habits. The bulk records would stay in the hands of phone companies, which would not be required to retain the data for any longer than they normally would. And the N.S.A. could obtain specific records only with permission from a judge, using a new kind of court order.

The details are to come, but Schneier’s point about the structure of the system applies to phone companies too: “The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.”

There are a few things to parse here. OK, there are many things to parse, but a blog post has limits. First, Schneier’s point about Internet companies is different from his point about the government. His point is that yes, many companies have stepped up security to prevent some government spying, but because Google, Microsoft, Facebook, Yahoo, Apple, and almost any online company needs access to user data to run their businesses and make money, they have all built a “massive security vulnerability” “into [their] services by design.” When a company does that, then “by extension, the U.S. government, still has access to your communications.” Second, as Schneier points out, even if a company tried to plug the holes, the government won’t let that happen. Microsoft’s Skype service has built-in holes. The government has demanded encryption keys. And so it goes. And so we have a line on the phone problems.

The proposed changes may solve little, because so far the government has been able to use procedure and sheer spying outside procedure to grab data. The key will be what procedures are required and what penalties follow for failing to follow procedure. That said, as I argued regarding data security in January 2013, fixing data security (and by extension phone problems) will require several changes:

A key hurdle is identifying when any government may demand data. Transparent policies and possibly treaties could help better identify and govern under what circumstances a country may demand data from another. Countries might work with local industry to create data security and data breach laws with real teeth as a way to signal that poor data security has consequences. Countries should also provide more room for companies to challenge requests and reveal them so the global market has a better sense of what is being sought, which countries respect data protection laws, and which do not. Such changes would allow companies to compete based not only on their security systems but their willingness to defend customer interests. In return companies and computer scientists will likely have to design systems with an eye toward the ability to respond to government requests when those requests are proper. Such solutions may involve ways to tag data as coming from a citizen of a particular country. Here, issues of privacy and freedom arise, because the more one can tag and trace data, the more one can use it for surveillance. This possibility shows why increased transparency is needed, for at the very least it would allow citizens to object to pacts between governments and companies that tread on individual rights.

And here is the crux of Schneier’s ire: companies that say your data is safe are trying to protect their business, but as he sees it:

A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.

In that sense he thinks companies should lean on the government and openly state security is not available for now. Although he knows no company can say that, the idea that we should all acknowledge the problem and go after the government to change the game is correct.

The point is correct for Internet companies and for phone companies. We should not over-focus on phones and forget the other ways we can be watched.


Public Service Announcement for Google Glass Team

The Google Glass team has a post about the so-called myths about Google Glass, but the post fails to see what is happening around Glass. That is sad. Instead of addressing the issues head on, the post preaches to the faithful (just read the comments). As Nate Swanner put it, “We’re not sure posting something to the tech-centric Google+ crowd is really fixing the issues though.” Google and other tech companies trying to do something new will always face challenges, fear, and distrust. The sad part for me is when all sides line up and fail to engage with the real issues. Some have asked what I did when at Google. Part of the job was to present the technology, address concerns, and then see where all of us saw new, deep issues to come. I loved it, because I knew the technology was driven by high standards. The problems flowed from not explaining the tech. This post highlights talking past each other. Furthermore, the truly wonderful advances that might be possible with Glass are not discussed. That distresses me, as no one really wins in that approach. But I will show what is not great about the post as a possible public service announcement for the Glass Team and others in the tech space.

First, the post sets an absurd tone. It starts with “Mr. Rogers was a Navy SEAL. A tooth placed in soda will dissolve in 24 hours. Gators roam the sewers of big cities and Walt Disney is cryogenically frozen. These are just some of the most common and — let’s admit it — awesome urban myths out there.” Message: Glass critics are crazy people who buy into extreme outlying beliefs, not truth. And if you think I am incorrect, just look at this next statement: “Myths can be fun, but they can also be confusing or unsettling. And if spoken enough, they can morph into something that resembles fact. (Side note: did you know that people used to think that traveling too quickly on a train would damage the human body?).” Hah! We must be idiots who fear the future.

That said, maybe there are some myths that should be addressed. Having worked at Google, I can say that while I was there, technology was not done on a whim. I love that about the company and yes, the Glass Team fits here too. Furthermore, as those who study technology history know, even electricity faced myths (sometimes propagated by oil barons) as it took hold. Most of the Glass myths seem to turn on cultural fears about further disconnection from the world, an always-on or plugged-in life, and so on. But the post contradicts itself or thinks no one can tell when its myth-busting is self-serving or non-responsive.

On the “Glass is elitist” issue: Glass is for everyone, but high priced, and not ready for prime time. Huh? Look, if you want to say don’t panic, few people have it, that is OK and may be true. But when you also argue that Glass is not elitist because a range of people (not just tech-worshiping geeks) use it, and that the $1500 price tag is not about privilege because “In some cases, their work has paid for it. Others have raised money on Kickstarter and Indiegogo. And for some, it’s been a gift,” the argument is absurd. That a few, select people have found creative ways to obtain funds for Glass does not belie the elite pricing; it shows it.

The surveillance and privacy responses reveal a deeper issue. Yes, Glass is designed to signal when it is on. And yes that may limit surveillance, but barely. So too for the privacy issue. Check this one in full:

Myth 10 – Glass marks the end of privacy
When cameras first hit the consumer market in the late 19th century, people declared an end to privacy. Cameras were banned in parks, at national monuments and on beaches. People feared the same when the first cell phone cameras came out. Today, there are more cameras than ever before. In ten years there will be even more cameras, with or without Glass. 150+ years of cameras and eight years of YouTube are a good indicator of the kinds of photos and videos people capture–from our favorite cat videos to dramatic, perspective-changing looks at environmental destruction, government crackdowns, and everyday human miracles. 

ACH!!! Cameras proliferated and we have all sorts of great, new pictures so privacy is not harmed?!?!?! Swanner hits this one dead on:

Google suggests the same privacy fears brought up with Glass have been posed when both regular cameras and cell phone cameras were introduced in their day. What they don’t address is that it’s pretty easy to tell when someone is pointing a device they’re holding up at you; it’s much harder to tell when you’re being video taped while someone looks in your general direction. In a more intimate setting — say a bar — it’s pretty clear when someone is taping you. In an open space? Not so much.

So tech evangelists, I beg you, remember your fans are myriad and smart. Engage us fairly and you will often receive the love and support you seek. Insult people’s intelligence, and you are no better than those you would call Luddites.

Industrial Policy for Big Data

If you are childless, shop for clothing online, spend a lot on cable TV, and drive a minivan, data brokers are probably going to assume you’re heavier than average. We know that drug companies may use that data to recruit research subjects. Marketers could use the data to target ads for diet aids, or for types of food that research reveals to be particularly favored by people who are childless, shop for clothing online, spend a lot on cable TV, and drive a minivan.

We may also reasonably assume that the data can be put to darker purposes: for example, to offer credit on worse terms to the obese (stereotype-driven assessment of looks and abilities reigns from Silicon Valley to experimental labs). And perhaps someday it will be put to higher purposes: for example, identifying “obesity clusters” that might be linked to overexposure to some contaminant.

To summarize, let’s roughly rank these biosurveillance goals:

1) Curing illness or precursors to illness (identifying the obesity cluster; clinical trial recruitment)

2) Helping match those offering products to those wanting them (food marketing)

3) Promoting the classification and de facto punishment of certain groups (identifying a certain class as worse credit risks)

Trust is What Makes an Expectation of Privacy Reasonable

A few weeks ago, I defined trust as a favorable expectation as to the behavior of others. It refers to behavior that reduces uncertainty about others to levels that allow us to function alongside them. This is a sociological definition; it refers directly to interpersonal interaction. But how does trust develop between persons? And is that trust sufficiently reasonable to merit society’s and the state’s protection? What follows is part of an ongoing process of developing the theory of privacy-as-trust. It is by no means a final product just yet. I look forward to your comments.

Among intimates, trust may emerge over time as the product of an iterative exchange; this type of trust is relatively simple to understand and generally considered reasonable. Therefore, I will spend little time proving the reasonableness of trust among intimates.

But social scientists have found that trust among strangers can be just as strong and lasting as trust among intimates, even without the option of a repeated game. Trust among strangers emerges from two social bases—sharing a stigmatizing identity and sharing trustworthy friends. When these social elements are part of the context of a sharing incident among relative strangers, that context should be considered trustworthy and, thus, a reasonable place for sharing.

Traditionally, social scientists argued that trust developed rationally over time as part of an ongoing process of engagement with another: if a interacts with b over t=0 to t=99 and b acts in a trustworthy manner during those interactions, a is in a better position to predict that b will act in a trustworthy manner at t=100 than if a were basing its prediction for t=10 on interactions between t=0 and t=9. This prediction process is based on past behavior and assumes the trustor’s rationality as a predictor. Given those assumptions, it seems relatively easy to trust people with whom we interact often.

But trust also develops among strangers, none of whom have the benefit of repeated interaction to make fully informed and completely rational decisions about others. In fact, a decision to trust is never wholly rational; it is a probability determination. “Trust begins where knowledge ends,” as Niklas Luhmann said. What’s more, trust not only develops earlier than the probability model would suggest; in certain circumstances, trust is also strong early on, something that would seem impossible under a probability approach to trust. Sometimes, that early trust among strangers is the result of a cue of expertise, a medical or law degree, for example. But trust among lay strangers cannot be based on expertise or repeated interaction, and yet sociologists have observed that such trust is quite common.

I argue that reasonable trust among strangers emerges when one of two things happens: when strangers (1) share a stigmatizing social identity or (2) share a strong interpersonal network. In a sense, we transfer the trust we have in others who are very similar to a stranger to the stranger himself, or we use the stranger’s friends as a cue to his trustworthiness. Sociologists call this a transference process, whereby we take information about a known entity and extend it to an unknown entity. That is why trust via accreditation works: we transfer the trust we have in a degree from Harvard Law School, which we know, to one of its graduates, whom we do not. But transference can also work among persons. The sociologist Mark Granovetter has shown that economic actors transfer trust to an unknown party based on how embedded the new person is in a familiar and trusted social network. That is why networking is so important to getting ahead in any industry, and why recommendation letters from senior, well-regarded, or renowned colleagues are often most effective. This is the theory of social embeddedness: someone will do business with you, hire you as an employee, trade with you, or enter into a contract with you not only if you know a lot of the same people, but if you know a lot of the right people, the trustworthy people, the parties with whom others have a long, positive history. So it’s not just how many people you know; it’s who you know.

The same is true outside the economic context. The Pew Internet and American Life Project found that of those teenagers who use online social networks and have online “friends” they have never met offline, about 70% of those “friends” shared more than one mutual friend. Although Pew did not distinguish between types of mutual friends, the survey found that this was among the strongest factors associated with “friending” strangers online. More research is needed.

The other social factor that creates trust among strangers is sharing a salient in-group identity. But such trust transference is not simply a case of privileging familiarity, at best, or discrimination, at worst. Rather, sharing an identity with a group that may face discrimination or has a long history of fighting for equal rights is a proxy for one of the greatest sources of trust among persons: sharing values. At the outset, sharing an in-group identity is an easy shorthand for common values and, therefore, is a reasonable basis for trust among strangers.

Social scientists call transferring known in-group trust to an unknown member of that group category-driven processing, or category-based trust. But I argue that it cannot be just any group and any identity; trust is transferred when a stranger is a member of an in-group whose identity is defining or important for the trustor. For example, we do not see greater trust between men and other men, perhaps because the identity of manhood is not a salient in-group identity. More likely, the status of being a man is not an adequate cue that a male stranger shares your values. Trust forms and is maintained with persons with similar goals and values and a perceived interest in maintaining the trusting relationship. But it is sharing the values you find most important that breeds trust. For example, members of the LGBT community are, naturally, more likely to support the freedom to marry for gays and lesbians than any other group. Therefore, sharing an in-group identity that constitutes an important part of a trustor’s persona operates as a cue that the trustee shares values important to that group.

What makes these factors—salient in-group identity and social embeddedness—the right bases for establishing when trust among strangers is reasonable and, therefore, when it should be protected by society, is that the presence of these factors is what justifies our interpersonal actions. We look for these factors, we decide to share on these bases, and our expectations of privacy are based on them.