Category: Privacy (Electronic Surveillance)


On Privacy, Free Speech, & Related Matters – Richard Posner vs David Cole & Others

I’m exaggerating a little, but I think privacy is primarily wanted by people because they want to conceal information to fool others. — Richard Posner

Privacy is overrated. — Richard Posner (2013)

Much of what passes for the name of privacy is really just trying to conceal the disreputable parts of your conduct. Privacy is mainly about trying to improve your social and business opportunities by concealing the sorts of bad activities that would cause other people not to want to deal with you. — Richard Posner (2014)

This is the seventh installment in the “Posner on Posner” series of posts on Seventh Circuit Judge Richard Posner. The first installment can be found here, the second here, the third here, the fourth here, the fifth here, and the sixth one here.

Privacy has been on Richard Posner’s mind for more than three and a half decades. His views, as evidenced by the epigraph quotes above, have sparked debate in a variety of quarters, both academic and policy. In some ways those views seem oddly consistent with his persona – on the one hand, he is a very public man, as revealed by his many writings; on the other, he is a very private man, about whom we know little outside of the law save for a New Yorker profile published thirteen years ago.

On the scholarly side of the privacy divide, his writings include:

  1. “The Right of Privacy,” 12 Georgia Law Review 393 (1978)
  2. “Privacy, Secrecy, and Reputation,” 28 Buffalo Law Review 1 (1979)
  3. “The Uncertain Protection of Privacy by the Supreme Court,” 1979 Supreme Court Review 173
  4. “The Economics of Privacy,” 71 The American Economic Review 405 (1981)
  5. “Privacy,” Big Think (video clip, n.d.)
  6. “Privacy is Overrated,” New York Daily News, April 28, 2014

For a sampling of Judge Posner’s opinions on privacy, go here (and search “Privacy”).

(Note: Some links will only open in Firefox or Chrome.)

_____________________

Privacy – “What’s the big deal?”

Privacy interests should really have very little weight when you’re talking about national security. The world is in an extremely turbulent state – very dangerous. — Richard Posner (2014)

Recently, Georgetown Law Center held a conference entitled “Cybercrime 2020: The Future of Online Crime and Investigations” (full C-SPAN video here). In the course of that event, Judge Posner joined with others in government, private industry, and in the legal academy to discuss privacy, the Fourth Amendment, and free speech, among other things. A portion of the exchange between Judge Posner and Georgetown law professor David Cole was captured on video.

Judge Richard Posner


Scene: The Judge sitting in his office, speaking into a video conference camera — As he rubbed his fingers across the page and looked down, Posner began: “I was thinking, listening to Professor Cole, what exactly is the information that he’s worried about?” Posner paused, as if to set up his next point: “I have a cell phone – iPhone 6 – so if someone drained my cell phone, they would find a picture of my cat [laughter], some phone numbers, some e-mail addresses, some e-mail texts – so what’s the big deal?”

He then glanced up from the text he appeared to be reading and spoke with a grin: “Other people must have really exciting stuff. [laughter] Could they narrate their adulteries or something like that?” [laughter] He then waved his hands in the air before posing a question to the Georgetown Professor.

“What is it that you’re worrying about?” Posner asked as if truly puzzled.

At that point, Cole leaned into his microphone and looked up at the video screen bearing the Judge’s image next to case reports on his left and the American flag on his right.

Cole: “That’s a great question, Judge Posner.”

Professor Cole continued, adding his own humor to the mix: “And I, like you, have only pictures of cats on my phone. [laughter] And I’m not worried about anything from myself, but I’m worried for others.”

On a more substantive note, Cole added: “Your question, which goes back to your original statement, . . . value[s] . . . privacy unless you have something to hide. That is a very, very shortsighted way of thinking about the value [of privacy]. I agree with Michael Dreeben: Privacy is critical to a democracy; it is critical to political freedom; [and] it is critical to intimacy.”

The sex video hypothetical

And then with a sparkle in his spectacled eye, Cole stated: “Your question brings to mind a cartoon that was in the New Yorker, just in the last couple of issues, where a couple is sitting in bed and they have video surveillance cameras over each one of them trained down on the bed [Cole holds his hands above his head to illustrate the peering cameras]. And the wife says to the husband: ‘What are you worried about? If you’ve got nothing to hide, you’ve got nothing to fear.’”

Using the cartoon as his conceptual springboard, Cole moved on to his main point: “It seems to me that all of us, whether we are engaged in entirely cat-loving behavior, or whether we are going to psychiatrists, or abortion providers, or rape crisis centers, or Alcoholics Anonymous, or have an affair – all of us have something to hide. Even if you don’t have anything to hide, if you live a life that could be entirely transparent to the rest of the world, I still think the value of that life would be significantly diminished if it had to be transparent.”

Without missing a beat, Cole circled back to his video theme: “Again you could say, ‘if you’ve got nothing to hide, and you’re not engaged in criminal activity, let’s put video cameras in every person’s bedroom. And let’s just record the video, 24/7, in their bedroom. And we won’t look at it until we have reason to look at it. You shouldn’t be concerned because . . .’”

At this point, Posner interrupted: “Look, that’s a silly argument.”

Cole: “But it’s based on a New Yorker cartoon.”

The Judge was a tad miffed; he waved his right hand up and down in a dismissive way: “The sex video, that’s silly!” Waving his index finger to emphasize his point, he added: “What you should be saying, [what] you should be worried about [are] the types of revelation[s] of private conduct [that] discourage people from doing constructive things. You mentioned Alcoholics Anonymous . . .”

Cole: “I find sex to be a constructive thing.”

Obviously frustrated, Posner raised his palms up high in protest: “Let me finish, will you please?”

Cole: “Sure.”

Posner: “Look, that was a good example, right? Because you can have a person who has an alcohol problem, and so he goes to Alcoholics Anonymous, but he doesn’t want this to be known. If he can’t protect that secret,” Posner continued while pointing, “then he’s not going to go to Alcoholics Anonymous. That’s gonna be bad. That’s the sort of thing you should be concerned about rather than with sex videos. . . . [The Alcoholics Anonymous example] is a good example of the kind of privacy that should be protected.”


Professor David Cole

Privacy & Politics 

Meanwhile, the audience listened and watched, its attention now fixed on the Georgetown professor.

Cole: “Well, let me give you an example of sex privacy. I think we all have an interest in keeping our sex lives private. That’s why we close doors into our bedroom, etc. I think that’s a legitimate interest, and it’s a legitimate concern. And it’s not because you have something wrong you want to hide, but because intimacy requires privacy, number one. And number two: think about the government’s use of sex information with respect to Dr. Martin Luther King. They investigated him, intruded on his privacy by bugging his hotel rooms to learn [about his] affair, and then sought to use that – and the threat of disclosing that affair – to change his behavior. Why? Because he was an active, political, dissident fighting for justice.”

“We have a history of that,” he added. “Our country has a history of that; most countries have a history of that; and that’s another reason the government will use information – that doesn’t necessarily concern [it] – to target people who [it is] concerned about . . . – not just because of their alcohol problem [or] not just because of their sexual proclivities – but because they have political views and political ideas that the government doesn’t approve of.”

At this point the moderator invited the Judge to respond.

Posner: “What happened to cell phones? Do you have sex photos on your cell phones?”

Cole: “I imagine if Dr. Martin Luther King was having an affair in 2014, as opposed to the 1960s, his cell phone, his smart phone, would have quite a bit of evidence that would lead the government to that affair. He’d have call logs; he might have texts; he might have e-mails – all of that would be on the phone.”

The discussion then moved on to the other panelists.

Afterwards, and writing on the Volokh Conspiracy blog, Professor Orin Kerr, who was one of the participants in the conference, summed up his views of the exchange this way:

“I score this Cole 1, Posner 0.”

The First Amendment — Enter Glenn Greenwald


The Flawed Foundations of Article III Standing in Surveillance Cases (Part IV)

In my first three posts, I’ve opened a critical discussion of Article III standing for plaintiffs challenging government surveillance programs by introducing the 1972 Supreme Court case of Laird v. Tatum. In today’s post, I’ll examine the Court’s decision itself, which held that chilling effects arising “merely from the individual’s knowledge” of likely government surveillance did not constitute adequate injury to meet Article III standing requirements.

The Burger Court

It didn’t take long for courts to embrace Laird as a useful tool to dismiss cases where plaintiffs sought to challenge government surveillance programs, especially where the complaints rested on a First Amendment chill from political profiling by law enforcement. Some judges took exception to a broad interpretation of Laird, but objections largely showed up in dissenting opinions. For the most part, early interpretations of Laird sympathized with the government’s view of surveillance claims.



The Flawed Foundations of Article III Standing in Surveillance Cases (Part I)

I’m grateful for the opportunity to be a Concurring Opinions guest blogger this month. My posts will largely concentrate on the history of Article III standing for plaintiffs seeking to challenge government surveillance programs, and the flawed foundations upon which our federal standing jurisprudence rests. 


 


Then-Secretary of Defense Melvin Laird Sharing a Light Moment With President Nixon (Wikimedia Commons)

Plaintiffs seeking to challenge government surveillance programs have faced long odds in federal courts, due mainly to a line of Supreme Court cases that have set a very high bar to Article III standing in these cases. The origins of this jurisprudence can be directly traced to Laird v. Tatum, a 1972 case where the Supreme Court considered the question of who could sue the government over a surveillance program, holding in a 5-4 decision that chilling effects arising “merely from the individual’s knowledge” of likely government surveillance did not constitute adequate injury to meet Article III standing requirements. Federal courts have since relied upon Laird to deny standing to plaintiffs in surveillance cases, including the 2013 Supreme Court decision in Clapper v. Amnesty Int’l USA. But the facts behind Laird illuminate a number of important reasons why it is a weak basis for surveillance standing doctrine. It is therefore a worthwhile endeavor, I think, to reexamine Laird in a post-Snowden context in order to gain a deeper understanding of the Court’s flawed standing doctrine in surveillance cases.


How We’ll Know the Wikimedia Foundation is Serious About a Right to Remember

The “right to be forgotten” ruling in Europe has provoked a firestorm of protest from internet behemoths and some civil libertarians.* Few seem very familiar with classic privacy laws that govern automated data systems. Characteristic rhetoric comes from the Wikimedia Foundation:

The foundation which operates Wikipedia has issued new criticism of the “right to be forgotten” ruling, calling it “unforgivable censorship.” Speaking at the announcement of the Wikimedia Foundation’s first-ever transparency report in London, Wikipedia founder Jimmy Wales said the public had the “right to remember”.

I’m skeptical of this line of reasoning. But let’s take it at face value for now. How far should the right to remember extend? Consider the importance of automated ranking and rating systems in daily life: in contexts ranging from credit scores to terrorism risk assessments to Google search rankings. Do we have a “right to remember” all of these—to, say, fully review the record of automated processing years (or even decades) after it happens?

If the Wikimedia Foundation is serious about advocating a right to remember, it will apply the right to the key internet companies organizing online life for us. I’m not saying “open up all the algorithms now”—I respect the commercial rationale for trade secrecy. But years or decades after the key decisions are made, the value of the algorithms fades. Data involved could be anonymized. And just as Assange’s and Snowden’s revelations have been filtered through trusted intermediaries to protect vital interests, so too could an archive of Google or Facebook or Amazon ranking and rating decisions be limited to qualified researchers or journalists. Surely public knowledge about how exactly Google ranked and annotated Holocaust denial sites is at least as important as the right of a search engine to, say, distribute hacked medical records or credit card numbers.

So here’s my invitation to Lila Tretikov, Jimmy Wales, and Geoff Brigham: join me in calling for Google to commit to releasing a record of its decisions and data processing to an archive run by a third party, so future historians can understand how one of the most important companies in the world made decisions about how it ordered information. This is simply a bid to assure the preservation of (and access to) critical parts of our cultural, political, and economic history. Indeed, one of the first items I’d like to explore is exactly how Wikipedia itself was ranked so highly by Google at critical points in its history. Historians of Wikipedia deserve to know details about that part of its story. Don’t they have a right to remember?

*For more background, please note: we’ve recently hosted several excellent posts on the European Court of Justice’s interpretation of relevant directives. Though often called a “right to be forgotten,” the ruling in the Google Spain case might better be characterized as the application of due process, privacy, and anti-discrimination norms to automated data processing.


Privacy and Data Security Harms


I recently wrote a series of posts on LinkedIn exploring privacy and data security harms.  I thought I’d share them here, so I am re-posting all four of these posts together in one rather long post.

I. PRIVACY AND DATA SECURITY VIOLATIONS: WHAT’S THE HARM?

“It’s just a flesh wound.”

Monty Python and the Holy Grail

Suppose your personal data is lost, stolen, improperly disclosed, or improperly used. Are you harmed?

Suppose a company violates its privacy policy and improperly shares your data with another company. Does this cause a harm?

In most cases, courts say no. This is the case even when a company is acting negligently or recklessly. No harm, no foul.

Strong Arguments on Both Sides

Some argue that courts are ignoring serious harms caused when data is not properly protected and used.

Yet others view the harm as trivial or non-existent. For example, given the vast number of records compromised in data breaches, the odds that any one instance will result in identity theft or fraud are quite low.



The U.S. Supreme Court’s 4th Amendment and Cell Phone Case and Its Implications for the Third Party Doctrine

Today, the U.S. Supreme Court handed down a decision on two cases involving the police searching cell phones incident to arrest. The Court held 9-0 in an opinion written by Chief Justice Roberts that the Fourth Amendment requires a warrant to search a cell phone even after a person is placed under arrest.

The two cases are Riley v. California and United States v. Wurie, and they are decided in the same opinion with the title Riley v. California. The Court must have chosen to name the case after Riley to make things hard for criminal procedure experts, as there is a famous Fourth Amendment case called Florida v. Riley, 488 U.S. 445 (1989), which will now create confusion whenever someone refers to the “Riley case.”

Fourth Amendment Warrants

As a general rule, the government must obtain a warrant before engaging in a search. A warrant is an authorization by an independent judge or magistrate that is given to law enforcement officials after they properly justify their reason for conducting the search. There must be probable cause to search — a reasonable belief that the search will turn up evidence of a crime. The warrant requirement is one of the key protections of privacy because it ensures that the police can’t just search on a whim or a hunch. They must have a justified basis to search, and that must be proven before an independent decisionmaker (the judge or magistrate).

The Search Incident to Arrest Exception

But there are dozens of exceptions where government officials don’t need a warrant to conduct a search. One of these exceptions is a search incident to arrest. This exception allows police officers to search property on or near a person who has been arrested. In Chimel v. California, 395 U.S. 752 (1969), the Supreme Court held that the police could search the area within an arrestee’s immediate control. The rationale was that waiting to get a warrant might put police officers in danger in the event arrestees had dangerous items hidden on them, or that arrestees would have time to destroy evidence. In United States v. Robinson, 414 U.S. 218 (1973), the Court held that there doesn’t need to be identifiable danger in any specific case in order to justify a search incident to arrest. Police can engage in such a search as a categorical rule.

What About Searching Cell Phones Incident to Arrest?

In today’s Riley case, the Court examined whether the police are allowed to search data on a cell phone incident to arrest without first obtaining a warrant. The Court held that cell phone searches should be treated differently from typical searches incident to arrest because cell phones contain so much data and present a greater invasion of privacy than more limited searches for physical objects: “Cell phones, however, place vast quantities of personal information literally in the hands of individuals. A search of the information on a cell phone bears little resemblance to the type of brief physical search considered in Robinson.”



Schneier on the NSA, Google, Facebook Connection — But What About Phones?

Bruce Schneier argues that we should not be fooled by Google, Facebook, and other companies that decry the recent NSA data grabs, because the nature of the Internet is surveillance; but what about phone companies? The press has jumped on the Obama administration’s forthcoming plan that

would end its systematic collection of data about Americans’ calling habits. The bulk records would stay in the hands of phone companies, which would not be required to retain the data for any longer than they normally would. And the N.S.A. could obtain specific records only with permission from a judge, using a new kind of court order.

The details are to come, but Schneier’s point about the structure of the system applies to phone companies too: “The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.”

There are a few things to parse here. OK, there are many things to parse, but a blog post has limits. First, Schneier’s point about Internet companies is different from his point about the government. His point is that yes, many companies have stepped up security to prevent some government spying, but because Google, Microsoft, Facebook, Yahoo, Apple, and almost any online company needs access to user data to run their businesses and make money, they all have built a “massive security vulnerability” “into [their] services by design.” When a company does that, “by extension, the U.S. government, still has access to your communications.” Second, as Schneier points out, even if a company tried to plug the holes, the government won’t let that happen. Microsoft’s Skype service has built-in holes. The government has demanded encryption keys. And so it goes. And so we have a line on the phone problems.

The proposed changes may solve little, because so far the government has been able to use procedure and sheer spying outside procedure to grab data. The key will be what procedures are required and what penalties follow for failing to follow procedure. That said, as I argued regarding data security in January 2013, fixing data security (and by extension phone problems) will require several changes:

A key hurdle is identifying when any government may demand data. Transparent policies and possibly treaties could help better identify and govern under what circumstances a country may demand data from another. Countries might work with local industry to create data security and data breach laws with real teeth as a way to signal that poor data security has consequences. Countries should also provide more room for companies to challenge requests and reveal them so the global market has a better sense of what is being sought, which countries respect data protection laws, and which do not. Such changes would allow companies to compete based not only on their security systems but their willingness to defend customer interests. In return companies and computer scientists will likely have to design systems with an eye toward the ability to respond to government requests when those requests are proper. Such solutions may involve ways to tag data as coming from a citizen of a particular country. Here, issues of privacy and freedom arise, because the more one can tag and trace data, the more one can use it for surveillance. This possibility shows why increased transparency is needed, for at the very least it would allow citizens to object to pacts between governments and companies that tread on individual rights.

And here is the crux of Schneier’s ire: companies that are saying your data is safe, are trying to protect their business, but as he sees it:

A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.

In that sense he thinks companies should lean on the government and openly state security is not available for now. Although he knows no company can say that, the idea that we should all acknowledge the problem and go after the government to change the game is correct.

The point is correct for Internet companies and for phone companies. We should not over-focus on phones and forget the other ways we can be watched.

Industrial Policy for Big Data

If you are childless, shop for clothing online, spend a lot on cable TV, and drive a minivan, data brokers are probably going to assume you’re heavier than average. We know that drug companies may use that data to recruit research subjects.  Marketers could utilize the data to target ads for diet aids, or for types of food that research reveals to be particularly favored by people who are childless, shop for clothing online, spend a lot on cable TV, and drive a minivan.

We may also reasonably assume that the data can be put to darker purposes: for example, to offer credit on worse terms to the obese (stereotype-driven assessment of looks and abilities reigns from Silicon Valley to experimental labs). And perhaps some day it will be put to higher purposes: for example, identifying “obesity clusters” that might be linked to overexposure to some contaminant.

To summarize: let’s roughly rank these biosurveillance goals as: 

1) Curing illness or precursors to illness (identifying the obesity cluster; clinical trial recruitment)

2) Helping match those offering products to those wanting them (food marketing)

3) Promoting the classification and de facto punishment of certain groups (identifying a certain class as worse credit risks)



Protecting the Precursors to Speech and Action

The Constitution cares deeply about the precursors to speech. Calo wondered where my paper, Constitutional Limits on Surveillance: Associational Freedom in the Age of Data Hoarding, parts ways with Solove; it does and it doesn’t. On the one hand, I agree with Dan’s work and build it out. I of course look to the First Amendment as part of understanding what associational freedom is. I also want that understanding to inform criminal procedure. On the other hand, I think that the Fourth Amendment on its own has strong protection for associational freedom. I thus argue that we have missed that aspect of the Fourth Amendment. Furthermore, since Solove, and after him Kathy Strandburg, wrote about First Amendment connections to privacy, there has been some great work by Ashutosh Bhagwat, Tabatha Abu El-Haj, and John Inazu on the First Amendment and associational freedom. And Jason Mazzone started some of that work in 2002. I draw on that work to show what associational freedom is. Part of the problem is that when we look to how and why we protect associational freedom, we mistake what it is. That mistake means the Fourth Amendment becomes too narrow. We are stuck with protection only for speech acts and associations that speak.

As I put it in the paper:

Our current understanding of associational freedom is thin. We over-focus on speech and miss the importance of the precursors to speech—the ability to share, explore, accept, and reject ideas and then choose whether to speak. Recent work has shown, however, that the Constitution protects many activities that are not speech, for example petition and assembly, because the activities enable self-governance and foster the potential for speech. That work has looked to the First Amendment. I show that these concerns also appear in Fourth Amendment jurisprudence and work to protect us from surveillance regardless of whether the acts are speech or whether they are private.

In that sense I give further support to work by Julie Cohen, Neil Richards, Spiros Simitis, and Solove by explaining that all the details that many have identified as needing protection (e.g., our ability to play; protection from surveillance of what we read and watch) align with core ideals of associational freedom. This approach thus offers a foundation for calls to protect us from law enforcement’s ability to probe our reading, meeting, and gathering habits—our associational freedom—even though those acts are not private or speech, and it explains what the constitutional limits on surveillance in the age of data hoarding must be.


It’s About Data Hoards – My New Paper Explains Why Data Escrow Won’t Protect Privacy

A core issue in U.S. v. Jones has nothing to do with connecting “trivial” bits of data to see a mosaic; it is about the simple ability to have a perfect map of everywhere we go, with whom we meet, what we read, and more. It is about the ability to look backward and see all that information with little to no oversight, and to do so, in a way, forever. That is why calls to shift the vast information grabs to a third party are useless. The move changes little given the way the government already demands information from private data hoards. Yes, not having immediate access to the information is a start. That might mitigate mischief. But clear procedures are needed before that separation can be meaningful. That is why telecom and tech giants should be wary of “The central pillar of Obama’s plan to overhaul the surveillance programs [which] calls for shifting storage of Americans’ phone data from the government to telecom companies or an independent third party.” It does not solve the problem of data hoards.

As I argue in my new article Constitutional Limits on Surveillance: Associational Freedom in the Age of Data Hoarding:

Put differently, the tremendous power of the state to compel action combined with what the state can do with technology and data creates a moral hazard. It is too easy to harvest, analyze, and hoard data and then step far beyond law enforcement goals into acts that threaten civil liberties. The amount of data available to law enforcement creates a type of honey pot—a trap that lures and tempts government to use data without limits. Once the government has obtained data, it is easy and inexpensive to store and search when compared to storing the same data in an analog format. The data is not deleted or destroyed; it is hoarded. That vat of temptation never goes away. The lack of rules on law enforcement’s use of the data explains why it has an incentive to gather data, keep it, and increase its stores. After government has its data hoard, the barriers to dragnet and general searches—ordinarily unconstitutional—are gone. If someone wishes to dive into the data and see whether embarrassing, or even blackmail-worthy, data is available, they can do so at their discretion; and in some cases law enforcement has said they should pursue such tactics. These temptations are precisely why we must rethink how we protect associational freedom in the age of data hoarding. By understanding what associational freedom is, what threatens it, and how we have protected it in the past, we will find that there is a way to protect it now and in the future.