Category: Privacy (Electronic Surveillance)

The Flawed Foundations of Article III Standing in Surveillance Cases (Part IV)

In my first three posts, I’ve opened a critical discussion of Article III standing for plaintiffs challenging government surveillance programs by introducing the 1972 Supreme Court case of Laird v. Tatum. In today’s post, I’ll examine the Court’s decision itself, which held that chilling effects arising “merely from the individual’s knowledge” of likely government surveillance did not constitute adequate injury to meet Article III standing requirements.

The Burger Court

It didn’t take long for courts to embrace Laird as a useful tool to dismiss cases where plaintiffs sought to challenge government surveillance programs, especially where the complaints rested on a First Amendment chill from political profiling by law enforcement. Some judges took exception to a broad interpretation of Laird, but objections largely showed up in dissenting opinions. For the most part, early interpretations of Laird sympathized with the government’s view of surveillance claims.

The Flawed Foundations of Article III Standing in Surveillance Cases (Part I)

I’m grateful for the opportunity to be a Concurring Opinions guest blogger this month. My posts will largely concentrate on the history of Article III standing for plaintiffs seeking to challenge government surveillance programs, and the flawed foundations upon which our federal standing jurisprudence rests. 


 

Then-Secretary of Defense Melvin Laird Sharing a Light Moment With President Nixon (Wikimedia Commons)

Plaintiffs seeking to challenge government surveillance programs have faced long odds in federal courts, due mainly to a line of Supreme Court cases that have set a very high bar to Article III standing in these cases. The origins of this jurisprudence can be directly traced to Laird v. Tatum, a 1972 case where the Supreme Court considered the question of who could sue the government over a surveillance program, holding in a 5-4 decision that chilling effects arising “merely from the individual’s knowledge” of likely government surveillance did not constitute adequate injury to meet Article III standing requirements. Federal courts have since relied upon Laird to deny standing to plaintiffs in surveillance cases, including the 2013 Supreme Court decision in Clapper v. Amnesty Int’l USA. But the facts behind Laird illuminate a number of important reasons why it is a weak basis for surveillance standing doctrine. It is therefore a worthwhile endeavor, I think, to reexamine Laird in a post-Snowden context in order to gain a deeper understanding of the Court’s flawed standing doctrine in surveillance cases.

How We’ll Know the Wikimedia Foundation is Serious About a Right to Remember

The “right to be forgotten” ruling in Europe has provoked a firestorm of protest from internet behemoths and some civil libertarians.* Few seem very familiar with classic privacy laws that govern automated data systems. Characteristic rhetoric comes from the Wikimedia Foundation:

The foundation which operates Wikipedia has issued new criticism of the “right to be forgotten” ruling, calling it “unforgivable censorship.” Speaking at the announcement of the Wikimedia Foundation’s first-ever transparency report in London, Wikipedia founder Jimmy Wales said the public had the “right to remember”.

I’m skeptical of this line of reasoning. But let’s take it at face value for now. How far should the right to remember extend? Consider the importance of automated ranking and rating systems in daily life: in contexts ranging from credit scores to terrorism risk assessments to Google search rankings. Do we have a “right to remember” all of these—to, say, fully review the record of automated processing years (or even decades) after it happens?

If the Wikimedia Foundation is serious about advocating a right to remember, it will apply the right to the key internet companies organizing online life for us. I’m not saying “open up all the algorithms now”—I respect the commercial rationale for trade secrecy. But years or decades after the key decisions are made, the value of the algorithms fades. Data involved could be anonymized. And just as Assange’s and Snowden’s revelations have been filtered through trusted intermediaries to protect vital interests, so too could an archive of Google or Facebook or Amazon ranking and rating decisions be limited to qualified researchers or journalists. Surely public knowledge about how exactly Google ranked and annotated Holocaust denial sites is at least as important as the right of a search engine to, say, distribute hacked medical records or credit card numbers.

So here’s my invitation to Lila Tretikov, Jimmy Wales, and Geoff Brigham: join me in calling for Google to commit to releasing a record of its decisions and data processing to an archive run by a third party, so future historians can understand how one of the most important companies in the world made decisions about how it ordered information. This is simply a bid to assure the preservation of (and access to) critical parts of our cultural, political, and economic history. Indeed, one of the first items I’d like to explore is exactly how Wikipedia itself was ranked so highly by Google at critical points in its history. Historians of Wikipedia deserve to know details about that part of its story. Don’t they have a right to remember?

*For more background, please note: we’ve recently hosted several excellent posts on the European Court of Justice’s interpretation of relevant directives. Though often called a “right to be forgotten,” the ruling in the Google Spain case might better be characterized as the application of due process, privacy, and anti-discrimination norms to automated data processing.

Privacy and Data Security Harms

I recently wrote a series of posts on LinkedIn exploring privacy and data security harms.  I thought I’d share them here, so I am re-posting all four of these posts together in one rather long post.

I. PRIVACY AND DATA SECURITY VIOLATIONS: WHAT’S THE HARM?

“It’s just a flesh wound.”

Monty Python and the Holy Grail

Suppose your personal data is lost, stolen, improperly disclosed, or improperly used. Are you harmed?

Suppose a company violates its privacy policy and improperly shares your data with another company. Does this cause a harm?

In most cases, courts say no. This is the case even when a company is acting negligently or recklessly. No harm, no foul.

Strong Arguments on Both Sides

Some argue that courts are ignoring serious harms caused when data is not properly protected and used.

Yet others view the harm as trivial or non-existent. For example, given the vast number of records compromised in data breaches, the odds that any one instance will result in identity theft or fraud are quite low.

The U.S. Supreme Court’s 4th Amendment and Cell Phone Case and Its Implications for the Third Party Doctrine

Today, the U.S. Supreme Court handed down a decision on two cases involving the police searching cell phones incident to arrest. The Court held 9-0 in an opinion written by Chief Justice Roberts that the Fourth Amendment requires a warrant to search a cell phone even after a person is placed under arrest.

The two cases are Riley v. California and United States v. Wurie, and they are decided in the same opinion under the title Riley v. California. The Court must have chosen to name the case after Riley to make things hard for criminal procedure experts, as there is a famous Fourth Amendment case called Florida v. Riley, 488 U.S. 445 (1989), which will now create confusion whenever someone refers to the “Riley case.”

Fourth Amendment Warrants

As a general rule, the government must obtain a warrant before engaging in a search. A warrant is an authorization by an independent judge or magistrate that is given to law enforcement officials after they properly justify their reason for conducting the search. There must be probable cause to search — a reasonable belief that the search will turn up evidence of a crime. The warrant requirement is one of the key protections of privacy because it ensures that the police can’t just search on a whim or a hunch. They must have a justified basis to search, and that basis must be proven before an independent decisionmaker (the judge or magistrate).

The Search Incident to Arrest Exception

But there are dozens of exceptions where government officials don’t need a warrant to conduct a search. One of these exceptions is a search incident to arrest. This exception allows police officers to search property on or near a person who has been arrested. In Chimel v. California, 395 U.S. 752 (1969), the Supreme Court held that the police could search the area within an arrestee’s immediate control. The rationale was that waiting to get a warrant might put police officers in danger in the event arrestees had dangerous items hidden on them, or that arrestees would have time to destroy evidence. In United States v. Robinson, 414 U.S. 218 (1973), the Court held that there doesn’t need to be an identifiable danger in any specific case in order to justify a search incident to arrest. Police can engage in such a search as a categorical rule.

What About Searching Cell Phones Incident to Arrest?

In today’s Riley case, the Court examined whether the police are allowed to search data on a cell phone incident to arrest without first obtaining a warrant. The Court held that cell phone searches should be treated differently from typical searches incident to arrest because cell phones contain so much data and present a greater invasion of privacy than more limited searches for physical objects: “Cell phones, however, place vast quantities of personal information literally in the hands of individuals. A search of the information on a cell phone bears little resemblance to the type of brief physical search considered in Robinson.”

Schneier on the NSA, Google, Facebook Connection But What About Phones?

Bruce Schneier argues that we should not be fooled by Google, Facebook, and other companies that decry the recent NSA data grabs, because the nature of the Internet is surveillance; but what about phone companies? The press has jumped on the Obama administration’s forthcoming plan that

would end its systematic collection of data about Americans’ calling habits. The bulk records would stay in the hands of phone companies, which would not be required to retain the data for any longer than they normally would. And the N.S.A. could obtain specific records only with permission from a judge, using a new kind of court order.

The details are to come, but Schneier’s point about the structure of the system applies to phone companies too: “The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.”

There are a few things to parse here. OK, there are many things to parse, but a blog post has limits. First, Schneier’s point about Internet companies is different from his point about the government. His point is that, yes, many companies have stepped up security to prevent some government spying, but because Google, Microsoft, Facebook, Yahoo, Apple, and almost any online company needs access to user data to run their businesses and make money, they have all built a “massive security vulnerability” “into [their] services by design.” When a company does that, “by extension, the U.S. government, still has access to your communications.” Second, as Schneier points out, even if a company tried to plug the holes, the government won’t let that happen. Microsoft’s Skype service has built-in holes. The government has demanded encryption keys. And so it goes. And so we have a line on the phone problems.

The proposed changes may solve little, because so far the government has been able to use procedure and sheer spying outside procedure to grab data. The key will be what procedures are required and what penalties follow for failing to follow procedure. That said, as I argued regarding data security in January 2013, fixing data security (and by extension phone problems) will require several changes:

A key hurdle is identifying when any government may demand data. Transparent policies and possibly treaties could help better identify and govern under what circumstances a country may demand data from another. Countries might work with local industry to create data security and data breach laws with real teeth as a way to signal that poor data security has consequences. Countries should also provide more room for companies to challenge requests and reveal them so the global market has a better sense of what is being sought, which countries respect data protection laws, and which do not. Such changes would allow companies to compete based not only on their security systems but their willingness to defend customer interests. In return companies and computer scientists will likely have to design systems with an eye toward the ability to respond to government requests when those requests are proper. Such solutions may involve ways to tag data as coming from a citizen of a particular country. Here, issues of privacy and freedom arise, because the more one can tag and trace data, the more one can use it for surveillance. This possibility shows why increased transparency is needed, for at the very least it would allow citizens to object to pacts between governments and companies that tread on individual rights.

And here is the crux of Schneier’s ire: companies that are saying your data is safe, are trying to protect their business, but as he sees it:

A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.

In that sense he thinks companies should lean on the government and openly state security is not available for now. Although he knows no company can say that, the idea that we should all acknowledge the problem and go after the government to change the game is correct.

The point is correct for Internet companies and for phone companies. We should not over-focus on phones and forget the other ways we can be watched.

Industrial Policy for Big Data

If you are childless, shop for clothing online, spend a lot on cable TV, and drive a minivan, data brokers are probably going to assume you’re heavier than average. We know that drug companies may use that data to recruit research subjects.  Marketers could utilize the data to target ads for diet aids, or for types of food that research reveals to be particularly favored by people who are childless, shop for clothing online, spend a lot on cable TV, and drive a minivan.

We may also reasonably assume that the data can be put to darker purposes: for example, to offer credit on worse terms to the obese (stereotype-driven assessment of looks and abilities reigns from Silicon Valley to experimental labs). And perhaps some day it will be put to higher purposes: for example, identifying “obesity clusters” that might be linked to overexposure to some contaminant.

To summarize: let’s roughly rank these biosurveillance goals as: 

1) Curing illness or precursors to illness (identifying the obesity cluster; clinical trial recruitment)

2) Helping match those offering products to those wanting them (food marketing)

3) Promoting the classification and de facto punishment of certain groups (identifying a certain class as worse credit risks)

Protecting the Precursors to Speech and Action

The Constitution cares deeply about the precursors to speech. Calo wondered where my paper, Constitutional Limits on Surveillance: Associational Freedom in the Age of Data Hoarding, parts ways with Solove; it does and it doesn’t. On the one hand, I agree with Dan’s work and build on it. I of course look to the First Amendment as part of understanding what associational freedom is. I also want that understanding to inform criminal procedure. On the other hand, I think that the Fourth Amendment on its own has strong protection for associational freedom. I thus argue that we have missed that aspect of the Fourth Amendment. Furthermore, since Solove, and after him Kathy Strandburg, wrote about First Amendment connections to privacy, there has been some great work by Ashutosh Bhagwat, Tabatha Abu El-Haj, and John Inazu on the First Amendment and associational freedom. And Jason Mazzone started some of that work in 2002. I draw on that work to show what associational freedom is. Part of the problem is that when we look to how and why we protect associational freedom, we mistake what it is. That mistake means the Fourth Amendment becomes too narrow. We are stuck with protection only for speech acts and associations that speak.

As I put it in the paper:

Our current understanding of associational freedom is thin. We over-focus on speech and miss the importance of the precursors to speech—the ability to share, explore, accept, and reject ideas and then choose whether to speak. Recent work has shown, however, that the Constitution protects many activities that are not speech, for example petition and assembly, because the activities enable self-governance and foster the potential for speech. That work has looked to the First Amendment. I show that these concerns also appear in Fourth Amendment jurisprudence and work to protect us from surveillance regardless of whether the acts are speech or whether they are private.

In that sense I give further support to work by Julie Cohen, Neil Richards, Spiros Simitis, and Solove by explaining that all the details that many have identified as needing protection (e.g., our ability to play; protection from surveillance of what we read and watch) align with core ideals of associational freedom. This approach thus offers a foundation for calls to protect us from law enforcement’s ability to probe our reading, meeting, and gathering habits—our associational freedom—even though those acts are not private or speech, and it explains what the constitutional limits on surveillance in the age of data hoarding must be.

It’s About Data Hoards – My New Paper Explains Why Data Escrow Won’t Protect Privacy

A core issue in U.S. v. Jones has nothing to do with connecting “trivial” bits of data to see a mosaic; it is about the simple ability to have a perfect map of everywhere we go, with whom we meet, what we read, and more. It is about the ability to look backward and see all that information with little to no oversight and, in a way, forever. That is why calls to shift the vast information grabs to a third party are useless. The move changes little given the way the government already demands information from private data hoards. Yes, not having immediate access to the information is a start. That might mitigate mischief. But clear procedures are needed before that separation can be meaningful. That is why telecom and tech giants should be wary of “[t]he central pillar of Obama’s plan to overhaul the surveillance programs,” which “calls for shifting storage of Americans’ phone data from the government to telecom companies or an independent third party.” It does not solve the problem of data hoards.

As I argue in my new article Constitutional Limits on Surveillance: Associational Freedom in the Age of Data Hoarding:

Put differently, the tremendous power of the state to compel action combined with what the state can do with technology and data creates a moral hazard. It is too easy to harvest, analyze, and hoard data and then step far beyond law enforcement goals into acts that threaten civil liberties. The amount of data available to law enforcement creates a type of honey pot—a trap that lures and tempts government to use data without limits. Once the government has obtained data, it is easy and inexpensive to store and search when compared to storing the same data in an analog format. The data is not deleted or destroyed; it is hoarded. That vat of temptation never goes away. The lack of rules on law enforcement’s use of the data explains why it has an incentive to gather data, keep it, and increase its stores. After government has its data hoard, the barriers to dragnet and general searches—ordinarily unconstitutional—are gone. If someone wishes to dive into the data and see whether embarrassing, or even blackmail-worthy, data is available, they can do so at their discretion; and in some cases law enforcement has said they should pursue such tactics. These temptations are precisely why we must rethink how we protect associational freedom in the age of data hoarding. By understanding what associational freedom is, what threatens it, and how we have protected it in the past, we will find that there is a way to protect it now and in the future.

Atrocious Privacy Invasion: Non-Consensual Videotaping of Sex Indicted in NY

Criminalizing privacy invasions has a long history. In their groundbreaking article The Right to Privacy, published in 1890, Samuel Warren and Louis Brandeis argued that “[i]t would doubtless be desirable that the privacy of the individual should receive the added protection of the criminal law.” Since that time, lawmakers have banned the non-consensual recording of individuals in a state of undress in contexts where they have a reasonable expectation of privacy. New York’s unlawful surveillance law, for instance, prohibits use of an imaging device to secretly record or broadcast another person undressing or having sex for the purpose of degrading that person in cases where the person had a reasonable expectation of privacy.

In November 2013, a former New York private wealth adviser was indicted on nineteen counts of unlawful surveillance and attempted unlawful surveillance for secretly taping himself having sex with different women without their consent. The illegal tapings allegedly occurred over the course of a year and apparently were numerous.

The New York Post talked to Daniel Parker, an attorney for one of the victims, who explained that the man posted the illegal videos on Internet sites. According to Parker, the man “used an elaborate system of surveillance using multiple devices in both his bedroom and their homes.” In other words, the man not only had various cameras in his own bedroom to tape himself having sex with women who had no idea and never consented, but he also secretly taped himself having sex with the women in their homes. Parker explained that the man “left a trail and it was on YouTube and Vimeo.” What were those hidden devices? The man apparently used a hidden camera, a web cam, and a stealth phone app to film the women engaged in various sexual acts. According to Parker, the man installed a hidden camera in the bookshelf of his East 69th Street apartment.

The victims delivered the video footage to the Manhattan District Attorney’s Office, prompting the investigation. Kudos to prosecutor Siobahn Carty for bringing the case, though my sense is that it took the victims considerable energy and time to convince law enforcement to take their case seriously and to understand the technology used to perpetrate the egregious privacy violations. Technical ignorance is common among law enforcement and, well, common for many people. Troubling cultural attitudes and an “I don’t get the tech” response are notorious reactions to different forms of harassment, including the non-consensual taping of individuals in their most intimate moments. I will report more on the case as I get hold of the indictment.