Category: Privacy (National Security)

The Flawed Foundations of Article III Standing in Surveillance Cases (Part IV)

In my first three posts, I opened a critical discussion of Article III standing for plaintiffs challenging government surveillance programs by introducing the 1972 Supreme Court case of Laird v. Tatum, in which the Court held that chilling effects arising “merely from the individual’s knowledge” of likely government surveillance did not constitute adequate injury to meet Article III standing requirements. In today’s post, I’ll examine how courts interpreted Laird in the years after the decision.

The Burger Court

It didn’t take long for courts to embrace Laird as a useful tool for dismissing cases in which plaintiffs sought to challenge government surveillance programs, especially where the complaints rested on a First Amendment chill from political profiling by law enforcement. Some judges took exception to such a broad interpretation of Laird, but their objections largely showed up in dissenting opinions. For the most part, early interpretations of Laird sympathized with the government’s view of surveillance claims.

The Flawed Foundations of Article III Standing in Surveillance Cases (Part III)

In my first two posts, I opened a critical discussion of Article III standing for plaintiffs challenging government surveillance programs by introducing the 1972 Supreme Court case of Laird v. Tatum. In today’s post, I’ll examine the Court’s decision itself, which held that chilling effects arising “merely from the individual’s knowledge” of likely government surveillance did not constitute adequate injury to meet Article III standing requirements.

Then-Secretary of Defense Melvin Laird Sharing a Light Moment With President Nixon


The Flawed Foundations of Article III Standing in Surveillance Cases (Part I)

I’m grateful for the opportunity to be a Concurring Opinions guest blogger this month. My posts will largely concentrate on the history of Article III standing for plaintiffs seeking to challenge government surveillance programs, and the flawed foundations upon which our federal standing jurisprudence rests. 


 

Then-Secretary of Defense Melvin Laird Sharing a Light Moment With President Nixon (Wikimedia Commons)

Plaintiffs seeking to challenge government surveillance programs have faced long odds in federal courts, due mainly to a line of Supreme Court cases that have set a very high bar to Article III standing in these cases. The origins of this jurisprudence can be directly traced to Laird v. Tatum, a 1972 case where the Supreme Court considered the question of who could sue the government over a surveillance program, holding in a 5-4 decision that chilling effects arising “merely from the individual’s knowledge” of likely government surveillance did not constitute adequate injury to meet Article III standing requirements. Federal courts have since relied upon Laird to deny standing to plaintiffs in surveillance cases, including the 2013 Supreme Court decision in Clapper v. Amnesty Int’l USA. But the facts behind Laird illuminate a number of important reasons why it is a weak basis for surveillance standing doctrine. It is therefore a worthwhile endeavor, I think, to reexamine Laird in a post-Snowden context in order to gain a deeper understanding of the Court’s flawed standing doctrine in surveillance cases.

The data retention judgment, the Irish Facebook case, and the future of EU data transfer regulation

On April 8 the Court of Justice of the European Union (CJEU) announced its judgment in Joined Cases C-293/12 and C-594/12, Digital Rights Ireland. Based on EU fundamental rights law, the Court invalidated the EU Data Retention Directive, which obliged telecommunications service providers and Internet service providers in the EU to retain telecommunications metadata and make it available to European law enforcement authorities under certain circumstances. The case illustrates both the key role that the EU Charter of Fundamental Rights plays in EU data protection law and the CJEU’s apparent indifference to the impact of its recent data protection rulings on other fundamental rights. In addition, the recent referral to the CJEU by an Irish court of a case involving data transfers by Facebook under the EU-US Safe Harbor holds the potential to further tighten EU rules for data transfers, and to reduce the possibility of EU-wide harmonization in this area.

In considering the implications of Digital Rights Ireland for the regulation of international data transfers, I would like to focus on a passage occurring towards the end of the judgment, where the Court criticizes the Data Retention Directive as follows (paragraph 68):

“[I]t should be added that that directive does not require the data in question to be retained within the European Union, with the result that it cannot be held that the control, explicitly required by Article 8(3) of the Charter, by an independent authority of compliance with the requirements of protection and security, as referred to in the two previous paragraphs, is fully ensured. Such a control, carried out on the basis of EU law, is an essential component of the protection of individuals with regard to the processing of personal data…”

This statement caught many observers by surprise. The CJEU is famous for the concise and self-referential style of its opinions, and the case revolved around the legality of the Directive in general, not around whether data stored under it could be transferred outside the EU. This issue was also not raised in the submission of the case to the Court, and first surfaced in the advisory opinion issued by one of the Court’s advocates-general prior to the judgment (see paragraph 78 of that Opinion).

In US constitutional law, the question “does the constitution follow the flag?” generally arises in the context of whether the Fourth Amendment to the US Constitution applies to government activity overseas (e.g., when US law enforcement abducts a fugitive abroad and brings him back to the US). In the context discussed here, the question is rather whether EU data protection law applies to personal data as they are transferred outside the EU, i.e., “whether the EU flag follows EU data”. As I explained in my book on the regulation of transborder data flows that was published last year by Oxford University Press, in many cases EU data protection law remains applicable to personal data transferred to other regions. For example, in introducing its proposed reform of EU data protection law, the European Commission stated in 2012 that one of its key purposes is to “ensure a level of protection for data transferred out of the EU similar to that within the EU”.

EU data protection law is based on constitutional provisions protecting fundamental rights (e.g., Article 8 of the EU Charter of Fundamental Rights), and the CJEU has emphasized in cases involving the independence of the data protection authorities (DPAs) in Austria, Germany, and Hungary that control of data processing by an independent DPA is an essential element of the fundamental right to data protection (without ever discussing independent supervision in the context of data processing outside the EU). In light of those previous cases, the logical consequence of the Court’s statement in Digital Rights Ireland would seem to be that fundamental rights law requires oversight of data processing by the DPAs also with regard to the data of EU individuals that are transferred to other regions.

This conclusion raises a number of questions. For example, how can it be reconciled with the fact that the enforcement jurisdiction of the DPAs ends at the borders of their respective EU Member States (see Article 28 of the EU Data Protection Directive 95/46)? If supervision by the EU DPAs extends already by operation of law to the storage of EU data in other regions, then why do certain EU legal mechanisms in addition force the parties to data transfers to explicitly accept the extraterritorial regulatory authority of the DPAs (e.g., Clause 5(e) of the EU standard contractual clauses of 2010)? And how does the Court’s statement fit with its 2003 Lindqvist judgment, where it held that EU data protection law should not be interpreted to apply to the entire Internet (see paragraph 69 of that judgment)? The offhand way in which the Court referred to DPA supervision over data processing outside the EU in the Digital Rights Ireland judgment gives the impression that it was unaware of, or uninterested in, such questions.

On June 18 the Irish High Court referred a case to the CJEU that may further develop its line of thought in the Digital Rights Ireland judgment. The High Court’s judgment in Schrems v. Data Protection Commissioner involved a challenge by Austrian student Max Schrems to the transfer of personal data to the US by Facebook under the Safe Harbor. The High Court announced that it would refer to the CJEU the questions of whether the European Commission’s adequacy decision of 2000 creating the Safe Harbor should be re-evaluated in light of the Charter of Fundamental Rights and widespread access to data by US law enforcement, and of whether the individual DPAs should be allowed to determine whether the Safe Harbor provides adequate protection (see paragraphs 71 and 84). The linkage between the two cases is evidenced by the Irish High Court’s frequent citation of Digital Rights Ireland, and by the CJEU’s conclusion that interference with the right to data protection caused by widespread data retention for law enforcement purposes, without notice being given to individuals, was “particularly serious” (see paragraph 37 of Digital Rights Ireland and paragraph 44 of Schrems v. Data Protection Commissioner). The High Court also criticized the Safe Harbor and the system of oversight of law enforcement data access in the US for failing to provide oversight “carried out on European soil” (paragraph 62), which seems inspired by paragraph 68 of the Digital Rights Ireland judgment.

The Irish referral to the CJEU also holds implications for the possibility of harmonized EU rules regarding international data transfers. If each DPA is allowed to override Commission adequacy decisions based on its individual view of what the Charter of Fundamental Rights requires, then there would be no point to such decisions in the first place (and the current disagreement over the “one stop shop” in the context of the proposed EU General Data Protection Regulation shows the difficulty of reaching agreement on pan-European rules where fundamental rights are at stake). Also, one wonders if other data transfer mechanisms beyond the Safe Harbor could also be at risk (e.g., standard contractual clauses, binding corporate rules, etc.), given that they also allow data to be turned over to non-EU law enforcement authorities. The proposed EU General Data Protection Regulation could eliminate some of these risks, but its passage is still uncertain, and the interpretation by the Court of the role of the Charter of Fundamental Rights would still be relevant under it. Whatever the CJEU eventually decides, it seems inevitable that the case will result in a tightening of EU rules on international data transfers.

The referral by the Irish High Court also raises the question (which the High Court did not address) of how other important fundamental rights, such as freedom of expression and the right to communicate internationally (meaning, in essence, the freedom to communicate on the Internet), should be balanced with the right to data protection. In its recent jurisprudence, the CJEU seems to regard data protection as a “super right” that takes precedence over others; thus, in its recent judgment in the case C-131/12 Google Spain v. AEPD and Mario Costeja Gonzalez involving the “right to be forgotten”, the Court never even refers to Article 11 of the Charter of Fundamental Rights, which protects freedom of expression and the right to “receive and impart information and ideas without interference by public authority and regardless of frontiers”. In its zeal to protect personal data transferred outside the EU, the CJEU should not forget that, as it has stated in the past, data protection is not an absolute right and must be considered in relation to its function in society (see, for example, Joined Cases C-92/09 and C-93/09 Volker und Markus Schecke, paragraph 48). Nor should it forget that there must be some territorial limit to EU data protection law if it is not to become a system of universal application that applies to the entire world (as the Court held in Lindqvist). There is thus an urgent need for an authoritative and dispassionate analysis of the territorial limits of EU data protection law, and of how a balance can be struck between data protection and other fundamental rights, guidance that the CJEU unfortunately seems unwilling to provide.

Surveillance, Capture, and the Endless Replay

Global opposition to surveillance may be coalescing around the NSA revelations. But the domestic fusion centers ought to be as big a story here in the US, because they exemplify politicized law enforcement. Consider, for instance, this recent story on the “threat” of “Buy Nothing Day”:

Fusion Centers and their personnel even conflate their anti-terrorism mission with a need for intelligence gathering on a possible consumer boycott during the holiday season. There are multiple documents from across the country referencing concerns about negative impacts on retail sales.

The Executive Director of the Intelligence Fusion Division for the D.C. Metropolitan Police Department, who is also the Joint Terrorism Task Force Director, circulated a 30-page report, created by the trade association the International Council of Shopping Centers (ICSC), tracking the Occupy Movement in towns and cities across the country.

Yes, police were briefed on the grave threat of fake shoppers bringing lots of products to the till and then pretending they’d forgotten their wallets. Perhaps the long game here is to detain members of the Church of Stop Shopping to force them to make Elves on the Shelf for $1 an hour.

More seriously: no one should be surprised by the classification of anti-consumerist activists as a threat, given what Danielle Keats Citron & I documented, and what the ACLU continues to report on. But we do need more surprising, more arresting, characterizations of this surveillance. Fortunately, social theory provides numerous models and metaphors to counter the ideology of “nothing to hide.”

Schneier on the NSA, Google, Facebook Connection But What About Phones?

Bruce Schneier argues that we should not be fooled by Google, Facebook, and other companies that decry the recent NSA data grabs, because the nature of the Internet is surveillance; but what about phone companies? The press has jumped on the Obama administration’s forthcoming plan that

would end its systematic collection of data about Americans’ calling habits. The bulk records would stay in the hands of phone companies, which would not be required to retain the data for any longer than they normally would. And the N.S.A. could obtain specific records only with permission from a judge, using a new kind of court order.

The details are to come, but Schneier’s point about the structure of the system applies to phone companies too: “The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.”

There are a few things to parse here. OK, there are many things to parse, but a blog post has limits. First, Schneier’s point about Internet companies is different from his point about the government. His point is that, yes, many companies have stepped up security to prevent some government spying, but because Google, Microsoft, Facebook, Yahoo, Apple, and almost any online company needs access to user data to run their businesses and make money, they have all built a “massive security vulnerability” “into [their] services by design.” When a company does that, “by extension, the U.S. government, still has access to your communications.” Second, as Schneier points out, even if a company tried to plug the holes, the government won’t let that happen. Microsoft’s Skype service has built-in holes. The government has demanded encryption keys. And so it goes. And so we have a line on the phone problems.

The proposed changes may solve little, because so far the government has been able to use procedure, and sheer spying outside procedure, to grab data. The key will be what procedures are required and what penalties attach when those procedures are not followed. That said, as I argued regarding data security in January 2013, fixing data security (and by extension the phone problems) will require several changes:

A key hurdle is identifying when any government may demand data. Transparent policies and possibly treaties could help better identify and govern under what circumstances a country may demand data from another. Countries might work with local industry to create data security and data breach laws with real teeth as a way to signal that poor data security has consequences. Countries should also provide more room for companies to challenge requests and reveal them so the global market has a better sense of what is being sought, which countries respect data protection laws, and which do not. Such changes would allow companies to compete based not only on their security systems but their willingness to defend customer interests. In return companies and computer scientists will likely have to design systems with an eye toward the ability to respond to government requests when those requests are proper. Such solutions may involve ways to tag data as coming from a citizen of a particular country. Here, issues of privacy and freedom arise, because the more one can tag and trace data, the more one can use it for surveillance. This possibility shows why increased transparency is needed, for at the very least it would allow citizens to object to pacts between governments and companies that tread on individual rights.

And here is the crux of Schneier’s ire: companies saying your data is safe are trying to protect their business, but as he sees it:

A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.

In that sense he thinks companies should lean on the government and openly state that security is not available for now. Although he knows no company can say that, he is correct that we should all acknowledge the problem and press the government to change the game.

The point is correct for Internet companies and for phone companies. We should not over-focus on phones and forget the other ways we can be watched.

Brad A. Greenberg on the Free Flow of Information Act of 2013

Brad A. Greenberg is Intellectual Property Fellow at Columbia Law School’s Kernochan Center for Law, Media and the Arts. He writes primarily about laws that encourage, restrict, or regulate speech and technological development, with an emphasis on legal questions raised by new technologies; his work at times draws on his previous career as a newspaper reporter. Recent publications include “Copyright Trolls and Presumptively Fair Uses,” 85 U. Colo. L. Rev. 53 (2014); “The Federal Media Shield Folly,” 91 Wash. U. L. Rev. 437 (2013); and “More Than Just a Formality: Instant Authorship and Copyright’s Opt-Out Future in the Digital Age,” 59 UCLA L. Rev. 1028 (2012). He offers the following thoughts on recent developments in media shield policy:

At the New York Times’ Sources + Secrets conference Friday, one panel took up a perennially popular piece of legislation among news organizations and industry groups: a so-called media shield law.

Numerous media shield bills have been proposed in the 42 years since the Supreme Court ruled that the Constitution does not protect reporters from being compelled to testify; all have failed. But the Free Flow of Information Act of 2013 appears different. The bill has bipartisan support and the endorsement of President Obama, and it has already moved out of Senate committee. It has also been overwhelmingly supported by major news organizations and industry groups – reflected again at Sources + Secrets.

But there are at least three substantial challenges to the bill’s efficacy.

Protecting the Precursors to Speech and Action

The Constitution cares deeply about the precursors to speech. Calo wondered where my paper, Constitutional Limits on Surveillance: Associational Freedom in the Age of Data Hoarding, parts ways with Solove; it does and it doesn’t. On the one hand, I agree with Dan’s work and build it out. I of course look to the First Amendment as part of understanding what associational freedom is. I also want that understanding to inform criminal procedure. On the other hand, I think that the Fourth Amendment on its own provides strong protection for associational freedom. I thus argue that we have missed that aspect of the Fourth Amendment. Furthermore, since Solove, and after him Kathy Strandburg, wrote about First Amendment connections to privacy, there has been some great work by Ashutosh Bhagwat, Tabatha Abu El-Haj, and John Inazu on the First Amendment and associational freedom. And Jason Mazzone started some of that work in 2002. I draw on that work to show what associational freedom is. Part of the problem is that when we look to how and why we protect associational freedom, we mistake what it is. That mistake means the Fourth Amendment becomes too narrow. We are stuck with protection only for speech acts and associations that speak.

As I put it in the paper:

Our current understanding of associational freedom is thin. We over-focus on speech and miss the importance of the precursors to speech—the ability to share, explore, accept, and reject ideas and then choose whether to speak. Recent work has shown, however, that the Constitution protects many activities that are not speech, for example petition and assembly, because the activities enable self-governance and foster the potential for speech. That work has looked to the First Amendment. I show that these concerns also appear in Fourth Amendment jurisprudence and work to protect us from surveillance regardless of whether the acts are speech or whether they are private.

In that sense I give further support to work by Julie Cohen, Neil Richards, Spiros Simitis, and Solove by explaining that all the details that many have identified as needing protection (e.g., our ability to play; protection from surveillance of what we read and watch) align with core ideals of associational freedom. This approach thus offers a foundation for calls to protect us from law enforcement’s ability to probe our reading, meeting, and gathering habits—our associational freedom—even though those acts are not private or speech, and it explains what the constitutional limits on surveillance in the age of data hoarding must be.

It’s About Data Hoards – My New Paper Explains Why Data Escrow Won’t Protect Privacy

A core issue in U.S. v. Jones has nothing to do with connecting “trivial” bits of data to see a mosaic; it is about the simple ability to have a perfect map of everywhere we go, with whom we meet, what we read, and more. It is about the ability to look backward and see all that information with little to no oversight, and, in a way, forever. That is why calls to shift the vast information grabs to a third party are useless. The move changes little given the way the government already demands information from private data hoards. Yes, not having immediate access to the information is a start. That might mitigate mischief. But clear procedures are needed before that separation can be meaningful. That is why telecom and tech giants should be wary of “The central pillar of Obama’s plan to overhaul the surveillance programs [which] calls for shifting storage of Americans’ phone data from the government to telecom companies or an independent third party.” It does not solve the problem of data hoards.

As I argue in my new article Constitutional Limits on Surveillance: Associational Freedom in the Age of Data Hoarding:

Put differently, the tremendous power of the state to compel action, combined with what the state can do with technology and data, creates a moral hazard. It is too easy to harvest, analyze, and hoard data and then step far beyond law enforcement goals into acts that threaten civil liberties. The amount of data available to law enforcement creates a type of honey pot—a trap that lures and tempts government to use data without limits. Once the government has obtained data, it is easy and inexpensive to store and search when compared to storing the same data in an analog format. The data is not deleted or destroyed; it is hoarded. That vat of temptation never goes away. The lack of rules on law enforcement’s use of the data explains why it has an incentive to gather data, keep it, and increase its stores. After government has its data hoard, the barriers to dragnet and general searches—ordinarily unconstitutional—are gone. If someone wishes to dive into the data and see whether embarrassing, or even blackmail-worthy, data is available, they can do so at their discretion; and in some cases law enforcement officials have said they should pursue such tactics. These temptations are precisely why we must rethink how we protect associational freedom in the age of data hoarding. By understanding what associational freedom is, what threatens it, and how we have protected it in the past, we will find that there is a way to protect it now and in the future.

What President Obama’s Surveillance Speech Should Have Addressed

In his recent speech on surveillance, President Obama treated the misuse of intelligence gathering as a relic of American history. It was something done in the bad old days of J. Edgar Hoover, and never countenanced by recent administrations. But the accumulation of menacing stories—from fusion centers to “joint terrorism task forces” to a New York “demographics unit” targeting Muslims—is impossible to ignore. The American Civil Liberties Union has now collected instances of police surveillance and obstruction of First Amendment-protected activity in over half the states. From Alaska (where military intelligence spied on an anti-war group) to Florida (where Quakers and anti-globalization activists were put on watchlists), protesters have been considered threats, rather than citizens exercising core constitutional rights. Political dissent is a routine target for surveillance by the FBI.

Admittedly, I am unaware of the NSA itself engaging in politically driven spying on American citizens. Charles Krauthammer says there has not been a “single case” of abuse.* But the NSA is only one part of the larger story of intelligence gathering in the US, which involves over 1,000 agencies and nearly 2,000 private companies. Moreover, we have little idea of exactly how information and requests flow between agencies. Consider the Orwellian practice of “parallel construction.” Reuters has reported that the NSA gave “tips” to the Special Operations Division (SOD) of the Drug Enforcement Administration, which also shared them with the Internal Revenue Service.