
Focusing on the Core Harms of Surveillance

The “summer of NSA revelations” rolls along, with a blockbuster finale today. In June, Jennifer Granick and Christopher Sprigman flatly declared the NSA criminal. Now the agency’s own internal documents (leaked by Snowden) appear to confirm thousands of legal violations.

Legal scholars will not be surprised by the day’s revelations, just as few surveillance experts were all that shocked by the breadth and depth of PRISM, PINWALE, MARINA, and other programs. Ray Ku called warrantless surveillance unconstitutional in 2010. Civil liberties groups and legal scholars warned us repeatedly about where Bush-era executive power theories would lead. As anyone familiar with Bruce Ackerman’s work might guess, pliable attorneys have rubber-stamped the telephony metadata program with a “white paper” that “fails to confront counterarguments and address contrary caselaw” and “cites cases that [are] relatively weak authority for its position.” There are no meaningful penalties in sight (perhaps because the OLC has prepared documents that function as a “get out of jail free” card for those involved).

Brave New World of Biometric Identification

Professor Margaret Hu’s important new article, “Biometric ID Cybersurveillance” (Indiana Law Journal), carefully and chillingly lays out federal and state governments’ increasing use of biometrics for identification and other purposes. These efforts are poised to lead to a national biometric ID with centralized databases of our iris, face, and fingerprint data. Such multimodal biometric IDs ostensibly provide greater security against fraud than our current de facto identifier, the Social Security number. As Professor Hu lays out, biometrics are, or soon will be, gatekeepers to the right to vote, work, fly, drive, and cross our borders. Professor Hu explains that the FBI’s Next Generation Identification project will institute:

a comprehensive, centralized, and technologically interoperable biometric database that spans across military and national security agencies, as well as all other state and federal government agencies. Once complete, NGI will strive to centralize whatever biometric data is available on all citizens and noncitizens in the United States and abroad, including information on fingerprints, DNA, iris scans, voice recognition, and facial recognition data captured through digitalized photos, such as U.S. passport photos and REAL ID driver’s licenses. The NGI Interstate Photo System, for instance, aims to aggregate digital photos from not only federal, state, and local law enforcement, but also digital photos from private businesses, social networking sites, government agencies, and foreign and international entities, as well as acquaintances, friends, and family members.

Such a comprehensive biometric database would surely be accessed and used by our network of fusion centers and other hubs of our domestic surveillance apparatus that Frank Pasquale and I wrote about here.

Biometric ID cybersurveillance might be used to assign risk assessment scores and to take action based on those scores. In a chilling passage, Professor Hu describes one such proposed program:

FAST is currently under testing by DHS and has been described in press reports as a “precrime” program. If implemented, FAST will purportedly rely upon complex statistical algorithms that can aggregate data from multiple databases in an attempt to “predict” future criminal or terrorist acts, most likely through stealth cybersurveillance and covert data monitoring of ordinary citizens. The FAST program purports to assess whether an individual might pose a “precrime” threat through the capture of a range of data, including biometric data. In other words, FAST attempts to infer the security threat risk of future criminals and terrorists through data analysis.

Under FAST, biometric-based physiological and behavioral cues are captured through the following types of biometric data: body and eye movements, eye blink rate and pupil variation, body heat changes, and breathing patterns. Biometric-based linguistic cues include the capture of the following types of biometric data: voice pitch changes, alterations in rhythm, and changes in intonations of speech. Documents released by DHS indicate that individuals could be arrested and face other serious consequences based upon statistical algorithms and predictive analytical assessments. Specifically, projected consequences of FAST ‘can range from none to being temporarily detained to deportation, prison, or death.’

Data mining of our biometrics to predict criminal and terrorist activity, which is then used as a basis for government decision making about our liberty? If this comes to fruition, technological due process would certainly be required.
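To make the due-process stakes concrete, here is a deliberately toy sketch of the kind of score-then-act pipeline the FAST excerpt describes. Every cue name, baseline, weight, threshold, and consequence below is invented for illustration; nothing here reflects the actual design of FAST or any DHS system.

```python
# Hypothetical sketch of a "precrime" risk-scoring pipeline of the kind
# described above. All names and numbers are invented for illustration.

BASELINES = {"blink_rate": 17.0, "voice_pitch": 120.0, "body_temp": 36.6}
WEIGHTS = {"blink_rate": 0.5, "voice_pitch": 0.3, "body_temp": 0.2}

def risk_score(readings: dict) -> float:
    """Aggregate weighted deviations from baseline into a single score."""
    score = 0.0
    for cue, value in readings.items():
        baseline = BASELINES[cue]
        deviation = abs(value - baseline) / baseline  # normalized deviation
        score += WEIGHTS[cue] * deviation
    return score

def consequence(score: float) -> str:
    """Map a statistical score directly to state action -- the step that
    raises the technological due process concerns discussed above."""
    if score < 0.1:
        return "none"
    if score < 0.3:
        return "secondary screening"
    return "temporary detention"

readings = {"blink_rate": 18.0, "voice_pitch": 125.0, "body_temp": 36.7}
print(consequence(risk_score(readings)))  # prints "none" for these readings
```

The troubling step is the last function: a statistical score, however rough, is translated directly into government action against a person, with no hearing and no opportunity to contest the underlying data.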

Professor Hu calls for the Fourth Amendment to evolve to meet the challenge of 24/7 biometric surveillance technologies. David Gray and I hopefully answer Professor Hu’s request in our article “The Right to Quantitative Privacy” (forthcoming Minnesota Law Review). Rather than asking how much information is gathered in a particular case, we argue that Fourth Amendment interests in quantitative privacy demand that we focus on how information is gathered.  In our view, the threshold Fourth Amendment question should be whether a technology has the capacity to facilitate broad and indiscriminate surveillance that intrudes upon reasonable expectations of quantitative privacy by raising the specter of a surveillance state if deployment and use of that technology is left to the unfettered discretion of government. If it does not, then the Fourth Amendment imposes no limitations on law enforcement’s use of that technology, regardless of how much information officers gather against a particular target in a particular case. By contrast, if it does threaten reasonable expectations of quantitative privacy, then the government’s use of that technology amounts to a “search,” and must be subjected to the crucible of Fourth Amendment reasonableness, including judicially enforced constraints on law enforcement’s discretion.


Letting the Air Out

The NSA and the rest of our surveillance state apparatus are shrouded in secrecy. As captured in Frank Pasquale’s superb forthcoming book, governmental surveillance is a black box. Gag orders prevent Internet companies from talking about their participation in PRISM; nearly everything revealing is classified; the Executive Branch is telling us half-truths or no truths. To counter massive governmental overreach, Bradley Manning, Edward Snowden, and others have shed some sunlight on our surveillance state. That sunlight isn’t coming from those who are betraying the country, but from those who are trying to save it, at least that’s what many registered voters think. According to a Quinnipiac poll released today, American voters say “55 – 34 percent” that NSA contractor Edward Snowden is a “whistleblower rather than a traitor.” According to the assistant director of the Quinnipiac University Polling Institute, “Most American voters think positively of Edward Snowden,” at least they did before he accepted asylum in Russia. From July 28 to July 31, 1,468 registered voters were surveyed by phone. These sorts of leaks seem inevitable, at least culturally, given our so-called commitment to openness and transparency. The leakers/whistleblowers are trying to nudge the Executive Branch to honor its commitments to the Fourth Amendment, the sentiments of the Church Report, and the Administration’s 2009 Openness and Transparency memo. Let’s see if letting the air out moves us closer to the kind of country we say we are.

H/T: Yale ISP’s Christina Spiesel for the Quinnipiac Poll

What Should be the Penalties for Misuse of Surveillance Data?

The Privacy and Civil Liberties Oversight Board (PCLOB) is holding a “Workshop Regarding Surveillance Programs Operated Pursuant to Section 215 of the USA PATRIOT Act and Section 702 of the Foreign Intelligence Surveillance Act.” Many luminaries in the privacy community are participating. I’m sure they will have great ideas about rendering PRISM, PINWALE, MARINA, et al. more subject to oversight.

But I have heard very little about what the appropriate penalties should be for misuse of surveillance data. In the health care world, we have some pretty clear precedents. For instance, a researcher served four months in prison for snooping into medical records in 2003. Imagine that a very similar incident happened in the NSA context—say, an analyst abused his or her access to the data to learn details about an acquaintance who exhibited no suspicious characteristics. What should the penalty be? Feel free to comment below, or to submit ideas directly to the PCLOB.


Prism and Its Relationship to Clouds, Security, Jurisdiction, and Privacy

In January I wrote a piece, “Beyond Data Location: Data Security in the 21st Century,” for Communications of the ACM. I went into the current facts about data security (basic point: data moving often helps security) and how they clash with jurisdiction needs and interests. As part of that essay I wrote:

A key hurdle is identifying when any government may demand data. Transparent policies and possibly treaties could help better identify and govern under what circumstances a country may demand data from another. Countries might work with local industry to create data security and data breach laws with real teeth as a way to signal that poor data security has consequences. Countries should also provide more room for companies to challenge requests and reveal them so the global market has a better sense of what is being sought, which countries respect data protection laws, and which do not. Such changes would allow companies to compete based not only on their security systems but their willingness to defend customer interests. In return companies and computer scientists will likely have to design systems with an eye toward the ability to respond to government requests when those requests are proper. Such solutions may involve ways to tag data as coming from a citizen of a particular country. Here, issues of privacy and freedom arise, because the more one can tag and trace data, the more one can use it for surveillance. This possibility shows why increased transparency is needed, for at the very least it would allow citizens to object to pacts between governments and companies that tread on individual rights.
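One way to read the excerpt’s suggestion about tagging data is as provenance metadata attached to each stored record. A minimal sketch, with all field names and the access rule invented for illustration, also shows why such tags cut both ways: the same metadata that lets a company vet a government request is metadata that enables surveillance.

```python
# Toy model of jurisdiction-tagged data, as contemplated in the essay
# excerpt above. Field names and the access rule are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TaggedRecord:
    payload: bytes
    owner_country: str   # claimed citizenship of the data subject
    stored_in: str       # country where the bytes currently reside
    collected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def may_demand(record: TaggedRecord, requesting_country: str) -> bool:
    """Toy rule: a government may demand only data about its own citizens
    or data stored on its own territory. Note that evaluating this rule
    requires tracking exactly the provenance that enables surveillance."""
    return requesting_country in (record.owner_country, record.stored_in)

r = TaggedRecord(b"...", owner_country="DE", stored_in="US")
print(may_demand(r, "US"))  # True: the data is stored there
print(may_demand(r, "FR"))  # False under this toy rule
```

The design choice to make tags explicit is exactly why the essay calls for transparency: citizens can only object to tagging regimes they can see.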

Prism shows just how much a new balance is needed. There are many issues to sort through to reach that balance, too many to explore in a blog post. But as I argued in the essay, I think the way to proceed is to pull in engineers (not just industry ones), law enforcement, civil society groups, and, oh yes, lawyers to look at what can be done to address the current imbalance.


Employers and Schools that Demand Account Passwords and the Future of Cloud Privacy

In 2012, the media erupted with news about employers demanding that employees provide their social media passwords so the employers could access their accounts. This news took many people by surprise, and it set off a firestorm of public outrage. It even sparked a significant legislative response in the states.

I thought that the practice of demanding passwords was so outrageous that it couldn’t be very common. What kind of company or organization would actually do this? I thought it was a fringe practice done by a few small companies without much awareness of privacy law.

But Bradley Shear, an attorney who has focused extensively on the issue, opened my eyes to the fact that the practice is much more prevalent than I had imagined, and it is an issue that has very important implications as we move more of our personal data to the Cloud.

The Widespread Hunger for Access

Employers are not the only ones demanding social media passwords – schools are doing so too, especially athletic departments in higher education, many of which engage in extensive monitoring of the online activities of student athletes. Some require students to turn over passwords, install special software and apps, or friend coaches on Facebook and other sites. According to an article in USA Today: “As a condition of participating in sports, the schools require athletes to agree to monitoring software being placed on their social media accounts. This software emails alerts to coaches whenever athletes use a word that could embarrass the student, the university or tarnish their images on services such as Twitter, Facebook, YouTube and MySpace.”
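The monitoring software in the USA Today quote is, at bottom, a keyword filter run over a stream of posts. A minimal sketch of that mechanism, with the flag list and alert format invented for illustration:

```python
# Toy version of the athlete-monitoring software described above: scan
# each post for flagged words and emit an alert for the coach. The word
# list and alert format are invented; no real product is depicted.

FLAGGED = {"hazing", "gambling", "banned"}

def alerts_for(posts: list[str]) -> list[str]:
    """Return one alert line per post containing any flagged word."""
    found = []
    for post in posts:
        hits = sorted(w for w in FLAGGED if w in post.lower())
        if hits:
            found.append(f"ALERT: {', '.join(hits)} in: {post!r}")
    return found

for alert in alerts_for(["Great practice today!",
                         "Anyone up for gambling tonight?"]):
    print(alert)
```

Even this toy version shows why the practice alarms privacy advocates: to flag anything, the software necessarily reads every post, flagged or not.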

Not only are colleges and universities engaging in the practice, but K-12 schools are doing so as well. An MSNBC article discusses a parent’s outrage over school officials demanding access to her 13-year-old daughter’s Facebook account. According to the mother, “The whole family is exposed in this. . . . Some families communicate through Facebook. What if her aunt was going through a divorce or had an illness? And now there’s these anonymous people reading through this information.”

In addition to private sector employers and schools, public sector employers such as state government agencies are demanding access to online accounts. According to another MSNBC article: “In Maryland, job seekers applying to the state’s Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through posts, friends, photos and anything else that might be found behind the privacy wall.”


Overturning the Third-Party Doctrine by Statute: Hard and Harder

Privacy advocates have disliked the third-party doctrine at least from the day in 1976 when the Supreme Court decided U.S. v. Miller.  Anyone who remembers the Privacy Protection Study Commission knows that its report was heavily influenced by Miller.  My first task in my long stint as a congressional staffer was to organize a hearing to receive the report of the Commission in 1977.  In the introduction to the report, the Commission called the date of the decision “a fateful day for personal privacy.”

Last year, privacy advocates cheered when Justice Sonia Sotomayor’s concurrence in U.S. v. Jones asked if it was time to reconsider the third-party doctrine.  Yet it is likely that it would take a long time before the Supreme Court revisits and overturns the third-party doctrine, if ever.  Sotomayor’s opinion didn’t attract a single other Justice.

Can we draft a statute to overturn the third-party doctrine?  That is not an easy task, and it may be an unattainable goal politically.  Nevertheless, the discussion has to start somewhere.  I acknowledge that not everyone wants to overturn Miller.  See Orin Kerr’s The Case for the Third-Party Doctrine.  I’m certainly not the first person to ask the how-to-do-it question.  Dan Solove wrestled with the problem in Digital Dossiers and the Dissipation of Fourth Amendment Privacy.

I’m going at the problem as if I were still a congressional staffer tasked with drafting a bill.  I see right away that there is precedent.  Somewhat remarkably, Congress partly overturned the Miller decision in 1978 when it enacted The Right to Financial Privacy Act, 12 U.S.C. § 3401 et seq.  The RFPA says that if the federal government wants to obtain records of a bank customer, it must notify the customer and allow the customer to challenge the request.

The RFPA is remarkable too for its exemptions and weak standards.  The law only applies to the federal government and not to state and local governments.  (States may have their own laws applicable to state agencies.)  Bank supervisory agencies are largely exempt.  The IRS is exempt.  Disclosures required by federal law are exempt.  Disclosures for government loan programs are exempt.  Disclosures for grand jury subpoenas are exempt.  That effectively exempts a lot of criminal law enforcement activity.  Disclosures to GAO and the CFPB are exempt.  Disclosures for investigations of crimes against financial institutions by insiders are exempt.  Disclosures to intelligence agencies are exempt.  This long – and incomplete – list is the first hint that overturning the third-party doctrine won’t be easy.

We’re not done with the weaknesses in the RFPA.  A customer who receives notice of a government request has ten days to challenge the request in federal court.  The customer must argue that the records sought are not relevant to the legitimate law enforcement inquiry identified by the government in the notice.  The customer loses if there is a demonstrable reason to believe that the law enforcement inquiry is legitimate and a reasonable belief that the records sought are relevant to that inquiry.  Relevance and legitimacy are weak standards, to say the least.  Good luck winning your case.

Who should get the protection of our bill?  The RFPA gives rights to “customers” of a financial institution.  A customer is an individual or partnership of five or fewer individuals (how would anyone know?).  If legal persons also receive protection, a bill might actually attract corporate support, along with major opposition from every regulatory agency in town.  It will be hard enough to pass a bill limited to individuals.  The great advantage of playing staffer is that you can apply political criteria to solve knotty policy problems.  I’d be inclined to stick to individuals.


Privacy & Information Monopolies

First Monday recently published an issue on social media monopolies. These lines from the introduction by Korinna Patelis and Pavlos Hatzopoulos are particularly provocative:

A large part of existing critical thinking on social media has been obsessed with the concept of privacy. . . . Reading through a number of volumes and texts dedicated to the problematic of privacy in social networking one gets the feeling that if the so called “privacy issues” were resolved social media would be radically democratized. Instead of adopting a static view of the concept . . . of “privacy”, critical thinking needs to investigate how the private/public dichotomy is potentially reconfigured in social media networking, and [the] new forms of collectivity that can emerge . . . .

I can even see a way in which privacy rights do not merely displace, but actively work against, egalitarian objectives. Stipulate a population with Group A, which is relatively prosperous and has the time and money to hire agents to use notice-and-consent privacy provisions to its advantage (i.e., figuring out exactly how to disclose information to put its members in the best light possible). Meanwhile, most of Group B is too busy working several jobs to use contracts, law, or agents to its advantage in that way. We should not be surprised if Group A leverages its mastery of privacy law to enhance its position relative to Group B.

Better regulation would restrict use of data, rather than “empower” users (with vastly different levels of power) to restrict collection of data. As data scientist Cathy O’Neil observes:
Read More


“Brain Spyware”

As if we don’t have enough to worry about, now there’s spyware for your brain.  Or, there could be.  Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers.


Bartelt’s Dog and the Continuing Vitality of the Supreme Court’s Tacit Distinction between Sense Enhancement and Sense Creation

Last Term, in an amicus brief in United States v. Jones, 565 U.S. __, several colleagues and I highlighted the Supreme Court’s long, albeit not always clearly stated, history of distinguishing between sense-enhancing and sense-creating technologies for Fourth Amendment purposes.  As a practical matter, the Court has consistently subjected technologies in the latter category to closer scrutiny than technologies that merely bolster natural human senses.  Thus, the use of searchlights, field glasses, and (to some extent) beepers and airplane-mounted cameras was not found to implicate the Fourth Amendment.  As the Court explained, “[n]othing in the Fourth Amendment prohibit[s] the police from augmenting the sensory faculties bestowed upon them at birth with such enhancement as science and technology” may afford.  460 U.S. at 282 (emphasis added).  In contrast, the Court has held that technologies that create a new capacity altogether, including movie projectors, wiretaps, ultrasound devices, radar flashlights, directional microphones, thermal imagers, and (as of Jones) GPS tracking devices, do trigger the Fourth Amendment.  To hold otherwise, as the Court has stated, would “shrink the realm of guaranteed privacy,” leaving citizens “at the mercy of advancing technology.”  533 U.S. at 34-36.

In fact, of the landmark cases involving technology and the Fourth Amendment during the past 85 years (from United States v. Lee, 274 U.S. 559, in 1927 to Jones in 2012), only in one instance did the Supreme Court appear to deviate from this distinction between sense enhancement and sense creation.  In that case, United States v. Place, 462 U.S. 696, and its successors, City of Indianapolis v. Edmond, 531 U.S. 32, and Illinois v. Caballes, 543 U.S. 405, the Court held that the use of trained narcotics-detection dogs (more apparently similar to using a new capacity than merely enhancing a natural human sense) did not implicate the Fourth Amendment.  In our amicus brief in Jones, we rationalized Place, Edmond, and Caballes by arguing that dogs were unique, being natural biological creatures that had long been used by the police, even in the time of the Framers.  Further, we argued, a canine sniff, unlike the use of, say, a wiretap or a thermal imager, “discloses only the presence or absence of narcotics, a contraband item.”  462 U.S. at 707 (emphasis added).  Still, the apparent ‘dog exception’ was rankling.