Archive for the ‘Privacy (Electronic Surveillance)’ Category
posted by Daniel Solove
I was able to obtain the latest National Security Agency (NSA) memo leaked by Edward Snowden. I reprint it in full below.
TOP SECRET AND CLASSIFIED
THE NATIONAL SECURITY AGENCY
SANTA SURVEILLANCE PROGRAM (SSP)
Intelligence reports have indicated an alarming amount of chatter between citizens of the United States and a foreign organization with unknown whereabouts somewhere near the North Pole. The organization is led by an elderly bearded cleric with the alias, “Santa.”
We have probable cause to believe that this “Santa” organization is providing material support to terrorist cells in the United States. On numerous occasions, “Santa” has reportedly entered the country illegally by flying across the border in a stealth aircraft. He delivers contraband to various enemy combatants who request weapons and other military vehicles and aircraft.
For example, the intercepted letter below is from an enemy combatant by the name of “Johnny Smith”:
Another letter, written by enemy combatant “Mikey Brown” – an alias for Michael Brown – indicates a desire for a weapon of mass destruction called “the Death Star.” Mikey is now being questioned at an unidentified secure location.
Santa has an army of followers who call themselves “elves” and who train in Santa’s camp. We fear that these elves are highly radicalized.
Based upon a recent dramatic increase in chatter between the Santa organization and enemy combatants in the U.S., we will initiate a new surveillance program called the “Santa Surveillance Program” (SSP).
We will monitor all communications by all people everywhere. For minimization standards, we will limit our surveillance to human beings only and not include other life forms.
The SSP will be ongoing until “Santa” is terminated by a drone attack.
Cross-posted at LinkedIn
posted by Pierluigi Perri
In a sentence, Anupam Chander’s The Electronic Silk Road contains the good, the bad and the ugly of the modern interconnected and globalized world.
How many times a day do we use terms like “network” and “global”? In Professor Chander’s book you may find not only their meanings, but also the legal, economic, and ethical implications these terms carry today.
It’s well known that we are facing a revolution, despite Bill Gates’ recent remark that “The internet is not going to save the world.” I partly agree with Mr. Gates. The internet probably will not save the world, but it has certainly already changed the world as we know it, making possible the opportunities that are well described in The Electronic Silk Road.
However, I would like to use my spot in this Symposium not to write about the wonders of Trade 2.0, but to share some concerns that, as a privacy scholar, I have.
The problem is well known and concerns the risks posed by big data companies, which base their business model on profiling consumers in order to sell advertising or additional services to other companies.
“[T]he more the network provider knows about you, the more it can earn,” writes Chander. And as V. Mayer-Schönberger and K. Cukier note in their recent book Big Data, the risks of the “dark side” of big data concern not only the privacy of individuals but also the processing of those data, with the “possibility of using big data predictions about people to judge and punish them even before they’ve acted.”
This is, probably, both the good and the bad of big data companies as the modern caravans of the electronic silk road: they carry a great deal of information, and that information can be used, or rather processed, for so many different purposes that we cannot imagine what will happen tomorrow. Not only is the risk of global surveillance around the corner (on this topic I recommend reading the great post by D. K. Citron and D. Gray, Addressing the Harm of Total Surveillance: A Reply to Professor Neil Richards), but so is the risk of a dictatorship of data.
Such circumstances, as Professor Solove writes in his book Nothing To Hide, “[…] not only frustrate the individual by creating a sense of helplessness and powerlessness, they also affect social structure by altering the kind of relationships people have with the institutions that make important decisions about their lives.”
Thus, I believe that privacy and data protection could be the real challenge for the electronic silk road.
Professor Chander’s book is full of examples of the misuse of data (see the section Yahoo! in China), of the problem of protecting sensitive data shared across the world (see the section Boston Brahmins and Bangalore Doctors), and of the privacy problems that social networks pose for users (see Chapter 5, Facebookistan).
But Professor Chander is also able to see the potential benefits of big data analysis (see the section Predictions and Predilections), for example in healthcare; it is therefore important to find a way to regulate the unstoppable flow of data across the world.
In such a complex debate about a right that carries different senses and definitions across the world (what counts as “privacy” or “personal data” differs among the USA, Canada, Europe, and China, for example), I find the recipe suggested by Anupam Chander very interesting.
First of all, we have to embrace some ground principles that serve both providers and law- and policymakers: 1) do no evil; 2) technology is neutral; 3) cyberspace needs a dematerialized architecture.
Using these principles, it will be easy to follow Professor Chander’s fundamental rule: “harmonization where possible, glocalization where necessary”.
A practical implementation of this rule, as described in Chapter 8, can satisfy the differing views of data privacy in highly liberal and highly repressive regimes, setting glocalization (global services adapting to local rules) against deregulation in the highly liberal regimes, and the “do no evil” principle against oppression in the highly repressive ones.
This seems reasonable to me, and at the end of my “journey” through Professor Chander’s book, I want to thank him for giving us some fascinating, but above all usable, theories for the forthcoming international cyberlaw.
posted by Albert Wong
By Albert Wong and Valerie Belair-Gagnon, Information Society Project at Yale Law School
In a recent article in the Columbia Journalism Review, we reported that major US newspapers exhibited a net pro-surveillance bias in their “post-Edward Snowden” coverage of the NSA. Our results ran counter to the general perception that major media outlets lean “traditionally liberal” on social issues. Given our findings, we decided to extend our analysis to see if the same bias was present in “traditionally conservative” and international newspapers.
Using the same methods described in our previous study, we examined total press coverage in the Washington Times, one of the top “traditionally conservative” newspapers in the US. We found that the Washington Times used pro-surveillance terms such as security or counterterrorism 45.5% more frequently than anti-surveillance terms like liberty or rights. This is comparable to USA Today‘s 36% bias and quantitatively greater than The New York Times‘ 14.1% or the Washington Post‘s 11.1%. The Washington Times, a “traditionally conservative” newspaper, thus exhibited a pro-surveillance bias in its coverage as strong as, if not stronger than, that of neutral/”traditionally liberal”-leaning newspapers.
In contrast, The Guardian, the major UK newspaper where Glenn Greenwald has reported most of Snowden’s disclosures, did not exhibit such a bias. Unlike any of the US newspapers we examined, The Guardian actually used anti-surveillance terms slightly (3.2%) more frequently than pro-surveillance terms. Despite the UK government’s pro-surveillance position (similar to and perhaps even more uncompromising than that of the US government), the Guardian‘s coverage has remained neutral overall. (Neutral as far as keyword frequency analysis goes, anyway; the use of other methods, such as qualitative analysis of article tone, may also be helpful in building a comprehensive picture.)
Our extended results provide additional context for our earlier report and demonstrate that our analysis is “capturing a meaningful divide.”
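The keyword-frequency comparison underlying these figures is simple enough to sketch in a few lines. The term lists and counts below are hypothetical stand-ins rather than our study’s actual data; the sketch only shows how a percentage bias like the 45.5% figure is computed from raw counts.

```python
# Illustrative sketch of the keyword-frequency bias measure described above.
# The term lists and tallies are hypothetical, not the study's data.

def keyword_bias(pro_counts, anti_counts):
    """Percent by which pro-surveillance terms outnumber anti-surveillance terms."""
    pro = sum(pro_counts.values())
    anti = sum(anti_counts.values())
    return 100.0 * (pro - anti) / anti

# Hypothetical tallies from one newspaper's total coverage:
pro_terms = {"security": 250, "counterterrorism": 41}   # 291 total
anti_terms = {"liberty": 120, "rights": 80}             # 200 total

print(f"pro-surveillance bias: {keyword_bias(pro_terms, anti_terms):.1f}%")
# → pro-surveillance bias: 45.5%
```

A negative value on this measure would indicate an anti-surveillance lean of the kind we observed in The Guardian.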
On a further note, as several commenters suggested in response to our original report, the US media’s pro-surveillance bias may be a manifestation of a broader “pro-state” bias. This theory may be correct, but it would be difficult to confirm conclusively. On many, even most, issues, the US government does not speak with one voice. Whose position should be taken as the “state” position? The opinion of the President? The Speaker of the House? The Chief Justice? Administration allies in Congress? In the context of the Affordable Care Act, is there no “pro-state” position at all, since the President, the Speaker, and the Chief Justice each have different, largely irreconcilable views?
November 1, 2013 at 11:02 am Posted in: Anonymity, Civil Rights, Culture, Current Events, Cyber Civil Rights, Government Secrecy, Politics, Privacy, Privacy (Electronic Surveillance), Privacy (Law Enforcement), Privacy (National Security), Technology, Uncategorized Print This Post 10 Comments
posted by Anupam Chander
Last week, Foreign Affairs posted a note about my book, The Electronic Silk Road, on its Facebook page. In the comments, some clever wag asked, “Didn’t the FBI shut this down a few weeks ago?” In other venues as well, as I have shared portions of my book across the web, individuals across the world have written back, sometimes applauding and at other times challenging my claims. My writing itself has journeyed across the world–when I adapted part of a chapter as “How Censorship Hurts Chinese Internet Companies” for The Atlantic, the China Daily republished it. The Financial Times published its review of the book in both English and Chinese.
Even these posts involved international trade. Much of this activity involved websites—from Facebook, to The Atlantic, and the Financial Times—each of them earning revenue in part from cross-border advertising (even the government-owned China Daily is apparently under pressure to increase advertising). In the second quarter of 2013, for example, Facebook earned the majority of its revenues outside the United States: $995 million out of a total of $1,813 million, or 55 percent.
But this trade also brought communication—with ideas and critiques circulated around the world. The old silk roads similarly were passages not only for goods, but also for knowledge. They helped shape our world, not only materially, but spiritually, just as the mix of commerce and communication on the Electronic Silk Road will reshape the world to come.
October 28, 2013 at 5:46 pm Posted in: Consumer Protection Law, Cyberlaw, First Amendment, Intellectual Property, International & Comparative Law, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Symposium (The Electronic Silk Road) Print This Post No Comments
posted by Frank Pasquale
Gabriella Coleman’s Coding Freedom is a beautifully written book, offering deep insight into communities of hackers. By immersing herself in the culture of free and open source software devotees, she helps us understand the motivations, goals, frustrations, and aesthetics of a frequently misunderstood movement. The stakes are high, both for those inside and outside the hacker community. Some want the term hacker to primarily denote playful creativity; others emphasize subversion of oppressive power centers; still others embrace an identity of unreasoned disruption.
Outsiders stray into such debates at their peril, and Coleman took significant risks to write the book. As an academic, she defied conventional anthropological career paths by launching an investigation of a digitally connected enclave within an advanced society.* As an observer, she risked that sub-subcultures would try to exact revenge on her for saying something they disagreed with. (It’s not just the obvious targets who get hacked.) But the gambles have paid off, both within the academic community and in the broader ambit of Internet intellectuals.
Hackers are frequently misunderstood, both when praised and when damned. In the popular imagination, the computer hacker can pop up as a digital Bonnie or Clyde, fighting “the system” of opaque automation. On the other hand, former NSA Chief Michael Hayden wrote off hacker fans of Edward Snowden as “nihilists, anarchists, activists, Lulzsec, Anonymous, twentysomethings who haven’t talked to the opposite sex in five or six years.” The hero/villain narratives are easy to sell to Wired or Fox. Coleman gives us a much richer story.
posted by Frank Pasquale
Interesting to see how the three topics converge. First, an excerpt from King’s December 1961 speech to the AFL-CIO Convention:
Less than a century ago, the laborer had no rights, little or no respect, and led a life that was socially submerged and barren. . . . American industry organized misery into sweatshops and proclaimed the right of capital to act without restraints and without conscience. . . . The children of workers had no childhood and no future. They, too, worked for pennies an hour and by the time they reached their teens they were worn-out old men, devoid of spirit, devoid of hope and devoid of self-respect.
Second, from Tom Geoghegan’s analysis of King as a labor leader: “It is said that just after this speech, J. Edgar Hoover was more determined to wiretap King.”
Treating someone working for the betterment of the many as an enemy of the state is a core harm of politicized surveillance.
posted by Zvi Triger
The thought of hiring a private detective in this age of relatively accessible electronic surveillance seems a bit retro, like a black-and-white scene from a smoky film noir. But it has been enjoying a surprising comeback in recent years, with parents who hire private investigators to spy on their children.
In an article titled Over-Parenting, my co-author Gaia Bernstein and I identified a trend of legal adoption of intensive parenting norms. We cautioned against society legally sanctioning a single parenting style – namely, intensive parenting – while deeming potentially neglectful other parenting styles which could be perfectly legitimate. We also pointed out that involved parenting is class-biased, since it is costly, and not all parents can afford the technology that would enable them to be intensive parents, such as purchasing GPS enabled smartphones for their kids. We argued that when intensive parenting is used for children who do not need it, it becomes over-parenting. Not all children need the same level of involvement in their lives; one of the most important roles of parents is to prepare their children for independent life, and over-parenting might thwart that role. Finally, we speculated that the cultural model for intensive parenting originates in media depictions of upper-middle class families, and that how these families are portrayed in movies and TV shows influences real-life parents.
Well, I’m sad to report that over-parenting is not a uniquely American phenomenon. Last year, for example, a Chinese newspaper reported that parents in China are becoming increasingly involved in their children’s lives by hiring private investigators to check whether the children use drugs, drink alcohol, or have sex. In Israel some parents are doing the same, especially during the long summer break, during which bored teenagers, many parents fear, are prone to engage in such activities (if you read Hebrew, you can read the story here). I am sure that some American parents do the same.
Leaving aside the class question (are parents who cannot afford a private eye neglectful?), what does this say about parents’ role as educators? Or about the level of trust (or distrust) between those parents and their children? It used to be that a spouse would hire a private investigator on suspicion that a partner was having an affair. Nowadays, a growing share of a private investigator’s work involves parents spying on their children. Doesn’t the very fact that parents feel they need to spy on their children testify to their limited parental skills?
August 29, 2013 at 5:05 pm Tags: comparative law, intensive parenting, law & technology, over-parenting, Privacy, private detectives Posted in: Culture, Family Law, Privacy, Privacy (Electronic Surveillance) Print This Post No Comments
posted by Frank Pasquale
Rep. Mike Fitzpatrick (R-Pa.) proposed legislation . . . that would cut National Security Agency (NSA) funding if it violates new surveillance rules aimed at preventing broad data collection on millions of people.
Fitzpatrick has also offered language to restrict the term “relevant” when it comes to data collection. On the one hand, it seems odd for Congress to micromanage a spy agency. On the other hand, no one has adequately explained how present safeguards keep the integrated Information Sharing Environment from engaging in the harms catalogued here and here. So we’re likely to see many blunt efforts to cut off its ability to collect and analyze data, even if data misuse is really the core problem.
August 22, 2013 at 9:44 am Posted in: Criminal Law, Current Events, Google & Search Engines, Privacy, Privacy (Electronic Surveillance), Privacy (Law Enforcement), Privacy (National Security), Technology Print This Post No Comments
posted by Daniel Solove
One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite more than fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States – more so than nearly any privacy statute and any common law tort.
In this article, we contend that the FTC’s privacy jurisprudence is the functional equivalent to a body of common law, and we examine it as such. The article explores the following issues:
- Why did the FTC, and not contract law, come to dominate the enforcement of privacy policies?
- Why, despite more than 15 years of FTC enforcement, have there been hardly any resulting judicial decisions?
- Why has FTC enforcement had such a profound effect on company behavior given the very small penalties?
- Can FTC jurisprudence evolve into a comprehensive regulatory regime for privacy?
The claims we make in this article include:
- The common view of FTC jurisprudence as thin — as merely enforcing privacy promises — is misguided. The FTC’s privacy jurisprudence is actually quite thick, and it has come to serve as the functional equivalent to a body of common law.
- The foundations exist in FTC jurisprudence to develop a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, that extends far beyond privacy policies, and that involves substantive rules that exist independently from a company’s privacy representations.
August 20, 2013 at 12:02 pm Posted in: Administrative Law, Articles and Books, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Technology Print This Post No Comments
posted by Frank Pasquale
The “summer of NSA revelations” rolls along, with a blockbuster finale today. In June, Jennifer Granick and Christopher Sprigman flatly declared the NSA criminal. Now the agency’s own internal documents (leaked by Snowden) appear to confirm thousands of legal violations.
Legal scholars will not be surprised by the day’s revelations, just as few surveillance experts were all that shocked by the breadth and depth of PRISM, PINWALE, MARINA, and other programs. Ray Ku called warrantless surveillance unconstitutional in 2010. Civil liberties groups and legal scholars warned us repeatedly about where Bush-era executive power theories would lead. As anyone familiar with Bruce Ackerman’s work might guess, pliable attorneys have rubber-stamped the telephony metadata program with a “white paper” that “fails to confront counterarguments and address contrary caselaw” and “cites cases that [are] relatively weak authority for its position.” There are no meaningful penalties in sight (perhaps because the OLC has prepared documents that function as a “get out of jail free” card for those involved).
posted by Danielle Citron
Professor Margaret Hu’s important new article, “Biometric ID Cybersurveillance” (Indiana Law Journal), carefully and chillingly lays out federal and state governments’ increasing use of biometrics for identification and other purposes. These efforts are poised to lead to a national biometric ID with centralized databases of our iris, face, and fingerprint data. Such multimodal biometric IDs ostensibly provide greater security against fraud than our current de facto identifier, the social security number. As Professor Hu lays out, biometrics are, or soon will be, gatekeepers to the right to vote, work, fly, drive, and cross our borders. Professor Hu explains that the FBI’s Next Generation Identification project will institute:
a comprehensive, centralized, and technologically interoperable biometric database that spans across military and national security agencies, as well as all other state and federal government agencies. Once complete, NGI will strive to centralize whatever biometric data is available on all citizens and noncitizens in the United States and abroad, including information on fingerprints, DNA, iris scans, voice recognition, and facial recognition data captured through digitalized photos, such as U.S. passport photos and REAL ID driver’s licenses. The NGI Interstate Photo System, for instance, aims to aggregate digital photos from not only federal, state, and local law enforcement, but also digital photos from private businesses, social networking sites, government agencies, and foreign and international entities, as well as acquaintances, friends, and family members.
Such a comprehensive biometric database would surely be accessed and used by our network of fusion centers and other hubs of our domestic surveillance apparatus that Frank Pasquale and I wrote about here.
Biometric ID cybersurveillance might be used to assign risk assessment scores and to take action based on those scores. In a chilling passage, Professor Hu describes one such proposed program:
FAST is currently under testing by DHS and has been described in press reports as a “precrime” program. If implemented, FAST will purportedly rely upon complex statistical algorithms that can aggregate data from multiple databases in an attempt to “predict” future criminal or terrorist acts, most likely through stealth cybersurveillance and covert data monitoring of ordinary citizens. The FAST program purports to assess whether an individual might pose a “precrime” threat through the capture of a range of data, including biometric data. In other words, FAST attempts to infer the security threat risk of future criminals and terrorists through data analysis.
Under FAST, biometric-based physiological and behavioral cues are captured through the following types of biometric data: body and eye movements, eye blink rate and pupil variation, body heat changes, and breathing patterns. Biometric-based linguistic cues include the capture of the following types of biometric data: voice pitch changes, alterations in rhythm, and changes in intonations of speech. Documents released by DHS indicate that individuals could be arrested and face other serious consequences based upon statistical algorithms and predictive analytical assessments. Specifically, projected consequences of FAST ‘can range from none to being temporarily detained to deportation, prison, or death.’
Data mining of our biometrics to predict criminal and terrorist activity, which is then used as a basis for government decision making about our liberty? If this comes to fruition, technological due process would certainly be required.
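To make concrete why such scoring raises due process concerns, here is a deliberately toy sketch of the score-and-threshold logic a program like FAST might employ. Every cue name, weight, and threshold below is invented for illustration; nothing here reflects the actual system, whose internals are not public.

```python
# Entirely hypothetical sketch of "precrime" score-and-threshold logic.
# Cue names, weights, and thresholds are invented for illustration only.

CUE_WEIGHTS = {
    "eye_blink_rate": 0.3,
    "pupil_variation": 0.2,
    "body_heat_change": 0.25,
    "voice_pitch_change": 0.25,
}

def risk_score(cues):
    """Weighted sum of normalized (0-1) biometric cue readings."""
    return sum(CUE_WEIGHTS[name] * value for name, value in cues.items())

def consequence(score):
    """Map a score to an outcome tier, echoing the 'none to detention' range above."""
    if score < 0.4:
        return "none"
    elif score < 0.7:
        return "secondary screening"
    else:
        return "temporary detention"

cues = {"eye_blink_rate": 0.9, "pupil_variation": 0.8,
        "body_heat_change": 0.2, "voice_pitch_change": 0.1}
print(consequence(risk_score(cues)))  # → secondary screening
```

Even in this toy version, the due process problem is visible: the weights and the cutoffs separating “none” from detention are arbitrary design choices, invisible and unchallengeable to the person being scored.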
Professor Hu calls for the Fourth Amendment to evolve to meet the challenge of 24/7 biometric surveillance technologies. David Gray and I hope to answer Professor Hu’s call in our article “The Right to Quantitative Privacy” (forthcoming Minnesota Law Review). Rather than asking how much information is gathered in a particular case, we argue that Fourth Amendment interests in quantitative privacy demand that we focus on how information is gathered. In our view, the threshold Fourth Amendment question should be whether a technology has the capacity to facilitate broad and indiscriminate surveillance that intrudes upon reasonable expectations of quantitative privacy by raising the specter of a surveillance state if deployment and use of that technology are left to the unfettered discretion of government. If it does not, then the Fourth Amendment imposes no limitations on law enforcement’s use of that technology, regardless of how much information officers gather against a particular target in a particular case. By contrast, if it does threaten reasonable expectations of quantitative privacy, then the government’s use of that technology amounts to a “search,” and must be subjected to the crucible of Fourth Amendment reasonableness, including judicially enforced constraints on law enforcement’s discretion.
posted by Danielle Citron
The NSA and the rest of our surveillance state apparatus are shrouded in secrecy. As captured in Frank Pasquale’s superb forthcoming book, governmental surveillance is a black box. Gag orders prevent Internet companies from talking about their participation in PRISM; nearly everything revealing is classified; the Executive Branch is telling us half truths or no truths. To counter massive governmental overreach, Bradley Manning, Edward Snowden, and others have let some sunlight into our surveillance state. That sunlight isn’t coming from those who are betraying the country, but from those who are trying to save it, at least that’s what many registered voters think. According to a Quinnipiac poll released today, American voters say “55 – 34 percent” that NSA consultant Edward Snowden is a “whistleblower rather than a traitor.” According to the assistant director of the Quinnipiac University Polling Institute, “Most American voters think positively of Edward Snowden,” at least they did before he accepted asylum in Russia. From July 28 to July 31, 1,468 registered voters were surveyed by phone. These sorts of leaks seem inevitable, at least culturally, given our so-called commitment to openness and transparency. The leakers/whistleblowers are trying to nudge the Executive Branch to honor its commitments to the Fourth Amendment, the sentiments of the Church Report, and the Administration’s 2009 Openness and Transparency memo. Let’s see if letting the air out moves us closer to the kind of country we say we are.
H/T: Yale ISP’s Christina Spiesel for the Quinnipiac Poll
posted by Frank Pasquale
The Privacy and Civil Liberties Oversight Board (PCLOB) is holding a “Workshop Regarding Surveillance Programs Operated Pursuant to Section 215 of the USA PATRIOT Act and Section 702 of the Foreign Intelligence Surveillance Act.” Many luminaries in the privacy community are participating. I’m sure they will have great ideas about rendering PRISM, PINWALE, MARINA, et al. more subject to oversight.
But I have heard very little on what the appropriate penalties should be for misuse of surveillance data. In the health care world, we have some pretty clear precedents. For instance, a researcher served four months in prison for snooping into medical records in 2003. Imagine a very similar incident happened in the NSA context—say, an analyst abused his or her access to the data to learn details about an acquaintance who exhibited no suspicious characteristics. What should be the penalty? Feel free to comment below, or to submit ideas directly to the PCLOB.
posted by Deven Desai
In January I wrote a piece, “Beyond Data Location: Data Security in the 21st Century,” for Communications of the ACM. I went into the current facts about data security (basic point: data moving often helps security) and how they clash with jurisdiction needs and interests. As part of that essay I wrote:
A key hurdle is identifying when any government may demand data. Transparent policies and possibly treaties could help better identify and govern under what circumstances a country may demand data from another. Countries might work with local industry to create data security and data breach laws with real teeth as a way to signal that poor data security has consequences. Countries should also provide more room for companies to challenge requests and reveal them so the global market has a better sense of what is being sought, which countries respect data protection laws, and which do not. Such changes would allow companies to compete based not only on their security systems but their willingness to defend customer interests. In return companies and computer scientists will likely have to design systems with an eye toward the ability to respond to government requests when those requests are proper. Such solutions may involve ways to tag data as coming from a citizen of a particular country. Here, issues of privacy and freedom arise, because the more one can tag and trace data, the more one can use it for surveillance. This possibility shows why increased transparency is needed, for at the very least it would allow citizens to object to pacts between governments and companies that tread on individual rights.
PRISM shows just how much a new balance is needed. There are many areas to sort through to reach that balance, too many to explore in a blog post. But as I argued in the essay, I think the way to proceed is to pull in engineers (not just industry ones), law enforcement, civil society groups, and, oh yes, lawyers to look at what can be done to address the current imbalance.
June 24, 2013 at 1:44 pm Posted in: Intellectual Property, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Law Enforcement), Privacy (National Security), Technology Print This Post No Comments
posted by Daniel Solove
In 2012, the media erupted with news about employers demanding employees provide them with their social media passwords so the employers could access their accounts. This news took many people by surprise, and it set off a firestorm of public outrage. It even sparked a significant legislative response in the states.
I thought that the practice of demanding passwords was so outrageous that it couldn’t be very common. What kind of company or organization would actually do this? I thought it was a fringe practice done by a few small companies without much awareness of privacy law.
But Bradley Shear, an attorney who has focused extensively on the issue, opened my eyes to the fact that the practice is much more prevalent than I had imagined, and it is an issue that has very important implications as we move more of our personal data to the Cloud.
The Widespread Hunger for Access
Employers are not the only ones demanding social media passwords – schools are doing so too, especially athletic departments in higher education, many of which engage in extensive monitoring of the online activities of student athletes. Some require students to turn over passwords, install special software and apps, or friend coaches on Facebook and other sites. According to an article in USA Today: “As a condition of participating in sports, the schools require athletes to agree to monitoring software being placed on their social media accounts. This software emails alerts to coaches whenever athletes use a word that could embarrass the student, the university or tarnish their images on services such as Twitter, Facebook, YouTube and MySpace.”
Not only are colleges and universities engaging in the practice, but K-12 schools are doing so as well. An MSNBC article discusses the case of a parent’s outrage over school officials demanding access to a 13-year-old girl’s Facebook account. According to the mother, “The whole family is exposed in this. . . . Some families communicate through Facebook. What if her aunt was going through a divorce or had an illness? And now there’s these anonymous people reading through this information.”
In addition to private sector employers and schools, public sector employers such as state government agencies are demanding access to online accounts. According to another MSNBC article: “In Maryland, job seekers applying to the state’s Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through posts, friends, photos and anything else that might be found behind the privacy wall.”
June 3, 2013 at 10:51 am Posted in: Constitutional Law, Cyberlaw, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Social Network Websites Print This Post 3 Comments
posted by Robert Gellman
Privacy advocates have disliked the third-party doctrine at least from the day in 1976 when the Supreme Court decided U.S. v. Miller. Anyone who remembers the Privacy Protection Study Commission knows that its report was heavily influenced by Miller. My first task in my long stint as a congressional staffer was to organize a hearing to receive the report of the Commission in 1977. In the introduction to the report, the Commission called the date of the decision “a fateful day for personal privacy.”
Last year, privacy advocates cheered when Justice Sonia Sotomayor’s concurrence in U.S. v. Jones asked whether it was time to reconsider the third-party doctrine. Yet it would likely take a long time for the Supreme Court to revisit and overturn the third-party doctrine, if it ever does. Sotomayor’s opinion did not attract a single other Justice.
Can we draft a statute to overturn the third-party doctrine? That is not an easy task, and it may be an unattainable goal politically. Nevertheless, the discussion has to start somewhere. I acknowledge that not everyone wants to overturn Miller. See Orin Kerr’s The Case for the Third-Party Doctrine. I’m certainly not the first person to ask the how-to-do-it question. Dan Solove wrestled with the problem in Digital Dossiers and the Dissipation of Fourth Amendment Privacy.
I’m going at the problem as if I were still a congressional staffer tasked with drafting a bill. I see right away that there is precedent. Somewhat remarkably, Congress partly overturned the Miller decision in 1978 when it enacted The Right to Financial Privacy Act, 12 U.S.C. § 3401 et seq. The RFPA says that if the federal government wants to obtain records of a bank customer, it must notify the customer and allow the customer to challenge the request.
The RFPA is remarkable too for its exemptions and weak standards. The law only applies to the federal government and not to state and local governments. (States may have their own laws applicable to state agencies.) Bank supervisory agencies are largely exempt. The IRS is exempt. Disclosures required by federal law are exempt. Disclosures for government loan programs are exempt. Disclosures for grand jury subpoenas are exempt. That effectively exempts a lot of criminal law enforcement activity. Disclosures to GAO and the CFPB are exempt. Disclosures for investigations of crimes against financial institutions by insiders are exempt. Disclosures to intelligence agencies are exempt. This long – and incomplete – list is the first hint that overturning the third-party doctrine won’t be easy.
We’re not done with the weaknesses in the RFPA. A customer who receives notice of a government request has ten days to challenge the request in federal court. The customer must argue that the records sought are not relevant to the legitimate law enforcement inquiry identified by the government in the notice. The customer loses if there is a demonstrable reason to believe that the law enforcement inquiry is legitimate and a reasonable belief that the records sought are relevant to that inquiry. Relevance and legitimacy are weak standards, to say the least. Good luck winning your case.
Who should get the protection of our bill? The RFPA gives rights to “customers” of a financial institution. A customer is an individual or partnership of five or fewer individuals (how would anyone know?). If legal persons also receive protection, a bill might actually attract corporate support, along with major opposition from every regulatory agency in town. It will be hard enough to pass a bill limited to individuals. The great advantage of playing staffer is that you can apply political criteria to solve knotty policy problems. I’d be inclined to stick to individuals.
posted by Frank Pasquale
First Monday recently published an issue on social media monopolies. These lines from the introduction by Korinna Patelis and Pavlos Hatzopolous are particularly provocative:
A large part of existing critical thinking on social media has been obsessed with the concept of privacy. . . . Reading through a number of volumes and texts dedicated to the problematic of privacy in social networking one gets the feeling that if the so called “privacy issues” were resolved social media would be radically democratized. Instead of adopting a static view of the concept . . . of “privacy”, critical thinking needs to investigate how the private/public dichotomy is potentially reconfigured in social media networking, and [the] new forms of collectivity that can emerge . . . .
I can even see a way in which privacy rights do not merely displace, but actively work against, egalitarian objectives. Stipulate a population with Group A, which is relatively prosperous and has the time and money to hire agents to use notice-and-consent privacy provisions to its advantage (i.e., figuring out exactly how to disclose information to put its members in the best light possible). Meanwhile, most of Group B is too busy working several jobs to use contracts, law, or agents to its advantage in that way. We should not be surprised if Group A leverages its mastery of privacy law to enhance its position relative to Group B.
Better regulation would restrict use of data, rather than “empower” users (with vastly different levels of power) to restrict collection of data. As data scientist Cathy O’Neil observes:
Read the rest of this post »
posted by Ryan Calo
As if we don’t have enough to worry about, now there’s spyware for your brain. Or, there could be. Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers. Read the rest of this post »
April 14, 2013 at 12:57 am Posted in: Bioethics, Civil Rights, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Technology, Uncategorized Print This Post One Comment
Bartelt’s Dog and the Continuing Vitality of the Supreme Court’s Tacit Distinction between Sense Enhancement and Sense Creation
posted by Albert Wong
Last Term, in an amicus brief in United States v. Jones, 565 U.S. __, several colleagues and I highlighted the Supreme Court’s long, albeit not always clearly stated, history of distinguishing between sense-enhancing and sense-creating technologies for Fourth Amendment purposes. As a practical matter, the Court has consistently subjected technologies in the latter category to closer scrutiny than technologies that merely bolster natural human senses. Thus, the use of searchlights, field glasses, and (to some extent) beepers and airplane-mounted cameras was not found to implicate the Fourth Amendment. As the Court explained, “[n]othing in the Fourth Amendment prohibit[s] the police from augmenting the sensory faculties bestowed upon them at birth with such enhancement as science and technology” may afford. 460 U.S. at 282 (emphasis added). In contrast, the Court has held that technologies that create a new capacity altogether, including movie projectors, wiretaps, ultrasound devices, radar flashlights, directional microphones, thermal imagers, and (as of Jones) GPS tracking devices, do trigger the Fourth Amendment. To hold otherwise, as the Court has stated, would “shrink the realm of guaranteed privacy,” leaving citizens “at the mercy of advancing technology.” 533 U.S. at 34-36.
In fact, of the landmark cases involving technology and the Fourth Amendment during the past 85 years (from United States v. Lee, 274 U.S. 559, in 1927 to Jones in 2012), only in one instance did the Supreme Court appear to deviate from this distinction between sense enhancement and sense creation. In that case, United States v. Place, 462 U.S. 696, and its successors, City of Indianapolis v. Edmond, 531 U.S. 32, and Illinois v. Caballes, 543 U.S. 405, the Court held that the use of trained narcotics-detection dogs (more apparently similar to using a new capacity than merely enhancing a natural human sense) did not implicate the Fourth Amendment. In our amicus brief in Jones, we rationalized Place, Edmond, and Caballes by arguing that dogs were unique, being natural biological creatures that had long been used by the police, even in the time of the Framers. Further, we argued, a canine sniff, unlike the use of, say, a wiretap or a thermal imager, “discloses only the presence or absence of narcotics, a contraband item.” 462 U.S. at 707 (emphasis added). Still, the apparent ‘dog exception’ was rankling. Read the rest of this post »
March 31, 2013 at 11:35 am Posted in: Anonymity, Constitutional Law, Privacy, Privacy (Electronic Surveillance), Privacy (Law Enforcement), Supreme Court, Technology, Uncategorized Print This Post 14 Comments
posted by Ryan Calo
Amidst all of the discussion of gay marriage at One First Street NW today, you may have missed that the Supreme Court decided Florida v. Jardines. In a 5-4 opinion by Justice Scalia, the Court held that bringing a police dog within the curtilage (in this case, the front porch) of the home to sniff for drugs constitutes a search for purposes of the Fourth Amendment. As Orin Kerr predicted, the opinion turned on the lack of implied consent to approach with a dog, which converted the detectives’ action into a trespass. Justices Thomas, Ginsburg, Sotomayor, and Kagan joined Justice Scalia’s opinion. Justice Alito wrote for the dissent, joined by Justices Kennedy, Breyer, and the Chief Justice. Justice Kagan, joined by Justices Ginsburg and Sotomayor, wrote separately to note that they “could just as happily have decided [the case] by looking to Jardines’ privacy interests.” Read the rest of this post »